"""mysite URL Configuration
The `urlpatterns` list routes URLs to views. For more information please see:
https://docs.djangoproject.com/en/3.0/topics/http/urls/
Examples:
Function views
1. Add an import: from my_app import views
2. Add a URL to urlpatterns: path('', views.home, name='home')
Class-based views
1. Add an import: from other_app.views import Home
2. Add a URL to urlpatterns: path('', Home.as_view(), name='home')
Including another URLconf
1. Import the include() function: from django.urls import include, path
2. Add a URL to urlpatterns: path('blog/', include('blog.urls'))
"""
from django.contrib import admin
from django.urls import path, include
urlpatterns = [
path('admin/', admin.site.urls),
path('blog/', include('blog.urls'))
]
# File: mysite/mysite/urls.py
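The `include('blog.urls')` route above delegates everything under `blog/` to a second URLconf. A minimal hypothetical `blog/urls.py` for that include might look like the sketch below — the view names (`index`, `detail`) and URL names are illustrative, not from the source:

```python
# blog/urls.py -- hypothetical companion URLconf for the include() above
from django.urls import path

from . import views  # assumes the blog app defines index/detail views

urlpatterns = [
    path('', views.index, name='blog-index'),
    path('<int:post_id>/', views.detail, name='blog-detail'),
]
```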
# ReID Online Upload Service
# File: upload/__init__.py
# Author: F. Alex Wolf (http://falexwolf.de)
"""Differential Gene Expression Analysis
This is a Beta Version of a tool for differential gene expression testing
between sets detected in previous tools. Tools such as dpt, cluster,...
"""
import numpy as np
from scipy.sparse import issparse
from .. import utils
from .. import logging as logg
from ..preprocessing import simple
def rank_genes_groups(
adata,
groupby,
groups='all',
group_reference=None,
n_genes=100,
compute_distribution=False,
only_positive=True,
copy=False):
"""Rank genes according to differential expression [Wolf17]_.
Rank genes by differential expression. By default, a t-test-like ranking is
used, in which means are normalized with variances. Soon, a Wilcoxon-rank
test and other alternatives will be provided.
Parameters
----------
adata : `AnnData`
Annotated data matrix.
groupby : `str`
The key of the sample grouping to consider.
groups : `str`, `list`, optional (default: `'all'`)
Subset of groups, e.g. `['g1', 'g2', 'g3']`, to which comparison shall
be restricted. If not passed, a ranking will be generated for all
groups.
    group_reference : `str` or `None`, optional (default: `None`)
        If `None`, compare each group against the union of all the other
        groups. If a group identifier, compare against that group.
n_genes : `int` (default: 100)
How many genes to rank by default.
compute_distribution : `bool`
If `True`, also computes the distribution for top-ranked genes, which
can be visualized using `sc.pl.rank_genes_groups_violin(adata)`.
Returns
-------
    rank_genes_groups_gene_zscores : np.ndarray of dtype float (adata.add)
        Array of shape (number of comparisons) × (number of genes) storing the
        z-score of each gene for each test.
    rank_genes_groups_gene_names : np.ndarray of dtype str (adata.add)
        Array of shape (number of comparisons). Stores the labels for each
        comparison, for example "C1 vs. C2" when comparing category 'C1' with 'C2'.
"""
logg.info('find differentially expressed genes', r=True)
adata = adata.copy() if copy else adata
n_genes_user = n_genes
utils.check_adata(adata)
# for clarity, rename variable
groups_order = groups
if isinstance(groups_order, list) and isinstance(groups_order[0], int):
groups_order = [str(n) for n in groups_order]
if group_reference is not None and group_reference not in set(groups_order):
groups_order += [group_reference]
if (group_reference is not None
and group_reference not in set(adata.add[groupby + '_order'])):
raise ValueError('group_reference = {} needs to be one of groupby = {}.'
.format(group_reference, groupby))
groups_order, groups_masks = utils.select_groups(
adata, groups_order, groupby)
adata.add['rank_genes_groups'] = groupby
adata.add['rank_genes_groups_order'] = groups_order
X = adata.X
# loop over all masks and compute means, variances and sample numbers
n_groups = groups_masks.shape[0]
n_genes = X.shape[1]
means = np.zeros((n_groups, n_genes))
vars = np.zeros((n_groups, n_genes))
ns = np.zeros(n_groups, dtype=int)
for imask, mask in enumerate(groups_masks):
means[imask], vars[imask] = simple._get_mean_var(X[mask])
ns[imask] = np.where(mask)[0].size
logg.info('... consider "{}":'.format(groupby), groups_order,
'with sample numbers', ns)
if group_reference is not None:
ireference = np.where(groups_order == group_reference)[0][0]
# test each either against the union of all other groups
# or against a specific group
rankings_gene_zscores = []
rankings_gene_names = []
reference_indices = np.arange(adata.n_vars, dtype=int)
for igroup in range(n_groups):
if group_reference is None:
mask_rest = ~groups_masks[igroup]
else:
if igroup == ireference: continue
else: mask_rest = groups_masks[ireference]
mean_rest, var_rest = simple._get_mean_var(X[mask_rest])
# Make a more conservative assumption on the variance reduction
# in the reference. Instead of this
ns_rest = np.where(mask_rest)[0].size
# use this
# ns_rest = ns[igroup]
denominator = np.sqrt(vars[igroup]/ns[igroup] + var_rest/ns_rest)
denominator[np.flatnonzero(denominator == 0)] = np.nan
zscores = (means[igroup] - mean_rest) / denominator
zscores[np.isnan(zscores)] = 0
zscores = zscores if only_positive else np.abs(zscores)
partition = np.argpartition(zscores, -n_genes_user)[-n_genes_user:]
partial_indices = np.argsort(zscores[partition])[::-1]
global_indices = reference_indices[partition][partial_indices]
rankings_gene_zscores.append(zscores[global_indices])
rankings_gene_names.append(adata.var_names[global_indices])
if compute_distribution:
mask = groups_masks[igroup]
for gene_counter in range(n_genes_user):
gene_idx = global_indices[gene_counter]
X_col = X[mask, gene_idx]
if issparse(X): X_col = X_col.toarray()[:, 0]
identifier = _build_identifier(groupby, groups_order[igroup],
gene_counter, adata.var_names[gene_idx])
full_col = np.empty(adata.n_smps)
full_col[:] = np.nan
full_col[mask] = (X_col - mean_rest[gene_idx])/denominator[gene_idx]
adata.smp[identifier] = full_col
groups_order_save = groups_order
if group_reference is not None:
groups_order_save = [g for g in groups_order if g != group_reference]
    adata.add['rank_genes_groups_gene_zscores'] = np.rec.fromarrays(
[n for n in rankings_gene_zscores],
dtype=[(rn, 'float32') for rn in groups_order_save])
adata.add['rank_genes_groups_gene_names'] = np.rec.fromarrays(
[n for n in rankings_gene_names],
dtype=[(rn, 'U50') for rn in groups_order_save])
logg.m(' finished', t=True, end=' ')
logg.m('and added\n'
' "rank_genes_groups_gene_names", np.recarray to be indexed by the `groups` (adata.add)\n'
' "rank_genes_groups_gene_zscores", the scores (adata.add)\n'
' "rank_genes_...", distributions of top-ranked genes (adata.smp)')
return adata if copy else None
def _build_identifier(groupby, name, gene_counter, gene_name):
return 'rank_genes_{}_{}_{}_{}'.format(
groupby, name, gene_counter, gene_name)
# File: scanpy/tools/rank_genes_groups.py
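The core statistic of `rank_genes_groups` — a t-test-like z-score of one group's mean against the rest, with zero denominators mapped to 0 — can be sketched on a dense toy matrix. This is a NumPy-only illustration; the helper name `rank_genes_zscores` and the sample data are made up:

```python
import numpy as np

def rank_genes_zscores(X, mask, n_top=3):
    """Rank genes of the masked group against the rest using the
    t-test-like z-score from the tool above."""
    grp, rest = X[mask], X[~mask]
    n_grp, n_rest = grp.shape[0], rest.shape[0]
    mean_g, var_g = grp.mean(axis=0), grp.var(axis=0)
    mean_r, var_r = rest.mean(axis=0), rest.var(axis=0)
    denom = np.sqrt(var_g / n_grp + var_r / n_rest)
    denom[denom == 0] = np.nan          # avoid division by zero
    z = np.nan_to_num((mean_g - mean_r) / denom)
    order = np.argsort(z)[::-1][:n_top]  # highest z-score first
    return order, z[order]

# Rows = cells, columns = genes; gene 0 is up in the first group,
# genes 1 and 2 are constant, so their z-scores collapse to 0.
X = np.array([[5., 0., 1.],
              [6., 0., 1.],
              [1., 0., 1.],
              [0., 0., 1.]])
mask = np.array([True, True, False, False])
idx, scores = rank_genes_zscores(X, mask)
```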
from __future__ import absolute_import
from django.dispatch import Signal
from functools import wraps
class BetterSignal(Signal):
def connect(self, receiver=None, **kwargs):
"""
Support decorator syntax:
>>> @signal.connect(sender=type)
>>> def my_receiver(**kwargs):
>>> pass
"""
def wrapped(func):
return super(BetterSignal, self).connect(func, **kwargs)
if receiver is None:
return wrapped
return wraps(receiver)(wrapped(receiver))
regression_signal = BetterSignal(providing_args=["instance"])
buffer_incr_complete = BetterSignal(providing_args=["model", "columns", "extra", "result"])
event_received = BetterSignal(providing_args=["ip"])
# File: src/sentry/signals.py
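The dual-use `connect` above — callable directly with a receiver, or as a decorator factory when `receiver` is `None` — is easiest to see on a dependency-free stand-in. `MiniSignal` below is a toy sketch of that pattern, not Django's `Signal`:

```python
from functools import wraps

class MiniSignal:
    """Toy stand-in for BetterSignal, showing the same dual-use connect()."""
    def __init__(self):
        self._receivers = []

    def connect(self, receiver=None, **kwargs):
        def wrapped(func):
            self._receivers.append(func)
            return func
        if receiver is None:
            return wrapped                       # used as @sig.connect(...)
        return wraps(receiver)(wrapped(receiver))  # used as sig.connect(f)

    def send(self, **kwargs):
        # Call every receiver and collect the results
        return [r(**kwargs) for r in self._receivers]

sig = MiniSignal()

@sig.connect()          # decorator style
def on_event(**kwargs):
    return kwargs.get("value")

sig.connect(lambda **kw: "plain")  # plain-call style

results = sig.send(value=42)
```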
# Purpose: Calculate hydrological fluxes in the canopy, unsaturated and saturated sub-domains
# Record of revisions:
# Date Programmer Description of change
# ======== ============= =====================
# 09-2020 A. Elkouk Original code
# ----------------------------------------------------------------------------------------------------------------------
# Parametrization for the fluxes in the vegetation canopy
# ----------------------------------------------------------------------------------------------------------------------
def calc_wetted_fraction(canopyStore, canopyStore_max, gamma):
""" Calculate the wetted fraction of the canopy
Parameters
----------
canopyStore : int or float
Canopy Interception storage [mm]
canopyStore_max : int or float
Maximum non-drainable canopy interception storage [mm]
gamma : float
Parameter to account for the non-linearity in the wetted fraction of the canopy
Returns
-------
wetFrac: float
Wetted fraction of the canopy
"""
if canopyStore < canopyStore_max:
wetFrac = (canopyStore / canopyStore_max) ** gamma
else:
wetFrac = 1.0
return wetFrac
def calc_canopy_evaporation(pet, wetFrac):
""" Calculate the evaporation from canopy interception storage
Parameters
----------
pet : int or float
Potential evapotranspiration [mm day^-1]
wetFrac : float
Wetted fraction of the canopy
Returns
-------
canopyEvap: float
Evaporation from canopy interception storage [mm day^-1]
"""
canopyEvap = pet * wetFrac
return canopyEvap
def calc_throughfall_flux(precip, canopyStore, canopyStore_max):
""" Calculate the throughfall flux from canopy interception storage
Parameters
----------
precip : int or float
Precipitation flux [mm day^-1]
canopyStore : int or float
Canopy Interception storage [mm]
canopyStore_max : int or float
Maximum non-drainable canopy interception storage [mm]
Returns
-------
throughfall : int or float
Throughfall flux [mm day^-1]
"""
if canopyStore < canopyStore_max:
throughfall = precip * (canopyStore / canopyStore_max)
else:
throughfall = precip
return throughfall
def calc_canopy_drainage_flux(canopyStore, canopyStore_max, k_can):
""" Calculate the canopy drainage flux from canopy interception storage
Parameters
----------
canopyStore : int or float
Canopy Interception storage [mm]
canopyStore_max : int or float
Maximum non-drainable canopy interception storage [mm]
    k_can : float
        Canopy drainage coefficient [day^-1]
Returns
-------
canopyDrain : int or float
Canopy drainage flux [mm day^-1]
"""
if canopyStore < canopyStore_max:
canopyDrain = 0.0
else:
canopyDrain = k_can * (canopyStore - canopyStore_max)
return canopyDrain
def calc_precipitation_excess(throughfall, canopyDrain):
""" Calculate excess precipitation (the sum of throughfall and canopy drainage)
Parameters
----------
throughfall : int or float
Throughfall flux [mm day^-1]
canopyDrain : int or float
Canopy drainage flux [mm day^-1]
Returns
-------
precipExcess : int or float
Excess precipitation [mm day^-1]
"""
precipExcess = throughfall + canopyDrain
return precipExcess
# ----------------------------------------------------------------------------------------------------------------------
# Parametrization for the fluxes in the unsaturated zone
# ----------------------------------------------------------------------------------------------------------------------
def calc_saturated_fraction(unsatStore, unsatStore_max, alpha):
""" Calculate the saturated fraction of the unsaturated zone
Parameters
----------
unsatStore : int or float
Storage in the unsaturated zone [mm]
unsatStore_max : int or float
Maximum storage in the unsaturated zone [mm]
alpha : float
Parameter to account for the non-linearity in the variable source area for saturation-excess runoff
Returns
-------
satFrac: float
Saturated fraction of the unsaturated zone
"""
if unsatStore < unsatStore_max:
satFrac = 1 - (1 - (unsatStore / unsatStore_max)) ** alpha
else:
satFrac = 1
return satFrac
def calc_unsaturated_evaporation(pet, unsatStore, fieldCap, wetFrac):
""" Calculate evaporation from the unsaturated zone
Parameters
----------
pet : int or float
Potential evapotranspiration [mm day^-1]
unsatStore : int or float
Storage in the unsaturated zone [mm]
fieldCap : int or float
Field capacity [mm]
wetFrac : float
Wetted fraction of the canopy
Returns
-------
unsatEvap : float
Evaporation from the unsaturated zone [mm day^-1]
"""
if unsatStore < fieldCap:
unsatEvap = pet * (unsatStore / fieldCap) * (1 - wetFrac)
else:
unsatEvap = pet * (1 - wetFrac)
return unsatEvap
def calc_overland_flow(precipExcess, satFrac):
""" Calculate overland flow (surface runoff)
Parameters
----------
precipExcess : int or float
Excess precipitation [mm day^-1]
satFrac : float
Saturated fraction of the unsaturated zone
Returns
-------
overlandFlow : float
Overland flow (surface runoff) [mm day^-1]
"""
overlandFlow = precipExcess * satFrac
return overlandFlow
def calc_percolation_flux(unsatStore, unsatStore_max, fieldCap, k_sat, beta):
""" Calculate the percolation flux from the unsaturated to the saturated zone
Parameters
----------
unsatStore : int or float
Storage in the unsaturated zone [mm]
unsatStore_max : int or float
Maximum storage in the unsaturated zone [mm]
fieldCap : int or float
Field capacity [mm]
k_sat : int or float
Maximum percolation rate [mm day^-1]
beta : int or float
Parameter to account for percolation non-linearity
Returns
-------
percolation : int or float
Percolation flux [mm day^-1]
"""
if unsatStore < fieldCap:
percolation = 0.0
else:
percolation = k_sat * ((unsatStore - fieldCap) / (unsatStore_max - fieldCap)) ** beta
return percolation
# ----------------------------------------------------------------------------------------------------------------------
# Parametrization for the fluxes in the saturated zone
# ----------------------------------------------------------------------------------------------------------------------
def calc_baseflow(satStore, k_sz):
""" Calculate baseflow from the saturated zone
Parameters
----------
satStore : int or float
Storage in the saturated zone [mm]
k_sz : float
Runoff coefficient for the saturated zone [day^-1]
Returns
-------
baseflow : float
Baseflow from the saturated zone [mm day^-1]
"""
baseflow = satStore * k_sz
return baseflow
# File: flux_param.py
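Chaining the canopy formulas above into a single explicit daily update gives a quick sanity check of the water balance. The `canopy_step` helper and all parameter values below are illustrative, not part of the model code:

```python
def wet_frac(S, S_max, gamma):
    # Wetted canopy fraction, as in calc_wetted_fraction above
    return (S / S_max) ** gamma if S < S_max else 1.0

def canopy_step(S, S_max, precip, pet, gamma=0.7, k_can=0.5):
    """One daily update of the canopy interception store [mm]."""
    f = wet_frac(S, S_max, gamma)
    evap = pet * f                                          # calc_canopy_evaporation
    through = precip * (S / S_max) if S < S_max else precip  # calc_throughfall_flux
    drain = 0.0 if S < S_max else k_can * (S - S_max)       # calc_canopy_drainage_flux
    S_new = S + precip - through - drain - evap             # mass balance of the store
    return S_new, through + drain, evap                     # excess = throughfall + drainage

S_new, precip_excess, canopy_evap = canopy_step(S=1.0, S_max=2.0, precip=10.0, pet=4.0)
```

With the store half full, half the precipitation falls through (5 mm), no drainage occurs, and evaporation is scaled by the wetted fraction.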
########################################################################
#
# Constants
#
########################################################################
RETURN_SUCCESS = 1234 # Requested command executed successfully
RETURN_FAIL = None     # Requested command failed
# File: libs/dev/consts.py
#!/usr/bin/env python
#
# Copyright 2013 The Rust Project Developers. See the COPYRIGHT
# file at the top-level directory of this distribution and at
# http://rust-lang.org/COPYRIGHT.
#
# Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
# http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
# <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
# option. This file may not be copied, modified, or distributed
# except according to those terms.
"""
This script creates a pile of compile-fail tests check that all the
derives have spans that point to the fields, rather than the
#[derive(...)] line.
sample usage: src/etc/generate-deriving-span-tests.py
"""
import sys, os, datetime, stat
TEST_DIR = os.path.abspath(
os.path.join(os.path.dirname(__file__), '../test/compile-fail'))
YEAR = datetime.datetime.now().year
TEMPLATE = """// Copyright {year} The Rust Project Developers. See the COPYRIGHT
// file at the top-level directory of this distribution and at
// http://rust-lang.org/COPYRIGHT.
//
// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
// option. This file may not be copied, modified, or distributed
// except according to those terms.
// This file was auto-generated using 'src/etc/generate-deriving-span-tests.py'
extern crate rand;
{error_deriving}
struct Error;
{code}
fn main() {{}}
"""
ENUM_STRING = """
#[derive({traits})]
enum Enum {{
A(
Error {errors}
)
}}
"""
ENUM_STRUCT_VARIANT_STRING = """
#[derive({traits})]
enum Enum {{
A {{
x: Error {errors}
}}
}}
"""
STRUCT_STRING = """
#[derive({traits})]
struct Struct {{
x: Error {errors}
}}
"""
STRUCT_TUPLE_STRING = """
#[derive({traits})]
struct Struct(
Error {errors}
);
"""
ENUM_TUPLE, ENUM_STRUCT, STRUCT_FIELDS, STRUCT_TUPLE = range(4)
def create_test_case(type, trait, super_traits, error_count):
string = [ENUM_STRING, ENUM_STRUCT_VARIANT_STRING, STRUCT_STRING, STRUCT_TUPLE_STRING][type]
all_traits = ','.join([trait] + super_traits)
super_traits = ','.join(super_traits)
error_deriving = '#[derive(%s)]' % super_traits if super_traits else ''
errors = '\n'.join('//~%s ERROR' % ('^' * n) for n in range(error_count))
code = string.format(traits = all_traits, errors = errors)
return TEMPLATE.format(year = YEAR, error_deriving=error_deriving, code = code)
def write_file(name, string):
test_file = os.path.join(TEST_DIR, 'derives-span-%s.rs' % name)
# set write permission if file exists, so it can be changed
if os.path.exists(test_file):
os.chmod(test_file, stat.S_IWUSR)
with open(test_file, 'wt') as f:
f.write(string)
# mark file read-only
os.chmod(test_file, stat.S_IRUSR|stat.S_IRGRP|stat.S_IROTH)
ENUM = 1
STRUCT = 2
ALL = STRUCT | ENUM
traits = {
'Zero': (STRUCT, [], 1),
'Default': (STRUCT, [], 1),
'FromPrimitive': (0, [], 0), # only works for C-like enums
'Decodable': (0, [], 0), # FIXME: quoting gives horrible spans
'Encodable': (0, [], 0), # FIXME: quoting gives horrible spans
}
for (trait, supers, errs) in [('Clone', [], 1),
('PartialEq', [], 2),
('PartialOrd', ['PartialEq'], 8),
('Eq', ['PartialEq'], 1),
('Ord', ['Eq', 'PartialOrd', 'PartialEq'], 1),
('Debug', [], 1),
('Hash', [], 1)]:
traits[trait] = (ALL, supers, errs)
for (trait, (types, super_traits, error_count)) in traits.items():
mk = lambda ty: create_test_case(ty, trait, super_traits, error_count)
if types & ENUM:
write_file(trait + '-enum', mk(ENUM_TUPLE))
write_file(trait + '-enum-struct-variant', mk(ENUM_STRUCT))
if types & STRUCT:
write_file(trait + '-struct', mk(STRUCT_FIELDS))
write_file(trait + '-tuple-struct', mk(STRUCT_TUPLE))
# File: src/etc/generate-deriving-span-tests.py
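The `'^' * n` comprehension in `create_test_case` builds compile-fail annotations whose caret count tells the test harness how many lines up the error is expected. For three errors it expands as follows:

```python
# Expansion of the error-marker comprehension used in create_test_case
error_count = 3
errors = '\n'.join('//~%s ERROR' % ('^' * n) for n in range(error_count))
# errors is now:
# //~ ERROR
# //~^ ERROR
# //~^^ ERROR
```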
#!/bin/env python
# Automatically translated python version of
# OpenSceneGraph example program "osgwidgetprogress"
# !!! This program will need manual tuning before it will work. !!!
import sys

from osgpypp import osg
from osgpypp import osgDB
from osgpypp import osgViewer
from osgpypp import osgWidget
# Translated from file 'osgwidgetprogress.cpp'
# -*-c++-*- osgWidget - Code by: Jeremy Moles (cubicool) 2007-2008
# $Id$
#include <osgDB/ReadFile>
#include <osgWidget/Util>
#include <osgWidget/WindowManager>
#include <osgWidget/Canvas>
MASK_2D = 0xF0000000
class UpdateProgressNode(osg.NodeCallback):
    def __init__(self):
        super(UpdateProgressNode, self).__init__()
        self.start = 0.0
        self.done = 5.0

    def __call__(self, node, nv):
        fs = nv.getFrameStamp()
        t = fs.getSimulationTime()
        if self.start == 0.0:
            self.start = t
        width = ((t - self.start) / self.done) * 512.0
        percent = (width / 512.0) * 100.0
        if width < 1.0 or width > 512.0:
            return
        window = node  # expected to be an osgWidget.Window
        if not window:
            return
        w = window.getByName("pMeter")
        l = window.getByName("pLabel")
        if not w or not l:
            return
        w.setWidth(width)
        w.setTexCoordRegion(0.0, 0.0, width, 64.0)
        l.setLabel("%d%% Done" % round(percent))
def main(argv):
viewer = osgViewer.Viewer()
wm = osgWidget.WindowManager(
viewer,
1280.0,
1024.0,
MASK_2D,
osgWidget.WindowManager.WM_PICK_DEBUG
)
canvas = osgWidget.Canvas("canvas")
pOutline = osgWidget.Widget("pOutline", 512.0, 64.0)
pMeter = osgWidget.Widget("pMeter", 0.0, 64.0)
pLabel = osgWidget.Label("pLabel", "0% Done")
pOutline.setImage("osgWidget/progress-outline.png", True)
pOutline.setLayer(osgWidget.Widget.LAYER_MIDDLE, 2)
pMeter.setImage("osgWidget/progress-meter.png")
pMeter.setColor(0.7, 0.1, 0.1, 0.7)
pMeter.setLayer(osgWidget.Widget.LAYER_MIDDLE, 1)
pLabel.setFont("fonts/VeraMono.ttf")
pLabel.setFontSize(20)
pLabel.setFontColor(1.0, 1.0, 1.0, 1.0)
pLabel.setSize(512.0, 64.0)
pLabel.setLayer(osgWidget.Widget.LAYER_MIDDLE, 3)
canvas.setOrigin(300.0, 300.0)
canvas.addWidget(pMeter, 0.0, 0.0)
canvas.addWidget(pOutline, 0.0, 0.0)
canvas.addWidget(pLabel, 0.0, 0.0)
canvas.getBackground().setColor(0.0, 0.0, 0.0, 0.0)
canvas.setUpdateCallback(UpdateProgressNode())
wm.addChild(canvas)
return osgWidget.createExample(viewer, wm, osgDB.readNodeFile("cow.osgt"))
if __name__ == "__main__":
main(sys.argv)
# File: examples/rough_translated1/osgwidgetprogress.py
import numpy as np
import cv2
import matplotlib.pyplot as plt
# Read the input image
img = np.array(cv2.imread('1.jpg'))

# Mask that grabCut fills with background/foreground labels
mask = np.zeros(img.shape[:2], np.uint8)

# Temporary arrays used internally by grabCut for the background and
# foreground Gaussian mixture models
bgdModel = np.zeros((1, 65), np.float64)
fgdModel = np.zeros((1, 65), np.float64)

# Rectangular region (x, y, w, h) of the image in which to search for the foreground
rect = (35, 30, 330, 312)

# grabCut from OpenCV estimates the foreground inside the rectangle
cv2.grabCut(img, mask, rect, bgdModel, fgdModel, 5, cv2.GC_INIT_WITH_RECT)
mask2 = np.where((mask == 2) | (mask == 0), 0, 1).astype('uint8')
img = img * mask2[:, :, np.newaxis]

# Show the segmented image
plt.imshow(img)
plt.colorbar()
plt.show()

cv2.imshow("foreground", img)
cv2.waitKey(0)
cv2.imwrite("foreground.jpg", img)
# File: code/grab_foreground.py
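`cv2.grabCut` writes one of four labels into `mask` (0 = sure background, 2 = probable background, 1 = sure foreground, 3 = probable foreground), and the `np.where` line in the script collapses them to a binary mask. A NumPy-only illustration of that collapsing rule — the sample label values here are made up:

```python
import numpy as np

# Labels as produced by cv2.grabCut: 0/2 are background-ish, 1/3 foreground-ish
mask = np.array([[0, 2, 3],
                 [1, 3, 2]], dtype=np.uint8)

# Same collapsing rule as in the script above: keep only foreground pixels
mask2 = np.where((mask == 2) | (mask == 0), 0, 1).astype('uint8')
```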
#!/usr/bin/env python3
"""
Convert data received from alfred (ffbi format) and serve them as prometheus python client
Typical call::
alfred -r 69 -u /var/run/alfred/alfred.sock > robin.txt
./robin_prometheus.py -m robin.txt
Dependencies:
prometheus_client -> pip3 install prometheus_client
License: CC BY 4.0
Author: Jonas Hess
Strongly Inspired by map-backend of Moritz Warning and Julian Rueth
"""
import sys
import zlib
import re
import datetime
import os
import pickle
import time
import json
import jsonschema
from prometheus_client import start_http_server
from prometheus_client.core import GaugeMetricFamily, CounterMetricFamily, REGISTRY
if sys.version_info[0] < 3:
    raise Exception("robin_prometheus.py must be executed with Python 3.")
NOW_TIMESTAMP = datetime.datetime.utcnow().replace(microsecond=0)
class CustomCollector:
"""
    Collector that exposes the parsed node data to the Prometheus client.
"""
def collect(self):
"""
        Yield the current node metrics; called once per Prometheus scrape.
"""
downstream = GaugeMetricFamily('node_bw_wan_bps', 'last tested wan downstream mb/s', labels=['nodeid'])
for node in GLOBAL_NODES['nodes']:
if 'downstream_mbps_wan' in node:
downstream.add_metric([node['id']], node['downstream_mbps_wan'])
yield downstream
upstream = GaugeMetricFamily('node_bw_ff_bps', 'last tested ff downstream in mb/s', labels=['nodeid'])
for node in GLOBAL_NODES['nodes']:
if 'downstream_mbps_ff' in node:
upstream.add_metric([node['id']], node['downstream_mbps_ff'])
yield upstream
ping = GaugeMetricFamily('node_gw_ping_ms', 'last tested gateway ping in ms', labels=['nodeid'])
for node in GLOBAL_NODES['nodes']:
if 'gw_ping_ms' in node:
ping.add_metric([node['id']], node['gw_ping_ms'])
yield ping
# 'test_host': self.properties['test_host'],
# 'tested_when': self.properties['tested_when'],
rx_counter = CounterMetricFamily('node_rx_bytes', 'received bytes', labels=['nodeid'])
for node in GLOBAL_NODES['nodes']:
if 'rx_bytes' in node:
rx_counter.add_metric([node['id']], int(node['rx_bytes']))
yield rx_counter
tx_counter = CounterMetricFamily('node_tx_bytes', 'transmitted bytes', labels=['nodeid'])
for node in GLOBAL_NODES['nodes']:
if 'tx_bytes' in node:
tx_counter.add_metric([node['id']], int(node['tx_bytes']))
yield tx_counter
class AlfredParser:
"""
A class providing static methods to parse and validate data reported by
nodes via alfred.
"""
MAC_RE = "^([0-9a-f]{2}:){5}[0-9a-f]{2}$"
MAC_SCHEMA = {"type": "string", "pattern": MAC_RE}
ALFRED_NODE_SCHEMA = {
"type": "object",
"additionalProperties": True,
"properties": {
'downstream_mbps_wan': {"type": "number"},
'downstream_mbps_ff': {"type": "number"},
'gw_ping_ms': {"type": "number"},
'tested_when': {"type": "string", "maxLength": 50},
'rx_bytes': {"type": "number"},
'tx_bytes': {"type": "number"},
},
"definitions": {
"MAC": MAC_SCHEMA,
}
}
@staticmethod
def _parse_string(parse_it):
"""
Strip an escaped string which is enclosed in double quotes and
unescape.
"""
if parse_it[0] != '"' or parse_it[-1] != '"':
            raise ValueError("malformed string: {0!r}".format(parse_it))
return bytes(parse_it[1:-1], 'ascii').decode('unicode-escape')
@staticmethod
def parse_line(item, nodes=None):
"""
Parse and validate a line as returned by alfred.
        Such lines consist of a node's MAC address and an escaped string of JSON
encoded data. Note that most missing fields are populated with
reasonable defaults.
"""
# parse the strange output produced by alfred { MAC, JSON },
if nodes is None:
nodes = {}
if item[-2:] != "}," or item[0] != "{":
            raise ValueError("malformed line: {0}".format(item))
mac, properties = item[1:-2].split(',', 1)
# the first part must be a valid MAC
mac = AlfredParser._parse_string(mac.strip())
jsonschema.validate(mac, AlfredParser.MAC_SCHEMA)
# the second part must conform to ALFRED_NODE_SCHEMA
properties = AlfredParser._parse_string(properties.strip())
if "\x00" in properties:
decompress = zlib.decompressobj(zlib.MAX_WBITS | 32)
# ignores any output beyond 64k (protection from zip bombs)
properties = decompress.decompress(properties.encode('raw-unicode-escape'), 64 * 1024).decode('utf-8')
else:
properties = properties.encode('latin-1').decode('utf8')
properties = json.loads(properties)
jsonschema.validate(properties, AlfredParser.ALFRED_NODE_SCHEMA)
# set some defaults for unspecified fields
properties.setdefault('downstream_mbps_wan', 0)
properties.setdefault('downstream_mbps_ff', 0)
properties.setdefault('rx_bytes', 0)
properties.setdefault('tx_bytes', 0)
if mac in nodes:
# update existing node
node = nodes[mac]
node.update_properties(properties, True)
node.online = True
node.lastseen = NOW_TIMESTAMP
else:
# create a new Node
node = Node(mac, properties, True)
nodes[mac] = node
class Node:
"""
A node in the freifunk network, identified by its primary MAC.
"""
def __init__(self, mac, properties, online):
self.mac = mac
self.properties = properties
if online:
self.lastseen = NOW_TIMESTAMP
self.firstseen = NOW_TIMESTAMP
else:
self.lastseen = None
self.firstseen = None
self.online = online
self.index = None # the index of this node in the list produced for ffmap
self.done = False
def update_properties(self, properties, force=True):
"""
Replace any properties with their respective values in ``properties``.
"""
if force:
# discard all previous properties
self.properties = dict(properties)
if 'force' in self.properties:
del self.properties['force']
else:
# add new key/value pairs only if not already set
for key, value in properties.items():
if key not in self.properties:
if key == "force":
continue
if key == "name":
value = value + "*"
self.properties[key] = value
def nodelist(self):
"""
        Build this node's entry for the nodelist, including its exported properties.
"""
if 'downstream_mbps_wan' not in self.properties:
self.properties['downstream_mbps_wan'] = 0
if 'downstream_mbps_ff' not in self.properties:
self.properties['downstream_mbps_ff'] = 0
obj = {
'id': re.sub('[:]', '', self.mac),
'status': {
'online': self.online,
},
'downstream_mbps_wan': self.properties['downstream_mbps_wan'],
'downstream_mbps_ff': self.properties['downstream_mbps_ff'],
'tested_when': self.properties['tested_when'],
'rx_bytes': self.properties['rx_bytes'],
'tx_bytes': self.properties['tx_bytes'],
}
if 'gw_ping_ms' in self.properties:
obj['gw_ping_ms'] = self.properties['gw_ping_ms']
if self.firstseen:
obj['firstseen'] = self.firstseen.isoformat()
if self.lastseen:
obj['status']['lastcontact'] = self.lastseen.isoformat()
return obj
def render_nodelist(nodes):
"""
render a nodelist out of all nodes found
"""
all_nodes = []
for node in nodes.values():
all_nodes.append(node.nodelist())
return {
"version": "1.0.0",
"updated_at": NOW_TIMESTAMP.isoformat(),
'nodes': all_nodes,
}
def load_nodes(path):
"""
load nodes from storage file
"""
nodes = {}
with open(path, 'rb') as file:
nodes = pickle.load(file)
for node in nodes.values():
# reset old properties
node.online = False
node.index = None
node.clientcount = 0
return nodes
def save_nodes(path, nodes):
"""
save nodes to storage file
"""
with open(path, 'wb') as file:
pickle.dump(nodes, file)
def remove_old_nodes(nodes, delta):
"""
remove nodes older than a certain limit
"""
limit = NOW_TIMESTAMP - delta
old_keys = []
for key, node in nodes.items():
if node.lastseen < limit:
old_keys.append(key)
count = 0
for key in old_keys:
del nodes[key]
count += 1
print("Removed {} old nodes".format(count))
def is_file(path):
"""
    Check whether there is a regular file at the given path.
"""
return path and os.path.isfile(path)
def main():
"""
main function collecting data from input file/storage and serving prometheus data
"""
import argparse
parser = argparse.ArgumentParser(
description='Convert data received from alfred and provide them as prometheus-service')
parser.add_argument('-m', '--maps', default='robin.txt', help=r'input file containing data collected by alfred')
parser.add_argument('--storage', default='nodes_backup.bin',
help=r'store old data between calls e.g. to remember node lastseen values')
parser.add_argument('-p', '--port', default=8000, help=r'the port this service should listen to')
args = parser.parse_args()
# mac => node
nodes = {}
# load old nodes that we have stored from the last call of this script,
# that way we can show nodes that are offline
if is_file(args.storage):
nodes = load_nodes(args.storage)
remove_old_nodes(nodes, datetime.timedelta(days=7))
try:
with open(args.maps, 'r') as maps:
for line in maps.readlines():
try:
AlfredParser.parse_line(line.strip(), nodes)
                except Exception:
import traceback
# debug switch below
print(line)
traceback.print_exc()
continue
nodes_json = render_nodelist(nodes)
except IOError:
exit('File ' + args.maps + ' not accessible')
if args.storage:
save_nodes(args.storage, nodes)
global GLOBAL_NODES
GLOBAL_NODES = nodes_json
global PORT_NUMBER
try:
PORT_NUMBER = int(args.port)
except ValueError:
exit('Error: ' + args.port + ' is not a valid port-number')
if __name__ == '__main__':
main()
REGISTRY.register(CustomCollector())
# Start up the server to expose the metrics.
start_http_server(PORT_NUMBER)
# Generate some requests.
while True:
time.sleep(10)
main()
| roles/ffbsee-robin-exporter/files/robin_prometheus.py | 11,363 | A class providing static methods to parse and validate data reported by
nodes via alfred.
Data Collector for serving them in prometheus client
A node in the freifunk network, identified by its primary MAC.
Strip an escaped string which is enclosed in double quotes and
unescape.
collectors only function called collect. and it collects data
just check whether there is a file on given path
load nodes from storage file
main function collecting data from input file/storage and serving prometheus data
define/load the nodelist and the properties each single node has
Parse and validate a line as returned by alfred.
Such lines consist of a nodes MAC address and an escaped string of JSON
encoded data. Note that most missing fields are populated with
reasonable defaults.
remove nodes older than a certain limit
render a nodelist out of all nodes found
save nodes to storage file
Replace any properties with their respective values in ``properties``.
Convert data received from alfred (ffbi format) and serve them as prometheus python client
Typical call::
alfred -r 69 -u /var/run/alfred/alfred.sock > robin.txt
./robin_prometheus.py -m robin.txt
Dependencies:
prometheus_client -> pip3 install prometheus_client
License: CC BY 4.0
Author: Jonas Hess
Strongly Inspired by map-backend of Moritz Warning and Julian Rueth
!/usr/bin/env python3 'test_host': self.properties['test_host'], 'tested_when': self.properties['tested_when'], parse the strange output produced by alfred { MAC, JSON }, the first part must be a valid MAC the second part must conform to ALFRED_NODE_SCHEMA ignores any output beyond 64k (protection from zip bombs) set some defaults for unspecified fields update existing node create a new Node the index of this node in the list produced for ffmap discard all previous properties add new key/value pairs only if not already set reset old properties mac => node load old nodes that we have stored from the last call of this script, that way we can show nodes that are offline debug switch below Start up the server to expose the metrics. Generate some requests. | 2,098 | en | 0.811136 |
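parse_line in the file above guards against zip bombs by capping decompressed output at 64 KiB through the `max_length` argument of `Decompress.decompress`. A stand-alone sketch of that guard; `decompress_capped` is an illustrative name, not part of the original script:

```python
import zlib

def decompress_capped(payload: bytes, max_bytes: int = 64 * 1024) -> str:
    # wbits = MAX_WBITS | 32 auto-detects a zlib or gzip header
    decomp = zlib.decompressobj(zlib.MAX_WBITS | 32)
    # decompress() never returns more than max_bytes, so a zip bomb
    # cannot exhaust memory; the remainder stays in decomp.unconsumed_tail
    return decomp.decompress(payload, max_bytes).decode("utf-8")

compressed = zlib.compress(b"a" * 200_000)
print(len(decompress_capped(compressed)))  # capped at 65536
```

Anything beyond the cap is simply dropped here, which matches the "ignores any output beyond 64k" comment in the file above.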
"""
Aaron Harkrider
November 8th, 2018
Build a trojan horse that gains access to a file on cyber.anderson.edu.
The file under attack is the grade book for this assignment.
Sneaky sneaky
"""
import os
def test_mytest():
"""
    When Dr. Tarplee runs py.test on the assignment, this py file will run and insert my entry into the grade book.
"""
# the entry I want to insert into the grade book
me = "Aaron Harkrider,19\n"
# path to the grade book file on cyber it is /home/kmtarplee2/grades.csv
# Note: to test it in pycharm use Trojan_Horse_Lab/home/kmtarplee2/grades.csv
path = "Trojan_Horse_Lab/home/kmtarplee2/grades.csv"
# Trojan_Horse_Lab / trojan_horse / home / kmtarplee2 / grades.csv
# Boolean to check if I have already inserted my entry into the file to avoid placing a duplicate
complete = True
with open(path, "r") as reading_grades:
if me not in reading_grades.read():
complete = False
# If This is not an entry for me then append my entry to the file
if not complete:
with open(path, "a+") as grades:
grades.write(me)
# piping the cat out from the grade book into a temp file where I can look at it
os.system("cat " + path + " > /tmp/awharkrider_files")
# Standard boilerplate to call the main function, if executed
if __name__ == '__main__':
test_mytest()
| Trojan_Horse_Lab/awharkrider_test.py | 1,387 | When Dr. Tarplee run's py.test on the assignment this py file will run and insert my entry into the grade book.
Aaron Harkrider
November 8th, 2018
Build a trojan horse that gains access to a file on cyber.anderson.edu.
The file under attack is the grade book for this assignment.
Sneaky sneaky
the entry I want to insert into the grade book path to the grade book file on cyber it is /home/kmtarplee2/grades.csv Note: to test it in pycharm use Trojan_Horse_Lab/home/kmtarplee2/grades.csv Trojan_Horse_Lab / trojan_horse / home / kmtarplee2 / grades.csv Boolean to check if I have already inserted my entry into the file to avoid placing a duplicate If This is not an entry for me then append my entry to the file piping the cat out from the grade book into a temp file where I can look at it Standard boilerplate to call the main function, if executed | 855 | en | 0.841376 |
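The file above uses a check-before-append pattern (read the file, write only if the entry is missing) to stay idempotent across repeated test runs. That pattern generalizes to a small helper; `append_once` is a hypothetical name used only for this sketch:

```python
import os
import tempfile

def append_once(path: str, entry: str) -> bool:
    """Append entry to path only if it is not already present; True if written."""
    if os.path.isfile(path):
        with open(path, "r") as fh:
            if entry in fh.read():
                return False  # already there: nothing to do
    with open(path, "a") as fh:
        fh.write(entry)
    return True

demo = os.path.join(tempfile.mkdtemp(), "grades.csv")
print(append_once(demo, "Aaron Harkrider,19\n"))  # True
print(append_once(demo, "Aaron Harkrider,19\n"))  # False
```

Calling it twice with the same entry writes once, which is why re-running the test suite does not duplicate the grade-book row.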
# SPDX-License-Identifier: Apache-2.0
# Copyright 2021 Blue Cheetah Analog Design Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import pytest
import pybind11_generics_tests.cpp as pyg_test
from pybind11_generics_tests.cpp import Animal, ListHolder, get_list
from .util import do_constructor_test, do_doc_test, do_error_test
class Dog(Animal):
def __init__(self, name):
Animal.__init__(self, name)
def go(self, n_times):
raise NotImplementedError("Not implemented")
class Husky(Dog):
def __init__(self, name):
Dog.__init__(self, name)
def go(self, n_times):
return "woof " * n_times
class ChildList(pyg_test.TestList):
def __init__(self, vec1, vec2):
pyg_test.TestList.__init__(self, vec1)
self._list2 = vec2
def get_data(self):
return self._list2
def get_data_base(self):
return pyg_test.TestList.get_data(self)
test_data = [
(pyg_test.TestList, []),
(pyg_test.TestList, [1, 3, 5, 7, 6]),
(pyg_test.TestList, [2, 4, 8]),
(pyg_test.TestList, [13]),
]
fail_data = [
(pyg_test.TestList, TypeError, [1, 2, 3.5]),
]
doc_data = [
(pyg_test.TestList, "List[int]"),
]
@pytest.mark.parametrize(("cls", "data"), test_data)
def test_constructor(cls, data):
"""Check object is constructed properly."""
do_constructor_test(cls, data)
@pytest.mark.parametrize(("cls", "err", "data"), fail_data)
def test_error(cls, err, data):
"""Check object errors when input has wrong data type."""
do_error_test(cls, err, data)
@pytest.mark.parametrize(("cls", "type_str"), doc_data)
def test_doc(cls, type_str):
"""Check object has correct doc string."""
do_doc_test(cls, type_str)
def test_inheritance():
"""Test inheritance behavior."""
vec1 = [1, 2, 3, 4]
vec2 = [5, 6, 7]
obj = ChildList(vec1, vec2)
assert obj.get_data() == vec2
assert obj.get_data_base() == vec1
assert get_list(obj) == vec1
holder = ListHolder(obj)
obj_ref = holder.get_obj_ref()
obj_ptr = holder.get_obj_ptr()
assert obj_ref is obj
assert obj_ptr is obj
assert isinstance(obj_ref, ChildList)
def test_virtual():
"""Test overriding virtual methods from python."""
prime = Animal("Prime")
dog = Dog("Doggo")
lily = Husky("Lily")
assert prime.go(1) == ""
assert lily.go(2) == "woof woof "
assert prime.command(2) == "Prime: "
assert lily.command(3) == "Lily: woof woof woof "
with pytest.raises(NotImplementedError):
dog.go(3)
with pytest.raises(NotImplementedError):
dog.command(2)
| tests/test_list.py | 3,114 | Check object is constructed properly.
Check object has correct doc string.
Check object errors when input has wrong data type.
Test inheritance behavior.
Test overriding virtual methods from python.
SPDX-License-Identifier: Apache-2.0 Copyright 2021 Blue Cheetah Analog Design Inc. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. | 805 | en | 0.823169 |
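The C++ `Animal` class is visible here only through its tests. Judging from the assertions above (`prime.command(2) == "Prime: "`, `lily.command(3) == "Lily: woof woof woof "`), `command()` concatenates the name, a colon, and the result of the overridable `go()`. A pure-Python sketch of that inferred contract, not the actual pybind11 binding:

```python
class Animal:
    def __init__(self, name):
        self.name = name

    def go(self, n_times):
        # base implementation stays silent
        return ""

    def command(self, n_times):
        # non-virtual helper that dispatches through the (possibly overridden) go()
        return "%s: %s" % (self.name, self.go(n_times))

class Husky(Animal):
    def go(self, n_times):
        return "woof " * n_times

print(Husky("Lily").command(3))  # -> "Lily: woof woof woof "
```

The point of the C++ trampoline machinery is exactly this shape of dispatch: a base-class method calls a virtual method that a Python subclass may have overridden.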
##############################################################################
# Copyright (c) 2013-2018, Lawrence Livermore National Security, LLC.
# Produced at the Lawrence Livermore National Laboratory.
#
# This file is part of Spack.
# Created by Todd Gamblin, tgamblin@llnl.gov, All rights reserved.
# LLNL-CODE-647188
#
# For details, see https://github.com/spack/spack
# Please also see the NOTICE and LICENSE files for our notice and the LGPL.
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU Lesser General Public License (as
# published by the Free Software Foundation) version 2.1, February 1999.
#
# This program is distributed in the hope that it will be useful, but
# WITHOUT ANY WARRANTY; without even the IMPLIED WARRANTY OF
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the terms and
# conditions of the GNU Lesser General Public License for more details.
#
# You should have received a copy of the GNU Lesser General Public
# License along with this program; if not, write to the Free Software
# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
##############################################################################
from spack import *
class Xsetmode(AutotoolsPackage):
"""Set the mode for an X Input device."""
homepage = "http://cgit.freedesktop.org/xorg/app/xsetmode"
url = "https://www.x.org/archive/individual/app/xsetmode-1.0.0.tar.gz"
version('1.0.0', '0dc2a917138d0345c00e016ac720e085')
depends_on('libxi')
depends_on('libx11')
depends_on('pkgconfig', type='build')
depends_on('util-macros', type='build')
| package/spack-xsetmode/package.py | 1,676 | Set the mode for an X Input device.
Copyright (c) 2013-2018, Lawrence Livermore National Security, LLC. Produced at the Lawrence Livermore National Laboratory. This file is part of Spack. Created by Todd Gamblin, tgamblin@llnl.gov, All rights reserved. LLNL-CODE-647188 For details, see https://github.com/spack/spack Please also see the NOTICE and LICENSE files for our notice and the LGPL. This program is free software; you can redistribute it and/or modify it under the terms of the GNU Lesser General Public License (as published by the Free Software Foundation) version 2.1, February 1999. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the IMPLIED WARRANTY OF MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the terms and conditions of the GNU Lesser General Public License for more details. You should have received a copy of the GNU Lesser General Public License along with this program; if not, write to the Free Software Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA | 1,070 | en | 0.891682 |
# Copyright 2022, Kay Hayen, mailto:kay.hayen@gmail.com
#
# Part of "Nuitka", an optimizing Python compiler that is compatible and
# integrates with CPython, but also works on its own.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
""" Tools for tracing memory usage at compiled time.
"""
from nuitka.Tracing import printLine
from .Utils import isMacOS, isWin32Windows
def getOwnProcessMemoryUsage():
"""Memory usage of own process in bytes."""
if isWin32Windows():
# adapted from http://code.activestate.com/recipes/578513
import ctypes.wintypes
# Lets allow this to match Windows API it reflects,
# pylint: disable=invalid-name
class PROCESS_MEMORY_COUNTERS_EX(ctypes.Structure):
_fields_ = [
("cb", ctypes.wintypes.DWORD),
("PageFaultCount", ctypes.wintypes.DWORD),
("PeakWorkingSetSize", ctypes.c_size_t),
("WorkingSetSize", ctypes.c_size_t),
("QuotaPeakPagedPoolUsage", ctypes.c_size_t),
("QuotaPagedPoolUsage", ctypes.c_size_t),
("QuotaPeakNonPagedPoolUsage", ctypes.c_size_t),
("QuotaNonPagedPoolUsage", ctypes.c_size_t),
("PagefileUsage", ctypes.c_size_t),
("PeakPagefileUsage", ctypes.c_size_t),
("PrivateUsage", ctypes.c_size_t),
]
GetProcessMemoryInfo = ctypes.windll.psapi.GetProcessMemoryInfo
GetProcessMemoryInfo.argtypes = [
ctypes.wintypes.HANDLE,
ctypes.POINTER(PROCESS_MEMORY_COUNTERS_EX),
ctypes.wintypes.DWORD,
]
GetProcessMemoryInfo.restype = ctypes.wintypes.BOOL
counters = PROCESS_MEMORY_COUNTERS_EX()
rv = GetProcessMemoryInfo(
ctypes.windll.kernel32.GetCurrentProcess(),
ctypes.byref(counters),
ctypes.sizeof(counters),
)
if not rv:
raise ctypes.WinError()
return counters.PrivateUsage
else:
import resource # Posix only code, pylint: disable=I0021,import-error
# The value is from "getrusage", which has OS dependent scaling, at least
# macOS and Linux are different. Others maybe too.
if isMacOS():
factor = 1
else:
factor = 1024
return resource.getrusage(resource.RUSAGE_SELF).ru_maxrss * factor
def getHumanReadableProcessMemoryUsage(value=None):
if value is None:
value = getOwnProcessMemoryUsage()
    if abs(value) < 1024 * 1024:
        return "%.2f KB (%d bytes)" % (value / 1024.0, value)
    elif abs(value) < 1024 * 1024 * 1024:
        return "%.2f MB (%d bytes)" % (value / (1024 * 1024.0), value)
    elif abs(value) < 1024 * 1024 * 1024 * 1024:
        return "%.2f GB (%d bytes)" % (value / (1024 * 1024 * 1024.0), value)
else:
return "%d bytes" % value
class MemoryWatch(object):
def __init__(self):
self.start = getOwnProcessMemoryUsage()
self.stop = None
def finish(self):
self.stop = getOwnProcessMemoryUsage()
def asStr(self):
return getHumanReadableProcessMemoryUsage(self.stop - self.start)
def startMemoryTracing():
try:
import tracemalloc
except ImportError:
pass
else:
tracemalloc.start()
def showMemoryTrace():
try:
import tracemalloc
except ImportError:
pass
else:
snapshot = tracemalloc.take_snapshot()
stats = snapshot.statistics("lineno")
printLine("Top 50 memory allocations:")
for count, stat in enumerate(stats):
if count == 50:
break
printLine(stat)
| nuitka/utils/MemoryUsage.py | 4,269 | Memory usage of own process in bytes.
Tools for tracing memory usage at compiled time.
Copyright 2022, Kay Hayen, mailto:kay.hayen@gmail.com Part of "Nuitka", an optimizing Python compiler that is compatible and integrates with CPython, but also works on its own. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. adapted from http://code.activestate.com/recipes/578513 Lets allow this to match Windows API it reflects, pylint: disable=invalid-name Posix only code, pylint: disable=I0021,import-error The value is from "getrusage", which has OS dependent scaling, at least macOS and Linux are different. Others maybe too. | 1,142 | en | 0.88551 |
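getHumanReadableProcessMemoryUsage above is a chain of 1024-based thresholds. A compact stand-alone sketch of the same formatting logic, with the thresholds written as clean powers of 1024; `human_bytes` is an illustrative name:

```python
def human_bytes(value: int) -> str:
    # thresholds are powers of 1024 (KB, MB, GB)
    if abs(value) < 1024 ** 2:
        return "%.2f KB (%d bytes)" % (value / 1024.0, value)
    if abs(value) < 1024 ** 3:
        return "%.2f MB (%d bytes)" % (value / 1024.0 ** 2, value)
    if abs(value) < 1024 ** 4:
        return "%.2f GB (%d bytes)" % (value / 1024.0 ** 3, value)
    return "%d bytes" % value

print(human_bytes(2048))  # 2.00 KB (2048 bytes)
```

Like the original, anything under a mebibyte is reported in KB, so very small values come out as fractions of a KB rather than plain bytes.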
# -*- coding: utf-8 -*-
# Generated by Django 1.10.8 on 2018-04-30 17:09
from __future__ import unicode_literals
from django.db import migrations, models
import django.db.models.deletion
class Migration(migrations.Migration):
dependencies = [
('codenerix_storages', '0025_auto_20180426_1035'),
]
operations = [
migrations.RemoveField(
model_name='lineoutgoingalbaran',
name='prepare_user',
),
migrations.AddField(
model_name='outgoingalbaran',
name='inventory',
field=models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.PROTECT, related_name='outgoing_albarans', to='codenerix_storages.InventoryOut', verbose_name='Inventory'),
),
migrations.AlterField(
model_name='outgoingalbaran',
name='request_stock',
field=models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.CASCADE, related_name='outgoing_albarans', to='codenerix_storages.RequestStock', verbose_name='Request stock'),
),
]
| codenerix_storages/migrations/0026_auto_20180430_1909.py | 1,104 | -*- coding: utf-8 -*- Generated by Django 1.10.8 on 2018-04-30 17:09 | 68 | en | 0.639366 |
# -*- coding: utf-8 -*-
# File generated according to Generator/ClassesRef/Machine/Magnet.csv
# WARNING! All changes made in this file will be lost!
"""Method code available at https://github.com/Eomys/pyleecan/tree/master/pyleecan/Methods/Machine/Magnet
"""
from os import linesep
from sys import getsizeof
from logging import getLogger
from ._check import check_var, raise_
from ..Functions.get_logger import get_logger
from ..Functions.save import save
from ..Functions.copy import copy
from ..Functions.load import load_init_dict
from ..Functions.Load.import_class import import_class
from ._frozen import FrozenClass
from ._check import InitUnKnowClassError
from .Material import Material
class Magnet(FrozenClass):
"""Magnet class"""
VERSION = 1
# save and copy methods are available in all object
save = save
copy = copy
# get_logger method is available in all object
get_logger = get_logger
def __init__(
self,
mat_type=-1,
type_magnetization=0,
Lmag=0.95,
init_dict=None,
init_str=None,
):
        """Constructor of the class. Can be used in three ways:
- __init__ (arg1 = 1, arg3 = 5) every parameters have name and default values
for pyleecan type, -1 will call the default constructor
- __init__ (init_dict = d) d must be a dictionary with property names as keys
- __init__ (init_str = s) s must be a string
s is the file path to load
ndarray or list can be given for Vector and Matrix
object or dict can be given for pyleecan Object"""
if init_str is not None: # Load from a file
init_dict = load_init_dict(init_str)[1]
if init_dict is not None: # Initialisation by dict
assert type(init_dict) is dict
# Overwrite default value with init_dict content
if "mat_type" in list(init_dict.keys()):
mat_type = init_dict["mat_type"]
if "type_magnetization" in list(init_dict.keys()):
type_magnetization = init_dict["type_magnetization"]
if "Lmag" in list(init_dict.keys()):
Lmag = init_dict["Lmag"]
        # Set the properties (value check and conversion are done in setter)
self.parent = None
self.mat_type = mat_type
self.type_magnetization = type_magnetization
self.Lmag = Lmag
# The class is frozen, for now it's impossible to add new properties
self._freeze()
def __str__(self):
        """Convert this object into a readable string (for print)"""
Magnet_str = ""
if self.parent is None:
Magnet_str += "parent = None " + linesep
else:
Magnet_str += "parent = " + str(type(self.parent)) + " object" + linesep
if self.mat_type is not None:
tmp = self.mat_type.__str__().replace(linesep, linesep + "\t").rstrip("\t")
Magnet_str += "mat_type = " + tmp
else:
Magnet_str += "mat_type = None" + linesep + linesep
Magnet_str += "type_magnetization = " + str(self.type_magnetization) + linesep
Magnet_str += "Lmag = " + str(self.Lmag) + linesep
return Magnet_str
def __eq__(self, other):
"""Compare two objects (skip parent)"""
if type(other) != type(self):
return False
if other.mat_type != self.mat_type:
return False
if other.type_magnetization != self.type_magnetization:
return False
if other.Lmag != self.Lmag:
return False
return True
def compare(self, other, name="self", ignore_list=None):
"""Compare two objects and return list of differences"""
if ignore_list is None:
ignore_list = list()
if type(other) != type(self):
return ["type(" + name + ")"]
diff_list = list()
if (other.mat_type is None and self.mat_type is not None) or (
other.mat_type is not None and self.mat_type is None
):
diff_list.append(name + ".mat_type None mismatch")
elif self.mat_type is not None:
diff_list.extend(
self.mat_type.compare(other.mat_type, name=name + ".mat_type")
)
if other._type_magnetization != self._type_magnetization:
diff_list.append(name + ".type_magnetization")
if other._Lmag != self._Lmag:
diff_list.append(name + ".Lmag")
# Filter ignore differences
diff_list = list(filter(lambda x: x not in ignore_list, diff_list))
return diff_list
def __sizeof__(self):
"""Return the size in memory of the object (including all subobject)"""
S = 0 # Full size of the object
S += getsizeof(self.mat_type)
S += getsizeof(self.type_magnetization)
S += getsizeof(self.Lmag)
return S
def as_dict(self, **kwargs):
"""
        Convert this object into a json serializable dict (can be used in __init__).
Optional keyword input parameter is for internal use only
and may prevent json serializability.
"""
Magnet_dict = dict()
if self.mat_type is None:
Magnet_dict["mat_type"] = None
else:
Magnet_dict["mat_type"] = self.mat_type.as_dict(**kwargs)
Magnet_dict["type_magnetization"] = self.type_magnetization
Magnet_dict["Lmag"] = self.Lmag
# The class name is added to the dict for deserialisation purpose
Magnet_dict["__class__"] = "Magnet"
return Magnet_dict
def _set_None(self):
"""Set all the properties to None (except pyleecan object)"""
if self.mat_type is not None:
self.mat_type._set_None()
self.type_magnetization = None
self.Lmag = None
def _get_mat_type(self):
"""getter of mat_type"""
return self._mat_type
def _set_mat_type(self, value):
"""setter of mat_type"""
if isinstance(value, str): # Load from file
value = load_init_dict(value)[1]
if isinstance(value, dict) and "__class__" in value:
class_obj = import_class(
"pyleecan.Classes", value.get("__class__"), "mat_type"
)
value = class_obj(init_dict=value)
elif type(value) is int and value == -1: # Default constructor
value = Material()
check_var("mat_type", value, "Material")
self._mat_type = value
if self._mat_type is not None:
self._mat_type.parent = self
mat_type = property(
fget=_get_mat_type,
fset=_set_mat_type,
doc=u"""The Magnet material
:Type: Material
""",
)
def _get_type_magnetization(self):
"""getter of type_magnetization"""
return self._type_magnetization
def _set_type_magnetization(self, value):
"""setter of type_magnetization"""
check_var("type_magnetization", value, "int", Vmin=0, Vmax=3)
self._type_magnetization = value
type_magnetization = property(
fget=_get_type_magnetization,
fset=_set_type_magnetization,
        doc=u"""Permanent magnet magnetization type: 0 for radial, 1 for parallel, 2 for Halbach, 3 for tangential
:Type: int
:min: 0
:max: 3
""",
)
def _get_Lmag(self):
"""getter of Lmag"""
return self._Lmag
def _set_Lmag(self, value):
"""setter of Lmag"""
check_var("Lmag", value, "float", Vmin=0)
self._Lmag = value
Lmag = property(
fget=_get_Lmag,
fset=_set_Lmag,
doc=u"""Magnet axial length
:Type: float
:min: 0
""",
)
| pyleecan/Classes/Magnet.py | 7,772 | Magnet class
Compare two objects (skip parent)
Constructor of the class. Can be use in three ways :
- __init__ (arg1 = 1, arg3 = 5) every parameters have name and default values
for pyleecan type, -1 will call the default constructor
- __init__ (init_dict = d) d must be a dictionary with property names as keys
- __init__ (init_str = s) s must be a string
s is the file path to load
ndarray or list can be given for Vector and Matrix
object or dict can be given for pyleecan Object
Return the size in memory of the object (including all subobject)
Convert this object in a readeable string (for print)
getter of Lmag
getter of mat_type
getter of type_magnetization
setter of Lmag
Set all the properties to None (except pyleecan object)
setter of mat_type
setter of type_magnetization
Convert this object in a json serializable dict (can be use in __init__).
Optional keyword input parameter is for internal use only
and may prevent json serializability.
Compare two objects and return list of differences
Method code available at https://github.com/Eomys/pyleecan/tree/master/pyleecan/Methods/Machine/Magnet
-*- coding: utf-8 -*- File generated according to Generator/ClassesRef/Machine/Magnet.csv WARNING! All changes made in this file will be lost! save and copy methods are available in all object get_logger method is available in all object Load from a file Initialisation by dict Overwrite default value with init_dict content Set the properties (value check and convertion are done in setter) The class is frozen, for now it's impossible to add new properties Filter ignore differences Full size of the object The class name is added to the dict for deserialisation purpose Load from file Default constructor | 1,723 | en | 0.718916 |
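Every pyleecan field above follows the same shape: a private attribute, a `_get_*`/`_set_*` pair in which `check_var` enforces type and bounds, and a `property` tying them together. A minimal stand-alone sketch of that shape, with hand-rolled validation in place of `check_var`; `MagnetSketch` is a hypothetical name, not pyleecan API:

```python
class MagnetSketch:
    def _get_Lmag(self):
        """getter of Lmag"""
        return self._Lmag

    def _set_Lmag(self, value):
        """setter of Lmag: type- and range-check before storing"""
        if not isinstance(value, (int, float)):
            raise TypeError("Lmag must be a number, got %r" % (value,))
        if value < 0:
            raise ValueError("Lmag must be >= 0, got %r" % (value,))
        self._Lmag = float(value)

    Lmag = property(fget=_get_Lmag, fset=_set_Lmag, doc="Magnet axial length, min 0")

    def __init__(self, Lmag=0.95):
        self.Lmag = Lmag  # goes through the validating setter
```

Routing `__init__` through the property means the constructor and later assignments share one validation path, which is the design the generated pyleecan classes rely on.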
"""Module containing a preprocessor that removes the outputs from code cells"""
# Copyright (c) IPython Development Team.
# Distributed under the terms of the Modified BSD License.
import os
from textwrap import dedent
try:
from queue import Empty # Py 3
except ImportError:
from Queue import Empty # Py 2
from traitlets import List, Unicode, Bool
from nbformat.v4 import output_from_msg
from .base import Preprocessor
from ..utils.exceptions import ConversionException
from traitlets import Integer
class CellExecutionError(ConversionException):
"""
Custom exception to propagate exceptions that are raised during
notebook execution to the caller. This is mostly useful when
using nbconvert as a library, since it allows to deal with
failures gracefully.
"""
def __init__(self, traceback):
self.traceback = traceback
class ExecutePreprocessor(Preprocessor):
"""
Executes all the cells in a notebook
"""
timeout = Integer(30, config=True,
help="The time to wait (in seconds) for output from executions."
)
interrupt_on_timeout = Bool(
False, config=True,
help=dedent(
"""
If execution of a cell times out, interrupt the kernel and
continue executing other cells rather than throwing an error and
stopping.
"""
)
)
allow_errors = Bool(
False, config=True,
help=dedent(
"""
If `True`, a `CellExecutionError` is raised if any of the notebook
cells raises an exception during execution. Otherwise, execution
is continued and the output from the exception is included in the
cell output.
"""
)
)
extra_arguments = List(Unicode())
def preprocess(self, nb, resources):
path = resources.get('metadata', {}).get('path', '')
if path == '':
path = None
from jupyter_client.manager import start_new_kernel
kernel_name = nb.metadata.get('kernelspec', {}).get('name', 'python')
self.log.info("Executing notebook with kernel: %s" % kernel_name)
self.km, self.kc = start_new_kernel(
kernel_name=kernel_name,
extra_arguments=self.extra_arguments,
stderr=open(os.devnull, 'w'),
cwd=path)
self.kc.allow_stdin = False
try:
nb, resources = super(ExecutePreprocessor, self).preprocess(nb, resources)
finally:
self.kc.stop_channels()
self.km.shutdown_kernel(now=True)
return nb, resources
def preprocess_cell(self, cell, resources, cell_index):
"""
Apply a transformation on each code cell. See base.py for details.
"""
if cell.cell_type != 'code':
return cell, resources
outputs = self.run_cell(cell)
cell.outputs = outputs
if not self.allow_errors:
for out in outputs:
if out.output_type == 'error':
pattern = """\
An error occurred while executing the following cell:
------------------
{cell.source}
------------------
{out.ename}: {out.evalue}
"""
msg = dedent(pattern).format(out=out, cell=cell)
raise CellExecutionError(msg)
return cell, resources
def run_cell(self, cell):
msg_id = self.kc.execute(cell.source)
self.log.debug("Executing cell:\n%s", cell.source)
# wait for finish, with timeout
while True:
try:
msg = self.kc.shell_channel.get_msg(timeout=self.timeout)
except Empty:
self.log.error("""Timeout waiting for execute reply (%is).
If your cell should take longer than this, you can increase the timeout with:
c.ExecutePreprocessor.timeout = SECONDS
in jupyter_nbconvert_config.py
""" % self.timeout)
if self.interrupt_on_timeout:
self.log.error("Interrupting kernel")
self.km.interrupt_kernel()
break
else:
try:
exception = TimeoutError
except NameError:
exception = RuntimeError
raise exception("Cell execution timed out, see log"
" for details.")
if msg['parent_header'].get('msg_id') == msg_id:
break
else:
# not our reply
continue
outs = []
while True:
try:
msg = self.kc.iopub_channel.get_msg(timeout=self.timeout)
except Empty:
self.log.warn("Timeout waiting for IOPub output")
break
if msg['parent_header'].get('msg_id') != msg_id:
# not an output from our execution
continue
msg_type = msg['msg_type']
self.log.debug("output: %s", msg_type)
content = msg['content']
# set the prompt number for the input and the output
if 'execution_count' in content:
cell['execution_count'] = content['execution_count']
if msg_type == 'status':
if content['execution_state'] == 'idle':
break
else:
continue
elif msg_type == 'execute_input':
continue
elif msg_type == 'clear_output':
outs = []
continue
elif msg_type.startswith('comm'):
continue
try:
out = output_from_msg(msg)
except ValueError:
self.log.error("unhandled iopub msg: " + msg_type)
else:
outs.append(out)
return outs
| env/lib/python2.7/site-packages/nbconvert/preprocessors/execute.py | 6,122 | Custom exception to propagate exceptions that are raised during
notebook execution to the caller. This is mostly useful when
using nbconvert as a library, since it allows to deal with
failures gracefully.
Executes all the cells in a notebook
Apply a transformation on each code cell. See base.py for details.
Module containing a preprocessor that executes the code cells and updates outputs
Copyright (c) IPython Development Team. Distributed under the terms of the Modified BSD License. Py 3 Py 2 wait for finish, with timeout not our reply not an output from our execution set the prompt number for the input and the output | 619 | en | 0.89555 |
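The iopub loop in `run_cell` above filters messages by parent `msg_id`, skips `execute_input` and `comm*` messages, resets collected outputs on `clear_output`, and stops when the kernel reports an `idle` status. A minimal stdlib sketch of that filtering logic over plain dicts (the message shapes are simplified; real Jupyter messages carry more fields):

```python
def drain_iopub(messages, msg_id):
    """Collect outputs for one execution, mirroring run_cell's iopub loop.

    `messages` is a list of dicts shaped like Jupyter iopub messages:
    {'parent_header': {'msg_id': ...}, 'msg_type': ..., 'content': {...}}.
    """
    outs = []
    for msg in messages:
        if msg['parent_header'].get('msg_id') != msg_id:
            continue  # output from some other execution
        msg_type = msg['msg_type']
        if msg_type == 'status':
            if msg['content']['execution_state'] == 'idle':
                break  # kernel finished this cell
            continue
        if msg_type == 'execute_input' or msg_type.startswith('comm'):
            continue
        if msg_type == 'clear_output':
            outs = []  # discard everything collected so far
            continue
        outs.append(msg['content'])
    return outs
```

This keeps only outputs produced after the last `clear_output`, which is why interleaved messages from other executions never leak into a cell's outputs.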
from rqalpha.interface import AbstractMod
from rqalpha.apis import *
from rqalpha.events import EVENT
from collections import defaultdict
import datetime
import os
class ForceClose(AbstractMod):
def __init__(self):
self._log_dir = None
self._log_file = defaultdict(lambda: None)
self._force_close_time = []
def start_up(self, env, mod_config):
for timespan in mod_config.force_close_time:
v = timespan.split('-')
assert len(v) == 2, "%s invalid" % mod_config.force_close_time
start_time_v = v[0].split(':')
end_time_v = v[1].split(':')
assert len(start_time_v) == 2, "%s invalid" % mod_config.force_close_time
assert len(end_time_v) == 2, "%s invalid" % mod_config.force_close_time
self._force_close_time.append({'start': {'hour': int(start_time_v[0]), 'minute': int(start_time_v[1])},
'end': {'hour': int(end_time_v[0]), 'minute': int(end_time_v[1])}})
if "log_dir" in mod_config.keys():
self._log_dir = mod_config.log_dir
if os.path.exists(self._log_dir) is False:
os.makedirs(self._log_dir)
# env.event_bus.add_listener(EVENT.BAR, self._check_force_close)
env.event_bus.prepend_listener(EVENT.BAR, self._check_force_close)
def tear_down(self, success, exception=None):
for f in self._log_file.values():
if f:
f.close()
def _check_force_close(self, event):
contract_list = list(event.bar_dict.keys())
for contract in contract_list:
event.bar_dict[contract].force_close = False
cur_time = event.calendar_dt
force_close = False
for ft in self._force_close_time:
start_time = cur_time.replace(hour=ft['start']['hour'], minute=ft['start']['minute'])
end_time = cur_time.replace(hour=ft['end']['hour'], minute=ft['end']['minute'])
if start_time <= cur_time <= end_time:
force_close = True
break
if force_close:
contract_list = list(event.bar_dict.keys())
for contract in contract_list:
long_positions = get_position(contract, POSITION_DIRECTION.LONG)
short_positions = get_position(contract, POSITION_DIRECTION.SHORT)
if long_positions.quantity == 0 and short_positions.quantity == 0:
continue
# order_to(contract, 0)
event.bar_dict[contract].force_close = True
if not self._log_dir:
continue
if not self._log_file[contract]:
path = os.path.join(self._log_dir, contract + '_force_close.csv')
self._log_file[contract] = open(path, 'w')
msg = "%s,%s" % (str(cur_time), "FORCE_CLOSE")
self._log_file[contract].write(msg + "\n")
# return True
return
# print("call _calc_flow")
# if event.bar_dict._frequency != "1m":
# return
# if len(self._kline_bar) < self._kline_bar_cnt:
# self._kline_bar.append(event.)
| rqalpha/mod/rqalpha_mod_force_close/mod.py | 3,233 | env.event_bus.add_listener(EVENT.BAR, self._check_force_close) order_to(contract, 0) return True print("call _calc_flow") if event.bar_dict._frequency != "1m": return if len(self._kline_bar) < self._kline_bar_cnt: self._kline_bar.append(event.) | 252 | en | 0.067347 |
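`start_up` above parses each `HH:MM-HH:MM` entry of `force_close_time`, and `_check_force_close` tests whether the bar's timestamp falls inside any configured span. A standalone sketch of that logic (hypothetical helper names, stdlib only):

```python
import datetime

def parse_timespans(spans):
    """Parse 'HH:MM-HH:MM' strings into (start, end) datetime.time pairs."""
    parsed = []
    for span in spans:
        start_s, end_s = span.split('-')
        sh, sm = (int(v) for v in start_s.split(':'))
        eh, em = (int(v) for v in end_s.split(':'))
        parsed.append((datetime.time(sh, sm), datetime.time(eh, em)))
    return parsed

def in_force_close(now, spans):
    """True if `now` (a datetime) falls inside any configured span."""
    t = now.time()
    return any(start <= t <= end for start, end in parse_timespans(spans))
```

Like the mod, the bounds are inclusive on both ends; a span such as `"14:55-15:00"` covers 14:55:00 through 15:00:59 at minute granularity.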
#! /usr/bin/env python
# -*- coding: utf-8 -*-
# vim:fenc=utf-8
#
# Copyright © 2018 damian <damian@C-DZ-E5500>
#
# Distributed under terms of the MIT license.
"""
"""
import subprocess
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker
import yaml
def read_file(filename):
    with open(filename) as f:
        data = yaml.load(f.read())  # deliberately unsafe: yaml.load without a Loader (bandit B506)
        return data
def run_command(cmd):
return subprocess.check_call(cmd, shell=True)
db = create_engine('sqlite:///somedatabase')
Session = sessionmaker(bind=db)
def get_user(uid):
session = Session()
query = "select * from user where id='%s'" % uid
return session.execute(query)
| bandit/insecureCode.py | 653 | ! /usr/bin/env python -*- coding: utf-8 -*- vim:fenc=utf-8 Copyright © 2018 damian <damian@C-DZ-E5500> Distributed under terms of the MIT license. | 146 | en | 0.500052 |
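The file above is deliberately insecure — it is a test target for bandit (`shell=True`, string-interpolated SQL, `yaml.load` without a Loader). For contrast, a hedged sketch of the safe stdlib counterparts; for the YAML case the analogous fix would be `yaml.safe_load`, omitted here since PyYAML is not stdlib:

```python
import sqlite3
import subprocess

def get_user_safe(conn, uid):
    # Parameterized query: the driver escapes `uid`, unlike the "%s" interpolation above.
    return conn.execute("SELECT * FROM user WHERE id = ?", (uid,)).fetchall()

def run_command_safe(args):
    # Argument list + shell=False: no shell metacharacter injection is possible.
    return subprocess.check_call(args, shell=False)
```

With the parameterized form, a classic payload like `1' OR '1'='1` is matched literally against the `id` column instead of rewriting the query.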
# Python program to reverse a linked list
# Time Complexity : O(n)
# Space Complexity : O(1)
# Node Class
class Node:
# Constructor to initialize the node object
def __init__(self, data):
self.data = data
self.next = None # In python, None == null
class LinkedList:
# Constructor to initialize the linked list
def __init__(self):
self.head = None
# Function to insert a new node at the beginning
def push(self, data):
node = Node(data) # This is how you create an object in python
node.next = self.head
self.head = node
    # Function to insert a new node at the end
    def append(self, data):
        node = Node(data)
        if self.head is None:  # empty list: new node becomes the head
            self.head = node
            return
        currentNode = self.head
        while currentNode.next is not None:
            currentNode = currentNode.next
        currentNode.next = node
    # Function to remove a node
    def remove(self, data):
        currentNode = self.head
        if currentNode is None:  # empty list: nothing to remove
            return
        if currentNode.data == data:
            self.head = self.head.next
            return
        while currentNode.next is not None:
            if currentNode.next.data == data:
                currentNode.next = currentNode.next.next
                return
            currentNode = currentNode.next
    # Function to search for a node
    def search(self, data):
        currentNode = self.head
        while currentNode is not None:  # was `is not Node`, a typo that broke the loop condition
            if currentNode.data == data:
                return currentNode
            currentNode = currentNode.next
        return None
    # Function to reverse the linked list in place (O(1) extra space, as stated above)
    def reverse(self):
        prev = None
        current = self.head
        while current is not None:
            next_node = current.next  # remember the rest of the list
            current.next = prev       # point this node backwards
            prev = current
            current = next_node
        self.head = prev
# Function to reverse the every k nodes in the linked list
def reverseWithK(self, head, k):
current = head
next = None
temp = None
count = 0
# Reverse first k nodes of the linked list
while(current is not None and count < k):
next = current.next
current.next = temp
temp = current
current = next
count += 1
# next is now a pointer to (k+1)th node
# recursively call for the list starting
# from current . And make rest of the list as
# next of first node (head)
if next is not None:
head.next = self.reverseWithK(next, k)
# temp is new head of the reversed list
return temp
# Utility function to print the linked list
def printList(self):
temp = self.head
while(temp):
print(temp.data),
temp = temp.next
linkedlist = LinkedList()
linkedlist.push('Siddhant')
linkedlist.push('Rahul')
linkedlist.push('DSA-Library')
linkedlist.append('XYZ')
linkedlist.remove('XYZ')
linkedlist.reverse()
linkedlist.printList()
# Driver program
llist = LinkedList()
llist.push(9)
llist.push(8)
llist.push(7)
llist.push(6)
llist.push(5)
llist.push(4)
llist.push(3)
llist.push(2)
llist.push(1)
print("\nGiven linked list")
llist.printList()
llist.head = llist.reverseWithK(llist.head, 3)
print("\nReversed Linked list with k node")
llist.printList()
"""
Output =>
Siddhant
Rahul
DSA-Library
Given linked list
1
2
3
4
5
6
7
8
9
Reversed Linked list with k node
3
2
1
6
5
4
9
8
7
"""
| Data_Structures/LinkedList/Reverse_List/reverse_LinkedList.py | 3,543 | Python program to reverse a linked list Time Complexity : O(n) Space Complexity : O(1) Node Class Constructor to initialize the node object In python, None == null Constructor to initialize the linked list Function to insert a new node at the beginning This is how you create an object in python Function to insert a new node at the end Function to remove a node Function to search for a node Function to reverse the linked list save the head to a temp linked list reset to None push to the head of the empty linked list Function to reverse the every k nodes in the linked list Reverse first k nodes of the linked list next is now a pointer to (k+1)th node recursively call for the list starting from current . And make rest of the list as next of first node (head) temp is new head of the reversed list Utility function to print the linked list Driver program | 860 | en | 0.775753 |
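`reverseWithK` above reverses each consecutive block of k nodes. The same semantics expressed on a plain Python list, as a quick cross-check of the expected output printed in the docstring:

```python
def reverse_in_groups(items, k):
    """Mirror reverseWithK on a plain list: reverse each consecutive block of k."""
    out = []
    for i in range(0, len(items), k):
        out.extend(reversed(items[i:i + k]))
    return out

# reverse_in_groups([1, 2, 3, 4, 5, 6, 7, 8, 9], 3) -> [3, 2, 1, 6, 5, 4, 9, 8, 7]
```

This matches the driver program's output (3 2 1 6 5 4 9 8 7) for k = 3, and a trailing block shorter than k is simply reversed on its own, as in the recursive version.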
#
# PySNMP MIB module SENAO-ENTERPRISE-INDOOR-AP-CB-MIB (http://snmplabs.com/pysmi)
# ASN.1 source file:///Users/davwang4/Dev/mibs.snmplabs.com/asn1/SENAO-ENTERPRISE-INDOOR-AP-CB-MIB
# Produced by pysmi-0.3.4 at Mon Apr 29 20:53:40 2019
# On host DAVWANG4-M-1475 platform Darwin version 18.5.0 by user davwang4
# Using Python version 3.7.3 (default, Mar 27 2019, 09:23:15)
#
Integer, OctetString, ObjectIdentifier = mibBuilder.importSymbols("ASN1", "Integer", "OctetString", "ObjectIdentifier")
NamedValues, = mibBuilder.importSymbols("ASN1-ENUMERATION", "NamedValues")
ConstraintsUnion, ValueSizeConstraint, SingleValueConstraint, ConstraintsIntersection, ValueRangeConstraint = mibBuilder.importSymbols("ASN1-REFINEMENT", "ConstraintsUnion", "ValueSizeConstraint", "SingleValueConstraint", "ConstraintsIntersection", "ValueRangeConstraint")
NotificationGroup, ModuleCompliance = mibBuilder.importSymbols("SNMPv2-CONF", "NotificationGroup", "ModuleCompliance")
MibIdentifier, iso, ObjectIdentity, NotificationType, MibScalar, MibTable, MibTableRow, MibTableColumn, TimeTicks, Gauge32, Unsigned32, IpAddress, Integer32, Counter64, Counter32, ModuleIdentity, Bits, enterprises = mibBuilder.importSymbols("SNMPv2-SMI", "MibIdentifier", "iso", "ObjectIdentity", "NotificationType", "MibScalar", "MibTable", "MibTableRow", "MibTableColumn", "TimeTicks", "Gauge32", "Unsigned32", "IpAddress", "Integer32", "Counter64", "Counter32", "ModuleIdentity", "Bits", "enterprises")
MacAddress, DisplayString, TextualConvention, TruthValue = mibBuilder.importSymbols("SNMPv2-TC", "MacAddress", "DisplayString", "TextualConvention", "TruthValue")
senao = ModuleIdentity((1, 3, 6, 1, 4, 1, 14125))
if mibBuilder.loadTexts: senao.setLastUpdated('0511250000Z')
if mibBuilder.loadTexts: senao.setOrganization('Senao R&D Dept., S/W Division')
indoorWirelessDevice = MibIdentifier((1, 3, 6, 1, 4, 1, 14125, 100))
entSystem = MibIdentifier((1, 3, 6, 1, 4, 1, 14125, 100, 1))
entLAN = MibIdentifier((1, 3, 6, 1, 4, 1, 14125, 100, 2))
entWAN = MibIdentifier((1, 3, 6, 1, 4, 1, 14125, 100, 3))
entMacFilter = MibIdentifier((1, 3, 6, 1, 4, 1, 14125, 100, 4))
entWlan = MibIdentifier((1, 3, 6, 1, 4, 1, 14125, 100, 5))
entSNMP = MibIdentifier((1, 3, 6, 1, 4, 1, 14125, 100, 6))
entWlanCommonInfo = MibIdentifier((1, 3, 6, 1, 4, 1, 14125, 100, 5, 1))
entPassword = MibScalar((1, 3, 6, 1, 4, 1, 14125, 100, 1, 2), DisplayString().subtype(subtypeSpec=ValueSizeConstraint(0, 32))).setMaxAccess("writeonly")
if mibBuilder.loadTexts: entPassword.setStatus('mandatory')
entSysModel = MibScalar((1, 3, 6, 1, 4, 1, 14125, 100, 1, 3), DisplayString()).setMaxAccess("readonly")
if mibBuilder.loadTexts: entSysModel.setStatus('mandatory')
entSysMode = MibScalar((1, 3, 6, 1, 4, 1, 14125, 100, 1, 4), Integer32().subtype(subtypeSpec=ConstraintsUnion(SingleValueConstraint(0, 1, 2, 3, 4, 5))).clone(namedValues=NamedValues(("ap-router", 0), ("repeater", 1), ("ap-bridge", 2), ("client-bridge", 3), ("client-router", 4), ("wds-bridge", 5)))).setMaxAccess("readonly")
if mibBuilder.loadTexts: entSysMode.setStatus('mandatory')
entSysUpTime = MibScalar((1, 3, 6, 1, 4, 1, 14125, 100, 1, 5), TimeTicks()).setMaxAccess("readonly")
if mibBuilder.loadTexts: entSysUpTime.setStatus('mandatory')
entHwVersion = MibScalar((1, 3, 6, 1, 4, 1, 14125, 100, 1, 6), DisplayString()).setMaxAccess("readonly")
if mibBuilder.loadTexts: entHwVersion.setStatus('mandatory')
entSN = MibScalar((1, 3, 6, 1, 4, 1, 14125, 100, 1, 7), DisplayString()).setMaxAccess("readonly")
if mibBuilder.loadTexts: entSN.setStatus('mandatory')
entKenelVersion = MibScalar((1, 3, 6, 1, 4, 1, 14125, 100, 1, 8), DisplayString()).setMaxAccess("readonly")
if mibBuilder.loadTexts: entKenelVersion.setStatus('mandatory')
entAppVersion = MibScalar((1, 3, 6, 1, 4, 1, 14125, 100, 1, 9), DisplayString()).setMaxAccess("readonly")
if mibBuilder.loadTexts: entAppVersion.setStatus('mandatory')
entReset = MibScalar((1, 3, 6, 1, 4, 1, 14125, 100, 1, 10), TruthValue()).setMaxAccess("writeonly")
if mibBuilder.loadTexts: entReset.setStatus('mandatory')
entResetToDefault = MibScalar((1, 3, 6, 1, 4, 1, 14125, 100, 1, 11), TruthValue()).setMaxAccess("writeonly")
if mibBuilder.loadTexts: entResetToDefault.setStatus('mandatory')
entApplyModules = MibScalar((1, 3, 6, 1, 4, 1, 14125, 100, 1, 12), TruthValue()).setMaxAccess("writeonly")
if mibBuilder.loadTexts: entApplyModules.setStatus('mandatory')
entLANIP = MibScalar((1, 3, 6, 1, 4, 1, 14125, 100, 2, 1), IpAddress()).setMaxAccess("readwrite")
if mibBuilder.loadTexts: entLANIP.setStatus('mandatory')
entLANSubnetMask = MibScalar((1, 3, 6, 1, 4, 1, 14125, 100, 2, 2), IpAddress()).setMaxAccess("readwrite")
if mibBuilder.loadTexts: entLANSubnetMask.setStatus('mandatory')
entSTPEnable = MibScalar((1, 3, 6, 1, 4, 1, 14125, 100, 2, 3), TruthValue()).setMaxAccess("readwrite")
if mibBuilder.loadTexts: entSTPEnable.setStatus('mandatory')
entDHCPEnable = MibScalar((1, 3, 6, 1, 4, 1, 14125, 100, 2, 4), TruthValue()).setMaxAccess("readwrite")
if mibBuilder.loadTexts: entDHCPEnable.setStatus('mandatory')
entIPPoolStart = MibScalar((1, 3, 6, 1, 4, 1, 14125, 100, 2, 5), IpAddress()).setMaxAccess("readwrite")
if mibBuilder.loadTexts: entIPPoolStart.setStatus('mandatory')
entIPPoolEnd = MibScalar((1, 3, 6, 1, 4, 1, 14125, 100, 2, 6), IpAddress()).setMaxAccess("readwrite")
if mibBuilder.loadTexts: entIPPoolEnd.setStatus('mandatory')
entIPLeaseTime = MibScalar((1, 3, 6, 1, 4, 1, 14125, 100, 2, 7), Integer32().subtype(subtypeSpec=ConstraintsUnion(SingleValueConstraint(0, 1, 2, 3, 4, 5, 6, 7, 8))).clone(namedValues=NamedValues(("half-hour", 0), ("one-hour", 1), ("two-hours", 2), ("half-day", 3), ("one-day", 4), ("two-days", 5), ("one-week", 6), ("two-weeks", 7), ("forever", 8)))).setMaxAccess("readwrite")
if mibBuilder.loadTexts: entIPLeaseTime.setStatus('mandatory')
entRouterEnable = MibScalar((1, 3, 6, 1, 4, 1, 14125, 100, 3, 1), TruthValue()).setMaxAccess("readwrite")
if mibBuilder.loadTexts: entRouterEnable.setStatus('mandatory')
entLanMacFilteringEnable = MibScalar((1, 3, 6, 1, 4, 1, 14125, 100, 4, 1), TruthValue()).setMaxAccess("readwrite")
if mibBuilder.loadTexts: entLanMacFilteringEnable.setStatus('mandatory')
entLanMacFilteringMode = MibScalar((1, 3, 6, 1, 4, 1, 14125, 100, 4, 2), Integer32().subtype(subtypeSpec=ConstraintsUnion(SingleValueConstraint(0, 1))).clone(namedValues=NamedValues(("white-list", 0), ("black-list", 1)))).setMaxAccess("readwrite")
if mibBuilder.loadTexts: entLanMacFilteringMode.setStatus('mandatory')
entLanMacFilterTable = MibTable((1, 3, 6, 1, 4, 1, 14125, 100, 4, 3), )
if mibBuilder.loadTexts: entLanMacFilterTable.setStatus('current')
entLanMacFilterEntry = MibTableRow((1, 3, 6, 1, 4, 1, 14125, 100, 4, 3, 1), ).setIndexNames((0, "SENAO-ENTERPRISE-INDOOR-AP-CB-MIB", "entMacAddressIndex"))
if mibBuilder.loadTexts: entLanMacFilterEntry.setStatus('current')
entMacAddressIndex = MibTableColumn((1, 3, 6, 1, 4, 1, 14125, 100, 4, 3, 1, 1), Integer32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: entMacAddressIndex.setStatus('current')
entMacAddress = MibTableColumn((1, 3, 6, 1, 4, 1, 14125, 100, 4, 3, 1, 2), DisplayString()).setMaxAccess("readonly")
if mibBuilder.loadTexts: entMacAddress.setStatus('current')
entMacFilteringValid = MibTableColumn((1, 3, 6, 1, 4, 1, 14125, 100, 4, 3, 1, 3), TruthValue()).setMaxAccess("readonly")
if mibBuilder.loadTexts: entMacFilteringValid.setStatus('current')
entOpMode = MibScalar((1, 3, 6, 1, 4, 1, 14125, 100, 5, 1, 1), Integer32().subtype(subtypeSpec=ConstraintsUnion(SingleValueConstraint(0, 1, 2, 3))).clone(namedValues=NamedValues(("ap", 0), ("client-bridge", 1), ("wds-bridge", 2), ("repeater", 3)))).setMaxAccess("readwrite")
if mibBuilder.loadTexts: entOpMode.setStatus('mandatory')
entRadio = MibScalar((1, 3, 6, 1, 4, 1, 14125, 100, 5, 1, 2), TruthValue()).setMaxAccess("readwrite")
if mibBuilder.loadTexts: entRadio.setStatus('mandatory')
entAPMode = MibScalar((1, 3, 6, 1, 4, 1, 14125, 100, 5, 1, 3), Integer32().subtype(subtypeSpec=ConstraintsUnion(SingleValueConstraint(0, 1))).clone(namedValues=NamedValues(("ap", 0), ("wds", 1)))).setMaxAccess("readwrite")
if mibBuilder.loadTexts: entAPMode.setStatus('mandatory')
entBand = MibScalar((1, 3, 6, 1, 4, 1, 14125, 100, 5, 1, 4), Integer32().subtype(subtypeSpec=ConstraintsUnion(SingleValueConstraint(0, 1, 2, 4, 6, 7, 8, 9))).clone(namedValues=NamedValues(("ieee802dot11-b-g", 0), ("ieee802dot11-b", 1), ("ieee802dot11-a", 2), ("ieee802dot11-g", 4), ("ieee802dot11-n", 6), ("ieee802dot11-g-n", 7), ("ieee802dot11-a-n", 8), ("ieee802dot11-b-g-n", 9)))).setMaxAccess("readwrite")
if mibBuilder.loadTexts: entBand.setStatus('mandatory')
entESSIDNum = MibScalar((1, 3, 6, 1, 4, 1, 14125, 100, 5, 1, 5), Integer32().subtype(subtypeSpec=ValueRangeConstraint(1, 4))).setMaxAccess("readwrite")
if mibBuilder.loadTexts: entESSIDNum.setStatus('mandatory')
entChannel = MibScalar((1, 3, 6, 1, 4, 1, 14125, 100, 5, 1, 6), Integer32().subtype(subtypeSpec=ValueRangeConstraint(1, 14))).setMaxAccess("readwrite")
if mibBuilder.loadTexts: entChannel.setStatus('mandatory')
entDataRate = MibScalar((1, 3, 6, 1, 4, 1, 14125, 100, 5, 1, 7), Integer32().subtype(subtypeSpec=ConstraintsUnion(SingleValueConstraint(0, 1, 2, 5, 11, 6, 9, 12, 18, 24, 36, 48, 54))).clone(namedValues=NamedValues(("auto", 0), ("oneMbps", 1), ("twoMbps", 2), ("fiveNhalfMbps", 5), ("elevenMbps", 11), ("sixMbps", 6), ("nineMbps", 9), ("twelveMbps", 12), ("eighteenMbps", 18), ("twentytwoMbps", 24), ("thirtysixMbps", 36), ("fortyeightMbps", 48), ("fiftyfourMbps", 54)))).setMaxAccess("readwrite")
if mibBuilder.loadTexts: entDataRate.setStatus('mandatory')
entNDataRate = MibScalar((1, 3, 6, 1, 4, 1, 14125, 100, 5, 1, 8), Integer32()).setMaxAccess("readwrite")
if mibBuilder.loadTexts: entNDataRate.setStatus('mandatory')
entTxPower = MibScalar((1, 3, 6, 1, 4, 1, 14125, 100, 5, 1, 9), Integer32()).setMaxAccess("readwrite")
if mibBuilder.loadTexts: entTxPower.setStatus('mandatory')
entBeaconInterval = MibScalar((1, 3, 6, 1, 4, 1, 14125, 100, 5, 1, 10), Integer32().subtype(subtypeSpec=ValueRangeConstraint(20, 1024))).setMaxAccess("readwrite")
if mibBuilder.loadTexts: entBeaconInterval.setStatus('mandatory')
entDTIMPeriod = MibScalar((1, 3, 6, 1, 4, 1, 14125, 100, 5, 1, 11), Integer32().subtype(subtypeSpec=ValueRangeConstraint(1, 255))).setMaxAccess("readwrite")
if mibBuilder.loadTexts: entDTIMPeriod.setStatus('mandatory')
entFragmentationThreshold = MibScalar((1, 3, 6, 1, 4, 1, 14125, 100, 5, 1, 12), Integer32().subtype(subtypeSpec=ValueRangeConstraint(256, 2346))).setMaxAccess("readwrite")
if mibBuilder.loadTexts: entFragmentationThreshold.setStatus('mandatory')
entRTSThreshold = MibScalar((1, 3, 6, 1, 4, 1, 14125, 100, 5, 1, 13), Integer32().subtype(subtypeSpec=ValueRangeConstraint(0, 2347))).setMaxAccess("readwrite")
if mibBuilder.loadTexts: entRTSThreshold.setStatus('mandatory')
entChannelBandwidth = MibScalar((1, 3, 6, 1, 4, 1, 14125, 100, 5, 1, 14), Integer32().subtype(subtypeSpec=ValueRangeConstraint(0, 1))).setMaxAccess("readwrite")
if mibBuilder.loadTexts: entChannelBandwidth.setStatus('mandatory')
entPreambleType = MibScalar((1, 3, 6, 1, 4, 1, 14125, 100, 5, 1, 15), Integer32().subtype(subtypeSpec=ConstraintsUnion(SingleValueConstraint(1, 2))).clone(namedValues=NamedValues(("short", 1), ("long", 2)))).setMaxAccess("readwrite")
if mibBuilder.loadTexts: entPreambleType.setStatus('mandatory')
entCTSProtection = MibScalar((1, 3, 6, 1, 4, 1, 14125, 100, 5, 1, 16), Integer32().subtype(subtypeSpec=ConstraintsUnion(SingleValueConstraint(0, 1, 2))).clone(namedValues=NamedValues(("auto", 0), ("always", 1), ("none", 2)))).setMaxAccess("readwrite")
if mibBuilder.loadTexts: entCTSProtection.setStatus('mandatory')
entWlanESSIDInfoTable = MibTable((1, 3, 6, 1, 4, 1, 14125, 100, 5, 2), )
if mibBuilder.loadTexts: entWlanESSIDInfoTable.setStatus('current')
entWlanESSIDInfoEntry = MibTableRow((1, 3, 6, 1, 4, 1, 14125, 100, 5, 2, 1), ).setIndexNames((0, "SENAO-ENTERPRISE-INDOOR-AP-CB-MIB", "entWlanESSIDInfoIndex"))
if mibBuilder.loadTexts: entWlanESSIDInfoEntry.setStatus('current')
entWlanESSIDInfoIndex = MibTableColumn((1, 3, 6, 1, 4, 1, 14125, 100, 5, 2, 1, 1), Integer32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: entWlanESSIDInfoIndex.setStatus('current')
entESSID = MibTableColumn((1, 3, 6, 1, 4, 1, 14125, 100, 5, 2, 1, 2), OctetString().subtype(subtypeSpec=ValueSizeConstraint(0, 32))).setMaxAccess("readwrite")
if mibBuilder.loadTexts: entESSID.setStatus('current')
entBroadcastESSID = MibTableColumn((1, 3, 6, 1, 4, 1, 14125, 100, 5, 2, 1, 3), TruthValue()).setMaxAccess("readwrite")
if mibBuilder.loadTexts: entBroadcastESSID.setStatus('mandatory')
entWMM = MibTableColumn((1, 3, 6, 1, 4, 1, 14125, 100, 5, 2, 1, 4), TruthValue()).setMaxAccess("readwrite")
if mibBuilder.loadTexts: entWMM.setStatus('mandatory')
entEncryption = MibTableColumn((1, 3, 6, 1, 4, 1, 14125, 100, 5, 2, 1, 5), Integer32().subtype(subtypeSpec=ValueRangeConstraint(0, 4))).setMaxAccess("readwrite")
if mibBuilder.loadTexts: entEncryption.setStatus('current')
entWlanAuthenticationType = MibTableColumn((1, 3, 6, 1, 4, 1, 14125, 100, 5, 2, 1, 6), Integer32().subtype(subtypeSpec=ValueRangeConstraint(0, 8))).setMaxAccess("readwrite")
if mibBuilder.loadTexts: entWlanAuthenticationType.setStatus('current')
entWlanWepInfoTable = MibTable((1, 3, 6, 1, 4, 1, 14125, 100, 5, 3), )
if mibBuilder.loadTexts: entWlanWepInfoTable.setStatus('current')
entWlanWepInfoEntry = MibTableRow((1, 3, 6, 1, 4, 1, 14125, 100, 5, 3, 1), ).setIndexNames((0, "SENAO-ENTERPRISE-INDOOR-AP-CB-MIB", "entWlanESSIDIndex"))
if mibBuilder.loadTexts: entWlanWepInfoEntry.setStatus('current')
entWlanESSIDIndex = MibTableColumn((1, 3, 6, 1, 4, 1, 14125, 100, 5, 3, 1, 1), Integer32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: entWlanESSIDIndex.setStatus('current')
entWlanWepKeyID = MibTableColumn((1, 3, 6, 1, 4, 1, 14125, 100, 5, 3, 1, 2), Integer32().subtype(subtypeSpec=ValueRangeConstraint(1, 4))).setMaxAccess("readwrite")
if mibBuilder.loadTexts: entWlanWepKeyID.setStatus('current')
entWlanWepKey1Value = MibTableColumn((1, 3, 6, 1, 4, 1, 14125, 100, 5, 3, 1, 3), OctetString()).setMaxAccess("readwrite")
if mibBuilder.loadTexts: entWlanWepKey1Value.setStatus('current')
entWlanWepKey2Value = MibTableColumn((1, 3, 6, 1, 4, 1, 14125, 100, 5, 3, 1, 4), OctetString()).setMaxAccess("readwrite")
if mibBuilder.loadTexts: entWlanWepKey2Value.setStatus('current')
entWlanWepKey3Value = MibTableColumn((1, 3, 6, 1, 4, 1, 14125, 100, 5, 3, 1, 5), OctetString()).setMaxAccess("readwrite")
if mibBuilder.loadTexts: entWlanWepKey3Value.setStatus('current')
entWlanWepKey4Value = MibTableColumn((1, 3, 6, 1, 4, 1, 14125, 100, 5, 3, 1, 6), OctetString()).setMaxAccess("readwrite")
if mibBuilder.loadTexts: entWlanWepKey4Value.setStatus('current')
entWlanWPAInfoTable = MibTable((1, 3, 6, 1, 4, 1, 14125, 100, 5, 4), )
if mibBuilder.loadTexts: entWlanWPAInfoTable.setStatus('current')
entWlanWPAInfoEntry = MibTableRow((1, 3, 6, 1, 4, 1, 14125, 100, 5, 4, 1), ).setIndexNames((0, "SENAO-ENTERPRISE-INDOOR-AP-CB-MIB", "entWlanWPAESSIDIndex"))
if mibBuilder.loadTexts: entWlanWPAInfoEntry.setStatus('current')
entWlanWPAESSIDIndex = MibTableColumn((1, 3, 6, 1, 4, 1, 14125, 100, 5, 4, 1, 1), Integer32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: entWlanWPAESSIDIndex.setStatus('current')
entPresharedKey = MibTableColumn((1, 3, 6, 1, 4, 1, 14125, 100, 5, 4, 1, 2), DisplayString()).setMaxAccess("writeonly")
if mibBuilder.loadTexts: entPresharedKey.setStatus('current')
ent802dot1xInfoTable = MibTable((1, 3, 6, 1, 4, 1, 14125, 100, 5, 5), )
if mibBuilder.loadTexts: ent802dot1xInfoTable.setStatus('current')
ent802dot1xInfoEntry = MibTableRow((1, 3, 6, 1, 4, 1, 14125, 100, 5, 5, 1), ).setIndexNames((0, "SENAO-ENTERPRISE-INDOOR-AP-CB-MIB", "entWlan802dot1xESSIDIndex"))
if mibBuilder.loadTexts: ent802dot1xInfoEntry.setStatus('current')
entWlan802dot1xESSIDIndex = MibTableColumn((1, 3, 6, 1, 4, 1, 14125, 100, 5, 5, 1, 1), Integer32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: entWlan802dot1xESSIDIndex.setStatus('current')
entRADIUSServerIPAddress = MibTableColumn((1, 3, 6, 1, 4, 1, 14125, 100, 5, 5, 1, 2), IpAddress()).setMaxAccess("readwrite")
if mibBuilder.loadTexts: entRADIUSServerIPAddress.setStatus('current')
entRADIUSServerPort = MibTableColumn((1, 3, 6, 1, 4, 1, 14125, 100, 5, 5, 1, 3), Integer32().subtype(subtypeSpec=ValueRangeConstraint(1, 65535))).setMaxAccess("readwrite")
if mibBuilder.loadTexts: entRADIUSServerPort.setStatus('current')
entRADIUSServerPassword = MibTableColumn((1, 3, 6, 1, 4, 1, 14125, 100, 5, 5, 1, 4), DisplayString()).setMaxAccess("writeonly")
if mibBuilder.loadTexts: entRADIUSServerPassword.setStatus('current')
entWlan802dot1xEnable = MibTableColumn((1, 3, 6, 1, 4, 1, 14125, 100, 5, 5, 1, 5), TruthValue()).setMaxAccess("readwrite")
if mibBuilder.loadTexts: entWlan802dot1xEnable.setStatus('current')
entWlanClientListInfoTable = MibTable((1, 3, 6, 1, 4, 1, 14125, 100, 5, 6), )
if mibBuilder.loadTexts: entWlanClientListInfoTable.setStatus('current')
entWlanClientListInfoEntry = MibTableRow((1, 3, 6, 1, 4, 1, 14125, 100, 5, 6, 1), ).setIndexNames((0, "SENAO-ENTERPRISE-INDOOR-AP-CB-MIB", "entCLInfoIndex"))
if mibBuilder.loadTexts: entWlanClientListInfoEntry.setStatus('current')
entCLInfoIndex = MibTableColumn((1, 3, 6, 1, 4, 1, 14125, 100, 5, 6, 1, 1), Integer32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: entCLInfoIndex.setStatus('current')
entCLInterface = MibTableColumn((1, 3, 6, 1, 4, 1, 14125, 100, 5, 6, 1, 2), OctetString()).setMaxAccess("readwrite")
if mibBuilder.loadTexts: entCLInterface.setStatus('current')
entCLMAC = MibTableColumn((1, 3, 6, 1, 4, 1, 14125, 100, 5, 6, 1, 3), OctetString()).setMaxAccess("readwrite")
if mibBuilder.loadTexts: entCLMAC.setStatus('current')
entCLRx = MibTableColumn((1, 3, 6, 1, 4, 1, 14125, 100, 5, 6, 1, 4), OctetString()).setMaxAccess("readwrite")
if mibBuilder.loadTexts: entCLRx.setStatus('current')
entCLTx = MibTableColumn((1, 3, 6, 1, 4, 1, 14125, 100, 5, 6, 1, 5), OctetString()).setMaxAccess("readwrite")
if mibBuilder.loadTexts: entCLTx.setStatus('current')
entCLSignal = MibTableColumn((1, 3, 6, 1, 4, 1, 14125, 100, 5, 6, 1, 6), Integer32()).setMaxAccess("readwrite")
if mibBuilder.loadTexts: entCLSignal.setStatus('current')
entCLConnectedTime = MibTableColumn((1, 3, 6, 1, 4, 1, 14125, 100, 5, 6, 1, 7), OctetString()).setMaxAccess("readwrite")
if mibBuilder.loadTexts: entCLConnectedTime.setStatus('current')
entCLIdleTime = MibTableColumn((1, 3, 6, 1, 4, 1, 14125, 100, 5, 6, 1, 8), OctetString()).setMaxAccess("readwrite")
if mibBuilder.loadTexts: entCLIdleTime.setStatus('current')
entSNMPStatus = MibScalar((1, 3, 6, 1, 4, 1, 14125, 100, 6, 1), TruthValue()).setMaxAccess("readwrite")
if mibBuilder.loadTexts: entSNMPStatus.setStatus('mandatory')
entSNMPVerType = MibScalar((1, 3, 6, 1, 4, 1, 14125, 100, 6, 2), Integer32().subtype(subtypeSpec=ConstraintsUnion(SingleValueConstraint(0, 1, 2, 3))).clone(namedValues=NamedValues(("all", 0), ("v1", 1), ("v2c", 2), ("v3", 3)))).setMaxAccess("readwrite")
if mibBuilder.loadTexts: entSNMPVerType.setStatus('mandatory')
entSNMPCommunityTable = MibTable((1, 3, 6, 1, 4, 1, 14125, 100, 6, 3), )
if mibBuilder.loadTexts: entSNMPCommunityTable.setStatus('current')
entSNMPCommunityEntry = MibTableRow((1, 3, 6, 1, 4, 1, 14125, 100, 6, 3, 1), ).setIndexNames((0, "SENAO-ENTERPRISE-INDOOR-AP-CB-MIB", "entSNMPCommunityIndex"))
if mibBuilder.loadTexts: entSNMPCommunityEntry.setStatus('current')
entSNMPCommunityIndex = MibTableColumn((1, 3, 6, 1, 4, 1, 14125, 100, 6, 3, 1, 1), Integer32().subtype(subtypeSpec=ValueRangeConstraint(1, 2))).setMaxAccess("readonly")
if mibBuilder.loadTexts: entSNMPCommunityIndex.setStatus('current')
entSNMPCommunityName = MibTableColumn((1, 3, 6, 1, 4, 1, 14125, 100, 6, 3, 1, 2), DisplayString().subtype(subtypeSpec=ValueSizeConstraint(0, 32))).setMaxAccess("readwrite")
if mibBuilder.loadTexts: entSNMPCommunityName.setStatus('current')
entSNMPCommunityType = MibTableColumn((1, 3, 6, 1, 4, 1, 14125, 100, 6, 3, 1, 3), Integer32().subtype(subtypeSpec=ConstraintsUnion(SingleValueConstraint(1, 2))).clone(namedValues=NamedValues(("read", 1), ("write", 2)))).setMaxAccess("readonly")
if mibBuilder.loadTexts: entSNMPCommunityType.setStatus('current')
entSNMPCommunityValid = MibTableColumn((1, 3, 6, 1, 4, 1, 14125, 100, 6, 3, 1, 4), TruthValue()).setMaxAccess("readonly")
if mibBuilder.loadTexts: entSNMPCommunityValid.setStatus('current')
entSNMPTrap = MibIdentifier((1, 3, 6, 1, 4, 1, 14125, 100, 6, 4))
entTrapStatus = MibScalar((1, 3, 6, 1, 4, 1, 14125, 100, 6, 4, 1), TruthValue()).setMaxAccess("readwrite")
if mibBuilder.loadTexts: entTrapStatus.setStatus('mandatory')
entTrapVer = MibScalar((1, 3, 6, 1, 4, 1, 14125, 100, 6, 4, 2), Integer32().subtype(subtypeSpec=ConstraintsUnion(SingleValueConstraint(0, 1, 2, 3))).clone(namedValues=NamedValues(("all", 0), ("v1", 1), ("v2c", 2), ("v3", 3)))).setMaxAccess("readwrite")
if mibBuilder.loadTexts: entTrapVer.setStatus('mandatory')
entTrapReceiverIPAddress = MibScalar((1, 3, 6, 1, 4, 1, 14125, 100, 6, 4, 3), IpAddress()).setMaxAccess("readwrite")
if mibBuilder.loadTexts: entTrapReceiverIPAddress.setStatus('mandatory')
entTrapReceiverCommunityName = MibScalar((1, 3, 6, 1, 4, 1, 14125, 100, 6, 4, 4), DisplayString()).setMaxAccess("readwrite")
if mibBuilder.loadTexts: entTrapReceiverCommunityName.setStatus('mandatory')
entTraps = MibIdentifier((1, 3, 6, 1, 4, 1, 14125, 100, 20))
entSystemTraps = MibIdentifier((1, 3, 6, 1, 4, 1, 14125, 100, 20, 1))
entWanTraps = MibIdentifier((1, 3, 6, 1, 4, 1, 14125, 100, 20, 2))
entSystemTrapsReboot = NotificationType((1, 3, 6, 1, 4, 1, 14125, 100, 20, 1, 1))
if mibBuilder.loadTexts: entSystemTrapsReboot.setStatus('current')
entSystemTrapsRestoreToDefault = NotificationType((1, 3, 6, 1, 4, 1, 14125, 100, 20, 1, 2))
if mibBuilder.loadTexts: entSystemTrapsRestoreToDefault.setStatus('current')
entSystemTrapsReloadModules = NotificationType((1, 3, 6, 1, 4, 1, 14125, 100, 20, 1, 3))
if mibBuilder.loadTexts: entSystemTrapsReloadModules.setStatus('current')
entWanTrapsLinkDisconnect = NotificationType((1, 3, 6, 1, 4, 1, 14125, 100, 20, 2, 1)).setObjects(("SENAO-ENTERPRISE-INDOOR-AP-CB-MIB", "ifIndex"))
if mibBuilder.loadTexts: entWanTrapsLinkDisconnect.setStatus('current')
entWanTrapsLinkRecover = NotificationType((1, 3, 6, 1, 4, 1, 14125, 100, 20, 2, 2)).setObjects(("SENAO-ENTERPRISE-INDOOR-AP-CB-MIB", "ifIndex"))
if mibBuilder.loadTexts: entWanTrapsLinkRecover.setStatus('current')
mibBuilder.exportSymbols("SENAO-ENTERPRISE-INDOOR-AP-CB-MIB", entIPLeaseTime=entIPLeaseTime, entWlanWepKey3Value=entWlanWepKey3Value, entKenelVersion=entKenelVersion, entMacFilteringValid=entMacFilteringValid, entSystemTraps=entSystemTraps, entWlanESSIDInfoIndex=entWlanESSIDInfoIndex, entSNMPCommunityEntry=entSNMPCommunityEntry, entRouterEnable=entRouterEnable, entWlanESSIDInfoEntry=entWlanESSIDInfoEntry, entResetToDefault=entResetToDefault, entMacAddress=entMacAddress, entWlanClientListInfoTable=entWlanClientListInfoTable, entLanMacFilteringEnable=entLanMacFilteringEnable, entBeaconInterval=entBeaconInterval, entWanTrapsLinkDisconnect=entWanTrapsLinkDisconnect, ent802dot1xInfoTable=ent802dot1xInfoTable, entApplyModules=entApplyModules, entCLRx=entCLRx, entTraps=entTraps, entIPPoolEnd=entIPPoolEnd, entRadio=entRadio, entWlanAuthenticationType=entWlanAuthenticationType, entAPMode=entAPMode, entSNMPCommunityName=entSNMPCommunityName, entRADIUSServerPort=entRADIUSServerPort, entSNMPCommunityTable=entSNMPCommunityTable, entLanMacFilterEntry=entLanMacFilterEntry, entLanMacFilterTable=entLanMacFilterTable, indoorWirelessDevice=indoorWirelessDevice, entFragmentationThreshold=entFragmentationThreshold, entWanTrapsLinkRecover=entWanTrapsLinkRecover, entCLInterface=entCLInterface, entTrapReceiverCommunityName=entTrapReceiverCommunityName, entWlanWepKeyID=entWlanWepKeyID, entSNMPCommunityValid=entSNMPCommunityValid, entWlanCommonInfo=entWlanCommonInfo, entReset=entReset, entTxPower=entTxPower, entWlanClientListInfoEntry=entWlanClientListInfoEntry, entTrapVer=entTrapVer, entWlanWepInfoTable=entWlanWepInfoTable, entCLIdleTime=entCLIdleTime, senao=senao, entWAN=entWAN, entLanMacFilteringMode=entLanMacFilteringMode, entChannelBandwidth=entChannelBandwidth, ent802dot1xInfoEntry=ent802dot1xInfoEntry, PYSNMP_MODULE_ID=senao, entHwVersion=entHwVersion, entWlanWPAInfoTable=entWlanWPAInfoTable, entCLMAC=entCLMAC, entWMM=entWMM, entSystemTrapsReloadModules=entSystemTrapsReloadModules, 
entWlan802dot1xEnable=entWlan802dot1xEnable, entWlanWepKey4Value=entWlanWepKey4Value, entSystemTrapsReboot=entSystemTrapsReboot, entAppVersion=entAppVersion, entSystemTrapsRestoreToDefault=entSystemTrapsRestoreToDefault, entTrapReceiverIPAddress=entTrapReceiverIPAddress, entWlan802dot1xESSIDIndex=entWlan802dot1xESSIDIndex, entWlanWepInfoEntry=entWlanWepInfoEntry, entBroadcastESSID=entBroadcastESSID, entOpMode=entOpMode, entSysUpTime=entSysUpTime, entSysModel=entSysModel, entESSID=entESSID, entCLInfoIndex=entCLInfoIndex, entCTSProtection=entCTSProtection, entPreambleType=entPreambleType, entWlanWepKey2Value=entWlanWepKey2Value, entSN=entSN, entWlanWepKey1Value=entWlanWepKey1Value, entRADIUSServerPassword=entRADIUSServerPassword, entSystem=entSystem, entMacFilter=entMacFilter, entPassword=entPassword, entIPPoolStart=entIPPoolStart, entRTSThreshold=entRTSThreshold, entWlan=entWlan, entSNMPVerType=entSNMPVerType, entChannel=entChannel, entCLSignal=entCLSignal, entDHCPEnable=entDHCPEnable, entSTPEnable=entSTPEnable, entWlanWPAESSIDIndex=entWlanWPAESSIDIndex, entTrapStatus=entTrapStatus, entRADIUSServerIPAddress=entRADIUSServerIPAddress, entWlanESSIDInfoTable=entWlanESSIDInfoTable, entSNMPTrap=entSNMPTrap, entSNMPStatus=entSNMPStatus, entSNMP=entSNMP, entWlanWPAInfoEntry=entWlanWPAInfoEntry, entLANIP=entLANIP, entBand=entBand, entPresharedKey=entPresharedKey, entDataRate=entDataRate, entNDataRate=entNDataRate, entCLTx=entCLTx, entLANSubnetMask=entLANSubnetMask, entWlanESSIDIndex=entWlanESSIDIndex, entSysMode=entSysMode, entLAN=entLAN, entEncryption=entEncryption, entSNMPCommunityIndex=entSNMPCommunityIndex, entSNMPCommunityType=entSNMPCommunityType, entMacAddressIndex=entMacAddressIndex, entDTIMPeriod=entDTIMPeriod, entWanTraps=entWanTraps, entESSIDNum=entESSIDNum, entCLConnectedTime=entCLConnectedTime)
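All of the objects exported above hang off the Senao enterprise subtree 1.3.6.1.4.1.14125.100. A minimal sketch (plain Python, no pysnmp dependency; the helper names are made up for illustration) of rendering these OID tuples in dotted notation and testing subtree membership:

```python
# The OID values below are copied from the MIB definitions above;
# the helper functions are illustrative, not part of pysnmp.
SENAO_ENTERPRISE = (1, 3, 6, 1, 4, 1, 14125, 100)

def oid_to_str(oid):
    """Render an OID tuple in dotted notation."""
    return ".".join(str(sub) for sub in oid)

def in_subtree(oid, prefix=SENAO_ENTERPRISE):
    """True if `oid` lies under `prefix` in the OID tree."""
    return oid[:len(prefix)] == prefix

# entSNMPStatus from the MIB above:
ent_snmp_status = (1, 3, 6, 1, 4, 1, 14125, 100, 6, 1)
print(oid_to_str(ent_snmp_status))  # 1.3.6.1.4.1.14125.100.6.1
print(in_subtree(ent_snmp_status))  # True
```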
# pysnmp/SENAO-ENTERPRISE-INDOOR-AP-CB-MIB.py: PySNMP MIB module SENAO-ENTERPRISE-INDOOR-AP-CB-MIB (end of file)
import asyncio
import dataclasses
import time
import traceback
from secrets import token_bytes
from typing import Dict, List, Optional, Tuple, Set
from blspy import AugSchemeMPL, G2Element
from chiabip158 import PyBIP158
import chia.server.ws_connection as ws
from chia.consensus.block_creation import create_unfinished_block
from chia.consensus.block_record import BlockRecord
from chia.consensus.pot_iterations import calculate_ip_iters, calculate_iterations_quality, calculate_sp_iters
from chia.full_node.bundle_tools import best_solution_generator_from_template, simple_solution_generator
from chia.full_node.full_node import FullNode
from chia.full_node.mempool_check_conditions import get_puzzle_and_solution_for_coin
from chia.full_node.signage_point import SignagePoint
from chia.protocols import farmer_protocol, full_node_protocol, introducer_protocol, timelord_protocol, wallet_protocol
from chia.protocols.full_node_protocol import RejectBlock, RejectBlocks
from chia.protocols.protocol_message_types import ProtocolMessageTypes
from chia.protocols.wallet_protocol import (
PuzzleSolutionResponse,
RejectHeaderBlocks,
RejectHeaderRequest,
CoinState,
RespondSESInfo,
)
from chia.server.outbound_message import Message, make_msg
from chia.types.blockchain_format.coin import Coin, hash_coin_list
from chia.types.blockchain_format.pool_target import PoolTarget
from chia.types.blockchain_format.program import Program
from chia.types.blockchain_format.sized_bytes import bytes32
from chia.types.blockchain_format.sub_epoch_summary import SubEpochSummary
from chia.types.coin_record import CoinRecord
from chia.types.end_of_slot_bundle import EndOfSubSlotBundle
from chia.types.full_block import FullBlock
from chia.types.generator_types import BlockGenerator
from chia.types.mempool_inclusion_status import MempoolInclusionStatus
from chia.types.mempool_item import MempoolItem
from chia.types.peer_info import PeerInfo
from chia.types.transaction_queue_entry import TransactionQueueEntry
from chia.types.unfinished_block import UnfinishedBlock
from chia.util.api_decorators import api_request, peer_required, bytes_required, execute_task, reply_type
from chia.util.generator_tools import get_block_header
from chia.util.hash import std_hash
from chia.util.ints import uint8, uint32, uint64, uint128
from chia.util.merkle_set import MerkleSet
class FullNodeAPI:
full_node: FullNode
def __init__(self, full_node) -> None:
self.full_node = full_node
@property
def server(self):
return self.full_node.server
@property
def log(self):
return self.full_node.log
@property
def api_ready(self):
return self.full_node.initialized
@peer_required
@api_request
@reply_type([ProtocolMessageTypes.respond_peers])
async def request_peers(self, _request: full_node_protocol.RequestPeers, peer: ws.WSChiaConnection):
if peer.peer_server_port is None:
return None
peer_info = PeerInfo(peer.peer_host, peer.peer_server_port)
if self.full_node.full_node_peers is not None:
msg = await self.full_node.full_node_peers.request_peers(peer_info)
return msg
@peer_required
@api_request
async def respond_peers(
self, request: full_node_protocol.RespondPeers, peer: ws.WSChiaConnection
) -> Optional[Message]:
self.log.debug(f"Received {len(request.peer_list)} peers")
if self.full_node.full_node_peers is not None:
await self.full_node.full_node_peers.respond_peers(request, peer.get_peer_info(), True)
return None
@peer_required
@api_request
async def respond_peers_introducer(
self, request: introducer_protocol.RespondPeersIntroducer, peer: ws.WSChiaConnection
) -> Optional[Message]:
self.log.debug(f"Received {len(request.peer_list)} peers from introducer")
if self.full_node.full_node_peers is not None:
await self.full_node.full_node_peers.respond_peers(request, peer.get_peer_info(), False)
await peer.close()
return None
@execute_task
@peer_required
@api_request
async def new_peak(self, request: full_node_protocol.NewPeak, peer: ws.WSChiaConnection) -> Optional[Message]:
"""
A peer notifies us that they have added a new peak to their blockchain. If we don't have it,
we can ask for it.
"""
# this semaphore limits the number of tasks that can call new_peak() at
# the same time, since it can be expensive
waiter_count = len(self.full_node.new_peak_sem._waiters)
if waiter_count > 0:
self.full_node.log.debug(f"new_peak Waiters: {waiter_count}")
if waiter_count > 20:
return None
async with self.full_node.new_peak_sem:
return await self.full_node.new_peak(request, peer)
@peer_required
@api_request
async def new_transaction(
self, transaction: full_node_protocol.NewTransaction, peer: ws.WSChiaConnection
) -> Optional[Message]:
"""
A peer notifies us of a new transaction.
Requests a full transaction if we haven't seen it previously, and if the fees are enough.
"""
# Ignore if syncing
if self.full_node.sync_store.get_sync_mode():
return None
if not (await self.full_node.synced()):
return None
# Ignore if already seen
if self.full_node.mempool_manager.seen(transaction.transaction_id):
return None
if self.full_node.mempool_manager.is_fee_enough(transaction.fees, transaction.cost):
            # If there's a current pending request, just add this peer to the set of peers that have this tx
if transaction.transaction_id in self.full_node.full_node_store.pending_tx_request:
if transaction.transaction_id in self.full_node.full_node_store.peers_with_tx:
current_set = self.full_node.full_node_store.peers_with_tx[transaction.transaction_id]
if peer.peer_node_id in current_set:
return None
current_set.add(peer.peer_node_id)
return None
else:
new_set = set()
new_set.add(peer.peer_node_id)
self.full_node.full_node_store.peers_with_tx[transaction.transaction_id] = new_set
return None
self.full_node.full_node_store.pending_tx_request[transaction.transaction_id] = peer.peer_node_id
new_set = set()
new_set.add(peer.peer_node_id)
self.full_node.full_node_store.peers_with_tx[transaction.transaction_id] = new_set
async def tx_request_and_timeout(full_node: FullNode, transaction_id, task_id):
counter = 0
try:
while True:
                    # Limit to asking a few peers; it's possible that this tx got included on chain already.
                    # It's highly unlikely that the peers that advertised a tx won't respond to a request. Also, if we
                    # drop some transactions, we don't want to re-fetch too many times.
if counter == 5:
break
if transaction_id not in full_node.full_node_store.peers_with_tx:
break
peers_with_tx: Set = full_node.full_node_store.peers_with_tx[transaction_id]
if len(peers_with_tx) == 0:
break
peer_id = peers_with_tx.pop()
assert full_node.server is not None
if peer_id not in full_node.server.all_connections:
continue
peer = full_node.server.all_connections[peer_id]
request_tx = full_node_protocol.RequestTransaction(transaction.transaction_id)
msg = make_msg(ProtocolMessageTypes.request_transaction, request_tx)
await peer.send_message(msg)
await asyncio.sleep(5)
counter += 1
if full_node.mempool_manager.seen(transaction_id):
break
except asyncio.CancelledError:
pass
finally:
                # Always clean up, even on cancellation
if transaction_id in full_node.full_node_store.peers_with_tx:
full_node.full_node_store.peers_with_tx.pop(transaction_id)
if transaction_id in full_node.full_node_store.pending_tx_request:
full_node.full_node_store.pending_tx_request.pop(transaction_id)
if task_id in full_node.full_node_store.tx_fetch_tasks:
full_node.full_node_store.tx_fetch_tasks.pop(task_id)
task_id: bytes32 = bytes32(token_bytes(32))
fetch_task = asyncio.create_task(
tx_request_and_timeout(self.full_node, transaction.transaction_id, task_id)
)
self.full_node.full_node_store.tx_fetch_tasks[task_id] = fetch_task
return None
return None
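The `tx_request_and_timeout` coroutine above is a bounded retry loop over a shrinking peer set, with cleanup in a `finally` block. A self-contained sketch of that retry pattern (generic names, not the chia API):

```python
import asyncio

async def fetch_with_retries(peers, fetch, attempts=5, delay=0.01):
    """Pop one candidate peer per round; give up after `attempts` rounds,
    when the peer set is exhausted, or as soon as a fetch succeeds."""
    for _ in range(attempts):
        if not peers:
            break
        peer = peers.pop()
        result = await fetch(peer)
        if result is not None:
            return result
        await asyncio.sleep(delay)  # back off before trying the next peer
    return None

async def demo():
    async def fetch(peer):
        # Only one of the advertising peers actually responds.
        return "spend-bundle" if peer == "good-peer" else None
    return await fetch_with_retries({"slow-peer", "good-peer"}, fetch)

result = asyncio.run(demo())
print(result)  # spend-bundle
```

The real code additionally caps retries when the transaction shows up in the mempool, since a peer going silent usually means the tx was already included on chain.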
@api_request
@reply_type([ProtocolMessageTypes.respond_transaction])
async def request_transaction(self, request: full_node_protocol.RequestTransaction) -> Optional[Message]:
"""Peer has requested a full transaction from us."""
# Ignore if syncing
if self.full_node.sync_store.get_sync_mode():
return None
spend_bundle = self.full_node.mempool_manager.get_spendbundle(request.transaction_id)
if spend_bundle is None:
return None
transaction = full_node_protocol.RespondTransaction(spend_bundle)
msg = make_msg(ProtocolMessageTypes.respond_transaction, transaction)
return msg
@peer_required
@api_request
@bytes_required
async def respond_transaction(
self,
tx: full_node_protocol.RespondTransaction,
peer: ws.WSChiaConnection,
tx_bytes: bytes = b"",
test: bool = False,
) -> Optional[Message]:
"""
Receives a full transaction from peer.
If tx is added to mempool, send tx_id to others. (new_transaction)
"""
assert tx_bytes != b""
spend_name = std_hash(tx_bytes)
if spend_name in self.full_node.full_node_store.pending_tx_request:
self.full_node.full_node_store.pending_tx_request.pop(spend_name)
if spend_name in self.full_node.full_node_store.peers_with_tx:
self.full_node.full_node_store.peers_with_tx.pop(spend_name)
if self.full_node.transaction_queue.qsize() % 100 == 0 and not self.full_node.transaction_queue.empty():
self.full_node.log.debug(f"respond_transaction Waiters: {self.full_node.transaction_queue.qsize()}")
if self.full_node.transaction_queue.full():
self.full_node.dropped_tx.add(spend_name)
return None
# Higher fee means priority is a smaller number, which means it will be handled earlier
await self.full_node.transaction_queue.put(
(0, TransactionQueueEntry(tx.transaction, tx_bytes, spend_name, peer, test))
)
return None
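The comment above notes that a higher fee maps to a smaller priority number. `asyncio.PriorityQueue` pops the smallest tuple first, so entries queued as `(priority, item)` come out lowest-number-first; a quick sketch with toy items (not real `TransactionQueueEntry` objects):

```python
import asyncio

async def drain_in_priority_order():
    q = asyncio.PriorityQueue()
    # Smaller first element == handled earlier, as with (0, TransactionQueueEntry).
    await q.put((2, "low-fee"))
    await q.put((0, "high-fee"))
    await q.put((1, "mid-fee"))
    return [(await q.get())[1] for _ in range(3)]

order = asyncio.run(drain_in_priority_order())
print(order)  # ['high-fee', 'mid-fee', 'low-fee']
```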
@api_request
@reply_type([ProtocolMessageTypes.respond_proof_of_weight])
async def request_proof_of_weight(self, request: full_node_protocol.RequestProofOfWeight) -> Optional[Message]:
if self.full_node.weight_proof_handler is None:
return None
if not self.full_node.blockchain.contains_block(request.tip):
self.log.error(f"got weight proof request for unknown peak {request.tip}")
return None
if request.tip in self.full_node.pow_creation:
event = self.full_node.pow_creation[request.tip]
await event.wait()
wp = await self.full_node.weight_proof_handler.get_proof_of_weight(request.tip)
else:
event = asyncio.Event()
self.full_node.pow_creation[request.tip] = event
wp = await self.full_node.weight_proof_handler.get_proof_of_weight(request.tip)
event.set()
tips = list(self.full_node.pow_creation.keys())
if len(tips) > 4:
# Remove old from cache
for i in range(0, 4):
self.full_node.pow_creation.pop(tips[i])
if wp is None:
self.log.error(f"failed creating weight proof for peak {request.tip}")
return None
# Serialization of wp is slow
if (
self.full_node.full_node_store.serialized_wp_message_tip is not None
and self.full_node.full_node_store.serialized_wp_message_tip == request.tip
):
return self.full_node.full_node_store.serialized_wp_message
message = make_msg(
ProtocolMessageTypes.respond_proof_of_weight, full_node_protocol.RespondProofOfWeight(wp, request.tip)
)
self.full_node.full_node_store.serialized_wp_message_tip = request.tip
self.full_node.full_node_store.serialized_wp_message = message
return message
@api_request
async def respond_proof_of_weight(self, request: full_node_protocol.RespondProofOfWeight) -> Optional[Message]:
self.log.warning("Received proof of weight too late.")
return None
@api_request
@reply_type([ProtocolMessageTypes.respond_block, ProtocolMessageTypes.reject_block])
async def request_block(self, request: full_node_protocol.RequestBlock) -> Optional[Message]:
if not self.full_node.blockchain.contains_height(request.height):
reject = RejectBlock(request.height)
msg = make_msg(ProtocolMessageTypes.reject_block, reject)
return msg
header_hash: Optional[bytes32] = self.full_node.blockchain.height_to_hash(request.height)
if header_hash is None:
return make_msg(ProtocolMessageTypes.reject_block, RejectBlock(request.height))
block: Optional[FullBlock] = await self.full_node.block_store.get_full_block(header_hash)
if block is not None:
if not request.include_transaction_block and block.transactions_generator is not None:
block = dataclasses.replace(block, transactions_generator=None)
return make_msg(ProtocolMessageTypes.respond_block, full_node_protocol.RespondBlock(block))
return make_msg(ProtocolMessageTypes.reject_block, RejectBlock(request.height))
@api_request
@reply_type([ProtocolMessageTypes.respond_blocks, ProtocolMessageTypes.reject_blocks])
async def request_blocks(self, request: full_node_protocol.RequestBlocks) -> Optional[Message]:
if request.end_height < request.start_height or request.end_height - request.start_height > 32:
reject = RejectBlocks(request.start_height, request.end_height)
msg: Message = make_msg(ProtocolMessageTypes.reject_blocks, reject)
return msg
for i in range(request.start_height, request.end_height + 1):
if not self.full_node.blockchain.contains_height(uint32(i)):
reject = RejectBlocks(request.start_height, request.end_height)
msg = make_msg(ProtocolMessageTypes.reject_blocks, reject)
return msg
if not request.include_transaction_block:
blocks: List[FullBlock] = []
for i in range(request.start_height, request.end_height + 1):
header_hash_i: Optional[bytes32] = self.full_node.blockchain.height_to_hash(uint32(i))
if header_hash_i is None:
reject = RejectBlocks(request.start_height, request.end_height)
return make_msg(ProtocolMessageTypes.reject_blocks, reject)
block: Optional[FullBlock] = await self.full_node.block_store.get_full_block(header_hash_i)
if block is None:
reject = RejectBlocks(request.start_height, request.end_height)
return make_msg(ProtocolMessageTypes.reject_blocks, reject)
block = dataclasses.replace(block, transactions_generator=None)
blocks.append(block)
msg = make_msg(
ProtocolMessageTypes.respond_blocks,
full_node_protocol.RespondBlocks(request.start_height, request.end_height, blocks),
)
else:
blocks_bytes: List[bytes] = []
for i in range(request.start_height, request.end_height + 1):
header_hash_i = self.full_node.blockchain.height_to_hash(uint32(i))
if header_hash_i is None:
reject = RejectBlocks(request.start_height, request.end_height)
return make_msg(ProtocolMessageTypes.reject_blocks, reject)
block_bytes: Optional[bytes] = await self.full_node.block_store.get_full_block_bytes(header_hash_i)
if block_bytes is None:
reject = RejectBlocks(request.start_height, request.end_height)
msg = make_msg(ProtocolMessageTypes.reject_blocks, reject)
return msg
blocks_bytes.append(block_bytes)
respond_blocks_manually_streamed: bytes = (
bytes(uint32(request.start_height))
+ bytes(uint32(request.end_height))
+ len(blocks_bytes).to_bytes(4, "big", signed=False)
)
for block_bytes in blocks_bytes:
respond_blocks_manually_streamed += block_bytes
msg = make_msg(ProtocolMessageTypes.respond_blocks, respond_blocks_manually_streamed)
return msg
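The manually streamed `RespondBlocks` payload above is framed as three 4-byte big-endian integers (start height, end height, block count) followed by the raw block bytes, assuming chia's `uint32` serializes as 4 big-endian bytes, which the explicit `to_bytes(4, "big")` for the count suggests. A standalone sketch of the same framing:

```python
import struct

def frame_blocks(start_height, end_height, blocks_bytes):
    """uint32 start | uint32 end | uint32 count | concatenated block bytes."""
    header = struct.pack(">III", start_height, end_height, len(blocks_bytes))
    return header + b"".join(blocks_bytes)

payload = frame_blocks(10, 12, [b"\xaa", b"\xbb", b"\xcc"])
print(payload.hex())  # 0000000a0000000c00000003aabbcc
```

Streaming the pre-serialized block bytes this way avoids deserializing and re-serializing each `FullBlock` just to answer the request.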
@api_request
async def reject_block(self, request: full_node_protocol.RejectBlock):
self.log.debug(f"reject_block {request.height}")
@api_request
async def reject_blocks(self, request: full_node_protocol.RejectBlocks):
self.log.debug(f"reject_blocks {request.start_height} {request.end_height}")
@api_request
async def respond_blocks(self, request: full_node_protocol.RespondBlocks) -> None:
self.log.warning("Received unsolicited/late blocks")
return None
@api_request
@peer_required
async def respond_block(
self,
respond_block: full_node_protocol.RespondBlock,
peer: ws.WSChiaConnection,
) -> Optional[Message]:
"""
Receive a full block from a peer full node (or ourselves).
"""
self.log.warning(f"Received unsolicited/late block from peer {peer.get_peer_logging()}")
return None
@api_request
async def new_unfinished_block(
self, new_unfinished_block: full_node_protocol.NewUnfinishedBlock
) -> Optional[Message]:
# Ignore if syncing
if self.full_node.sync_store.get_sync_mode():
return None
block_hash = new_unfinished_block.unfinished_reward_hash
if self.full_node.full_node_store.get_unfinished_block(block_hash) is not None:
return None
# This prevents us from downloading the same block from many peers
if block_hash in self.full_node.full_node_store.requesting_unfinished_blocks:
return None
msg = make_msg(
ProtocolMessageTypes.request_unfinished_block,
full_node_protocol.RequestUnfinishedBlock(block_hash),
)
self.full_node.full_node_store.requesting_unfinished_blocks.add(block_hash)
        # However, we still want to eventually download from other peers if this peer does not respond
        # TODO: keep track of which peer it was
async def eventually_clear():
await asyncio.sleep(5)
if block_hash in self.full_node.full_node_store.requesting_unfinished_blocks:
self.full_node.full_node_store.requesting_unfinished_blocks.remove(block_hash)
asyncio.create_task(eventually_clear())
return msg
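`eventually_clear()` above is a fire-and-forget task that expires the in-flight marker if no response arrives. A minimal sketch of that self-expiring-set pattern (generic names, short TTL only for demonstration):

```python
import asyncio

def mark_requesting(requesting, key, ttl=0.01):
    """Add `key` to the in-flight set and drop it again after `ttl` seconds,
    so a peer that never responds cannot block future requests for `key`."""
    requesting.add(key)

    async def eventually_clear():
        await asyncio.sleep(ttl)
        requesting.discard(key)  # no-op if a response already removed it

    asyncio.create_task(eventually_clear())

async def demo():
    requesting = set()
    mark_requesting(requesting, "block-hash")
    assert "block-hash" in requesting      # marked while the request is in flight
    await asyncio.sleep(0.05)              # wait past the TTL
    return "block-hash" in requesting

print(asyncio.run(demo()))  # False
```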
@api_request
@reply_type([ProtocolMessageTypes.respond_unfinished_block])
async def request_unfinished_block(
self, request_unfinished_block: full_node_protocol.RequestUnfinishedBlock
) -> Optional[Message]:
unfinished_block: Optional[UnfinishedBlock] = self.full_node.full_node_store.get_unfinished_block(
request_unfinished_block.unfinished_reward_hash
)
if unfinished_block is not None:
msg = make_msg(
ProtocolMessageTypes.respond_unfinished_block,
full_node_protocol.RespondUnfinishedBlock(unfinished_block),
)
return msg
return None
@peer_required
@api_request
@bytes_required
async def respond_unfinished_block(
self,
respond_unfinished_block: full_node_protocol.RespondUnfinishedBlock,
peer: ws.WSChiaConnection,
respond_unfinished_block_bytes: bytes = b"",
) -> Optional[Message]:
if self.full_node.sync_store.get_sync_mode():
return None
await self.full_node.respond_unfinished_block(
respond_unfinished_block, peer, block_bytes=respond_unfinished_block_bytes
)
return None
@api_request
@peer_required
async def new_signage_point_or_end_of_sub_slot(
self, new_sp: full_node_protocol.NewSignagePointOrEndOfSubSlot, peer: ws.WSChiaConnection
) -> Optional[Message]:
# Ignore if syncing
if self.full_node.sync_store.get_sync_mode():
return None
if (
self.full_node.full_node_store.get_signage_point_by_index(
new_sp.challenge_hash,
new_sp.index_from_challenge,
new_sp.last_rc_infusion,
)
is not None
):
return None
if self.full_node.full_node_store.have_newer_signage_point(
new_sp.challenge_hash, new_sp.index_from_challenge, new_sp.last_rc_infusion
):
return None
if new_sp.index_from_challenge == 0 and new_sp.prev_challenge_hash is not None:
if self.full_node.full_node_store.get_sub_slot(new_sp.prev_challenge_hash) is None:
collected_eos = []
challenge_hash_to_request = new_sp.challenge_hash
last_rc = new_sp.last_rc_infusion
num_non_empty_sub_slots_seen = 0
for _ in range(30):
if num_non_empty_sub_slots_seen >= 3:
self.log.debug("Diverged from peer. Don't have the same blocks")
return None
# If this is an end of sub slot, and we don't have the prev, request the prev instead
# We want to catch up to the latest slot so we can receive signage points
full_node_request = full_node_protocol.RequestSignagePointOrEndOfSubSlot(
challenge_hash_to_request, uint8(0), last_rc
)
response = await peer.request_signage_point_or_end_of_sub_slot(full_node_request, timeout=10)
if not isinstance(response, full_node_protocol.RespondEndOfSubSlot):
self.full_node.log.debug(f"Invalid response for slot {response}")
return None
collected_eos.append(response)
if (
self.full_node.full_node_store.get_sub_slot(
response.end_of_slot_bundle.challenge_chain.challenge_chain_end_of_slot_vdf.challenge
)
is not None
or response.end_of_slot_bundle.challenge_chain.challenge_chain_end_of_slot_vdf.challenge
== self.full_node.constants.GENESIS_CHALLENGE
):
for eos in reversed(collected_eos):
await self.respond_end_of_sub_slot(eos, peer)
return None
if (
response.end_of_slot_bundle.challenge_chain.challenge_chain_end_of_slot_vdf.number_of_iterations
!= response.end_of_slot_bundle.reward_chain.end_of_slot_vdf.number_of_iterations
):
num_non_empty_sub_slots_seen += 1
challenge_hash_to_request = (
response.end_of_slot_bundle.challenge_chain.challenge_chain_end_of_slot_vdf.challenge
)
last_rc = response.end_of_slot_bundle.reward_chain.end_of_slot_vdf.challenge
self.full_node.log.warning("Failed to catch up in sub-slots")
return None
if new_sp.index_from_challenge > 0:
if (
new_sp.challenge_hash != self.full_node.constants.GENESIS_CHALLENGE
and self.full_node.full_node_store.get_sub_slot(new_sp.challenge_hash) is None
):
                # If this is a normal signage point and we don't have the end of sub slot, request the end of sub slot
full_node_request = full_node_protocol.RequestSignagePointOrEndOfSubSlot(
new_sp.challenge_hash, uint8(0), new_sp.last_rc_infusion
)
return make_msg(ProtocolMessageTypes.request_signage_point_or_end_of_sub_slot, full_node_request)
# Otherwise (we have the prev or the end of sub slot), request it normally
full_node_request = full_node_protocol.RequestSignagePointOrEndOfSubSlot(
new_sp.challenge_hash, new_sp.index_from_challenge, new_sp.last_rc_infusion
)
return make_msg(ProtocolMessageTypes.request_signage_point_or_end_of_sub_slot, full_node_request)
@api_request
@reply_type([ProtocolMessageTypes.respond_signage_point, ProtocolMessageTypes.respond_end_of_sub_slot])
async def request_signage_point_or_end_of_sub_slot(
self, request: full_node_protocol.RequestSignagePointOrEndOfSubSlot
) -> Optional[Message]:
if request.index_from_challenge == 0:
sub_slot: Optional[Tuple[EndOfSubSlotBundle, int, uint128]] = self.full_node.full_node_store.get_sub_slot(
request.challenge_hash
)
if sub_slot is not None:
return make_msg(
ProtocolMessageTypes.respond_end_of_sub_slot,
full_node_protocol.RespondEndOfSubSlot(sub_slot[0]),
)
else:
if self.full_node.full_node_store.get_sub_slot(request.challenge_hash) is None:
if request.challenge_hash != self.full_node.constants.GENESIS_CHALLENGE:
self.log.info(f"Don't have challenge hash {request.challenge_hash}")
sp: Optional[SignagePoint] = self.full_node.full_node_store.get_signage_point_by_index(
request.challenge_hash,
request.index_from_challenge,
request.last_rc_infusion,
)
if sp is not None:
assert (
sp.cc_vdf is not None
and sp.cc_proof is not None
and sp.rc_vdf is not None
and sp.rc_proof is not None
)
full_node_response = full_node_protocol.RespondSignagePoint(
request.index_from_challenge,
sp.cc_vdf,
sp.cc_proof,
sp.rc_vdf,
sp.rc_proof,
)
return make_msg(ProtocolMessageTypes.respond_signage_point, full_node_response)
else:
self.log.info(f"Don't have signage point {request}")
return None
@peer_required
@api_request
async def respond_signage_point(
self, request: full_node_protocol.RespondSignagePoint, peer: ws.WSChiaConnection
) -> Optional[Message]:
if self.full_node.sync_store.get_sync_mode():
return None
async with self.full_node.timelord_lock:
# Already have signage point
if self.full_node.full_node_store.have_newer_signage_point(
request.challenge_chain_vdf.challenge,
request.index_from_challenge,
request.reward_chain_vdf.challenge,
):
return None
existing_sp = self.full_node.full_node_store.get_signage_point(
request.challenge_chain_vdf.output.get_hash()
)
if existing_sp is not None and existing_sp.rc_vdf == request.reward_chain_vdf:
return None
peak = self.full_node.blockchain.get_peak()
if peak is not None and peak.height > self.full_node.constants.MAX_SUB_SLOT_BLOCKS:
next_sub_slot_iters = self.full_node.blockchain.get_next_slot_iters(peak.header_hash, True)
sub_slots_for_peak = await self.full_node.blockchain.get_sp_and_ip_sub_slots(peak.header_hash)
assert sub_slots_for_peak is not None
ip_sub_slot: Optional[EndOfSubSlotBundle] = sub_slots_for_peak[1]
else:
sub_slot_iters = self.full_node.constants.SUB_SLOT_ITERS_STARTING
next_sub_slot_iters = sub_slot_iters
ip_sub_slot = None
added = self.full_node.full_node_store.new_signage_point(
request.index_from_challenge,
self.full_node.blockchain,
self.full_node.blockchain.get_peak(),
next_sub_slot_iters,
SignagePoint(
request.challenge_chain_vdf,
request.challenge_chain_proof,
request.reward_chain_vdf,
request.reward_chain_proof,
),
)
if added:
await self.full_node.signage_point_post_processing(request, peer, ip_sub_slot)
else:
self.log.debug(
f"Signage point {request.index_from_challenge} not added, CC challenge: "
f"{request.challenge_chain_vdf.challenge}, RC challenge: {request.reward_chain_vdf.challenge}"
)
return None
@peer_required
@api_request
async def respond_end_of_sub_slot(
self, request: full_node_protocol.RespondEndOfSubSlot, peer: ws.WSChiaConnection
) -> Optional[Message]:
if self.full_node.sync_store.get_sync_mode():
return None
msg, _ = await self.full_node.respond_end_of_sub_slot(request, peer)
return msg
@peer_required
@api_request
async def request_mempool_transactions(
self,
request: full_node_protocol.RequestMempoolTransactions,
peer: ws.WSChiaConnection,
) -> Optional[Message]:
received_filter = PyBIP158(bytearray(request.filter))
items: List[MempoolItem] = await self.full_node.mempool_manager.get_items_not_in_filter(received_filter)
for item in items:
transaction = full_node_protocol.RespondTransaction(item.spend_bundle)
msg = make_msg(ProtocolMessageTypes.respond_transaction, transaction)
await peer.send_message(msg)
return None
# FARMER PROTOCOL
@api_request
@peer_required
async def declare_proof_of_space(
self, request: farmer_protocol.DeclareProofOfSpace, peer: ws.WSChiaConnection
) -> Optional[Message]:
"""
Creates a block body and header, with the proof of space, coinbase, and fee targets provided
by the farmer, and sends the hash of the header data back to the farmer.
"""
if self.full_node.sync_store.get_sync_mode():
return None
async with self.full_node.timelord_lock:
sp_vdfs: Optional[SignagePoint] = self.full_node.full_node_store.get_signage_point(
request.challenge_chain_sp
)
if sp_vdfs is None:
self.log.warning(f"Received proof of space for an unknown signage point {request.challenge_chain_sp}")
return None
if request.signage_point_index > 0:
assert sp_vdfs.rc_vdf is not None
if sp_vdfs.rc_vdf.output.get_hash() != request.reward_chain_sp:
self.log.debug(
f"Received proof of space for a potentially old signage point {request.challenge_chain_sp}. "
f"Current sp: {sp_vdfs.rc_vdf.output.get_hash()}"
)
return None
if request.signage_point_index == 0:
cc_challenge_hash: bytes32 = request.challenge_chain_sp
else:
assert sp_vdfs.cc_vdf is not None
cc_challenge_hash = sp_vdfs.cc_vdf.challenge
pos_sub_slot: Optional[Tuple[EndOfSubSlotBundle, int, uint128]] = None
if request.challenge_hash != self.full_node.constants.GENESIS_CHALLENGE:
# Checks that the proof of space is a response to a recent challenge and valid SP
pos_sub_slot = self.full_node.full_node_store.get_sub_slot(cc_challenge_hash)
if pos_sub_slot is None:
self.log.warning(f"Received proof of space for an unknown sub slot: {request}")
return None
total_iters_pos_slot: uint128 = pos_sub_slot[2]
else:
total_iters_pos_slot = uint128(0)
assert cc_challenge_hash == request.challenge_hash
# Now we know that the proof of space has a signage point either:
# 1. In the previous sub-slot of the peak (overflow)
# 2. In the same sub-slot as the peak
# 3. In a future sub-slot that we already know of
# Checks that the proof of space is valid
quality_string: Optional[bytes32] = request.proof_of_space.verify_and_get_quality_string(
self.full_node.constants, cc_challenge_hash, request.challenge_chain_sp
)
assert quality_string is not None and len(quality_string) == 32
# Grab best transactions from Mempool for given tip target
aggregate_signature: G2Element = G2Element()
block_generator: Optional[BlockGenerator] = None
additions: Optional[List[Coin]] = []
removals: Optional[List[Coin]] = []
async with self.full_node._blockchain_lock_high_priority:
peak: Optional[BlockRecord] = self.full_node.blockchain.get_peak()
if peak is not None:
# Finds the last transaction block before this one
curr_l_tb: BlockRecord = peak
while not curr_l_tb.is_transaction_block:
curr_l_tb = self.full_node.blockchain.block_record(curr_l_tb.prev_hash)
try:
mempool_bundle = await self.full_node.mempool_manager.create_bundle_from_mempool(
curr_l_tb.header_hash
)
except Exception as e:
self.log.error(f"Traceback: {traceback.format_exc()}")
self.full_node.log.error(f"Error making spend bundle {e} peak: {peak}")
mempool_bundle = None
if mempool_bundle is not None:
spend_bundle = mempool_bundle[0]
additions = mempool_bundle[1]
removals = mempool_bundle[2]
self.full_node.log.info(f"Add rem: {len(additions)} {len(removals)}")
aggregate_signature = spend_bundle.aggregated_signature
if self.full_node.full_node_store.previous_generator is not None:
self.log.info(
f"Using previous generator for height "
f"{self.full_node.full_node_store.previous_generator}"
)
block_generator = best_solution_generator_from_template(
self.full_node.full_node_store.previous_generator, spend_bundle
)
else:
block_generator = simple_solution_generator(spend_bundle)
def get_plot_sig(to_sign, _) -> G2Element:
if to_sign == request.challenge_chain_sp:
return request.challenge_chain_sp_signature
elif to_sign == request.reward_chain_sp:
return request.reward_chain_sp_signature
return G2Element()
def get_pool_sig(_1, _2) -> Optional[G2Element]:
return request.pool_signature
prev_b: Optional[BlockRecord] = self.full_node.blockchain.get_peak()
# Finds the previous block from the signage point, ensuring that the reward chain VDF is correct
if prev_b is not None:
if request.signage_point_index == 0:
if pos_sub_slot is None:
self.log.warning("Pos sub slot is None")
return None
rc_challenge = pos_sub_slot[0].reward_chain.end_of_slot_vdf.challenge
else:
assert sp_vdfs.rc_vdf is not None
rc_challenge = sp_vdfs.rc_vdf.challenge
# Backtrack through empty sub-slots
for eos, _, _ in reversed(self.full_node.full_node_store.finished_sub_slots):
if eos is not None and eos.reward_chain.get_hash() == rc_challenge:
rc_challenge = eos.reward_chain.end_of_slot_vdf.challenge
found = False
attempts = 0
while prev_b is not None and attempts < 10:
if prev_b.reward_infusion_new_challenge == rc_challenge:
found = True
break
if prev_b.finished_reward_slot_hashes is not None and len(prev_b.finished_reward_slot_hashes) > 0:
if prev_b.finished_reward_slot_hashes[-1] == rc_challenge:
# This block includes a sub-slot which is where our SP vdf starts. Go back one more
# to find the prev block
prev_b = self.full_node.blockchain.try_block_record(prev_b.prev_hash)
found = True
break
prev_b = self.full_node.blockchain.try_block_record(prev_b.prev_hash)
attempts += 1
if not found:
self.log.warning("Did not find a previous block with the correct reward chain hash")
return None
try:
finished_sub_slots: Optional[
List[EndOfSubSlotBundle]
] = self.full_node.full_node_store.get_finished_sub_slots(
self.full_node.blockchain, prev_b, cc_challenge_hash
)
if finished_sub_slots is None:
return None
if (
len(finished_sub_slots) > 0
and pos_sub_slot is not None
and finished_sub_slots[-1] != pos_sub_slot[0]
):
self.log.error("Have different sub-slots than is required to farm this block")
return None
except ValueError as e:
self.log.warning(f"Value Error: {e}")
return None
if prev_b is None:
pool_target = PoolTarget(
self.full_node.constants.GENESIS_PRE_FARM_POOL_PUZZLE_HASH,
uint32(0),
)
farmer_ph = self.full_node.constants.GENESIS_PRE_FARM_FARMER_PUZZLE_HASH
else:
farmer_ph = request.farmer_puzzle_hash
if request.proof_of_space.pool_contract_puzzle_hash is not None:
pool_target = PoolTarget(request.proof_of_space.pool_contract_puzzle_hash, uint32(0))
else:
assert request.pool_target is not None
pool_target = request.pool_target
if peak is None or peak.height <= self.full_node.constants.MAX_SUB_SLOT_BLOCKS:
difficulty = self.full_node.constants.DIFFICULTY_STARTING
sub_slot_iters = self.full_node.constants.SUB_SLOT_ITERS_STARTING
else:
difficulty = uint64(peak.weight - self.full_node.blockchain.block_record(peak.prev_hash).weight)
sub_slot_iters = peak.sub_slot_iters
for sub_slot in finished_sub_slots:
if sub_slot.challenge_chain.new_difficulty is not None:
difficulty = sub_slot.challenge_chain.new_difficulty
if sub_slot.challenge_chain.new_sub_slot_iters is not None:
sub_slot_iters = sub_slot.challenge_chain.new_sub_slot_iters
required_iters: uint64 = calculate_iterations_quality(
self.full_node.constants.DIFFICULTY_CONSTANT_FACTOR,
quality_string,
request.proof_of_space.size,
difficulty,
request.challenge_chain_sp,
)
sp_iters: uint64 = calculate_sp_iters(self.full_node.constants, sub_slot_iters, request.signage_point_index)
ip_iters: uint64 = calculate_ip_iters(
self.full_node.constants,
sub_slot_iters,
request.signage_point_index,
required_iters,
)
# The block's timestamp must be greater than the previous transaction block's timestamp
timestamp = uint64(int(time.time()))
curr: Optional[BlockRecord] = prev_b
while curr is not None and not curr.is_transaction_block and curr.height != 0:
curr = self.full_node.blockchain.try_block_record(curr.prev_hash)
if curr is not None:
assert curr.timestamp is not None
if timestamp <= curr.timestamp:
timestamp = uint64(int(curr.timestamp + 1))
self.log.info("Starting to make the unfinished block")
unfinished_block: UnfinishedBlock = create_unfinished_block(
self.full_node.constants,
total_iters_pos_slot,
sub_slot_iters,
request.signage_point_index,
sp_iters,
ip_iters,
request.proof_of_space,
cc_challenge_hash,
farmer_ph,
pool_target,
get_plot_sig,
get_pool_sig,
sp_vdfs,
timestamp,
self.full_node.blockchain,
b"",
block_generator,
aggregate_signature,
additions,
removals,
prev_b,
finished_sub_slots,
)
self.log.info("Made the unfinished block")
if prev_b is not None:
height: uint32 = uint32(prev_b.height + 1)
else:
height = uint32(0)
self.full_node.full_node_store.add_candidate_block(quality_string, height, unfinished_block)
foliage_sb_data_hash = unfinished_block.foliage.foliage_block_data.get_hash()
if unfinished_block.is_transaction_block():
foliage_transaction_block_hash = unfinished_block.foliage.foliage_transaction_block_hash
else:
foliage_transaction_block_hash = bytes32([0] * 32)
assert foliage_transaction_block_hash is not None
message = farmer_protocol.RequestSignedValues(
quality_string,
foliage_sb_data_hash,
foliage_transaction_block_hash,
)
await peer.send_message(make_msg(ProtocolMessageTypes.request_signed_values, message))
# Adds backup in case the first one fails
if unfinished_block.is_transaction_block() and unfinished_block.transactions_generator is not None:
unfinished_block_backup = create_unfinished_block(
self.full_node.constants,
total_iters_pos_slot,
sub_slot_iters,
request.signage_point_index,
sp_iters,
ip_iters,
request.proof_of_space,
cc_challenge_hash,
farmer_ph,
pool_target,
get_plot_sig,
get_pool_sig,
sp_vdfs,
timestamp,
self.full_node.blockchain,
b"",
None,
G2Element(),
None,
None,
prev_b,
finished_sub_slots,
)
self.full_node.full_node_store.add_candidate_block(
quality_string, height, unfinished_block_backup, backup=True
)
return None
@api_request
@peer_required
async def signed_values(
self, farmer_request: farmer_protocol.SignedValues, peer: ws.WSChiaConnection
) -> Optional[Message]:
"""
Signature of header hash, by the harvester. This is enough to create an unfinished
block, which only needs a Proof of Time to be finished. If the signature is valid,
we call the unfinished_block routine.
"""
candidate_tuple: Optional[Tuple[uint32, UnfinishedBlock]] = self.full_node.full_node_store.get_candidate_block(
farmer_request.quality_string
)
if candidate_tuple is None:
self.log.warning(f"Quality string {farmer_request.quality_string} not found in database")
return None
height, candidate = candidate_tuple
if not AugSchemeMPL.verify(
candidate.reward_chain_block.proof_of_space.plot_public_key,
candidate.foliage.foliage_block_data.get_hash(),
farmer_request.foliage_block_data_signature,
):
self.log.warning("Signature not valid. There might be a collision in plots. Ignore this during tests.")
return None
fsb2 = dataclasses.replace(
candidate.foliage,
foliage_block_data_signature=farmer_request.foliage_block_data_signature,
)
if candidate.is_transaction_block():
fsb2 = dataclasses.replace(
fsb2, foliage_transaction_block_signature=farmer_request.foliage_transaction_block_signature
)
new_candidate = dataclasses.replace(candidate, foliage=fsb2)
if not self.full_node.has_valid_pool_sig(new_candidate):
self.log.warning("Trying to make a pre-farm block but height is not 0")
return None
# Propagate to ourselves (which validates and does further propagations)
request = full_node_protocol.RespondUnfinishedBlock(new_candidate)
try:
await self.full_node.respond_unfinished_block(request, None, True)
except Exception as e:
# If we have an error with this block, try making an empty block
self.full_node.log.error(f"Error farming block {e} {request}")
candidate_tuple = self.full_node.full_node_store.get_candidate_block(
farmer_request.quality_string, backup=True
)
if candidate_tuple is not None:
height, unfinished_block = candidate_tuple
self.full_node.full_node_store.add_candidate_block(
farmer_request.quality_string, height, unfinished_block, False
)
# All unfinished blocks that we create will have the foliage transaction block and hash
assert unfinished_block.foliage.foliage_transaction_block_hash is not None
message = farmer_protocol.RequestSignedValues(
farmer_request.quality_string,
unfinished_block.foliage.foliage_block_data.get_hash(),
unfinished_block.foliage.foliage_transaction_block_hash,
)
await peer.send_message(make_msg(ProtocolMessageTypes.request_signed_values, message))
return None
# TIMELORD PROTOCOL
@peer_required
@api_request
async def new_infusion_point_vdf(
self, request: timelord_protocol.NewInfusionPointVDF, peer: ws.WSChiaConnection
) -> Optional[Message]:
if self.full_node.sync_store.get_sync_mode():
return None
# Lookup unfinished blocks
async with self.full_node.timelord_lock:
return await self.full_node.new_infusion_point_vdf(request, peer)
@peer_required
@api_request
async def new_signage_point_vdf(
self, request: timelord_protocol.NewSignagePointVDF, peer: ws.WSChiaConnection
) -> None:
if self.full_node.sync_store.get_sync_mode():
return None
full_node_message = full_node_protocol.RespondSignagePoint(
request.index_from_challenge,
request.challenge_chain_sp_vdf,
request.challenge_chain_sp_proof,
request.reward_chain_sp_vdf,
request.reward_chain_sp_proof,
)
await self.respond_signage_point(full_node_message, peer)
@peer_required
@api_request
async def new_end_of_sub_slot_vdf(
self, request: timelord_protocol.NewEndOfSubSlotVDF, peer: ws.WSChiaConnection
) -> Optional[Message]:
if self.full_node.sync_store.get_sync_mode():
return None
if (
self.full_node.full_node_store.get_sub_slot(request.end_of_sub_slot_bundle.challenge_chain.get_hash())
is not None
):
return None
# Calls our own internal message to handle the end of sub slot, and potentially broadcasts to other peers.
full_node_message = full_node_protocol.RespondEndOfSubSlot(request.end_of_sub_slot_bundle)
msg, added = await self.full_node.respond_end_of_sub_slot(full_node_message, peer)
if not added:
self.log.error(
f"Was not able to add end of sub-slot: "
f"{request.end_of_sub_slot_bundle.challenge_chain.challenge_chain_end_of_slot_vdf.challenge}. "
f"Re-sending new-peak to timelord"
)
await self.full_node.send_peak_to_timelords(peer=peer)
return None
else:
return msg
@api_request
async def request_block_header(self, request: wallet_protocol.RequestBlockHeader) -> Optional[Message]:
header_hash = self.full_node.blockchain.height_to_hash(request.height)
if header_hash is None:
msg = make_msg(ProtocolMessageTypes.reject_header_request, RejectHeaderRequest(request.height))
return msg
block: Optional[FullBlock] = await self.full_node.block_store.get_full_block(header_hash)
if block is not None:
tx_removals, tx_additions, _ = await self.full_node.blockchain.get_tx_removals_and_additions(block)
header_block = get_block_header(block, tx_additions, tx_removals)
msg = make_msg(
ProtocolMessageTypes.respond_block_header,
wallet_protocol.RespondBlockHeader(header_block),
)
return msg
return None
@api_request
async def request_additions(self, request: wallet_protocol.RequestAdditions) -> Optional[Message]:
if request.header_hash is None:
header_hash: Optional[bytes32] = self.full_node.blockchain.height_to_hash(request.height)
else:
header_hash = request.header_hash
if header_hash is None:
raise ValueError(f"Block at height {request.height} not found")
block: Optional[FullBlock] = await self.full_node.block_store.get_full_block(header_hash)
# We lock so that the coin store does not get modified
if (
block is None
or block.is_transaction_block() is False
or self.full_node.blockchain.height_to_hash(block.height) != request.header_hash
):
reject = wallet_protocol.RejectAdditionsRequest(request.height, header_hash)
msg = make_msg(ProtocolMessageTypes.reject_additions_request, reject)
return msg
assert block is not None and block.foliage_transaction_block is not None
        # Note: this might return bad data if there is a reorg during this time
additions = await self.full_node.coin_store.get_coins_added_at_height(block.height)
if self.full_node.blockchain.height_to_hash(block.height) != request.header_hash:
raise ValueError(f"Block {block.header_hash} no longer in chain")
puzzlehash_coins_map: Dict[bytes32, List[Coin]] = {}
for coin_record in additions:
if coin_record.coin.puzzle_hash in puzzlehash_coins_map:
puzzlehash_coins_map[coin_record.coin.puzzle_hash].append(coin_record.coin)
else:
puzzlehash_coins_map[coin_record.coin.puzzle_hash] = [coin_record.coin]
coins_map: List[Tuple[bytes32, List[Coin]]] = []
proofs_map: List[Tuple[bytes32, bytes, Optional[bytes]]] = []
if request.puzzle_hashes is None:
for puzzle_hash, coins in puzzlehash_coins_map.items():
coins_map.append((puzzle_hash, coins))
response = wallet_protocol.RespondAdditions(block.height, block.header_hash, coins_map, None)
else:
# Create addition Merkle set
addition_merkle_set = MerkleSet()
# Addition Merkle set contains puzzlehash and hash of all coins with that puzzlehash
for puzzle, coins in puzzlehash_coins_map.items():
addition_merkle_set.add_already_hashed(puzzle)
addition_merkle_set.add_already_hashed(hash_coin_list(coins))
assert addition_merkle_set.get_root() == block.foliage_transaction_block.additions_root
for puzzle_hash in request.puzzle_hashes:
result, proof = addition_merkle_set.is_included_already_hashed(puzzle_hash)
if puzzle_hash in puzzlehash_coins_map:
coins_map.append((puzzle_hash, puzzlehash_coins_map[puzzle_hash]))
hash_coin_str = hash_coin_list(puzzlehash_coins_map[puzzle_hash])
result_2, proof_2 = addition_merkle_set.is_included_already_hashed(hash_coin_str)
assert result
assert result_2
proofs_map.append((puzzle_hash, proof, proof_2))
else:
coins_map.append((puzzle_hash, []))
assert not result
proofs_map.append((puzzle_hash, proof, None))
response = wallet_protocol.RespondAdditions(block.height, block.header_hash, coins_map, proofs_map)
msg = make_msg(ProtocolMessageTypes.respond_additions, response)
return msg
@api_request
async def request_removals(self, request: wallet_protocol.RequestRemovals) -> Optional[Message]:
block: Optional[FullBlock] = await self.full_node.block_store.get_full_block(request.header_hash)
# We lock so that the coin store does not get modified
peak_height = self.full_node.blockchain.get_peak_height()
if (
block is None
or block.is_transaction_block() is False
or block.height != request.height
or (peak_height is not None and block.height > peak_height)
or self.full_node.blockchain.height_to_hash(block.height) != request.header_hash
):
reject = wallet_protocol.RejectRemovalsRequest(request.height, request.header_hash)
msg = make_msg(ProtocolMessageTypes.reject_removals_request, reject)
return msg
assert block is not None and block.foliage_transaction_block is not None
        # Note: this might return bad data if there is a reorg during this time
all_removals: List[CoinRecord] = await self.full_node.coin_store.get_coins_removed_at_height(block.height)
if self.full_node.blockchain.height_to_hash(block.height) != request.header_hash:
raise ValueError(f"Block {block.header_hash} no longer in chain")
all_removals_dict: Dict[bytes32, Coin] = {}
for coin_record in all_removals:
all_removals_dict[coin_record.coin.name()] = coin_record.coin
coins_map: List[Tuple[bytes32, Optional[Coin]]] = []
proofs_map: List[Tuple[bytes32, bytes]] = []
# If there are no transactions, respond with empty lists
if block.transactions_generator is None:
proofs: Optional[List]
if request.coin_names is None:
proofs = None
else:
proofs = []
response = wallet_protocol.RespondRemovals(block.height, block.header_hash, [], proofs)
elif request.coin_names is None or len(request.coin_names) == 0:
for removed_name, removed_coin in all_removals_dict.items():
coins_map.append((removed_name, removed_coin))
response = wallet_protocol.RespondRemovals(block.height, block.header_hash, coins_map, None)
else:
assert block.transactions_generator
removal_merkle_set = MerkleSet()
for removed_name, removed_coin in all_removals_dict.items():
removal_merkle_set.add_already_hashed(removed_name)
assert removal_merkle_set.get_root() == block.foliage_transaction_block.removals_root
for coin_name in request.coin_names:
result, proof = removal_merkle_set.is_included_already_hashed(coin_name)
proofs_map.append((coin_name, proof))
if coin_name in all_removals_dict:
removed_coin = all_removals_dict[coin_name]
coins_map.append((coin_name, removed_coin))
assert result
else:
coins_map.append((coin_name, None))
assert not result
response = wallet_protocol.RespondRemovals(block.height, block.header_hash, coins_map, proofs_map)
msg = make_msg(ProtocolMessageTypes.respond_removals, response)
return msg
@api_request
async def send_transaction(self, request: wallet_protocol.SendTransaction, *, test=False) -> Optional[Message]:
spend_name = request.transaction.name()
await self.full_node.transaction_queue.put(
(0, TransactionQueueEntry(request.transaction, None, spend_name, None, test))
)
# Waits for the transaction to go into the mempool, times out after 45 seconds.
status, error = None, None
sleep_time = 0.01
for i in range(int(45 / sleep_time)):
await asyncio.sleep(sleep_time)
for potential_name, potential_status, potential_error in self.full_node.transaction_responses:
if spend_name == potential_name:
status = potential_status
error = potential_error
break
if status is not None:
break
if status is None:
response = wallet_protocol.TransactionAck(spend_name, uint8(MempoolInclusionStatus.PENDING), None)
else:
error_name = error.name if error is not None else None
if status == MempoolInclusionStatus.SUCCESS:
response = wallet_protocol.TransactionAck(spend_name, uint8(status.value), error_name)
else:
                # If it failed/pending, but it previously succeeded (in mempool), this is idempotence, return SUCCESS
if self.full_node.mempool_manager.get_spendbundle(spend_name) is not None:
response = wallet_protocol.TransactionAck(
spend_name, uint8(MempoolInclusionStatus.SUCCESS.value), None
)
else:
response = wallet_protocol.TransactionAck(spend_name, uint8(status.value), error_name)
msg = make_msg(ProtocolMessageTypes.transaction_ack, response)
return msg
@api_request
async def request_puzzle_solution(self, request: wallet_protocol.RequestPuzzleSolution) -> Optional[Message]:
coin_name = request.coin_name
height = request.height
coin_record = await self.full_node.coin_store.get_coin_record(coin_name)
reject = wallet_protocol.RejectPuzzleSolution(coin_name, height)
reject_msg = make_msg(ProtocolMessageTypes.reject_puzzle_solution, reject)
if coin_record is None or coin_record.spent_block_index != height:
return reject_msg
header_hash: Optional[bytes32] = self.full_node.blockchain.height_to_hash(height)
if header_hash is None:
return reject_msg
block: Optional[FullBlock] = await self.full_node.block_store.get_full_block(header_hash)
if block is None or block.transactions_generator is None:
return reject_msg
block_generator: Optional[BlockGenerator] = await self.full_node.blockchain.get_block_generator(block)
assert block_generator is not None
error, puzzle, solution = get_puzzle_and_solution_for_coin(
block_generator, coin_name, self.full_node.constants.MAX_BLOCK_COST_CLVM
)
if error is not None:
return reject_msg
pz = Program.to(puzzle)
sol = Program.to(solution)
wrapper = PuzzleSolutionResponse(coin_name, height, pz, sol)
response = wallet_protocol.RespondPuzzleSolution(wrapper)
response_msg = make_msg(ProtocolMessageTypes.respond_puzzle_solution, response)
return response_msg
@api_request
async def request_header_blocks(self, request: wallet_protocol.RequestHeaderBlocks) -> Optional[Message]:
if request.end_height < request.start_height or request.end_height - request.start_height > 32:
return None
header_hashes: List[bytes32] = []
for i in range(request.start_height, request.end_height + 1):
header_hash: Optional[bytes32] = self.full_node.blockchain.height_to_hash(uint32(i))
if header_hash is None:
reject = RejectHeaderBlocks(request.start_height, request.end_height)
msg = make_msg(ProtocolMessageTypes.reject_header_blocks, reject)
return msg
header_hashes.append(header_hash)
blocks: List[FullBlock] = await self.full_node.block_store.get_blocks_by_hash(header_hashes)
header_blocks = []
for block in blocks:
added_coins_records = await self.full_node.coin_store.get_coins_added_at_height(block.height)
removed_coins_records = await self.full_node.coin_store.get_coins_removed_at_height(block.height)
added_coins = [record.coin for record in added_coins_records if not record.coinbase]
removal_names = [record.coin.name() for record in removed_coins_records]
header_block = get_block_header(block, added_coins, removal_names)
header_blocks.append(header_block)
msg = make_msg(
ProtocolMessageTypes.respond_header_blocks,
wallet_protocol.RespondHeaderBlocks(request.start_height, request.end_height, header_blocks),
)
return msg
@api_request
async def respond_compact_proof_of_time(self, request: timelord_protocol.RespondCompactProofOfTime):
if self.full_node.sync_store.get_sync_mode():
return None
await self.full_node.respond_compact_proof_of_time(request)
@execute_task
@peer_required
@api_request
@bytes_required
async def new_compact_vdf(
self, request: full_node_protocol.NewCompactVDF, peer: ws.WSChiaConnection, request_bytes: bytes = b""
):
if self.full_node.sync_store.get_sync_mode():
return None
if len(self.full_node.compact_vdf_sem._waiters) > 20:
self.log.debug(f"Ignoring NewCompactVDF: {request}, _waiters")
return
name = std_hash(request_bytes)
if name in self.full_node.compact_vdf_requests:
self.log.debug(f"Ignoring NewCompactVDF: {request}, already requested")
return
self.full_node.compact_vdf_requests.add(name)
        # this semaphore will only allow a limited number of tasks to call
        # new_compact_vdf() at a time, since it can be expensive
async with self.full_node.compact_vdf_sem:
try:
await self.full_node.new_compact_vdf(request, peer)
finally:
self.full_node.compact_vdf_requests.remove(name)
@peer_required
@api_request
@reply_type([ProtocolMessageTypes.respond_compact_vdf])
async def request_compact_vdf(self, request: full_node_protocol.RequestCompactVDF, peer: ws.WSChiaConnection):
if self.full_node.sync_store.get_sync_mode():
return None
await self.full_node.request_compact_vdf(request, peer)
@peer_required
@api_request
async def respond_compact_vdf(self, request: full_node_protocol.RespondCompactVDF, peer: ws.WSChiaConnection):
if self.full_node.sync_store.get_sync_mode():
return None
await self.full_node.respond_compact_vdf(request, peer)
@peer_required
@api_request
async def register_interest_in_puzzle_hash(
self, request: wallet_protocol.RegisterForPhUpdates, peer: ws.WSChiaConnection
):
if peer.peer_node_id not in self.full_node.peer_puzzle_hash:
self.full_node.peer_puzzle_hash[peer.peer_node_id] = set()
if peer.peer_node_id not in self.full_node.peer_sub_counter:
self.full_node.peer_sub_counter[peer.peer_node_id] = 0
hint_coin_ids = []
# Add peer to the "Subscribed" dictionary
max_items = self.full_node.config.get("max_subscribe_items", 200000)
for puzzle_hash in request.puzzle_hashes:
ph_hint_coins = await self.full_node.hint_store.get_coin_ids(puzzle_hash)
hint_coin_ids.extend(ph_hint_coins)
if puzzle_hash not in self.full_node.ph_subscriptions:
self.full_node.ph_subscriptions[puzzle_hash] = set()
if (
peer.peer_node_id not in self.full_node.ph_subscriptions[puzzle_hash]
and self.full_node.peer_sub_counter[peer.peer_node_id] < max_items
):
self.full_node.ph_subscriptions[puzzle_hash].add(peer.peer_node_id)
self.full_node.peer_puzzle_hash[peer.peer_node_id].add(puzzle_hash)
self.full_node.peer_sub_counter[peer.peer_node_id] += 1
# Send all coins with requested puzzle hash that have been created after the specified height
states: List[CoinState] = await self.full_node.coin_store.get_coin_states_by_puzzle_hashes(
include_spent_coins=True, puzzle_hashes=request.puzzle_hashes, min_height=request.min_height
)
if len(hint_coin_ids) > 0:
hint_states = await self.full_node.coin_store.get_coin_states_by_ids(
include_spent_coins=True, coin_ids=hint_coin_ids, min_height=request.min_height
)
states.extend(hint_states)
response = wallet_protocol.RespondToPhUpdates(request.puzzle_hashes, request.min_height, states)
msg = make_msg(ProtocolMessageTypes.respond_to_ph_update, response)
return msg
@peer_required
@api_request
async def register_interest_in_coin(
self, request: wallet_protocol.RegisterForCoinUpdates, peer: ws.WSChiaConnection
):
if peer.peer_node_id not in self.full_node.peer_coin_ids:
self.full_node.peer_coin_ids[peer.peer_node_id] = set()
if peer.peer_node_id not in self.full_node.peer_sub_counter:
self.full_node.peer_sub_counter[peer.peer_node_id] = 0
max_items = self.full_node.config.get("max_subscribe_items", 200000)
for coin_id in request.coin_ids:
if coin_id not in self.full_node.coin_subscriptions:
self.full_node.coin_subscriptions[coin_id] = set()
if (
peer.peer_node_id not in self.full_node.coin_subscriptions[coin_id]
and self.full_node.peer_sub_counter[peer.peer_node_id] < max_items
):
self.full_node.coin_subscriptions[coin_id].add(peer.peer_node_id)
self.full_node.peer_coin_ids[peer.peer_node_id].add(coin_id)
self.full_node.peer_sub_counter[peer.peer_node_id] += 1
states: List[CoinState] = await self.full_node.coin_store.get_coin_states_by_ids(
include_spent_coins=True, coin_ids=request.coin_ids, min_height=request.min_height
)
response = wallet_protocol.RespondToCoinUpdates(request.coin_ids, request.min_height, states)
msg = make_msg(ProtocolMessageTypes.respond_to_coin_update, response)
return msg
@api_request
async def request_children(self, request: wallet_protocol.RequestChildren) -> Optional[Message]:
coin_records: List[CoinRecord] = await self.full_node.coin_store.get_coin_records_by_parent_ids(
True, [request.coin_name]
)
states = [record.coin_state for record in coin_records]
response = wallet_protocol.RespondChildren(states)
msg = make_msg(ProtocolMessageTypes.respond_children, response)
return msg
@api_request
async def request_ses_hashes(self, request: wallet_protocol.RequestSESInfo):
"""Returns the start and end height of a sub-epoch for the height specified in request"""
ses_height = self.full_node.blockchain.get_ses_heights()
start_height = request.start_height
end_height = request.end_height
ses_hash_heights = []
ses_reward_hashes = []
for idx, ses_start_height in enumerate(ses_height):
if idx == len(ses_height) - 1:
break
next_ses_height = ses_height[idx + 1]
# start_ses_hash
if ses_start_height <= start_height < next_ses_height:
ses_hash_heights.append([ses_start_height, next_ses_height])
ses: SubEpochSummary = self.full_node.blockchain.get_ses(ses_start_height)
ses_reward_hashes.append(ses.reward_chain_hash)
if ses_start_height < end_height < next_ses_height:
break
else:
if idx == len(ses_height) - 2:
break
# else add extra ses as request start <-> end spans two ses
next_next_height = ses_height[idx + 2]
ses_hash_heights.append([next_ses_height, next_next_height])
nex_ses: SubEpochSummary = self.full_node.blockchain.get_ses(next_ses_height)
ses_reward_hashes.append(nex_ses.reward_chain_hash)
break
response = RespondSESInfo(ses_reward_hashes, ses_hash_heights)
msg = make_msg(ProtocolMessageTypes.respond_ses_hashes, response)
return msg
| chia/full_node/full_node_api.py | 72,148 | this semaphore limits the number of tasks that can call new_peak() at the same time, since it can be expensive Ignore if syncing Ignore if already seen If there's current pending request just add this peer to the set of peers that have this tx Limit to asking to a few peers, it's possible that this tx got included on chain already Highly unlikely that the peers that advertised a tx don't respond to a request. Also, if we drop some transactions, we don't want to refetch too many times Always Cleanup Ignore if syncing Higher fee means priority is a smaller number, which means it will be handled earlier Remove old from cache Serialization of wp is slow Ignore if syncing This prevents us from downloading the same block from many peers However, we want to eventually download from other peers, if this peer does not respond Todo: keep track of who it was Ignore if syncing If this is an end of sub slot, and we don't have the prev, request the prev instead We want to catch up to the latest slot so we can receive signage points If this is a normal signage point,, and we don't have the end of sub slot, request the end of sub slot Otherwise (we have the prev or the end of sub slot), request it normally Already have signage point FARMER PROTOCOL Checks that the proof of space is a response to a recent challenge and valid SP Now we know that the proof of space has a signage point either: 1. In the previous sub-slot of the peak (overflow) 2. In the same sub-slot as the peak 3. In a future sub-slot that we already know of Checks that the proof of space is valid Grab best transactions from Mempool for given tip target Finds the last transaction block before this one Finds the previous block from the signage point, ensuring that the reward chain VDF is correct Backtrack through empty sub-slots This block includes a sub-slot which is where our SP vdf starts. 
Go back one more to find the prev block The block's timestamp must be greater than the previous transaction block's timestamp Adds backup in case the first one fails Propagate to ourselves (which validates and does further propagations) If we have an error with this block, try making an empty block All unfinished blocks that we create will have the foliage transaction block and hash TIMELORD PROTOCOL Lookup unfinished blocks Calls our own internal message to handle the end of sub slot, and potentially broadcasts to other peers. We lock so that the coin store does not get modified Note: this might return bad data if there is a reorg in this time Create addition Merkle set Addition Merkle set contains puzzlehash and hash of all coins with that puzzlehash We lock so that the coin store does not get modified Note: this might return bad data if there is a reorg in this time If there are no transactions, respond with empty lists Waits for the transaction to go into the mempool, times out after 45 seconds. If it failed/pending, but it previously succeeded (in mempool), this is idempotence, return SUCCESS this semaphore will only allow a limited number of tasks to call new_compact_vdf() at a time, since it can be expensive Add peer to the "Subscribed" dictionary Send all coins with requested puzzle hash that have been created after the specified height start_ses_hash else add extra ses as request start <-> end spans two ses | 3,308 | en | 0.922043 |
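The extracted comments above repeatedly mention a semaphore that bounds how many tasks may enter an expensive handler (new_peak(), new_compact_vdf()) at once. A minimal asyncio sketch of that pattern follows; the names and the limit of 2 are illustrative assumptions, not taken from the chia source:

```python
import asyncio

async def handle_new_peak(peak_id, sem, handled):
    """Stand-in for an expensive per-peak handler, gated by a semaphore."""
    async with sem:
        await asyncio.sleep(0)  # placeholder for expensive validation work
        handled.append(peak_id)

async def main():
    sem = asyncio.Semaphore(2)  # at most 2 handlers run concurrently
    handled = []
    await asyncio.gather(*(handle_new_peak(i, sem, handled) for i in range(5)))
    return handled

print(sorted(asyncio.run(main())))  # every peak is handled exactly once
```

The semaphore only caps concurrency; all five tasks still complete, just never more than two inside the guarded section at a time.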
"""
Provide a mock switch platform.
Call init before using it in your tests to ensure clean test data.
"""
from homeassistant.const import STATE_ON, STATE_OFF
from tests.common import MockToggleDevice
DEVICES = []
def init(empty=False):
"""Initialize the platform with devices."""
global DEVICES
DEVICES = [] if empty else [
MockToggleDevice('AC', STATE_ON),
MockToggleDevice('AC', STATE_OFF),
MockToggleDevice(None, STATE_OFF)
]
async def async_setup_platform(hass, config, async_add_devices_callback,
discovery_info=None):
"""Find and return test switches."""
async_add_devices_callback(DEVICES)
| tests/testing_config/custom_components/switch/test.py | 685 | Initialize the platform with devices.
Provide a mock switch platform.
Call init before using it in your tests to ensure clean test data. | 137 | en | 0.759445 |
# AST transform that puts programs in SSA form
import collections
from translate import *
class SSAVisitor(Visitor):
def __init__(self):
# Number of static assignments to that variable seen so far.
self.definition_counter = collections.defaultdict(int)
# Name of the live definition of each variable before a node.
self.prev_definition = collections.defaultdict(dict)
# Name of the last definition of each variable in a node.
self.last_definition = collections.defaultdict(dict)
# Node in SSA form.
self.ssa_node = {}
def format_name(self, name, definition_id):
return "{}_{}".format(name, definition_id)
def visit(self, node, is_leaving):
if isinstance(node, Node) and not is_leaving:
if node.kind == NT.IF:
self.prev_definition[node] = dict(self.definition_counter)
self.prev_definition[node.args[1]] = self.prev_definition[node]
if len(node.args) == 3:
self.prev_definition[node.args[2]] = self.prev_definition[node]
# The if branches have their prev_definition set by the parent,
# so they don't redefine it here.
elif node not in self.prev_definition:
self.prev_definition[node] = dict(self.definition_counter)
elif isinstance(node, Node) and is_leaving:
if node.kind == NT.IF:
then_stmts = self.ssa_node[node.args[1]].args
has_else = len(node.args) == 3
if has_else:
else_stmt = self.ssa_node[node.args[2]]
for name, last_name in self.last_definition[node.args[1]].items():
c = ASTConcretizer(last_name,
Name(
self.format_name(name,
self.prev_definition[node][name] - 1)))
walk(else_stmt, c)
else_stmt = c.modified_node[else_stmt]
else_stmts = else_stmt.args if has_else else []
assigned_variables = set(self.last_definition[node.args[1]].keys())
if has_else:
assigned_variables.update(self.last_definition[node.args[2]].keys())
phis = []
for v in assigned_variables:
then_name = (self.last_definition[node.args[1]].get(v) or
self.format_name(v, self.prev_definition[node][v]))
else_name = ((has_else and self.last_definition[node.args[2]].get(v)) or
self.format_name(v, self.prev_definition[node][v] - 1))
phi_name = self.format_name(v, self.definition_counter[v])
phis.append(Node(NT.ASSIGNMENT, [
Name(phi_name),
Node(NT.PHI, [
self.ssa_node[node.args[0]],
Name(then_name),
Name(else_name),
])
]))
self.definition_counter[v] += 1
self.last_definition[node][v] = phi_name
self.ssa_node[node] = Node(NT.STMTLIST, then_stmts + else_stmts + phis)
elif node.kind == NT.ASSIGNMENT:
new_name = self.format_name(
node.args[0].name,
self.definition_counter[node.args[0].name])
self.ssa_node[node] = Node(NT.ASSIGNMENT, [
Name(new_name),
self.ssa_node[node.args[1]]
])
self.last_definition[node][node.args[0].name] = new_name
self.definition_counter[node.args[0].name] += 1
elif node.kind == NT.PARAMLIST:
names = []
for name in node.args:
self.last_definition[node][name.name] = self.format_name(name.name, 0)
self.definition_counter[name.name] = 1
names.append(Name(self.last_definition[node][name.name]))
self.ssa_node[node] = Node(NT.PARAMLIST, names)
else:
children = []
for a in node.args:
children.append(self.ssa_node[a])
for k, v in self.last_definition[a].items():
self.last_definition[node][k] = v
self.ssa_node[node] = Node(node.kind, children)
elif isinstance(node, Name):
self.ssa_node[node] = Name(self.format_name(
node.name,
self.definition_counter[node.name] - 1))
else:
self.ssa_node[node] = node
class FlattenVisitor(Visitor):
    """Collapses nested STMTLIST nodes into a single flat statement list."""
def __init__(self):
self.flat_node = {}
def visit(self, node, is_leaving):
if not is_leaving:
self.flat_node[node] = node
return
if isinstance(node, Node) and node.kind == NT.STMTLIST:
children = []
for a in node.args:
c = self.flat_node[a]
if c.kind == NT.STMTLIST:
children.extend(c.args)
else:
children.append(c)
self.flat_node[node] = Node(node.kind, children)
elif isinstance(node, Node):
children = []
for a in node.args:
children.append(self.flat_node[a])
self.flat_node[node] = Node(node.kind, children)
@staticmethod
def flatten(node):
v = FlattenVisitor()
walk(node, v)
return v.flat_node[node]
def ssa(node):
    """Return `node` rewritten in SSA form: unrolled, renamed, then flattened."""
unroller = ASTUnroller()
walk(node, unroller)
node = unroller.unrolled_node[node]
v = SSAVisitor()
walk(node, v)
return FlattenVisitor.flatten(v.ssa_node[node])
| ssa.py | 5,949 | AST transform that puts programs in SSA form Number of static assignments to that variable seen so far. Name of the live definition of each variable before a node. Name of the last definition of each variable in a node. Node in SSA form. The if branches have their prev_definition set by the parent, so they don't redefine it here. | 331 | en | 0.908039 |
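The ssa.py module above renames every static assignment with format_name(name, counter) and tracks the live definition of each variable. A self-contained toy version of just that renaming step for straight-line code (no branches, so no phi nodes), using the same name_<n> scheme:

```python
import collections

def rename_straightline(stmts):
    """SSA-rename straight-line code given as (target, [vars read]) pairs."""
    counter = collections.defaultdict(int)  # definitions seen per variable

    def read(name):
        # The live definition is the most recent one: name_<count - 1>.
        return "{}_{}".format(name, counter[name] - 1)

    out = []
    for target, reads in stmts:
        renamed_reads = [read(v) for v in reads]
        fresh = "{}_{}".format(target, counter[target])  # fresh definition
        counter[target] += 1
        out.append((fresh, renamed_reads))
    return out

# x = ...; x = f(x); y = g(x)
print(rename_straightline([("x", []), ("x", ["x"]), ("y", ["x"])]))
# -> [('x_0', []), ('x_1', ['x_0']), ('y_0', ['y_?'-free: ['x_1'])]
```

Reads are renamed before the target gets its fresh number, which is why the second statement reads x_0 while defining x_1; the control-flow machinery in SSAVisitor exists to make the same invariant hold across branches via phi nodes.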
from __future__ import absolute_import
from __future__ import unicode_literals
import docker
import pytest
from docker.constants import DEFAULT_DOCKER_API_VERSION
from docker.errors import APIError
from docker.errors import NotFound
from .. import mock
from .. import unittest
from compose.config.errors import DependencyError
from compose.config.types import MountSpec
from compose.config.types import ServicePort
from compose.config.types import ServiceSecret
from compose.config.types import VolumeFromSpec
from compose.config.types import VolumeSpec
from compose.const import API_VERSIONS
from compose.const import LABEL_CONFIG_HASH
from compose.const import LABEL_ONE_OFF
from compose.const import LABEL_PROJECT
from compose.const import LABEL_SERVICE
from compose.const import SECRETS_PATH
from compose.container import Container
from compose.errors import OperationFailedError
from compose.parallel import ParallelStreamWriter
from compose.project import OneOffFilter
from compose.service import build_ulimits
from compose.service import build_volume_binding
from compose.service import BuildAction
from compose.service import ContainerNetworkMode
from compose.service import format_environment
from compose.service import formatted_ports
from compose.service import get_container_data_volumes
from compose.service import ImageType
from compose.service import merge_volume_bindings
from compose.service import NeedsBuildError
from compose.service import NetworkMode
from compose.service import NoSuchImageError
from compose.service import parse_repository_tag
from compose.service import Service
from compose.service import ServiceNetworkMode
from compose.service import warn_on_masked_volume
class ServiceTest(unittest.TestCase):
def setUp(self):
self.mock_client = mock.create_autospec(docker.APIClient)
self.mock_client.api_version = DEFAULT_DOCKER_API_VERSION
self.mock_client._general_configs = {}
def test_containers(self):
service = Service('db', self.mock_client, 'myproject', image='foo')
self.mock_client.containers.return_value = []
assert list(service.containers()) == []
def test_containers_with_containers(self):
self.mock_client.containers.return_value = [
dict(Name=str(i), Image='foo', Id=i) for i in range(3)
]
service = Service('db', self.mock_client, 'myproject', image='foo')
assert [c.id for c in service.containers()] == list(range(3))
expected_labels = [
'{0}=myproject'.format(LABEL_PROJECT),
'{0}=db'.format(LABEL_SERVICE),
'{0}=False'.format(LABEL_ONE_OFF),
]
self.mock_client.containers.assert_called_once_with(
all=False,
filters={'label': expected_labels})
def test_container_without_name(self):
self.mock_client.containers.return_value = [
{'Image': 'foo', 'Id': '1', 'Name': '1'},
{'Image': 'foo', 'Id': '2', 'Name': None},
{'Image': 'foo', 'Id': '3'},
]
service = Service('db', self.mock_client, 'myproject', image='foo')
assert [c.id for c in service.containers()] == ['1']
assert service._next_container_number() == 2
assert service.get_container(1).id == '1'
def test_get_volumes_from_container(self):
container_id = 'aabbccddee'
service = Service(
'test',
image='foo',
volumes_from=[
VolumeFromSpec(
mock.Mock(id=container_id, spec=Container),
'rw',
'container')])
assert service._get_volumes_from() == [container_id + ':rw']
def test_get_volumes_from_container_read_only(self):
container_id = 'aabbccddee'
service = Service(
'test',
image='foo',
volumes_from=[
VolumeFromSpec(
mock.Mock(id=container_id, spec=Container),
'ro',
'container')])
assert service._get_volumes_from() == [container_id + ':ro']
def test_get_volumes_from_service_container_exists(self):
container_ids = ['aabbccddee', '12345']
from_service = mock.create_autospec(Service)
from_service.containers.return_value = [
mock.Mock(id=container_id, spec=Container)
for container_id in container_ids
]
service = Service(
'test',
volumes_from=[VolumeFromSpec(from_service, 'rw', 'service')],
image='foo')
assert service._get_volumes_from() == [container_ids[0] + ":rw"]
def test_get_volumes_from_service_container_exists_with_flags(self):
for mode in ['ro', 'rw', 'z', 'rw,z', 'z,rw']:
container_ids = ['aabbccddee:' + mode, '12345:' + mode]
from_service = mock.create_autospec(Service)
from_service.containers.return_value = [
mock.Mock(id=container_id.split(':')[0], spec=Container)
for container_id in container_ids
]
service = Service(
'test',
volumes_from=[VolumeFromSpec(from_service, mode, 'service')],
image='foo')
assert service._get_volumes_from() == [container_ids[0]]
def test_get_volumes_from_service_no_container(self):
container_id = 'abababab'
from_service = mock.create_autospec(Service)
from_service.containers.return_value = []
from_service.create_container.return_value = mock.Mock(
id=container_id,
spec=Container)
service = Service(
'test',
image='foo',
volumes_from=[VolumeFromSpec(from_service, 'rw', 'service')])
assert service._get_volumes_from() == [container_id + ':rw']
from_service.create_container.assert_called_once_with()
def test_memory_swap_limit(self):
self.mock_client.create_host_config.return_value = {}
service = Service(
name='foo',
image='foo',
hostname='name',
client=self.mock_client,
mem_limit=1000000000,
memswap_limit=2000000000)
service._get_container_create_options({'some': 'overrides'}, 1)
assert self.mock_client.create_host_config.called
assert self.mock_client.create_host_config.call_args[1]['mem_limit'] == 1000000000
assert self.mock_client.create_host_config.call_args[1]['memswap_limit'] == 2000000000
def test_self_reference_external_link(self):
service = Service(
name='foo',
external_links=['default_foo_1']
)
with pytest.raises(DependencyError):
service.get_container_name('foo', 1)
def test_mem_reservation(self):
self.mock_client.create_host_config.return_value = {}
service = Service(
name='foo',
image='foo',
hostname='name',
client=self.mock_client,
mem_reservation='512m'
)
service._get_container_create_options({'some': 'overrides'}, 1)
assert self.mock_client.create_host_config.called is True
assert self.mock_client.create_host_config.call_args[1]['mem_reservation'] == '512m'
def test_cgroup_parent(self):
self.mock_client.create_host_config.return_value = {}
service = Service(
name='foo',
image='foo',
hostname='name',
client=self.mock_client,
cgroup_parent='test')
service._get_container_create_options({'some': 'overrides'}, 1)
assert self.mock_client.create_host_config.called
assert self.mock_client.create_host_config.call_args[1]['cgroup_parent'] == 'test'
def test_log_opt(self):
self.mock_client.create_host_config.return_value = {}
log_opt = {'syslog-address': 'tcp://192.168.0.42:123'}
logging = {'driver': 'syslog', 'options': log_opt}
service = Service(
name='foo',
image='foo',
hostname='name',
client=self.mock_client,
log_driver='syslog',
logging=logging)
service._get_container_create_options({'some': 'overrides'}, 1)
assert self.mock_client.create_host_config.called
assert self.mock_client.create_host_config.call_args[1]['log_config'] == {
'Type': 'syslog', 'Config': {'syslog-address': 'tcp://192.168.0.42:123'}
}
def test_stop_grace_period(self):
self.mock_client.api_version = '1.25'
self.mock_client.create_host_config.return_value = {}
service = Service(
'foo',
image='foo',
client=self.mock_client,
stop_grace_period="1m35s")
opts = service._get_container_create_options({'image': 'foo'}, 1)
assert opts['stop_timeout'] == 95
def test_split_domainname_none(self):
service = Service(
'foo',
image='foo',
hostname='name.domain.tld',
client=self.mock_client)
opts = service._get_container_create_options({'image': 'foo'}, 1)
assert opts['hostname'] == 'name.domain.tld', 'hostname'
        assert 'domainname' not in opts, 'domainname'
def test_split_domainname_fqdn(self):
self.mock_client.api_version = '1.22'
service = Service(
'foo',
hostname='name.domain.tld',
image='foo',
client=self.mock_client)
opts = service._get_container_create_options({'image': 'foo'}, 1)
assert opts['hostname'] == 'name', 'hostname'
assert opts['domainname'] == 'domain.tld', 'domainname'
def test_split_domainname_both(self):
self.mock_client.api_version = '1.22'
service = Service(
'foo',
hostname='name',
image='foo',
domainname='domain.tld',
client=self.mock_client)
opts = service._get_container_create_options({'image': 'foo'}, 1)
assert opts['hostname'] == 'name', 'hostname'
assert opts['domainname'] == 'domain.tld', 'domainname'
def test_split_domainname_weird(self):
self.mock_client.api_version = '1.22'
service = Service(
'foo',
hostname='name.sub',
domainname='domain.tld',
image='foo',
client=self.mock_client)
opts = service._get_container_create_options({'image': 'foo'}, 1)
assert opts['hostname'] == 'name.sub', 'hostname'
assert opts['domainname'] == 'domain.tld', 'domainname'
def test_no_default_hostname_when_not_using_networking(self):
service = Service(
'foo',
image='foo',
use_networking=False,
client=self.mock_client,
)
opts = service._get_container_create_options({'image': 'foo'}, 1)
assert opts.get('hostname') is None
def test_get_container_create_options_with_name_option(self):
service = Service(
'foo',
image='foo',
client=self.mock_client,
container_name='foo1')
name = 'the_new_name'
opts = service._get_container_create_options(
{'name': name},
1,
one_off=OneOffFilter.only)
assert opts['name'] == name
def test_get_container_create_options_does_not_mutate_options(self):
labels = {'thing': 'real'}
environment = {'also': 'real'}
service = Service(
'foo',
image='foo',
labels=dict(labels),
client=self.mock_client,
environment=dict(environment),
)
self.mock_client.inspect_image.return_value = {'Id': 'abcd'}
prev_container = mock.Mock(
id='ababab',
image_config={'ContainerConfig': {}})
prev_container.get.return_value = None
opts = service._get_container_create_options(
{},
1,
previous_container=prev_container)
assert service.options['labels'] == labels
assert service.options['environment'] == environment
assert opts['labels'][LABEL_CONFIG_HASH] == \
'2524a06fcb3d781aa2c981fc40bcfa08013bb318e4273bfa388df22023e6f2aa'
assert opts['environment'] == ['also=real']
def test_get_container_create_options_sets_affinity_with_binds(self):
service = Service(
'foo',
image='foo',
client=self.mock_client,
)
self.mock_client.inspect_image.return_value = {'Id': 'abcd'}
prev_container = mock.Mock(
id='ababab',
image_config={'ContainerConfig': {'Volumes': ['/data']}})
def container_get(key):
return {
'Mounts': [
{
'Destination': '/data',
'Source': '/some/path',
'Name': 'abab1234',
},
]
}.get(key, None)
prev_container.get.side_effect = container_get
opts = service._get_container_create_options(
{},
1,
previous_container=prev_container)
assert opts['environment'] == ['affinity:container==ababab']
def test_get_container_create_options_no_affinity_without_binds(self):
service = Service('foo', image='foo', client=self.mock_client)
self.mock_client.inspect_image.return_value = {'Id': 'abcd'}
prev_container = mock.Mock(
id='ababab',
image_config={'ContainerConfig': {}})
prev_container.get.return_value = None
opts = service._get_container_create_options(
{},
1,
previous_container=prev_container)
assert opts['environment'] == []
def test_get_container_not_found(self):
self.mock_client.containers.return_value = []
service = Service('foo', client=self.mock_client, image='foo')
with pytest.raises(ValueError):
service.get_container()
@mock.patch('compose.service.Container', autospec=True)
def test_get_container(self, mock_container_class):
container_dict = dict(Name='default_foo_2')
self.mock_client.containers.return_value = [container_dict]
service = Service('foo', image='foo', client=self.mock_client)
container = service.get_container(number=2)
assert container == mock_container_class.from_ps.return_value
mock_container_class.from_ps.assert_called_once_with(
self.mock_client, container_dict)
@mock.patch('compose.service.log', autospec=True)
def test_pull_image(self, mock_log):
service = Service('foo', client=self.mock_client, image='someimage:sometag')
service.pull()
self.mock_client.pull.assert_called_once_with(
'someimage',
tag='sometag',
stream=True,
platform=None)
mock_log.info.assert_called_once_with('Pulling foo (someimage:sometag)...')
def test_pull_image_no_tag(self):
service = Service('foo', client=self.mock_client, image='ababab')
service.pull()
self.mock_client.pull.assert_called_once_with(
'ababab',
tag='latest',
stream=True,
platform=None)
@mock.patch('compose.service.log', autospec=True)
def test_pull_image_digest(self, mock_log):
service = Service('foo', client=self.mock_client, image='someimage@sha256:1234')
service.pull()
self.mock_client.pull.assert_called_once_with(
'someimage',
tag='sha256:1234',
stream=True,
platform=None)
mock_log.info.assert_called_once_with('Pulling foo (someimage@sha256:1234)...')
@mock.patch('compose.service.log', autospec=True)
def test_pull_image_with_platform(self, mock_log):
self.mock_client.api_version = '1.35'
service = Service(
'foo', client=self.mock_client, image='someimage:sometag', platform='windows/x86_64'
)
service.pull()
assert self.mock_client.pull.call_count == 1
call_args = self.mock_client.pull.call_args
assert call_args[1]['platform'] == 'windows/x86_64'
@mock.patch('compose.service.log', autospec=True)
def test_pull_image_with_platform_unsupported_api(self, mock_log):
self.mock_client.api_version = '1.33'
service = Service(
'foo', client=self.mock_client, image='someimage:sometag', platform='linux/arm'
)
with pytest.raises(OperationFailedError):
service.pull()
def test_pull_image_with_default_platform(self):
self.mock_client.api_version = '1.35'
service = Service(
'foo', client=self.mock_client, image='someimage:sometag',
default_platform='linux'
)
assert service.platform == 'linux'
service.pull()
assert self.mock_client.pull.call_count == 1
call_args = self.mock_client.pull.call_args
assert call_args[1]['platform'] == 'linux'
@mock.patch('compose.service.Container', autospec=True)
def test_recreate_container(self, _):
mock_container = mock.create_autospec(Container)
service = Service('foo', client=self.mock_client, image='someimage')
service.image = lambda: {'Id': 'abc123'}
new_container = service.recreate_container(mock_container)
mock_container.stop.assert_called_once_with(timeout=10)
mock_container.rename_to_tmp_name.assert_called_once_with()
new_container.start.assert_called_once_with()
mock_container.remove.assert_called_once_with()
@mock.patch('compose.service.Container', autospec=True)
def test_recreate_container_with_timeout(self, _):
mock_container = mock.create_autospec(Container)
self.mock_client.inspect_image.return_value = {'Id': 'abc123'}
service = Service('foo', client=self.mock_client, image='someimage')
service.recreate_container(mock_container, timeout=1)
mock_container.stop.assert_called_once_with(timeout=1)
def test_parse_repository_tag(self):
assert parse_repository_tag("root") == ("root", "", ":")
assert parse_repository_tag("root:tag") == ("root", "tag", ":")
assert parse_repository_tag("user/repo") == ("user/repo", "", ":")
assert parse_repository_tag("user/repo:tag") == ("user/repo", "tag", ":")
assert parse_repository_tag("url:5000/repo") == ("url:5000/repo", "", ":")
assert parse_repository_tag("url:5000/repo:tag") == ("url:5000/repo", "tag", ":")
assert parse_repository_tag("root@sha256:digest") == ("root", "sha256:digest", "@")
assert parse_repository_tag("user/repo@sha256:digest") == ("user/repo", "sha256:digest", "@")
assert parse_repository_tag("url:5000/repo@sha256:digest") == (
"url:5000/repo", "sha256:digest", "@"
)
def test_create_container(self):
service = Service('foo', client=self.mock_client, build={'context': '.'})
self.mock_client.inspect_image.side_effect = [
NoSuchImageError,
{'Id': 'abc123'},
]
self.mock_client.build.return_value = [
'{"stream": "Successfully built abcd"}',
]
with mock.patch('compose.service.log', autospec=True) as mock_log:
service.create_container()
assert mock_log.warn.called
_, args, _ = mock_log.warn.mock_calls[0]
assert 'was built because it did not already exist' in args[0]
assert self.mock_client.build.call_count == 1
        assert self.mock_client.build.call_args[1]['tag'] == 'default_foo'
def test_ensure_image_exists_no_build(self):
service = Service('foo', client=self.mock_client, build={'context': '.'})
self.mock_client.inspect_image.return_value = {'Id': 'abc123'}
service.ensure_image_exists(do_build=BuildAction.skip)
assert not self.mock_client.build.called
def test_ensure_image_exists_no_build_but_needs_build(self):
service = Service('foo', client=self.mock_client, build={'context': '.'})
self.mock_client.inspect_image.side_effect = NoSuchImageError
with pytest.raises(NeedsBuildError):
service.ensure_image_exists(do_build=BuildAction.skip)
def test_ensure_image_exists_force_build(self):
service = Service('foo', client=self.mock_client, build={'context': '.'})
self.mock_client.inspect_image.return_value = {'Id': 'abc123'}
self.mock_client.build.return_value = [
'{"stream": "Successfully built abcd"}',
]
with mock.patch('compose.service.log', autospec=True) as mock_log:
service.ensure_image_exists(do_build=BuildAction.force)
assert not mock_log.warn.called
assert self.mock_client.build.call_count == 1
        assert self.mock_client.build.call_args[1]['tag'] == 'default_foo'
def test_build_does_not_pull(self):
self.mock_client.build.return_value = [
b'{"stream": "Successfully built 12345"}',
]
service = Service('foo', client=self.mock_client, build={'context': '.'})
service.build()
assert self.mock_client.build.call_count == 1
assert not self.mock_client.build.call_args[1]['pull']
def test_build_with_platform(self):
self.mock_client.api_version = '1.35'
self.mock_client.build.return_value = [
b'{"stream": "Successfully built 12345"}',
]
service = Service('foo', client=self.mock_client, build={'context': '.'}, platform='linux')
service.build()
assert self.mock_client.build.call_count == 1
call_args = self.mock_client.build.call_args
assert call_args[1]['platform'] == 'linux'
def test_build_with_default_platform(self):
self.mock_client.api_version = '1.35'
self.mock_client.build.return_value = [
b'{"stream": "Successfully built 12345"}',
]
service = Service(
'foo', client=self.mock_client, build={'context': '.'},
default_platform='linux'
)
assert service.platform == 'linux'
service.build()
assert self.mock_client.build.call_count == 1
call_args = self.mock_client.build.call_args
assert call_args[1]['platform'] == 'linux'
def test_service_platform_precedence(self):
self.mock_client.api_version = '1.35'
service = Service(
'foo', client=self.mock_client, platform='linux/arm',
default_platform='osx'
)
assert service.platform == 'linux/arm'
def test_service_ignore_default_platform_with_unsupported_api(self):
self.mock_client.api_version = '1.32'
self.mock_client.build.return_value = [
b'{"stream": "Successfully built 12345"}',
]
service = Service(
'foo', client=self.mock_client, default_platform='windows', build={'context': '.'}
)
assert service.platform is None
service.build()
assert self.mock_client.build.call_count == 1
call_args = self.mock_client.build.call_args
assert call_args[1]['platform'] is None
def test_build_with_override_build_args(self):
self.mock_client.build.return_value = [
b'{"stream": "Successfully built 12345"}',
]
build_args = {
'arg1': 'arg1_new_value',
}
service = Service('foo', client=self.mock_client,
build={'context': '.', 'args': {'arg1': 'arg1', 'arg2': 'arg2'}})
service.build(build_args_override=build_args)
called_build_args = self.mock_client.build.call_args[1]['buildargs']
assert called_build_args['arg1'] == build_args['arg1']
assert called_build_args['arg2'] == 'arg2'
def test_build_with_isolation_from_service_config(self):
self.mock_client.build.return_value = [
b'{"stream": "Successfully built 12345"}',
]
service = Service('foo', client=self.mock_client, build={'context': '.'}, isolation='hyperv')
service.build()
assert self.mock_client.build.call_count == 1
called_build_args = self.mock_client.build.call_args[1]
assert called_build_args['isolation'] == 'hyperv'
def test_build_isolation_from_build_override_service_config(self):
self.mock_client.build.return_value = [
b'{"stream": "Successfully built 12345"}',
]
service = Service(
'foo', client=self.mock_client, build={'context': '.', 'isolation': 'default'},
isolation='hyperv'
)
service.build()
assert self.mock_client.build.call_count == 1
called_build_args = self.mock_client.build.call_args[1]
assert called_build_args['isolation'] == 'default'
def test_config_dict(self):
self.mock_client.inspect_image.return_value = {'Id': 'abcd'}
service = Service(
'foo',
image='example.com/foo',
client=self.mock_client,
network_mode=ServiceNetworkMode(Service('other')),
networks={'default': None},
links=[(Service('one'), 'one')],
volumes_from=[VolumeFromSpec(Service('two'), 'rw', 'service')])
config_dict = service.config_dict()
expected = {
'image_id': 'abcd',
'options': {'image': 'example.com/foo'},
'links': [('one', 'one')],
'net': 'other',
'networks': {'default': None},
'volumes_from': [('two', 'rw')],
}
assert config_dict == expected
def test_config_dict_with_network_mode_from_container(self):
self.mock_client.inspect_image.return_value = {'Id': 'abcd'}
container = Container(
self.mock_client,
{'Id': 'aaabbb', 'Name': '/foo_1'})
service = Service(
'foo',
image='example.com/foo',
client=self.mock_client,
network_mode=ContainerNetworkMode(container))
config_dict = service.config_dict()
expected = {
'image_id': 'abcd',
'options': {'image': 'example.com/foo'},
'links': [],
'networks': {},
'net': 'aaabbb',
'volumes_from': [],
}
assert config_dict == expected
def test_config_hash_matches_label(self):
self.mock_client.inspect_image.return_value = {'Id': 'abcd'}
service = Service(
'foo',
image='example.com/foo',
client=self.mock_client,
network_mode=NetworkMode('bridge'),
networks={'bridge': {}, 'net2': {}},
links=[(Service('one', client=self.mock_client), 'one')],
volumes_from=[VolumeFromSpec(Service('two', client=self.mock_client), 'rw', 'service')],
volumes=[VolumeSpec('/ext', '/int', 'ro')],
build={'context': 'some/random/path'},
)
config_hash = service.config_hash
for api_version in set(API_VERSIONS.values()):
self.mock_client.api_version = api_version
assert service._get_container_create_options({}, 1)['labels'][LABEL_CONFIG_HASH] == (
config_hash
)
def test_remove_image_none(self):
web = Service('web', image='example', client=self.mock_client)
assert not web.remove_image(ImageType.none)
assert not self.mock_client.remove_image.called
def test_remove_image_local_with_image_name_doesnt_remove(self):
web = Service('web', image='example', client=self.mock_client)
assert not web.remove_image(ImageType.local)
assert not self.mock_client.remove_image.called
def test_remove_image_local_without_image_name_does_remove(self):
web = Service('web', build='.', client=self.mock_client)
assert web.remove_image(ImageType.local)
self.mock_client.remove_image.assert_called_once_with(web.image_name)
def test_remove_image_all_does_remove(self):
web = Service('web', image='example', client=self.mock_client)
assert web.remove_image(ImageType.all)
self.mock_client.remove_image.assert_called_once_with(web.image_name)
def test_remove_image_with_error(self):
self.mock_client.remove_image.side_effect = error = APIError(
message="testing",
response={},
explanation="Boom")
web = Service('web', image='example', client=self.mock_client)
with mock.patch('compose.service.log', autospec=True) as mock_log:
assert not web.remove_image(ImageType.all)
mock_log.error.assert_called_once_with(
"Failed to remove image for service %s: %s", web.name, error)
def test_specifies_host_port_with_no_ports(self):
service = Service(
'foo',
image='foo')
assert not service.specifies_host_port()
def test_specifies_host_port_with_container_port(self):
service = Service(
'foo',
image='foo',
ports=["2000"])
assert not service.specifies_host_port()
def test_specifies_host_port_with_host_port(self):
service = Service(
'foo',
image='foo',
ports=["1000:2000"])
assert service.specifies_host_port()
def test_specifies_host_port_with_host_ip_no_port(self):
service = Service(
'foo',
image='foo',
ports=["127.0.0.1::2000"])
assert not service.specifies_host_port()
def test_specifies_host_port_with_host_ip_and_port(self):
service = Service(
'foo',
image='foo',
ports=["127.0.0.1:1000:2000"])
assert service.specifies_host_port()
def test_specifies_host_port_with_container_port_range(self):
service = Service(
'foo',
image='foo',
ports=["2000-3000"])
assert not service.specifies_host_port()
def test_specifies_host_port_with_host_port_range(self):
service = Service(
'foo',
image='foo',
ports=["1000-2000:2000-3000"])
assert service.specifies_host_port()
def test_specifies_host_port_with_host_ip_no_port_range(self):
service = Service(
'foo',
image='foo',
ports=["127.0.0.1::2000-3000"])
assert not service.specifies_host_port()
def test_specifies_host_port_with_host_ip_and_port_range(self):
service = Service(
'foo',
image='foo',
ports=["127.0.0.1:1000-2000:2000-3000"])
assert service.specifies_host_port()
def test_image_name_from_config(self):
image_name = 'example/web:latest'
service = Service('foo', image=image_name)
assert service.image_name == image_name
def test_image_name_default(self):
service = Service('foo', project='testing')
assert service.image_name == 'testing_foo'
@mock.patch('compose.service.log', autospec=True)
def test_only_log_warning_when_host_ports_clash(self, mock_log):
self.mock_client.inspect_image.return_value = {'Id': 'abcd'}
ParallelStreamWriter.instance = None
name = 'foo'
service = Service(
name,
client=self.mock_client,
ports=["8080:80"])
service.scale(0)
assert not mock_log.warn.called
service.scale(1)
assert not mock_log.warn.called
service.scale(2)
mock_log.warn.assert_called_once_with(
'The "{}" service specifies a port on the host. If multiple containers '
'for this service are created on a single host, the port will clash.'.format(name))
def test_parse_proxy_config(self):
default_proxy_config = {
'httpProxy': 'http://proxy.mycorp.com:3128',
'httpsProxy': 'https://user:password@proxy.mycorp.com:3129',
'ftpProxy': 'http://ftpproxy.mycorp.com:21',
'noProxy': '*.intra.mycorp.com',
}
self.mock_client.base_url = 'http+docker://localunixsocket'
self.mock_client._general_configs = {
'proxies': {
'default': default_proxy_config,
}
}
service = Service('foo', client=self.mock_client)
assert service._parse_proxy_config() == {
'HTTP_PROXY': default_proxy_config['httpProxy'],
'http_proxy': default_proxy_config['httpProxy'],
'HTTPS_PROXY': default_proxy_config['httpsProxy'],
'https_proxy': default_proxy_config['httpsProxy'],
'FTP_PROXY': default_proxy_config['ftpProxy'],
'ftp_proxy': default_proxy_config['ftpProxy'],
'NO_PROXY': default_proxy_config['noProxy'],
'no_proxy': default_proxy_config['noProxy'],
}
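The test above encodes the expected translation from Docker client proxy-config keys to container environment variables, each emitted in both upper and lower case. A sketch of that mapping (assumed equivalent in effect to `_parse_proxy_config`, not its actual source):

```python
# Assumed key mapping; mirrors the expectations asserted in the test above.
PROXY_KEY_MAP = {
    'httpProxy': 'HTTP_PROXY',
    'httpsProxy': 'HTTPS_PROXY',
    'ftpProxy': 'FTP_PROXY',
    'noProxy': 'NO_PROXY',
}

def proxy_config_to_env(proxy_config):
    """Expand a Docker-style proxy config into env vars, upper and lower case."""
    env = {}
    for key, value in proxy_config.items():
        name = PROXY_KEY_MAP.get(key)
        if name is None:      # ignore unknown keys
            continue
        env[name] = value
        env[name.lower()] = value
    return env
```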
def test_parse_proxy_config_per_host(self):
default_proxy_config = {
'httpProxy': 'http://proxy.mycorp.com:3128',
'httpsProxy': 'https://user:password@proxy.mycorp.com:3129',
'ftpProxy': 'http://ftpproxy.mycorp.com:21',
'noProxy': '*.intra.mycorp.com',
}
host_specific_proxy_config = {
'httpProxy': 'http://proxy.example.com:3128',
'httpsProxy': 'https://user:password@proxy.example.com:3129',
'ftpProxy': 'http://ftpproxy.example.com:21',
'noProxy': '*.intra.example.com'
}
self.mock_client.base_url = 'http+docker://localunixsocket'
self.mock_client._general_configs = {
'proxies': {
'default': default_proxy_config,
'tcp://example.docker.com:2376': host_specific_proxy_config,
}
}
service = Service('foo', client=self.mock_client)
assert service._parse_proxy_config() == {
'HTTP_PROXY': default_proxy_config['httpProxy'],
'http_proxy': default_proxy_config['httpProxy'],
'HTTPS_PROXY': default_proxy_config['httpsProxy'],
'https_proxy': default_proxy_config['httpsProxy'],
'FTP_PROXY': default_proxy_config['ftpProxy'],
'ftp_proxy': default_proxy_config['ftpProxy'],
'NO_PROXY': default_proxy_config['noProxy'],
'no_proxy': default_proxy_config['noProxy'],
}
self.mock_client._original_base_url = 'tcp://example.docker.com:2376'
assert service._parse_proxy_config() == {
'HTTP_PROXY': host_specific_proxy_config['httpProxy'],
'http_proxy': host_specific_proxy_config['httpProxy'],
'HTTPS_PROXY': host_specific_proxy_config['httpsProxy'],
'https_proxy': host_specific_proxy_config['httpsProxy'],
'FTP_PROXY': host_specific_proxy_config['ftpProxy'],
'ftp_proxy': host_specific_proxy_config['ftpProxy'],
'NO_PROXY': host_specific_proxy_config['noProxy'],
'no_proxy': host_specific_proxy_config['noProxy'],
}
def test_build_service_with_proxy_config(self):
default_proxy_config = {
'httpProxy': 'http://proxy.mycorp.com:3128',
'httpsProxy': 'https://user:password@proxy.example.com:3129',
}
buildargs = {
'HTTPS_PROXY': 'https://rdcf.th08.jp:8911',
'https_proxy': 'https://rdcf.th08.jp:8911',
}
self.mock_client._general_configs = {
'proxies': {
'default': default_proxy_config,
}
}
self.mock_client.base_url = 'http+docker://localunixsocket'
self.mock_client.build.return_value = [
b'{"stream": "Successfully built 12345"}',
]
service = Service('foo', client=self.mock_client, build={'context': '.', 'args': buildargs})
service.build()
assert self.mock_client.build.call_count == 1
assert self.mock_client.build.call_args[1]['buildargs'] == {
'HTTP_PROXY': default_proxy_config['httpProxy'],
'http_proxy': default_proxy_config['httpProxy'],
'HTTPS_PROXY': buildargs['HTTPS_PROXY'],
'https_proxy': buildargs['HTTPS_PROXY'],
}
def test_get_create_options_with_proxy_config(self):
default_proxy_config = {
'httpProxy': 'http://proxy.mycorp.com:3128',
'httpsProxy': 'https://user:password@proxy.mycorp.com:3129',
'ftpProxy': 'http://ftpproxy.mycorp.com:21',
}
self.mock_client._general_configs = {
'proxies': {
'default': default_proxy_config,
}
}
self.mock_client.base_url = 'http+docker://localunixsocket'
override_options = {
'environment': {
'FTP_PROXY': 'ftp://xdge.exo.au:21',
'ftp_proxy': 'ftp://xdge.exo.au:21',
}
}
environment = {
'HTTPS_PROXY': 'https://rdcf.th08.jp:8911',
'https_proxy': 'https://rdcf.th08.jp:8911',
}
service = Service('foo', client=self.mock_client, environment=environment)
create_opts = service._get_container_create_options(override_options, 1)
assert set(create_opts['environment']) == set(format_environment({
'HTTP_PROXY': default_proxy_config['httpProxy'],
'http_proxy': default_proxy_config['httpProxy'],
'HTTPS_PROXY': environment['HTTPS_PROXY'],
'https_proxy': environment['HTTPS_PROXY'],
'FTP_PROXY': override_options['environment']['FTP_PROXY'],
'ftp_proxy': override_options['environment']['FTP_PROXY'],
}))
def test_create_when_removed_containers_are_listed(self):
# This is aimed at simulating a race between the API call to list the
# containers, and the ones to inspect each of the listed containers.
# It can happen that a container has been removed after we listed it.
# containers() returns a container that is about to be removed
self.mock_client.containers.return_value = [
{'Id': 'rm_cont_id', 'Name': 'rm_cont', 'Image': 'img_id'},
]
# inspect_container() will raise a NotFound when trying to inspect
# rm_cont_id, which at this point has been removed
def inspect(name):
if name == 'rm_cont_id':
raise NotFound(message='Not Found')
if name == 'new_cont_id':
return {'Id': 'new_cont_id'}
raise NotImplementedError("incomplete mock")
self.mock_client.inspect_container.side_effect = inspect
self.mock_client.inspect_image.return_value = {'Id': 'imageid'}
self.mock_client.create_container.return_value = {'Id': 'new_cont_id'}
# We should nonetheless be able to create a new container
service = Service('foo', client=self.mock_client)
assert service.create_container().id == 'new_cont_id'
def test_build_volume_options_duplicate_binds(self):
self.mock_client.api_version = '1.29' # Trigger 3.2 format workaround
service = Service('foo', client=self.mock_client)
ctnr_opts, override_opts = service._build_container_volume_options(
previous_container=None,
container_options={
'volumes': [
MountSpec.parse({'source': 'vol', 'target': '/data', 'type': 'volume'}),
VolumeSpec.parse('vol:/data:rw'),
],
'environment': {},
},
override_options={},
)
assert 'binds' in override_opts
assert len(override_opts['binds']) == 1
assert override_opts['binds'][0] == 'vol:/data:rw'
class TestServiceNetwork(unittest.TestCase):
def setUp(self):
self.mock_client = mock.create_autospec(docker.APIClient)
self.mock_client.api_version = DEFAULT_DOCKER_API_VERSION
self.mock_client._general_configs = {}
    def test_connect_container_to_networks_short_alias_exists(self):
service = Service(
'db',
self.mock_client,
'myproject',
image='foo',
networks={'project_default': {}})
container = Container(
None,
{
'Id': 'abcdef',
'NetworkSettings': {
'Networks': {
'project_default': {
'Aliases': ['analias', 'abcdef'],
},
},
},
},
True)
service.connect_container_to_networks(container)
assert not self.mock_client.disconnect_container_from_network.call_count
assert not self.mock_client.connect_container_to_network.call_count
def sort_by_name(dictionary_list):
return sorted(dictionary_list, key=lambda k: k['name'])
class BuildUlimitsTestCase(unittest.TestCase):
def test_build_ulimits_with_dict(self):
ulimits = build_ulimits(
{
'nofile': {'soft': 10000, 'hard': 20000},
'nproc': {'soft': 65535, 'hard': 65535}
}
)
expected = [
{'name': 'nofile', 'soft': 10000, 'hard': 20000},
{'name': 'nproc', 'soft': 65535, 'hard': 65535}
]
assert sort_by_name(ulimits) == sort_by_name(expected)
def test_build_ulimits_with_ints(self):
ulimits = build_ulimits({'nofile': 20000, 'nproc': 65535})
expected = [
{'name': 'nofile', 'soft': 20000, 'hard': 20000},
{'name': 'nproc', 'soft': 65535, 'hard': 65535}
]
assert sort_by_name(ulimits) == sort_by_name(expected)
def test_build_ulimits_with_integers_and_dicts(self):
ulimits = build_ulimits(
{
'nproc': 65535,
'nofile': {'soft': 10000, 'hard': 20000}
}
)
expected = [
{'name': 'nofile', 'soft': 10000, 'hard': 20000},
{'name': 'nproc', 'soft': 65535, 'hard': 65535}
]
assert sort_by_name(ulimits) == sort_by_name(expected)
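The three cases above all normalise to the same shape: a bare integer becomes both the soft and the hard limit, while a dict passes through. A sketch of that normalisation (assumed equivalent to `build_ulimits`, not its actual source):

```python
def normalize_ulimits(ulimits):
    """Turn {'nofile': 20000} or {'nofile': {'soft': s, 'hard': h}} into a
    sorted list of {'name', 'soft', 'hard'} dicts, as the tests above expect."""
    result = []
    for name, limit in ulimits.items():
        if isinstance(limit, dict):
            result.append({'name': name, 'soft': limit['soft'], 'hard': limit['hard']})
        else:
            # A plain integer sets soft and hard to the same value.
            result.append({'name': name, 'soft': limit, 'hard': limit})
    return sorted(result, key=lambda u: u['name'])
```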
class NetTestCase(unittest.TestCase):
def setUp(self):
self.mock_client = mock.create_autospec(docker.APIClient)
self.mock_client.api_version = DEFAULT_DOCKER_API_VERSION
self.mock_client._general_configs = {}
def test_network_mode(self):
network_mode = NetworkMode('host')
assert network_mode.id == 'host'
assert network_mode.mode == 'host'
assert network_mode.service_name is None
def test_network_mode_container(self):
container_id = 'abcd'
network_mode = ContainerNetworkMode(Container(None, {'Id': container_id}))
assert network_mode.id == container_id
assert network_mode.mode == 'container:' + container_id
assert network_mode.service_name is None
def test_network_mode_service(self):
container_id = 'bbbb'
service_name = 'web'
self.mock_client.containers.return_value = [
{'Id': container_id, 'Name': container_id, 'Image': 'abcd'},
]
service = Service(name=service_name, client=self.mock_client)
network_mode = ServiceNetworkMode(service)
assert network_mode.id == service_name
assert network_mode.mode == 'container:' + container_id
assert network_mode.service_name == service_name
def test_network_mode_service_no_containers(self):
service_name = 'web'
self.mock_client.containers.return_value = []
service = Service(name=service_name, client=self.mock_client)
network_mode = ServiceNetworkMode(service)
assert network_mode.id == service_name
assert network_mode.mode is None
assert network_mode.service_name == service_name
class ServicePortsTest(unittest.TestCase):
def test_formatted_ports(self):
ports = [
'3000',
'0.0.0.0:4025-4030:23000-23005',
ServicePort(6000, None, None, None, None),
ServicePort(8080, 8080, None, None, None),
ServicePort('20000', '20000', 'udp', 'ingress', None),
ServicePort(30000, '30000', 'tcp', None, '127.0.0.1'),
]
formatted = formatted_ports(ports)
assert ports[0] in formatted
assert ports[1] in formatted
assert '6000/tcp' in formatted
assert '8080:8080/tcp' in formatted
assert '20000:20000/udp' in formatted
assert '127.0.0.1:30000:30000/tcp' in formatted
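The formatting rule exercised above: a plain string passes through unchanged, while a `ServicePort` renders as `[ip:][published:]target/protocol`, with `tcp` as the default protocol. A simplified stand-in (not the real `formatted_ports` from compose):

```python
def format_port(target, published=None, protocol=None, host_ip=None):
    """Render one port tuple the way the assertions above expect."""
    spec = '{}/{}'.format(target, protocol or 'tcp')
    if published:
        spec = '{}:{}'.format(published, spec)
    if host_ip:
        spec = '{}:{}'.format(host_ip, spec)
    return spec
```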
def build_mount(destination, source, mode='rw'):
return {'Source': source, 'Destination': destination, 'Mode': mode}
class ServiceVolumesTest(unittest.TestCase):
def setUp(self):
self.mock_client = mock.create_autospec(docker.APIClient)
self.mock_client.api_version = DEFAULT_DOCKER_API_VERSION
self.mock_client._general_configs = {}
def test_build_volume_binding(self):
binding = build_volume_binding(VolumeSpec.parse('/outside:/inside', True))
assert binding == ('/inside', '/outside:/inside:rw')
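The binding asserted above is the `(container_path, 'external:internal:mode')` pair that compose hands to the Docker API. A trivial stand-in for `build_volume_binding`:

```python
def build_binding(external, internal, mode='rw'):
    """Return the (container_path, bind_string) pair the test above expects."""
    return internal, '{}:{}:{}'.format(external, internal, mode)
```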
def test_get_container_data_volumes(self):
options = [VolumeSpec.parse(v) for v in [
'/host/volume:/host/volume:ro',
'/new/volume',
'/existing/volume',
'named:/named/vol',
'/dev/tmpfs'
]]
self.mock_client.inspect_image.return_value = {
'ContainerConfig': {
'Volumes': {
'/mnt/image/data': {},
}
}
}
container = Container(self.mock_client, {
'Image': 'ababab',
'Mounts': [
{
'Source': '/host/volume',
'Destination': '/host/volume',
'Mode': '',
'RW': True,
'Name': 'hostvolume',
}, {
'Source': '/var/lib/docker/aaaaaaaa',
'Destination': '/existing/volume',
'Mode': '',
'RW': True,
'Name': 'existingvolume',
}, {
'Source': '/var/lib/docker/bbbbbbbb',
'Destination': '/removed/volume',
'Mode': '',
'RW': True,
'Name': 'removedvolume',
}, {
'Source': '/var/lib/docker/cccccccc',
'Destination': '/mnt/image/data',
'Mode': '',
'RW': True,
'Name': 'imagedata',
},
]
}, has_been_inspected=True)
expected = [
VolumeSpec.parse('existingvolume:/existing/volume:rw'),
VolumeSpec.parse('imagedata:/mnt/image/data:rw'),
]
volumes, _ = get_container_data_volumes(container, options, ['/dev/tmpfs'], [])
assert sorted(volumes) == sorted(expected)
def test_merge_volume_bindings(self):
options = [
VolumeSpec.parse(v, True) for v in [
'/host/volume:/host/volume:ro',
'/host/rw/volume:/host/rw/volume',
'/new/volume',
'/existing/volume',
'/dev/tmpfs'
]
]
self.mock_client.inspect_image.return_value = {
'ContainerConfig': {'Volumes': {}}
}
previous_container = Container(self.mock_client, {
'Id': 'cdefab',
'Image': 'ababab',
'Mounts': [{
'Source': '/var/lib/docker/aaaaaaaa',
'Destination': '/existing/volume',
'Mode': '',
'RW': True,
'Name': 'existingvolume',
}],
}, has_been_inspected=True)
expected = [
'/host/volume:/host/volume:ro',
'/host/rw/volume:/host/rw/volume:rw',
'existingvolume:/existing/volume:rw',
]
binds, affinity = merge_volume_bindings(options, ['/dev/tmpfs'], previous_container, [])
assert sorted(binds) == sorted(expected)
assert affinity == {'affinity:container': '=cdefab'}
def test_mount_same_host_path_to_two_volumes(self):
service = Service(
'web',
image='busybox',
volumes=[
VolumeSpec.parse('/host/path:/data1', True),
VolumeSpec.parse('/host/path:/data2', True),
],
client=self.mock_client,
)
self.mock_client.inspect_image.return_value = {
'Id': 'ababab',
'ContainerConfig': {
'Volumes': {}
}
}
service._get_container_create_options(
override_options={},
number=1,
)
assert set(self.mock_client.create_host_config.call_args[1]['binds']) == set([
'/host/path:/data1:rw',
'/host/path:/data2:rw',
])
def test_get_container_create_options_with_different_host_path_in_container_json(self):
service = Service(
'web',
image='busybox',
volumes=[VolumeSpec.parse('/host/path:/data')],
client=self.mock_client,
)
volume_name = 'abcdefff1234'
self.mock_client.inspect_image.return_value = {
'Id': 'ababab',
'ContainerConfig': {
'Volumes': {
'/data': {},
}
}
}
self.mock_client.inspect_container.return_value = {
'Id': '123123123',
'Image': 'ababab',
'Mounts': [
{
'Destination': '/data',
'Source': '/mnt/sda1/host/path',
'Mode': '',
'RW': True,
'Driver': 'local',
'Name': volume_name,
},
]
}
service._get_container_create_options(
override_options={},
number=1,
previous_container=Container(self.mock_client, {'Id': '123123123'}),
)
assert (
self.mock_client.create_host_config.call_args[1]['binds'] ==
['{}:/data:rw'.format(volume_name)]
)
def test_warn_on_masked_volume_no_warning_when_no_container_volumes(self):
volumes_option = [VolumeSpec('/home/user', '/path', 'rw')]
container_volumes = []
service = 'service_name'
with mock.patch('compose.service.log', autospec=True) as mock_log:
warn_on_masked_volume(volumes_option, container_volumes, service)
assert not mock_log.warn.called
def test_warn_on_masked_volume_when_masked(self):
volumes_option = [VolumeSpec('/home/user', '/path', 'rw')]
container_volumes = [
VolumeSpec('/var/lib/docker/path', '/path', 'rw'),
VolumeSpec('/var/lib/docker/path', '/other', 'rw'),
]
service = 'service_name'
with mock.patch('compose.service.log', autospec=True) as mock_log:
warn_on_masked_volume(volumes_option, container_volumes, service)
mock_log.warn.assert_called_once_with(mock.ANY)
def test_warn_on_masked_no_warning_with_same_path(self):
volumes_option = [VolumeSpec('/home/user', '/path', 'rw')]
container_volumes = [VolumeSpec('/home/user', '/path', 'rw')]
service = 'service_name'
with mock.patch('compose.service.log', autospec=True) as mock_log:
warn_on_masked_volume(volumes_option, container_volumes, service)
assert not mock_log.warn.called
def test_warn_on_masked_no_warning_with_container_only_option(self):
volumes_option = [VolumeSpec(None, '/path', 'rw')]
container_volumes = [
VolumeSpec('/var/lib/docker/volume/path', '/path', 'rw')
]
service = 'service_name'
with mock.patch('compose.service.log', autospec=True) as mock_log:
warn_on_masked_volume(volumes_option, container_volumes, service)
assert not mock_log.warn.called
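The four cases above pin down when the masking warning fires: only when a requested bind mount re-targets a container path that an existing container volume already populates from a *different* source. A sketch of that predicate, where `VolumeSpec` is a stand-in namedtuple rather than compose's class:

```python
from collections import namedtuple

# Stand-in for compose's VolumeSpec: (external_path, internal_path, mode).
VolumeSpec = namedtuple('VolumeSpec', 'external internal mode')

def masked_volumes(volumes_option, container_volumes):
    """Yield option specs whose internal path is already backed by a
    container volume with a different external source."""
    existing = {v.internal: v.external for v in container_volumes}
    for spec in volumes_option:
        if spec.external is None:      # container-only volume: nothing to mask
            continue
        previous = existing.get(spec.internal)
        if previous is not None and previous != spec.external:
            yield spec
```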
def test_create_with_special_volume_mode(self):
self.mock_client.inspect_image.return_value = {'Id': 'imageid'}
self.mock_client.create_container.return_value = {'Id': 'containerid'}
volume = '/tmp:/foo:z'
Service(
'web',
client=self.mock_client,
image='busybox',
volumes=[VolumeSpec.parse(volume, True)],
).create_container()
assert self.mock_client.create_container.call_count == 1
assert self.mock_client.create_host_config.call_args[1]['binds'] == [volume]
class ServiceSecretTest(unittest.TestCase):
def setUp(self):
self.mock_client = mock.create_autospec(docker.APIClient)
self.mock_client.api_version = DEFAULT_DOCKER_API_VERSION
self.mock_client._general_configs = {}
def test_get_secret_volumes(self):
secret1 = {
'secret': ServiceSecret.parse({'source': 'secret1', 'target': 'b.txt'}),
'file': 'a.txt'
}
service = Service(
'web',
client=self.mock_client,
image='busybox',
secrets=[secret1]
)
volumes = service.get_secret_volumes()
assert volumes[0].source == secret1['file']
assert volumes[0].target == '{}/{}'.format(SECRETS_PATH, secret1['secret'].target)
def test_get_secret_volumes_abspath(self):
secret1 = {
'secret': ServiceSecret.parse({'source': 'secret1', 'target': '/d.txt'}),
'file': 'c.txt'
}
service = Service(
'web',
client=self.mock_client,
image='busybox',
secrets=[secret1]
)
volumes = service.get_secret_volumes()
assert volumes[0].source == secret1['file']
assert volumes[0].target == secret1['secret'].target
def test_get_secret_volumes_no_target(self):
secret1 = {
'secret': ServiceSecret.parse({'source': 'secret1'}),
'file': 'c.txt'
}
service = Service(
'web',
client=self.mock_client,
image='busybox',
secrets=[secret1]
)
volumes = service.get_secret_volumes()
assert volumes[0].source == secret1['file']
assert volumes[0].target == '{}/{}'.format(SECRETS_PATH, secret1['secret'].source)
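The three tests above fix the target-resolution rule for secrets: an absolute target is used verbatim; otherwise the target (or, failing that, the secret's source name) is placed under `SECRETS_PATH`. A sketch, with `SECRETS_PATH` assumed to be the conventional `/run/secrets`:

```python
SECRETS_PATH = '/run/secrets'  # assumed default; compose defines the real constant

def secret_mount_target(source, target=None):
    """Resolve where a secret file is mounted inside the container."""
    name = target or source
    if name.startswith('/'):          # absolute target: use as-is
        return name
    return '{}/{}'.format(SECRETS_PATH, name)
```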
| tests/unit/service_test.py | 55,021 | This is aimed at simulating a race between the API call to list the containers, and the ones to inspect each of the listed containers. It can happen that a container has been removed after we listed it. containers() returns a container that is about to be removed inspect_container() will raise a NotFound when trying to inspect rm_cont_id, which at this point has been removed We should nonetheless be able to create a new container Trigger 3.2 format workaround | 463 | en | 0.95192 |
import subprocess
import pymongo
class ReportsDataBase:
def __init__(self, url, db_name):
self.db_name = db_name
self.db = pymongo.MongoClient(url)[self.db_name]
self.db['reports'].create_index('group')
self.db['reports'].create_index('author')
self.db['reports'].create_index('title')
self.db['reports'].create_index([
('group', pymongo.ASCENDING),
('author', pymongo.ASCENDING)
])
self.db['reports'].create_index([
('group', pymongo.ASCENDING),
('faculty', pymongo.ASCENDING),
('department', pymongo.ASCENDING)
])
def export_reports_collection(self, file_name):
result = subprocess.run(['mongoexport',
                                 '--host=mongodb',
'--pretty',
'--jsonArray',
f'--db={self.db_name}',
'--collection=reports',
f'--out={file_name}.json'])
if result.returncode == 0:
return f'{file_name}.json'
else:
raise ChildProcessError(f'mongoexport error return code [{result.returncode}]')
def import_reports_collection(self, file_name):
result = subprocess.run(['mongoimport',
                                 '--host=mongodb',
'--jsonArray',
'--mode=merge',
f'--db={self.db_name}',
'--collection=reports',
f'--file={file_name}.json'])
if result.returncode != 0:
raise ChildProcessError(f'mongoimport error return code [{result.returncode}]')
def _drop_reports(self):
self.db['reports'].drop()
def save_report(self, report):
insert_result = self.db['reports'].insert_one(report.serialize_db())
inserted_id = insert_result.inserted_id
return inserted_id
    def save_reports(self, reports):
        reports_to_insert = [report.serialize_db() for report in reports]
        insert_result = self.db['reports'].insert_many(reports_to_insert)
        return insert_result.inserted_ids
def update_report(self, report_id, update_dict):
self.db['reports'].update_one({'_id': report_id}, {'$set': update_dict})
def get_all_faculties(self):
return sorted(self.db['reports'].distinct('faculty'))
def get_all_courses(self):
return sorted(self.db['reports'].distinct('course'))
def get_all_departments(self):
return sorted(self.db['reports'].distinct('department'))
def get_report_by_id(self, report_id):
return self.db['reports'].find_one({'_id': report_id})
def get_report_stat_by_id(self, report_id):
return self.db['reports'].find_one({'_id': report_id},
{'text': 0, 'words.unique_words': 0})
    def get_report_top_words_by_id(self, report_id, num_words):
        report = self.db['reports'].find_one({'_id': report_id},
                                             {'words.most_popular_words': 1})
        # Slicing already copes with lists shorter than num_words.
        return report['words']['most_popular_words'][:num_words]
def get_reports_by_author(self, author, group):
for report in self.db['reports'].find({'author': author, 'group': group}).sort('title'):
yield report
def get_reports_by_group(self, group):
for report in self.db['reports'].find({'group': group}).sort('author'):
yield report
def get_reports_by_faculty(self, faculty):
for report in self.db['reports'].find({'faculty': faculty}):
yield report
def get_reports_by_course(self, course):
for report in self.db['reports'].find({'course': course}):
yield report
def get_reports_by_department(self, department):
for report in self.db['reports'].find({'department': department}):
yield report
def get_stat_of_author(self, author):
cur = self.db['reports'].aggregate([
{'$match': {'author': author}},
{'$group': {
'_id': None,
'avg_total_words': {'$avg': '$words.total_words'},
'avg_unique_words': {'$avg': '$words.total_unique_words'},
'avg_persent_unique_words': {'$avg': '$words.persent_unique_words'},
'unique_words': {'$addToSet': '$words.unique_words'},
'avg_total_raw_symbols': {'$avg': '$symbols.total_raw_symbols'},
'avg_total_clean_symbols': {'$avg': '$symbols.total_clean_symbols'},
'total_reports_loaded': {'$sum': 1},
}
},
{'$addFields': {
'unique_words': {
'$reduce': {
'input': '$unique_words',
'initialValue': [],
'in': {'$setUnion': ['$$value', '$$this']}
}
}
}
},
{'$addFields': {'total_unique_words': {'$size': '$unique_words'}}}
])
return cur.next()
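The `$reduce`/`$setUnion` stage above folds a set-of-sets of unique words into one flat set; in plain Python the same computation is:

```python
def union_unique_words(word_sets):
    """Python equivalent of the $reduce/$setUnion stage used above:
    fold an iterable of word lists into one deduplicated set."""
    merged = set()
    for words in word_sets:
        merged |= set(words)
    return merged
```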
def get_stat_of_group(self, group):
return self.db['reports'].aggregate([
{'$match': {'group': group}},
{'$group': {
'_id': '$author',
'avg_total_words': {'$avg': '$words.total_words'},
'avg_unique_words': {'$avg': '$words.total_unique_words'},
'avg_persent_unique_words': {'$avg': '$words.persent_unique_words'},
'unique_words': {'$addToSet': '$words.unique_words'},
'avg_total_raw_symbols': {'$avg': '$symbols.total_raw_symbols'},
'avg_total_clean_symbols': {'$avg': '$symbols.total_clean_symbols'},
'total_reports_loaded': {'$sum': 1}
}
},
{'$addFields': {
'unique_words': {
'$reduce': {
'input': '$unique_words',
'initialValue': [],
'in': {'$setUnion': ['$$value', '$$this']}
}
}
}
},
{'$addFields': {'total_unique_words': {'$size': '$unique_words'}}},
{'$sort': {'_id': 1, 'total_unique_words': -1}}
])
def get_stat_by_groups(self, course=None, faculty=None, department=None):
group = {
'$group': {
'_id': '$group',
'avg_total_words': {'$avg': '$words.total_words'},
'avg_unique_words': {'$avg': '$words.total_unique_words'},
'avg_persent_unique_words': {'$avg': '$words.persent_unique_words'},
'total_reports_loaded': {'$sum': 1}
}}
sort = {'$sort': {'_id': 1}}
if not course and not faculty and not department:
return self.db['reports'].aggregate([
group,
sort
])
if course and not faculty and not department:
match = {'$match': {'course': course}}
elif faculty and not course and not department:
match = {'$match': {'faculty': faculty}}
sort['$sort']['faculty'] = 1
elif department and not course and not faculty:
match = {'$match': {'department': department}}
sort['$sort']['department'] = 1
elif course and faculty or course and department or faculty and department:
match_list = []
if course:
match_list.append({'course': course})
if faculty:
match_list.append({'faculty': faculty})
sort['$sort']['faculty'] = 1
if department:
match_list.append({'department': department})
sort['$sort']['department'] = 1
match = {'$match': {'$and': match_list}}
return self.db['reports'].aggregate([
match,
group,
sort
])
def get_words_compare(self, authors, group):
match_list = []
for author in authors:
match_list.append({'author': author})
match = {
'$match': {
'$and': [
{'group': group},
{'$or': match_list}
]
}
}
query = self.db['reports'].aggregate([
match,
{'$group': {
'_id': '$author',
'unique_words': {'$addToSet': '$words.unique_words'}
}},
{'$addFields': {
'unique_words': {
'$reduce': {
'input': '$unique_words',
'initialValue': [],
'in': {'$setUnion': ['$$value', '$$this']}
}
}
}
},
{'$sort': {'_id': 1}}
])
authors = list(query)
compare = {}
words_intersections = []
for author in authors:
compare[author['_id']] = dict()
for other_author in authors:
if other_author['_id'] == author['_id']:
compare[author['_id']][author['_id']] = float('nan')
else:
author_unique_words = set(author['unique_words'])
other_author_unique_words = set(other_author['unique_words'])
author_num_unique_words = len(author_unique_words)
other_author_num_unique_words = len(other_author_unique_words)
words_intersection = author_unique_words.intersection(other_author_unique_words)
compare[author['_id']][other_author['_id']] = len(words_intersection) \
/ min(author_num_unique_words, other_author_num_unique_words) * 100.0
words_intersections.append((
author['_id'],
other_author['_id'],
words_intersection
))
# words_intersections = [ (author_name, other_author_name, ['word1', 'word2', 'word3', ...]), .... ]
        return compare, words_intersections
| src/database/reports_data_base.py | 10,239 | words_intersections = [ (author_name, other_author_name, ['word1', 'word2', 'word3', ...]), .... ] | 98 | en | 0.393872 |
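The similarity score computed in `get_words_compare` above is the overlap coefficient: intersection size divided by the smaller vocabulary, scaled to a percentage. A standalone sketch of the per-pair formula:

```python
def overlap_percent(words_a, words_b):
    """Overlap coefficient between two vocabularies, as a percentage."""
    a, b = set(words_a), set(words_b)
    common = a & b
    return len(common) / min(len(a), len(b)) * 100.0
```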
# coding: utf-8
"""
Velo Payments APIs
## Terms and Definitions Throughout this document and the Velo platform the following terms are used: * **Payor.** An entity (typically a corporation) which wishes to pay funds to one or more payees via a payout. * **Payee.** The recipient of funds paid out by a payor. * **Payment.** A single transfer of funds from a payor to a payee. * **Payout.** A batch of Payments, typically used by a payor to logically group payments (e.g. by business day). Technically there need be no relationship between the payments in a payout - a single payout can contain payments to multiple payees and/or multiple payments to a single payee. * **Sandbox.** An integration environment provided by Velo Payments which offers a similar API experience to the production environment, but all funding and payment events are simulated, along with many other services such as OFAC sanctions list checking. ## Overview The Velo Payments API allows a payor to perform a number of operations. The following is a list of the main capabilities in a natural order of execution: * Authenticate with the Velo platform * Maintain a collection of payees * Query the payor’s current balance of funds within the platform and perform additional funding * Issue payments to payees * Query the platform for a history of those payments This document describes the main concepts and APIs required to get up and running with the Velo Payments platform. It is not an exhaustive API reference. For that, please see the separate Velo Payments API Reference. ## API Considerations The Velo Payments API is REST based and uses the JSON format for requests and responses. Most calls are secured using OAuth 2 security and require a valid authentication access token for successful operation. See the Authentication section for details. 
    Where a dynamic value is required in the examples below, the {token} format is used, suggesting that the caller needs to supply the appropriate value of the token in question (without including the { or } characters). Where curl examples are given, the -d @filename.json approach is used, indicating that the request body should be placed into a file named filename.json in the current directory. Each of the curl examples in this document should be considered a single line on the command-line, regardless of how they appear in print. ## Authenticating with the Velo Platform Once Velo backoffice staff have added your organization as a payor within the Velo platform sandbox, they will create you a payor Id, an API key and an API secret and share these with you in a secure manner. You will need to use these values to authenticate with the Velo platform in order to gain access to the APIs. The steps to take are explained in the following: create a string comprising the API key (e.g. 44a9537d-d55d-4b47-8082-14061c2bcdd8) and API secret (e.g. c396b26b-137a-44fd-87f5-34631f8fd529) with a colon between them. E.g. 44a9537d-d55d-4b47-8082-14061c2bcdd8:c396b26b-137a-44fd-87f5-34631f8fd529 base64 encode this string. E.g.: NDRhOTUzN2QtZDU1ZC00YjQ3LTgwODItMTQwNjFjMmJjZGQ4OmMzOTZiMjZiLTEzN2EtNDRmZC04N2Y1LTM0NjMxZjhmZDUyOQ== create an HTTP **Authorization** header with the value set to e.g. Basic NDRhOTUzN2QtZDU1ZC00YjQ3LTgwODItMTQwNjFjMmJjZGQ4OmMzOTZiMjZiLTEzN2EtNDRmZC04N2Y1LTM0NjMxZjhmZDUyOQ== perform the Velo authentication REST call using the HTTP header created above e.g.
via curl: ``` curl -X POST \\ -H \"Content-Type: application/json\" \\ -H \"Authorization: Basic NDRhOTUzN2QtZDU1ZC00YjQ3LTgwODItMTQwNjFjMmJjZGQ4OmMzOTZiMjZiLTEzN2EtNDRmZC04N2Y1LTM0NjMxZjhmZDUyOQ==\" \\ 'https://api.sandbox.velopayments.com/v1/authenticate?grant_type=client_credentials' ``` If successful, this call will result in a **200** HTTP status code and a response body such as: ``` { \"access_token\":\"19f6bafd-93fd-4747-b229-00507bbc991f\", \"token_type\":\"bearer\", \"expires_in\":1799, \"scope\":\"...\" } ``` ## API access following authentication Following successful authentication, the value of the access_token field in the response (indicated in green above) should then be presented with all subsequent API calls to allow the Velo platform to validate that the caller is authenticated. This is achieved by setting the HTTP Authorization header with the value set to e.g. Bearer 19f6bafd-93fd-4747-b229-00507bbc991f such as the curl example below: ``` -H \"Authorization: Bearer 19f6bafd-93fd-4747-b229-00507bbc991f \" ``` If you make other Velo API calls which require authorization but the Authorization header is missing or invalid then you will get a **401** HTTP status response. # noqa: E501
The version of the OpenAPI document: 2.26.124
Generated by: https://openapi-generator.tech
"""
import pprint
import re # noqa: F401
import six
class Name2(object):
"""NOTE: This class is auto generated by OpenAPI Generator.
Ref: https://openapi-generator.tech
Do not edit the class manually.
"""
"""
Attributes:
openapi_types (dict): The key is attribute name
and the value is attribute type.
attribute_map (dict): The key is attribute name
and the value is json key in definition.
"""
openapi_types = {
'title': 'str',
'first_name': 'str',
'other_names': 'str',
'last_name': 'str'
}
attribute_map = {
'title': 'title',
'first_name': 'firstName',
'other_names': 'otherNames',
'last_name': 'lastName'
}
def __init__(self, title=None, first_name=None, other_names=None, last_name=None): # noqa: E501
"""Name2 - a model defined in OpenAPI""" # noqa: E501
self._title = None
self._first_name = None
self._other_names = None
self._last_name = None
self.discriminator = None
if title is not None:
self.title = title
if first_name is not None:
self.first_name = first_name
if other_names is not None:
self.other_names = other_names
if last_name is not None:
self.last_name = last_name
@property
def title(self):
"""Gets the title of this Name2. # noqa: E501
:return: The title of this Name2. # noqa: E501
:rtype: str
"""
return self._title
@title.setter
def title(self, title):
"""Sets the title of this Name2.
:param title: The title of this Name2. # noqa: E501
:type: str
"""
if title is not None and len(title) > 10:
raise ValueError("Invalid value for `title`, length must be less than or equal to `10`") # noqa: E501
if title is not None and len(title) < 1:
raise ValueError("Invalid value for `title`, length must be greater than or equal to `1`") # noqa: E501
self._title = title
@property
def first_name(self):
"""Gets the first_name of this Name2. # noqa: E501
:return: The first_name of this Name2. # noqa: E501
:rtype: str
"""
return self._first_name
@first_name.setter
def first_name(self, first_name):
"""Sets the first_name of this Name2.
:param first_name: The first_name of this Name2. # noqa: E501
:type: str
"""
if first_name is not None and len(first_name) > 40:
raise ValueError("Invalid value for `first_name`, length must be less than or equal to `40`") # noqa: E501
if first_name is not None and len(first_name) < 1:
raise ValueError("Invalid value for `first_name`, length must be greater than or equal to `1`") # noqa: E501
self._first_name = first_name
@property
def other_names(self):
"""Gets the other_names of this Name2. # noqa: E501
:return: The other_names of this Name2. # noqa: E501
:rtype: str
"""
return self._other_names
@other_names.setter
def other_names(self, other_names):
"""Sets the other_names of this Name2.
:param other_names: The other_names of this Name2. # noqa: E501
:type: str
"""
if other_names is not None and len(other_names) > 40:
raise ValueError("Invalid value for `other_names`, length must be less than or equal to `40`") # noqa: E501
if other_names is not None and len(other_names) < 1:
raise ValueError("Invalid value for `other_names`, length must be greater than or equal to `1`") # noqa: E501
self._other_names = other_names
@property
def last_name(self):
"""Gets the last_name of this Name2. # noqa: E501
:return: The last_name of this Name2. # noqa: E501
:rtype: str
"""
return self._last_name
@last_name.setter
def last_name(self, last_name):
"""Sets the last_name of this Name2.
:param last_name: The last_name of this Name2. # noqa: E501
:type: str
"""
if last_name is not None and len(last_name) > 40:
raise ValueError("Invalid value for `last_name`, length must be less than or equal to `40`") # noqa: E501
if last_name is not None and len(last_name) < 1:
raise ValueError("Invalid value for `last_name`, length must be greater than or equal to `1`") # noqa: E501
self._last_name = last_name
def to_dict(self):
"""Returns the model properties as a dict"""
result = {}
for attr, _ in six.iteritems(self.openapi_types):
value = getattr(self, attr)
if isinstance(value, list):
result[attr] = list(map(
lambda x: x.to_dict() if hasattr(x, "to_dict") else x,
value
))
elif hasattr(value, "to_dict"):
result[attr] = value.to_dict()
elif isinstance(value, dict):
result[attr] = dict(map(
lambda item: (item[0], item[1].to_dict())
if hasattr(item[1], "to_dict") else item,
value.items()
))
else:
result[attr] = value
return result
def to_str(self):
"""Returns the string representation of the model"""
return pprint.pformat(self.to_dict())
def __repr__(self):
"""For `print` and `pprint`"""
return self.to_str()
def __eq__(self, other):
"""Returns true if both objects are equal"""
if not isinstance(other, Name2):
return False
return self.__dict__ == other.__dict__
def __ne__(self, other):
"""Returns true if both objects are not equal"""
return not self == other
| velo_payments/models/name2.py | 10,846 | NOTE: This class is auto generated by OpenAPI Generator.
Ref: https://openapi-generator.tech
Do not edit the class manually.
Returns true if both objects are equal
Name2 - a model defined in OpenAPI
Returns true if both objects are not equal
For `print` and `pprint`
Gets the first_name of this Name2. # noqa: E501
:return: The first_name of this Name2. # noqa: E501
:rtype: str
Sets the first_name of this Name2.
:param first_name: The first_name of this Name2. # noqa: E501
:type: str
Gets the last_name of this Name2. # noqa: E501
:return: The last_name of this Name2. # noqa: E501
:rtype: str
Sets the last_name of this Name2.
:param last_name: The last_name of this Name2. # noqa: E501
:type: str
Gets the other_names of this Name2. # noqa: E501
:return: The other_names of this Name2. # noqa: E501
:rtype: str
Sets the other_names of this Name2.
:param other_names: The other_names of this Name2. # noqa: E501
:type: str
Gets the title of this Name2. # noqa: E501
:return: The title of this Name2. # noqa: E501
:rtype: str
Sets the title of this Name2.
:param title: The title of this Name2. # noqa: E501
:type: str
Returns the model properties as a dict
Returns the string representation of the model
Velo Payments APIs
## Terms and Definitions Throughout this document and the Velo platform the following terms are used: * **Payor.** An entity (typically a corporation) which wishes to pay funds to one or more payees via a payout. * **Payee.** The recipient of funds paid out by a payor. * **Payment.** A single transfer of funds from a payor to a payee. * **Payout.** A batch of Payments, typically used by a payor to logically group payments (e.g. by business day). Technically there need be no relationship between the payments in a payout - a single payout can contain payments to multiple payees and/or multiple payments to a single payee. * **Sandbox.** An integration environment provided by Velo Payments which offers a similar API experience to the production environment, but all funding and payment events are simulated, along with many other services such as OFAC sanctions list checking. ## Overview The Velo Payments API allows a payor to perform a number of operations. The following is a list of the main capabilities in a natural order of execution: * Authenticate with the Velo platform * Maintain a collection of payees * Query the payor’s current balance of funds within the platform and perform additional funding * Issue payments to payees * Query the platform for a history of those payments This document describes the main concepts and APIs required to get up and running with the Velo Payments platform. It is not an exhaustive API reference. For that, please see the separate Velo Payments API Reference. ## API Considerations The Velo Payments API is REST based and uses the JSON format for requests and responses. Most calls are secured using OAuth 2 security and require a valid authentication access token for successful operation. See the Authentication section for details. 
Where a dynamic value is required in the examples below, the {token} format is used, suggesting that the caller needs to supply the appropriate value of the token in question (without including the { or } characters). Where curl examples are given, the –d @filename.json approach is used, indicating that the request body should be placed into a file named filename.json in the current directory. Each of the curl examples in this document should be considered a single line on the command-line, regardless of how they appear in print. ## Authenticating with the Velo Platform Once Velo backoffice staff have added your organization as a payor within the Velo platform sandbox, they will create you a payor Id, an API key and an API secret and share these with you in a secure manner. You will need to use these values to authenticate with the Velo platform in order to gain access to the APIs. The steps to take are explained in the following: create a string comprising the API key (e.g. 44a9537d-d55d-4b47-8082-14061c2bcdd8) and API secret (e.g. c396b26b-137a-44fd-87f5-34631f8fd529) with a colon between them. E.g. 44a9537d-d55d-4b47-8082-14061c2bcdd8:c396b26b-137a-44fd-87f5-34631f8fd529 base64 encode this string. E.g.: NDRhOTUzN2QtZDU1ZC00YjQ3LTgwODItMTQwNjFjMmJjZGQ4OmMzOTZiMjZiLTEzN2EtNDRmZC04N2Y1LTM0NjMxZjhmZDUyOQ== create an HTTP **Authorization** header with the value set to e.g. Basic NDRhOTUzN2QtZDU1ZC00YjQ3LTgwODItMTQwNjFjMmJjZGQ4OmMzOTZiMjZiLTEzN2EtNDRmZC04N2Y1LTM0NjMxZjhmZDUyOQ== perform the Velo authentication REST call using the HTTP header created above e.g. 
via curl: ``` curl -X POST \ -H "Content-Type: application/json" \ -H "Authorization: Basic NDRhOTUzN2QtZDU1ZC00YjQ3LTgwODItMTQwNjFjMmJjZGQ4OmMzOTZiMjZiLTEzN2EtNDRmZC04N2Y1LTM0NjMxZjhmZDUyOQ==" \ 'https://api.sandbox.velopayments.com/v1/authenticate?grant_type=client_credentials' ``` If successful, this call will result in a **200** HTTP status code and a response body such as: ``` { "access_token":"19f6bafd-93fd-4747-b229-00507bbc991f", "token_type":"bearer", "expires_in":1799, "scope":"..." } ``` ## API access following authentication Following successful authentication, the value of the access_token field in the response (indicated in green above) should then be presented with all subsequent API calls to allow the Velo platform to validate that the caller is authenticated. This is achieved by setting the HTTP Authorization header with the value set to e.g. Bearer 19f6bafd-93fd-4747-b229-00507bbc991f such as the curl example below: ``` -H "Authorization: Bearer 19f6bafd-93fd-4747-b229-00507bbc991f " ``` If you make other Velo API calls which require authorization but the Authorization header is missing or invalid then you will get a **401** HTTP status response. # noqa: E501
The version of the OpenAPI document: 2.26.124
Generated by: https://openapi-generator.tech
coding: utf-8 noqa: F401 noqa: E501 noqa: E501 noqa: E501 noqa: E501 noqa: E501 noqa: E501 noqa: E501 noqa: E501 noqa: E501 noqa: E501 | 6,110 | en | 0.805235 |
# -*- coding: utf-8 -*-
# Form implementation generated from reading ui file 'calculator2.ui'
#
# Created by: PyQt5 UI code generator 5.15.3
#
# WARNING: Any manual changes made to this file will be lost when pyuic5 is
# run again. Do not edit this file unless you know what you are doing.
from PyQt5 import QtCore, QtGui, QtWidgets
class Ui_MainWindow(object):
def setupUi(self, MainWindow):
MainWindow.setObjectName("MainWindow")
MainWindow.setEnabled(True)
MainWindow.setFixedSize(QtCore.QSize(471, 400))
MainWindow.setTabletTracking(False)
MainWindow.setDockNestingEnabled(False)
MainWindow.setUnifiedTitleAndToolBarOnMac(False)
self.centralwidget = QtWidgets.QWidget(MainWindow)
self.centralwidget.setMinimumSize(QtCore.QSize(471, 390))
self.centralwidget.setMaximumSize(QtCore.QSize(471, 390))
self.centralwidget.setObjectName("centralwidget")
self.lcdNumber = QtWidgets.QLCDNumber(self.centralwidget)
self.lcdNumber.setGeometry(QtCore.QRect(10, 40, 451, 101))
self.lcdNumber.setStyleSheet("background-color: rgb(255, 255, 255);")
self.lcdNumber.setFrameShape(QtWidgets.QFrame.Box)
self.lcdNumber.setSmallDecimalPoint(False)
self.lcdNumber.setDigitCount(14)
self.lcdNumber.setSegmentStyle(QtWidgets.QLCDNumber.Flat)
self.lcdNumber.setObjectName("lcdNumber")
self.num_1 = QtWidgets.QPushButton(self.centralwidget)
self.num_1.setGeometry(QtCore.QRect(10, 290, 91, 51))
font = QtGui.QFont()
font.setPointSize(20)
self.num_1.setFont(font)
self.num_1.setStyleSheet(
"background-color: rgb(71, 64, 64);\n"
"color: rgb(255, 255, 255);")
self.num_1.setObjectName("num_1")
self.buttonGroup = QtWidgets.QButtonGroup(MainWindow)
self.buttonGroup.setObjectName("buttonGroup")
self.buttonGroup.addButton(self.num_1)
self.num_2 = QtWidgets.QPushButton(self.centralwidget)
self.num_2.setGeometry(QtCore.QRect(100, 290, 91, 51))
font = QtGui.QFont()
font.setPointSize(20)
self.num_2.setFont(font)
self.num_2.setStyleSheet(
"background-color: rgb(71, 64, 64);\n"
"color: rgb(255, 255, 255);")
self.num_2.setObjectName("num_2")
self.buttonGroup.addButton(self.num_2)
self.num_3 = QtWidgets.QPushButton(self.centralwidget)
self.num_3.setGeometry(QtCore.QRect(190, 290, 91, 51))
font = QtGui.QFont()
font.setPointSize(20)
self.num_3.setFont(font)
self.num_3.setStyleSheet(
"background-color: rgb(71, 64, 64);\n"
"color: rgb(255, 255, 255);")
self.num_3.setObjectName("num_3")
self.buttonGroup.addButton(self.num_3)
self.num_plus = QtWidgets.QPushButton(self.centralwidget)
self.num_plus.setGeometry(QtCore.QRect(280, 190, 91, 51))
font = QtGui.QFont()
font.setPointSize(20)
self.num_plus.setFont(font)
self.num_plus.setStyleSheet(
"background-color: rgb(255, 85, 0);\n"
"color: rgb(255, 255, 255);")
self.num_plus.setObjectName("num_plus")
self.num_4 = QtWidgets.QPushButton(self.centralwidget)
self.num_4.setGeometry(QtCore.QRect(10, 240, 91, 51))
font = QtGui.QFont()
font.setPointSize(20)
self.num_4.setFont(font)
self.num_4.setStyleSheet(
"background-color: rgb(71, 64, 64);\n"
"color: rgb(255, 255, 255);")
self.num_4.setObjectName("num_4")
self.buttonGroup.addButton(self.num_4)
self.num_5 = QtWidgets.QPushButton(self.centralwidget)
self.num_5.setGeometry(QtCore.QRect(100, 240, 91, 51))
font = QtGui.QFont()
font.setPointSize(20)
self.num_5.setFont(font)
self.num_5.setStyleSheet(
"background-color: rgb(71, 64, 64);\n"
"color: rgb(255, 255, 255);")
self.num_5.setObjectName("num_5")
self.buttonGroup.addButton(self.num_5)
self.num_6 = QtWidgets.QPushButton(self.centralwidget)
self.num_6.setGeometry(QtCore.QRect(190, 240, 91, 51))
font = QtGui.QFont()
font.setPointSize(20)
self.num_6.setFont(font)
self.num_6.setStyleSheet(
"background-color: rgb(71, 64, 64);\n"
"color: rgb(255, 255, 255);")
self.num_6.setObjectName("num_6")
self.buttonGroup.addButton(self.num_6)
self.num_minus = QtWidgets.QPushButton(self.centralwidget)
self.num_minus.setGeometry(QtCore.QRect(280, 240, 91, 51))
font = QtGui.QFont()
font.setPointSize(20)
self.num_minus.setFont(font)
self.num_minus.setStyleSheet(
"background-color: rgb(255, 85, 0);\n"
"color: rgb(255, 255, 255);")
self.num_minus.setObjectName("num_minus")
self.num_7 = QtWidgets.QPushButton(self.centralwidget)
self.num_7.setGeometry(QtCore.QRect(10, 190, 91, 51))
font = QtGui.QFont()
font.setPointSize(20)
self.num_7.setFont(font)
self.num_7.setStyleSheet(
"background-color: rgb(71, 64, 64);\n"
"color: rgb(255, 255, 255);")
self.num_7.setObjectName("num_7")
self.buttonGroup.addButton(self.num_7)
self.num_8 = QtWidgets.QPushButton(self.centralwidget)
self.num_8.setGeometry(QtCore.QRect(100, 190, 91, 51))
font = QtGui.QFont()
font.setPointSize(20)
self.num_8.setFont(font)
self.num_8.setStyleSheet(
"background-color: rgb(71, 64, 64);\n"
"color: rgb(255, 255, 255);")
self.num_8.setObjectName("num_8")
self.buttonGroup.addButton(self.num_8)
self.num_9 = QtWidgets.QPushButton(self.centralwidget)
self.num_9.setGeometry(QtCore.QRect(190, 190, 91, 51))
font = QtGui.QFont()
font.setPointSize(20)
self.num_9.setFont(font)
self.num_9.setStyleSheet(
"background-color: rgb(71, 64, 64);\n"
"color: rgb(255, 255, 255);")
self.num_9.setObjectName("num_9")
self.buttonGroup.addButton(self.num_9)
self.num_mult = QtWidgets.QPushButton(self.centralwidget)
self.num_mult.setGeometry(QtCore.QRect(280, 290, 91, 51))
font = QtGui.QFont()
font.setPointSize(20)
self.num_mult.setFont(font)
self.num_mult.setStyleSheet(
"background-color: rgb(255, 85, 0);\n"
"color: rgb(255, 255, 255);")
self.num_mult.setObjectName("num_mult")
self.num_point = QtWidgets.QPushButton(self.centralwidget)
self.num_point.setGeometry(QtCore.QRect(10, 340, 91, 51))
font = QtGui.QFont()
font.setPointSize(20)
self.num_point.setFont(font)
self.num_point.setStyleSheet(
"background-color: rgb(71, 64, 64);\n"
"color: rgb(255, 255, 255);")
self.num_point.setObjectName("num_point")
self.buttonGroup.addButton(self.num_point)
self.num_0 = QtWidgets.QPushButton(self.centralwidget)
self.num_0.setGeometry(QtCore.QRect(100, 340, 91, 51))
font = QtGui.QFont()
font.setPointSize(20)
self.num_0.setFont(font)
self.num_0.setStyleSheet(
"background-color: rgb(71, 64, 64);\n"
"color: rgb(255, 255, 255);")
self.num_0.setObjectName("num_0")
self.buttonGroup.addButton(self.num_0)
self.num_eq = QtWidgets.QPushButton(self.centralwidget)
self.num_eq.setGeometry(QtCore.QRect(370, 340, 91, 51))
font = QtGui.QFont()
font.setPointSize(20)
self.num_eq.setFont(font)
self.num_eq.setStyleSheet(
"background-color: rgb(170, 0, 0);\n"
"color: rgb(255, 255, 255);")
self.num_eq.setObjectName("num_eq")
self.num_division = QtWidgets.QPushButton(self.centralwidget)
self.num_division.setGeometry(QtCore.QRect(280, 340, 91, 51))
font = QtGui.QFont()
font.setPointSize(20)
self.num_division.setFont(font)
self.num_division.setStyleSheet(
"background-color: rgb(255, 85, 0);\n"
"color: rgb(255, 255, 255);")
self.num_division.setObjectName("num_division")
self.num_c = QtWidgets.QPushButton(self.centralwidget)
self.num_c.setGeometry(QtCore.QRect(370, 150, 91, 41))
font = QtGui.QFont()
font.setPointSize(20)
self.num_c.setFont(font)
self.num_c.setStyleSheet(
"background-color: rgb(255, 170, 0);\n"
"color: rgb(255, 255, 255);")
self.num_c.setObjectName("num_c")
self.num_ce = QtWidgets.QPushButton(self.centralwidget)
self.num_ce.setGeometry(QtCore.QRect(280, 150, 91, 41))
font = QtGui.QFont()
font.setPointSize(20)
self.num_ce.setFont(font)
self.num_ce.setStyleSheet(
"background-color: rgb(255, 170, 0);\n"
"color: rgb(255, 255, 255);")
self.num_ce.setShortcut("")
self.num_ce.setAutoDefault(False)
self.num_ce.setDefault(False)
self.num_ce.setFlat(False)
self.num_ce.setObjectName("num_ce")
self.num_backspace = QtWidgets.QPushButton(self.centralwidget)
self.num_backspace.setGeometry(QtCore.QRect(370, 190, 91, 51))
font = QtGui.QFont()
font.setPointSize(30)
self.num_backspace.setFont(font)
self.num_backspace.setStyleSheet(
"background-color: rgb(255, 85, 0);\n"
"color: rgb(255, 255, 255);")
self.num_backspace.setObjectName("num_backspace")
self.num_procent = QtWidgets.QPushButton(self.centralwidget)
self.num_procent.setGeometry(QtCore.QRect(370, 290, 91, 51))
font = QtGui.QFont()
font.setPointSize(20)
self.num_procent.setFont(font)
self.num_procent.setStyleSheet(
"background-color: rgb(255, 85, 0);\n"
"color: rgb(255, 255, 255);")
self.num_procent.setObjectName("num_procent")
self.num_plus_minus = QtWidgets.QPushButton(self.centralwidget)
self.num_plus_minus.setGeometry(QtCore.QRect(190, 340, 91, 51))
font = QtGui.QFont()
font.setPointSize(20)
self.num_plus_minus.setFont(font)
self.num_plus_minus.setStyleSheet(
"background-color: rgb(71, 64, 64);\n"
"color: rgb(255, 255, 255);")
self.num_plus_minus.setObjectName("num_plus_minus")
self.buttonGroup.addButton(self.num_plus_minus)
self.history = QtWidgets.QLabel(self.centralwidget)
self.history.setGeometry(QtCore.QRect(10, 10, 451, 21))
font = QtGui.QFont()
font.setPointSize(12)
self.history.setFont(font)
self.history.setLayoutDirection(QtCore.Qt.LeftToRight)
self.history.setText("")
self.history.setAlignment(
QtCore.Qt.AlignRight |
QtCore.Qt.AlignTrailing |
QtCore.Qt.AlignVCenter)
self.history.setTextInteractionFlags(QtCore.Qt.LinksAccessibleByMouse)
self.history.setObjectName("history")
self.num_mc = QtWidgets.QPushButton(self.centralwidget)
self.num_mc.setGeometry(QtCore.QRect(10, 150, 68, 41))
font = QtGui.QFont()
font.setPointSize(20)
self.num_mc.setFont(font)
self.num_mc.setStyleSheet(
"background-color: rgb(193, 193, 193);"
"color: rgb(255, 255, 255);\n")
self.num_mc.setObjectName("num_mc")
self.num_mr = QtWidgets.QPushButton(self.centralwidget)
self.num_mr.setGeometry(QtCore.QRect(77, 150, 68, 41))
font = QtGui.QFont()
font.setPointSize(20)
self.num_mr.setFont(font)
self.num_mr.setStyleSheet(
"background-color: rgb(193, 193, 193);"
"color: rgb(255, 255, 255);\n")
self.num_mr.setObjectName("num_mr")
self.num_m_minus = QtWidgets.QPushButton(self.centralwidget)
self.num_m_minus.setGeometry(QtCore.QRect(144, 150, 68, 41))
font = QtGui.QFont()
font.setPointSize(20)
self.num_m_minus.setFont(font)
self.num_m_minus.setStyleSheet(
"background-color: rgb(193, 193, 193);"
"color: rgb(255, 255, 255);\n")
self.num_m_minus.setObjectName("num_m_minus")
self.num_sqrt = QtWidgets.QPushButton(self.centralwidget)
self.num_sqrt.setGeometry(QtCore.QRect(370, 240, 91, 51))
font = QtGui.QFont()
font.setFamily("MS Shell Dlg 2")
font.setPointSize(20)
font.setBold(False)
font.setWeight(50)
self.num_sqrt.setFont(font)
self.num_sqrt.setStyleSheet(
"background-color: rgb(255, 85, 0);\n"
"color: rgb(255, 255, 255);")
self.num_sqrt.setObjectName("num_sqrt")
self.num_m_plus = QtWidgets.QPushButton(self.centralwidget)
self.num_m_plus.setGeometry(QtCore.QRect(211, 150, 70, 41))
font = QtGui.QFont()
font.setPointSize(20)
self.num_m_plus.setFont(font)
self.num_m_plus.setStyleSheet(
"background-color: rgb(193, 193, 193);"
"color: rgb(255, 255, 255);\n")
self.num_m_plus.setObjectName("num_m_plus")
self.label = QtWidgets.QLabel(self.centralwidget)
self.label.setGeometry(QtCore.QRect(15, 43, 20, 20))
font = QtGui.QFont()
font.setPointSize(20)
self.label.setFont(font)
self.label.setObjectName("label")
self.lcdNumber.raise_()
self.history.raise_()
self.num_mc.raise_()
self.num_mr.raise_()
self.num_m_minus.raise_()
self.num_m_plus.raise_()
self.num_ce.raise_()
self.num_c.raise_()
self.num_7.raise_()
self.num_8.raise_()
self.num_9.raise_()
self.num_plus.raise_()
self.num_backspace.raise_()
self.num_4.raise_()
self.num_5.raise_()
self.num_6.raise_()
self.num_1.raise_()
self.num_2.raise_()
self.num_3.raise_()
self.num_point.raise_()
self.num_0.raise_()
self.num_minus.raise_()
self.num_mult.raise_()
self.num_plus_minus.raise_()
self.num_division.raise_()
self.label.raise_()
self.num_sqrt.raise_()
self.num_procent.raise_()
self.num_eq.raise_()
MainWindow.setCentralWidget(self.centralwidget)
self.statusbar = QtWidgets.QStatusBar(MainWindow)
self.statusbar.setObjectName("statusbar")
MainWindow.setStatusBar(self.statusbar)
self.retranslateUi(MainWindow)
QtCore.QMetaObject.connectSlotsByName(MainWindow)
def retranslateUi(self, MainWindow):
_translate = QtCore.QCoreApplication.translate
        MainWindow.setWindowTitle(_translate("MainWindow", "Calculator v1.1"))
self.num_1.setText(_translate("MainWindow", "1"))
self.num_2.setText(_translate("MainWindow", "2"))
self.num_3.setText(_translate("MainWindow", "3"))
self.num_plus.setText(_translate("MainWindow", "+"))
self.num_4.setText(_translate("MainWindow", "4"))
self.num_5.setText(_translate("MainWindow", "5"))
self.num_6.setText(_translate("MainWindow", "6"))
self.num_minus.setText(_translate("MainWindow", "-"))
self.num_7.setText(_translate("MainWindow", "7"))
self.num_8.setText(_translate("MainWindow", "8"))
self.num_9.setText(_translate("MainWindow", "9"))
self.num_mult.setText(_translate("MainWindow", "*"))
self.num_point.setText(_translate("MainWindow", "."))
self.num_0.setText(_translate("MainWindow", "0"))
self.num_eq.setText(_translate("MainWindow", "="))
self.num_division.setText(_translate("MainWindow", "÷"))
self.num_c.setText(_translate("MainWindow", "C"))
self.num_ce.setText(_translate("MainWindow", "CE"))
self.num_backspace.setText(_translate("MainWindow", "←"))
self.num_procent.setText(_translate("MainWindow", "%"))
self.num_plus_minus.setText(_translate("MainWindow", "+/-"))
self.num_mc.setText(_translate("MainWindow", "MC"))
self.num_mr.setText(_translate("MainWindow", "MR"))
self.num_m_minus.setText(_translate("MainWindow", "M-"))
self.num_sqrt.setText(_translate("MainWindow", "√"))
self.num_m_plus.setText(_translate("MainWindow", "M+"))
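As the header warns, pyuic5 overwrites this module on every regeneration, so application logic belongs in a separate module that subclasses the generated class. A typical sketch of that pattern (the `gui_calculator` module name and `on_eq` handler are assumptions; this needs PyQt5 and a display, so it is shown here as an illustration rather than a runnable test):

```python
import sys
from PyQt5 import QtWidgets
from gui_calculator import Ui_MainWindow  # the generated module above

class Calculator(QtWidgets.QMainWindow, Ui_MainWindow):
    def __init__(self):
        super().__init__()
        self.setupUi(self)                       # build the generated widgets
        self.num_eq.clicked.connect(self.on_eq)  # wire signals here, not in the generated file

    def on_eq(self):
        pass  # evaluate the pending expression and update self.lcdNumber

if __name__ == "__main__":
    app = QtWidgets.QApplication(sys.argv)
    window = Calculator()
    window.show()
    sys.exit(app.exec_())
```

Because the handlers live in the subclass, they survive the next `pyuic5` run untouched.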
| gui_calculator.py | 16,968 | -*- coding: utf-8 -*- Form implementation generated from reading ui file 'calculator2.ui' Created by: PyQt5 UI code generator 5.15.3 WARNING: Any manual changes made to this file will be lost when pyuic5 is run again. Do not edit this file unless you know what you are doing. | 276 | en | 0.902723 |
"""
Seed-based connectivity on the surface
=======================================
The dataset that is a subset of the enhanced NKI Rockland sample
(http://fcon_1000.projects.nitrc.org/indi/enhanced/, Nooner et al, 2012)
Resting state fMRI scans (TR=645ms) of 102 subjects were preprocessed
(https://github.com/fliem/nki_nilearn) and projected onto the Freesurfer
fsaverage5 template (Dale et al, 1999, Fischl et al, 1999). For this example
we use the time series of a single subject's left hemisphere.
The Destrieux parcellation (Destrieux et al, 2010) in fsaverage5 space as
distributed with Freesurfer is used to select a seed region in the posterior
cingulate cortex.
Functional connectivity of the seed region to all other cortical nodes in the
same hemisphere is calculated using Pearson product-moment correlation
coefficient.
The :func:`nilearn.plotting.plot_surf_stat_map` function is used
to plot the resulting statistical map on the (inflated) pial surface.
See also :ref:`for a similar example but using volumetric input data
<sphx_glr_auto_examples_03_connectivity_plot_seed_to_voxel_correlation.py>`.
See :ref:`plotting` for more details on plotting tools.
NOTE: This example needs matplotlib version higher than 1.3.1.
References
----------
Nooner et al, (2012). The NKI-Rockland Sample: A model for accelerating the
pace of discovery science in psychiatry. Frontiers in neuroscience 6, 152.
URL http://dx.doi.org/10.3389/fnins.2012.00152
Dale et al, (1999). Cortical surface-based analysis.I. Segmentation and
surface reconstruction. Neuroimage 9.
URL http://dx.doi.org/10.1006/nimg.1998.0395
Fischl et al, (1999). Cortical surface-based analysis. II: Inflation,
flattening, and a surface-based coordinate system. Neuroimage 9.
http://dx.doi.org/10.1006/nimg.1998.0396
Destrieux et al, (2010). Automatic parcellation of human cortical gyri and
sulci using standard anatomical nomenclature. NeuroImage, 53, 1.
URL http://dx.doi.org/10.1016/j.neuroimage.2010.06.010.
"""
###############################################################################
# Retrieving the data
# -------------------
# NKI resting state data from nilearn
from nilearn import datasets
nki_dataset = datasets.fetch_surf_nki_enhanced(n_subjects=1)
# The nki dictionary contains file names for the data
# of all downloaded subjects.
print(('Resting state data of the first subject on the '
       'fsaverage5 surface left hemisphere is at: %s' %
       nki_dataset['func_left'][0]))
# Destrieux parcellation for left hemisphere in fsaverage5 space
destrieux_atlas = datasets.fetch_atlas_surf_destrieux()
parcellation = destrieux_atlas['map_left']
labels = destrieux_atlas['labels']
# Fsaverage5 surface template
fsaverage = datasets.fetch_surf_fsaverage5()
# The fsaverage dataset contains file names pointing to
# the file locations
print('Fsaverage5 pial surface of left hemisphere is at: %s' %
fsaverage['pial_left'])
print('Fsaverage5 inflated surface of left hemisphere is at: %s' %
fsaverage['infl_left'])
print('Fsaverage5 sulcal depth map of left hemisphere is at: %s' %
fsaverage['sulc_left'])
###############################################################################
# Extracting the seed time series
# --------------------------------
# Load resting state time series from nilearn
from nilearn import surface
timeseries = surface.load_surf_data(nki_dataset['func_left'][0])
# Extract seed region via label
pcc_region = b'G_cingul-Post-dorsal'
import numpy as np
pcc_labels = np.where(parcellation == labels.index(pcc_region))[0]
# Extract time series from seed region
seed_timeseries = np.mean(timeseries[pcc_labels], axis=0)
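The seed-selection lines above reduce to plain index masking; a toy sketch with a fake parcellation (hypothetical three-entry label list, NumPy only):

```python
import numpy as np

labels = [b'Unknown', b'G_cingul-Post-dorsal', b'S_central']
parcellation = np.array([0, 1, 1, 2, 0, 1])  # one label index per surface vertex

seed_label = labels.index(b'G_cingul-Post-dorsal')    # -> 1
pcc_labels = np.where(parcellation == seed_label)[0]  # vertex indices in the seed
# pcc_labels is array([1, 2, 5])
```

`timeseries[pcc_labels]` then picks out exactly those rows, and their mean along axis 0 is the seed signal.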
###############################################################################
# Calculating seed-based functional connectivity
# ----------------------------------------------
# Calculate Pearson product-moment correlation coefficient between seed
# time series and timeseries of all cortical nodes of the hemisphere
from scipy import stats
stat_map = np.zeros(timeseries.shape[0])
for i in range(timeseries.shape[0]):
stat_map[i] = stats.pearsonr(seed_timeseries, timeseries[i])[0]
# Re-mask previously masked nodes (medial wall)
stat_map[np.where(np.mean(timeseries, axis=1) == 0)] = 0
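The node-by-node `pearsonr` loop above can also be vectorized, using the identity that Pearson r is the normalized dot product of mean-centered signals. A minimal sketch on synthetic data (shapes mirror the example: nodes × timepoints):

```python
import numpy as np

rng = np.random.default_rng(0)
timeseries = rng.standard_normal((20, 100))    # 20 "vertices", 100 timepoints
seed_timeseries = timeseries[:5].mean(axis=0)  # mean signal of a 5-vertex seed

# Center along time, then take normalized dot products against the seed.
ts = timeseries - timeseries.mean(axis=1, keepdims=True)
seed = seed_timeseries - seed_timeseries.mean()
stat_map = (ts @ seed) / (np.linalg.norm(ts, axis=1) * np.linalg.norm(seed))
```

This computes all correlations in one pass and agrees with `scipy.stats.pearsonr` applied per node.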
###############################################################################
# Display ROI on surface
from nilearn import plotting
plotting.plot_surf_roi(fsaverage['pial_left'], roi_map=pcc_labels,
hemi='left', view='medial',
bg_map=fsaverage['sulc_left'], bg_on_data=True,
title='PCC Seed')
###############################################################################
# Display unthresholded stat map with dimmed background
plotting.plot_surf_stat_map(fsaverage['pial_left'], stat_map=stat_map,
hemi='left', view='medial',
bg_map=fsaverage['sulc_left'], bg_on_data=True,
darkness=.5, title='Correlation map')
###############################################################################
# Display unthresholded stat map without background map, transparency is
# automatically set to .5, but can also be controlled with the alpha parameter
plotting.plot_surf_stat_map(fsaverage['pial_left'], stat_map=stat_map,
hemi='left', view='medial',
title='Plotting without background')
###############################################################################
# Many different options are available for plotting, for example thresholding,
# or using custom colormaps
plotting.plot_surf_stat_map(fsaverage['pial_left'], stat_map=stat_map,
hemi='left', view='medial',
bg_map=fsaverage['sulc_left'], bg_on_data=True,
cmap='Spectral', threshold=.5,
title='Threshold and colormap')
###############################################################################
# The plots can be saved to file, in which case the display is closed after
# creating the figure
plotting.plot_surf_stat_map(fsaverage['infl_left'], stat_map=stat_map,
hemi='left', bg_map=fsaverage['sulc_left'],
bg_on_data=True, threshold=.6,
output_file='plot_surf_stat_map.png')
plotting.show()
| examples/01_plotting/plot_surf_stat_map.py | 6,445 | Seed-based connectivity on the surface
import logging
import sys
from abc import abstractmethod
from typing import TextIO
import pandas as pd
from datapackage import Package
from pandas_datapackage_reader import read_datapackage
from sqlalchemy import create_engine
from otoole import read_packaged_file
logger = logging.getLogger(__name__)
class DataPackageTo(object):
"""Convert a data package to another format
Arguments
---------
datapackage: str
The path to the databackage
datafilepath: str
The path to the destination file or folder
sql: bool, default=False
Flag to set whether the source datapackage is in sqlite format
"""
def __init__(self, datapackage: str, datafilepath: str, sql: bool = False):
self.datapackage = datapackage
self.datafilepath = datafilepath
self.sql = sql
self.package = self._get_package()
self.default_values = self._get_default_values()
self.config = read_packaged_file("config.yaml", "otoole.preprocess")
def _get_package(self):
if self.sql:
engine = create_engine("sqlite:///{}".format(self.datapackage))
package = Package(storage="sql", engine=engine)
else:
package = read_datapackage(self.datapackage) # typing: datapackage.Package
return package
def _get_default_values(self):
default_resource = (
self.package.pop("default_values").set_index("name").to_dict()
)
return default_resource["default_value"]
def convert(self):
"""Perform the conversion from datapackage to destination format
"""
handle = self._header()
logger.debug(self.default_values)
for name, df in self.package.items():
logger.debug(name)
if df.empty:
columns = [x["name"] for x in df._metadata["schema"]["fields"]]
df = pd.DataFrame(columns=columns)
df = df.reset_index()
if "index" in df.columns:
df = df.drop(columns="index")
logger.debug("Number of columns: %s, %s", len(df.columns), df.columns)
if len(df.columns) > 1:
default_value = self.default_values[name]
self._write_parameter(df, name, handle, default=default_value)
else:
self._write_set(df, name, handle)
self._footer(handle)
handle.close()
@abstractmethod
def _header(self) -> TextIO:
raise NotImplementedError()
@abstractmethod
def _write_parameter(
self, df: pd.DataFrame, parameter_name: str, handle: TextIO, default: float
) -> pd.DataFrame:
"""Write parameter data"""
raise NotImplementedError()
@abstractmethod
def _write_set(self, df: pd.DataFrame, set_name, handle: TextIO) -> pd.DataFrame:
"""Write set data"""
raise NotImplementedError()
@abstractmethod
def _footer(self, handle: TextIO):
raise NotImplementedError()
class DataPackageToCsv(DataPackageTo):
def _header(self):
filepath = open(self.datafilepath, "w")
msg = "# Model file written by *otoole*\n"
filepath.write(msg)
return filepath
def _form_parameter(self, df: pd.DataFrame, default: float):
df = df[df.VALUE != default]
return df
def _write_parameter(
self, df: pd.DataFrame, parameter_name: str, handle: TextIO, default: float
):
"""Write parameter data to a csv file, omitting data which matches the default value
Arguments
---------
filepath : StreamIO
df : pandas.DataFrame
parameter_name : str
handle: TextIO
default : int
"""
df = self._form_parameter(df, default)
handle.write("param default {} : {} :=\n".format(default, parameter_name))
df.to_csv(path_or_buf=handle, sep=" ", header=False, index=False)
handle.write(";\n")
def _write_set(self, df: pd.DataFrame, set_name, handle: TextIO):
"""
Arguments
---------
df : pandas.DataFrame
set_name : str
handle: TextIO
"""
handle.write("set {} :=\n".format(set_name))
df.to_csv(path_or_buf=handle, sep=" ", header=False, index=False)
handle.write(";\n")
def _footer(self, handle: TextIO):
handle.write("end;\n")
handle.close()
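The datafile writer above emits GMPL-style `set` and `param` blocks, omitting rows whose VALUE equals the default. A minimal self-contained sketch of that output format, using io.StringIO in place of the file handle (the set and parameter names are made up for illustration):

```python
import io


def write_param(handle, name, rows, default):
    # Mirror _write_parameter: emit a GMPL param block, skipping rows whose
    # VALUE equals the default (they are implied by "param default ...").
    handle.write("param default {} : {} :=\n".format(default, name))
    for row in rows:
        if row[-1] != default:
            handle.write(" ".join(str(v) for v in row) + "\n")
    handle.write(";\n")


def write_set(handle, name, items):
    # Mirror _write_set: emit a GMPL set block, one member per line.
    handle.write("set {} :=\n".format(name))
    for item in items:
        handle.write(str(item) + "\n")
    handle.write(";\n")


handle = io.StringIO()
write_set(handle, "REGION", ["SIMPLICITY"])
write_param(handle, "DiscountRate", [("SIMPLICITY", 0.05), ("OTHER", 0.0)], default=0.0)
handle.write("end;\n")
text = handle.getvalue()
```

The "OTHER" row is dropped because its value matches the default, which is exactly what keeps the generated datafile small.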
class DataPackageToExcel(DataPackageTo):
def _header(self):
return pd.ExcelWriter(self.datafilepath, mode="w")
def _form_parameter(
self, df: pd.DataFrame, parameter_name: str, default: float
) -> pd.DataFrame:
"""Converts data into wide format
Arguments
---------
df: pd.DataFrame
parameter_name: str
default: float
Returns
-------
pandas.DataFrame
"""
if not df.empty:
names = df.columns.to_list()
if len(names) > 2:
logger.debug(
"More than 2 columns for {}: {}".format(parameter_name, names)
)
rows = names[0:-2]
columns = names[-2]
values = names[-1]
                logger.debug(
                    "Rows: %s; columns: %s; values: %s", rows, columns, values
                )
logger.debug("dtypes: {}".format(df.dtypes))
pivot = pd.pivot_table(
df, index=rows, columns=columns, values=values, fill_value=default
)
elif len(names) == 2:
logger.debug("Two columns for {}: {}".format(parameter_name, names))
values = names[-1]
                rows = names[0:-1]
                logger.debug("Rows: %s; values: %s", rows, values)
pivot = pd.pivot_table(
df, index=rows, values=values, fill_value=default
)
else:
logger.debug("One column for {}: {}".format(parameter_name, names))
pivot = df.copy()
pivot = pivot.reset_index(drop=True)
else:
logger.debug("Dataframe {} is empty".format(parameter_name))
pivot = df.copy()
return pivot
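The wide-format conversion in _form_parameter is essentially pandas.pivot_table with the parameter's default as fill_value: all but the last two columns become the index, the second-to-last becomes the columns, and VALUE fills the cells. A small sketch with made-up narrow data:

```python
import pandas as pd

# Narrow data, as read from a datapackage resource: index columns first,
# then the column to pivot on, then VALUE (names here are made up).
df = pd.DataFrame(
    {
        "REGION": ["R1", "R1", "R1"],
        "TECHNOLOGY": ["COAL", "COAL", "GAS"],
        "YEAR": [2015, 2016, 2015],
        "VALUE": [1.0, 2.0, 3.0],
    }
)

names = df.columns.to_list()
rows, columns, values = names[0:-2], names[-2], names[-1]

# Combinations absent from the narrow frame (GAS in 2016) are filled in
# with the parameter's default value, exactly as fill_value does above.
pivot = pd.pivot_table(df, index=rows, columns=columns, values=values, fill_value=0)
```

The result has a (REGION, TECHNOLOGY) MultiIndex with one column per YEAR, which is the layout written to each Excel sheet.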
def _write_parameter(
self,
df: pd.DataFrame,
parameter_name: str,
handle: pd.ExcelWriter,
default: float,
):
df = self._form_parameter(df, parameter_name, default)
df.to_excel(handle, sheet_name=parameter_name, merge_cells=False)
def _write_set(self, df: pd.DataFrame, set_name, handle: pd.ExcelWriter):
df.to_excel(handle, sheet_name=set_name, merge_cells=False, index=False)
def _footer(self, handle=pd.ExcelWriter):
handle.close()
def convert_datapackage_to_datafile(path_to_datapackage, path_to_datafile):
dp = DataPackageToCsv(path_to_datapackage, path_to_datafile)
dp.convert()
def convert_datapackage_to_excel(path_to_datapackage, path_to_excel):
dp = DataPackageToExcel(path_to_datapackage, path_to_excel)
dp.convert()
if __name__ == "__main__":
logging.basicConfig(level=logging.DEBUG)
path_to_datapackage = sys.argv[1]
path_to_datafile = sys.argv[2]
    convert_datapackage_to_datafile(path_to_datapackage, path_to_datafile)
| src/otoole/preprocess/narrow_to_datafile.py | 7,302 |
# Data from https://www.kaggle.com/crawford/80-cereals/version/2
import pandas, matplotlib
data = pandas.read_csv('http://www.compassmentis.com/wp-content/uploads/2019/04/cereal.csv')
data = data.set_index('name')
data = data.calories.sort_values()[-10:]
ax = data.plot(kind='barh')
ax.set_xlabel('Calories per serving')
ax.set_ylabel('Cereal')
ax.set_title('Top 10 cereals by calories')
matplotlib.pyplot.subplots_adjust(left=0.25)
matplotlib.pyplot.show()
| read_csv_and_plot/read_and_plot.py | 458 |
#!/usr/bin/env python
# Copyright (c) 2012 The Chromium Authors. All rights reserved.
# Use of this source code is governed by a BSD-style license that can be
# found in the LICENSE file.
# Copyright (C) 2008 Evan Martin <martine@danga.com>
"""A git-command for integrating reviews on Rietveld and Gerrit."""
from __future__ import print_function
from distutils.version import LooseVersion
from multiprocessing.pool import ThreadPool
import base64
import collections
import glob
import httplib
import json
import logging
import multiprocessing
import optparse
import os
import re
import stat
import sys
import textwrap
import time
import traceback
import urllib
import urllib2
import urlparse
import uuid
import webbrowser
import zlib
try:
import readline # pylint: disable=F0401,W0611
except ImportError:
pass
from third_party import colorama
from third_party import httplib2
from third_party import upload
import auth
from luci_hacks import trigger_luci_job as luci_trigger
import clang_format
import commit_queue
import dart_format
import setup_color
import fix_encoding
import gclient_utils
import gerrit_util
import git_cache
import git_common
import git_footers
import owners
import owners_finder
import presubmit_support
import rietveld
import scm
import subcommand
import subprocess2
import watchlists
__version__ = '2.0'
COMMIT_BOT_EMAIL = 'commit-bot@chromium.org'
DEFAULT_SERVER = 'https://codereview.appspot.com'
POSTUPSTREAM_HOOK_PATTERN = '.git/hooks/post-cl-%s'
DESCRIPTION_BACKUP_FILE = '~/.git_cl_description_backup'
GIT_INSTRUCTIONS_URL = 'http://code.google.com/p/chromium/wiki/UsingGit'
REFS_THAT_ALIAS_TO_OTHER_REFS = {
'refs/remotes/origin/lkgr': 'refs/remotes/origin/master',
'refs/remotes/origin/lkcr': 'refs/remotes/origin/master',
}
# Valid extensions for files we want to lint.
DEFAULT_LINT_REGEX = r"(.*\.cpp|.*\.cc|.*\.h)"
DEFAULT_LINT_IGNORE_REGEX = r"$^"
# Shortcut since it quickly becomes redundant.
Fore = colorama.Fore
# Initialized in main()
settings = None
def DieWithError(message):
print(message, file=sys.stderr)
sys.exit(1)
def GetNoGitPagerEnv():
env = os.environ.copy()
# 'cat' is a magical git string that disables pagers on all platforms.
env['GIT_PAGER'] = 'cat'
return env
def RunCommand(args, error_ok=False, error_message=None, shell=False, **kwargs):
try:
return subprocess2.check_output(args, shell=shell, **kwargs)
except subprocess2.CalledProcessError as e:
logging.debug('Failed running %s', args)
if not error_ok:
DieWithError(
'Command "%s" failed.\n%s' % (
' '.join(args), error_message or e.stdout or ''))
return e.stdout
def RunGit(args, **kwargs):
"""Returns stdout."""
return RunCommand(['git'] + args, **kwargs)
def RunGitWithCode(args, suppress_stderr=False):
"""Returns return code and stdout."""
try:
if suppress_stderr:
stderr = subprocess2.VOID
else:
stderr = sys.stderr
out, code = subprocess2.communicate(['git'] + args,
env=GetNoGitPagerEnv(),
stdout=subprocess2.PIPE,
stderr=stderr)
return code, out[0]
except ValueError:
# When the subprocess fails, it returns None. That triggers a ValueError
# when trying to unpack the return value into (out, code).
return 1, ''
def RunGitSilent(args):
"""Returns stdout, suppresses stderr and ignores the return code."""
return RunGitWithCode(args, suppress_stderr=True)[1]
def IsGitVersionAtLeast(min_version):
prefix = 'git version '
version = RunGit(['--version']).strip()
return (version.startswith(prefix) and
LooseVersion(version[len(prefix):]) >= LooseVersion(min_version))
def BranchExists(branch):
"""Return True if specified branch exists."""
code, _ = RunGitWithCode(['rev-parse', '--verify', branch],
suppress_stderr=True)
return not code
def ask_for_data(prompt):
try:
return raw_input(prompt)
except KeyboardInterrupt:
# Hide the exception.
sys.exit(1)
def git_set_branch_value(key, value):
branch = GetCurrentBranch()
if not branch:
return
cmd = ['config']
if isinstance(value, int):
cmd.append('--int')
git_key = 'branch.%s.%s' % (branch, key)
RunGit(cmd + [git_key, str(value)])
def git_get_branch_default(key, default):
branch = GetCurrentBranch()
if branch:
git_key = 'branch.%s.%s' % (branch, key)
(_, stdout) = RunGitWithCode(['config', '--int', '--get', git_key])
try:
return int(stdout.strip())
except ValueError:
pass
return default
def add_git_similarity(parser):
parser.add_option(
'--similarity', metavar='SIM', type='int', action='store',
help='Sets the percentage that a pair of files need to match in order to'
' be considered copies (default 50)')
parser.add_option(
'--find-copies', action='store_true',
help='Allows git to look for copies.')
parser.add_option(
'--no-find-copies', action='store_false', dest='find_copies',
help='Disallows git from looking for copies.')
old_parser_args = parser.parse_args
def Parse(args):
options, args = old_parser_args(args)
if options.similarity is None:
options.similarity = git_get_branch_default('git-cl-similarity', 50)
else:
print('Note: Saving similarity of %d%% in git config.'
% options.similarity)
git_set_branch_value('git-cl-similarity', options.similarity)
options.similarity = max(0, min(options.similarity, 100))
if options.find_copies is None:
options.find_copies = bool(
git_get_branch_default('git-find-copies', True))
else:
git_set_branch_value('git-find-copies', int(options.find_copies))
print('Using %d%% similarity for rename/copy detection. '
'Override with --similarity.' % options.similarity)
return options, args
parser.parse_args = Parse
def _get_properties_from_options(options):
properties = dict(x.split('=', 1) for x in options.properties)
for key, val in properties.iteritems():
try:
properties[key] = json.loads(val)
except ValueError:
pass # If a value couldn't be evaluated, treat it as a string.
return properties
def _prefix_master(master):
"""Convert user-specified master name to full master name.
  Buildbucket uses the full master name (master.tryserver.chromium.linux) as the bucket
name, while the developers always use shortened master name
(tryserver.chromium.linux) by stripping off the prefix 'master.'. This
function does the conversion for buildbucket migration.
"""
prefix = 'master.'
if master.startswith(prefix):
return master
return '%s%s' % (prefix, master)
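The conversion described in the docstring is a one-line idempotent prefix check; a small standalone sketch with the same behaviour:

```python
def prefix_master(master, prefix='master.'):
    # Idempotent: already-qualified names pass through unchanged.
    if master.startswith(prefix):
        return master
    return '%s%s' % (prefix, master)


full = prefix_master('tryserver.chromium.linux')
```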
def _buildbucket_retry(operation_name, http, *args, **kwargs):
"""Retries requests to buildbucket service and returns parsed json content."""
try_count = 0
while True:
response, content = http.request(*args, **kwargs)
try:
content_json = json.loads(content)
except ValueError:
content_json = None
# Buildbucket could return an error even if status==200.
if content_json and content_json.get('error'):
error = content_json.get('error')
if error.get('code') == 403:
raise BuildbucketResponseException(
'Access denied: %s' % error.get('message', ''))
msg = 'Error in response. Reason: %s. Message: %s.' % (
error.get('reason', ''), error.get('message', ''))
raise BuildbucketResponseException(msg)
if response.status == 200:
if not content_json:
raise BuildbucketResponseException(
'Buildbucket returns invalid json content: %s.\n'
'Please file bugs at http://crbug.com, label "Infra-BuildBucket".' %
content)
return content_json
if response.status < 500 or try_count >= 2:
raise httplib2.HttpLib2Error(content)
# status >= 500 means transient failures.
logging.debug('Transient errors when %s. Will retry.', operation_name)
time.sleep(0.5 + 1.5*try_count)
try_count += 1
assert False, 'unreachable'
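Stripped of the Buildbucket specifics, the retry loop above does four things: parse the JSON body, surface API-level errors, return on a 200, and retry only transient (>= 500) statuses with a growing sleep. A self-contained sketch with a fake HTTP client (a stand-in, not the real httplib2 interface), with the sleep injected so it can be skipped:

```python
import json


class FakeHttp(object):
    """Stand-in HTTP client: fails twice with a 503, then succeeds."""

    def __init__(self):
        self.calls = 0

    def request(self, url):
        self.calls += 1
        if self.calls < 3:
            return 503, 'transient failure'
        return 200, json.dumps({'builds': []})


def retry_request(http, url, max_tries=3, sleep=lambda seconds: None):
    for try_count in range(max_tries):
        status, content = http.request(url)
        try:
            content_json = json.loads(content)
        except ValueError:
            content_json = None
        if status == 200:
            if content_json is None:
                raise RuntimeError('invalid json content: %s' % content)
            return content_json
        if status < 500 or try_count >= max_tries - 1:
            raise RuntimeError(content)
        # status >= 500 means a transient failure: back off and retry.
        sleep(0.5 + 1.5 * try_count)
    raise AssertionError('unreachable')


http = FakeHttp()
result = retry_request(http, 'https://example.test/builds')
```

Non-5xx statuses and exhausted attempts raise immediately, matching the original's distinction between permanent and transient failures.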
def trigger_luci_job(changelist, masters, options):
"""Send a job to run on LUCI."""
issue_props = changelist.GetIssueProperties()
issue = changelist.GetIssue()
patchset = changelist.GetMostRecentPatchset()
for builders_and_tests in sorted(masters.itervalues()):
# TODO(hinoka et al): add support for other properties.
# Currently, this completely ignores testfilter and other properties.
for builder in sorted(builders_and_tests):
luci_trigger.trigger(
builder, 'HEAD', issue, patchset, issue_props['project'])
def trigger_try_jobs(auth_config, changelist, options, masters, category):
rietveld_url = settings.GetDefaultServerUrl()
rietveld_host = urlparse.urlparse(rietveld_url).hostname
authenticator = auth.get_authenticator_for_host(rietveld_host, auth_config)
http = authenticator.authorize(httplib2.Http())
http.force_exception_to_status_code = True
issue_props = changelist.GetIssueProperties()
issue = changelist.GetIssue()
patchset = changelist.GetMostRecentPatchset()
properties = _get_properties_from_options(options)
buildbucket_put_url = (
'https://{hostname}/_ah/api/buildbucket/v1/builds/batch'.format(
hostname=options.buildbucket_host))
buildset = 'patch/rietveld/{hostname}/{issue}/{patch}'.format(
hostname=rietveld_host,
issue=issue,
patch=patchset)
batch_req_body = {'builds': []}
print_text = []
print_text.append('Tried jobs on:')
for master, builders_and_tests in sorted(masters.iteritems()):
print_text.append('Master: %s' % master)
bucket = _prefix_master(master)
for builder, tests in sorted(builders_and_tests.iteritems()):
print_text.append(' %s: %s' % (builder, tests))
parameters = {
'builder_name': builder,
'changes': [{
'author': {'email': issue_props['owner_email']},
'revision': options.revision,
}],
'properties': {
'category': category,
'issue': issue,
'master': master,
'patch_project': issue_props['project'],
'patch_storage': 'rietveld',
'patchset': patchset,
'reason': options.name,
'rietveld': rietveld_url,
},
}
if 'presubmit' in builder.lower():
parameters['properties']['dry_run'] = 'true'
if tests:
parameters['properties']['testfilter'] = tests
if properties:
parameters['properties'].update(properties)
if options.clobber:
parameters['properties']['clobber'] = True
batch_req_body['builds'].append(
{
'bucket': bucket,
'parameters_json': json.dumps(parameters),
'client_operation_id': str(uuid.uuid4()),
'tags': ['builder:%s' % builder,
'buildset:%s' % buildset,
'master:%s' % master,
'user_agent:git_cl_try']
}
)
_buildbucket_retry(
'triggering tryjobs',
http,
buildbucket_put_url,
'PUT',
body=json.dumps(batch_req_body),
headers={'Content-Type': 'application/json'}
)
print_text.append('To see results here, run: git cl try-results')
print_text.append('To see results in browser, run: git cl web')
print('\n'.join(print_text))
def fetch_try_jobs(auth_config, changelist, options):
"""Fetches tryjobs from buildbucket.
Returns a map from build id to build info as json dictionary.
"""
rietveld_url = settings.GetDefaultServerUrl()
rietveld_host = urlparse.urlparse(rietveld_url).hostname
authenticator = auth.get_authenticator_for_host(rietveld_host, auth_config)
if authenticator.has_cached_credentials():
http = authenticator.authorize(httplib2.Http())
else:
print('Warning: Some results might be missing because %s' %
# Get the message on how to login.
(auth.LoginRequiredError(rietveld_host).message,))
http = httplib2.Http()
http.force_exception_to_status_code = True
buildset = 'patch/rietveld/{hostname}/{issue}/{patch}'.format(
hostname=rietveld_host,
issue=changelist.GetIssue(),
patch=options.patchset)
params = {'tag': 'buildset:%s' % buildset}
builds = {}
while True:
url = 'https://{hostname}/_ah/api/buildbucket/v1/search?{params}'.format(
hostname=options.buildbucket_host,
params=urllib.urlencode(params))
content = _buildbucket_retry('fetching tryjobs', http, url, 'GET')
for build in content.get('builds', []):
builds[build['id']] = build
if 'next_cursor' in content:
params['start_cursor'] = content['next_cursor']
else:
break
return builds
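The loop in fetch_try_jobs follows the usual cursor-pagination pattern: request, accumulate builds by id, and resend with start_cursor until next_cursor disappears. A sketch against a fake paged API (the page contents are made up):

```python
def fetch_all(pages):
    # 'pages' maps a cursor (None for the first request) to a response dict,
    # standing in for the buildbucket search endpoint.
    builds = {}
    cursor = None
    while True:
        content = pages[cursor]
        for build in content.get('builds', []):
            builds[build['id']] = build
        if 'next_cursor' in content:
            cursor = content['next_cursor']
        else:
            break
    return builds


pages = {
    None: {'builds': [{'id': 1}], 'next_cursor': 'c1'},
    'c1': {'builds': [{'id': 2}]},
}
all_builds = fetch_all(pages)
```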
def print_tryjobs(options, builds):
"""Prints nicely result of fetch_try_jobs."""
if not builds:
print('No tryjobs scheduled')
return
# Make a copy, because we'll be modifying builds dictionary.
builds = builds.copy()
builder_names_cache = {}
def get_builder(b):
try:
return builder_names_cache[b['id']]
except KeyError:
try:
parameters = json.loads(b['parameters_json'])
name = parameters['builder_name']
except (ValueError, KeyError) as error:
print('WARNING: failed to get builder name for build %s: %s' % (
b['id'], error))
name = None
builder_names_cache[b['id']] = name
return name
def get_bucket(b):
bucket = b['bucket']
if bucket.startswith('master.'):
return bucket[len('master.'):]
return bucket
if options.print_master:
name_fmt = '%%-%ds %%-%ds' % (
max(len(str(get_bucket(b))) for b in builds.itervalues()),
max(len(str(get_builder(b))) for b in builds.itervalues()))
def get_name(b):
return name_fmt % (get_bucket(b), get_builder(b))
else:
name_fmt = '%%-%ds' % (
max(len(str(get_builder(b))) for b in builds.itervalues()))
def get_name(b):
return name_fmt % get_builder(b)
def sort_key(b):
return b['status'], b.get('result'), get_name(b), b.get('url')
def pop(title, f, color=None, **kwargs):
"""Pop matching builds from `builds` dict and print them."""
if not options.color or color is None:
colorize = str
else:
colorize = lambda x: '%s%s%s' % (color, x, Fore.RESET)
result = []
for b in builds.values():
if all(b.get(k) == v for k, v in kwargs.iteritems()):
builds.pop(b['id'])
result.append(b)
if result:
print(colorize(title))
for b in sorted(result, key=sort_key):
print(' ', colorize('\t'.join(map(str, f(b)))))
total = len(builds)
pop(status='COMPLETED', result='SUCCESS',
title='Successes:', color=Fore.GREEN,
f=lambda b: (get_name(b), b.get('url')))
pop(status='COMPLETED', result='FAILURE', failure_reason='INFRA_FAILURE',
title='Infra Failures:', color=Fore.MAGENTA,
f=lambda b: (get_name(b), b.get('url')))
pop(status='COMPLETED', result='FAILURE', failure_reason='BUILD_FAILURE',
title='Failures:', color=Fore.RED,
f=lambda b: (get_name(b), b.get('url')))
pop(status='COMPLETED', result='CANCELED',
title='Canceled:', color=Fore.MAGENTA,
f=lambda b: (get_name(b),))
pop(status='COMPLETED', result='FAILURE',
failure_reason='INVALID_BUILD_DEFINITION',
title='Wrong master/builder name:', color=Fore.MAGENTA,
f=lambda b: (get_name(b),))
pop(status='COMPLETED', result='FAILURE',
title='Other failures:',
f=lambda b: (get_name(b), b.get('failure_reason'), b.get('url')))
pop(status='COMPLETED',
title='Other finished:',
f=lambda b: (get_name(b), b.get('result'), b.get('url')))
pop(status='STARTED',
title='Started:', color=Fore.YELLOW,
f=lambda b: (get_name(b), b.get('url')))
pop(status='SCHEDULED',
title='Scheduled:',
f=lambda b: (get_name(b), 'id=%s' % b['id']))
# The last section is just in case buildbucket API changes OR there is a bug.
pop(title='Other:',
f=lambda b: (get_name(b), 'id=%s' % b['id']))
assert len(builds) == 0
print('Total: %d tryjobs' % total)
def MatchSvnGlob(url, base_url, glob_spec, allow_wildcards):
"""Return the corresponding git ref if |base_url| together with |glob_spec|
matches the full |url|.
If |allow_wildcards| is true, |glob_spec| can contain wildcards (see below).
"""
fetch_suburl, as_ref = glob_spec.split(':')
if allow_wildcards:
glob_match = re.match('(.+/)?(\*|{[^/]*})(/.+)?', fetch_suburl)
if glob_match:
# Parse specs like "branches/*/src:refs/remotes/svn/*" or
# "branches/{472,597,648}/src:refs/remotes/svn/*".
branch_re = re.escape(base_url)
if glob_match.group(1):
branch_re += '/' + re.escape(glob_match.group(1))
wildcard = glob_match.group(2)
if wildcard == '*':
branch_re += '([^/]*)'
else:
# Escape and replace surrounding braces with parentheses and commas
# with pipe symbols.
wildcard = re.escape(wildcard)
wildcard = re.sub('^\\\\{', '(', wildcard)
wildcard = re.sub('\\\\,', '|', wildcard)
wildcard = re.sub('\\\\}$', ')', wildcard)
branch_re += wildcard
if glob_match.group(3):
branch_re += re.escape(glob_match.group(3))
match = re.match(branch_re, url)
if match:
return re.sub('\*$', match.group(1), as_ref)
# Parse specs like "trunk/src:refs/remotes/origin/trunk".
if fetch_suburl:
full_url = base_url + '/' + fetch_suburl
else:
full_url = base_url
if full_url == url:
return as_ref
return None
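The wildcard handling in MatchSvnGlob is a small glob-to-regex translation: '*' becomes a capture group and '{a,b}' becomes an alternation. A self-contained sketch of just that step, with a made-up base URL and refspecs; the brace case is built as an explicit alternation rather than the escape-and-substitute dance above, which relied on Python 2's re.escape behaviour:

```python
import re


def svn_glob_to_regex(base_url, fetch_suburl):
    # Translate specs like 'branches/*/src' or 'branches/{472,597}/src' into
    # a regex over full svn URLs, capturing the branch name as group 1.
    glob_match = re.match(r'(.+/)?(\*|{[^/]*})(/.+)?', fetch_suburl)
    if not glob_match:
        return None
    branch_re = re.escape(base_url)
    if glob_match.group(1):
        branch_re += '/' + re.escape(glob_match.group(1))
    wildcard = glob_match.group(2)
    if wildcard == '*':
        branch_re += '([^/]*)'
    else:
        # '{472,597}' -> '(472|597)': an explicit alternation.
        parts = wildcard[1:-1].split(',')
        branch_re += '(' + '|'.join(re.escape(p) for p in parts) + ')'
    if glob_match.group(3):
        branch_re += re.escape(glob_match.group(3))
    return branch_re


star = svn_glob_to_regex('svn://server/repo', 'branches/*/src')
braces = svn_glob_to_regex('svn://server/repo', 'branches/{472,597}/src')
```

Group 1 of a successful match is the branch name, which the caller substitutes into the ref side of the spec.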
def print_stats(similarity, find_copies, args):
"""Prints statistics about the change to the user."""
# --no-ext-diff is broken in some versions of Git, so try to work around
# this by overriding the environment (but there is still a problem if the
# git config key "diff.external" is used).
env = GetNoGitPagerEnv()
if 'GIT_EXTERNAL_DIFF' in env:
del env['GIT_EXTERNAL_DIFF']
if find_copies:
similarity_options = ['--find-copies-harder', '-l100000',
'-C%s' % similarity]
else:
similarity_options = ['-M%s' % similarity]
try:
stdout = sys.stdout.fileno()
except AttributeError:
stdout = None
return subprocess2.call(
['git',
'diff', '--no-ext-diff', '--stat'] + similarity_options + args,
stdout=stdout, env=env)
class BuildbucketResponseException(Exception):
pass
class Settings(object):
def __init__(self):
self.default_server = None
self.cc = None
self.root = None
self.is_git_svn = None
self.svn_branch = None
self.tree_status_url = None
self.viewvc_url = None
self.updated = False
self.is_gerrit = None
self.squash_gerrit_uploads = None
self.gerrit_skip_ensure_authenticated = None
self.git_editor = None
self.project = None
self.force_https_commit_url = None
self.pending_ref_prefix = None
def LazyUpdateIfNeeded(self):
"""Updates the settings from a codereview.settings file, if available."""
if not self.updated:
# The only value that actually changes the behavior is
# autoupdate = "false". Everything else means "true".
autoupdate = RunGit(['config', 'rietveld.autoupdate'],
error_ok=True
).strip().lower()
cr_settings_file = FindCodereviewSettingsFile()
if autoupdate != 'false' and cr_settings_file:
LoadCodereviewSettingsFromFile(cr_settings_file)
self.updated = True
def GetDefaultServerUrl(self, error_ok=False):
if not self.default_server:
self.LazyUpdateIfNeeded()
self.default_server = gclient_utils.UpgradeToHttps(
self._GetRietveldConfig('server', error_ok=True))
if error_ok:
return self.default_server
if not self.default_server:
error_message = ('Could not find settings file. You must configure '
'your review setup by running "git cl config".')
self.default_server = gclient_utils.UpgradeToHttps(
self._GetRietveldConfig('server', error_message=error_message))
return self.default_server
@staticmethod
def GetRelativeRoot():
return RunGit(['rev-parse', '--show-cdup']).strip()
def GetRoot(self):
if self.root is None:
self.root = os.path.abspath(self.GetRelativeRoot())
return self.root
def GetGitMirror(self, remote='origin'):
"""If this checkout is from a local git mirror, return a Mirror object."""
local_url = RunGit(['config', '--get', 'remote.%s.url' % remote]).strip()
if not os.path.isdir(local_url):
return None
git_cache.Mirror.SetCachePath(os.path.dirname(local_url))
remote_url = git_cache.Mirror.CacheDirToUrl(local_url)
# Use the /dev/null print_func to avoid terminal spew in WaitForRealCommit.
mirror = git_cache.Mirror(remote_url, print_func = lambda *args: None)
if mirror.exists():
return mirror
return None
def GetIsGitSvn(self):
"""Return true if this repo looks like it's using git-svn."""
if self.is_git_svn is None:
if self.GetPendingRefPrefix():
# If PENDING_REF_PREFIX is set then it's a pure git repo no matter what.
self.is_git_svn = False
else:
# If you have any "svn-remote.*" config keys, we think you're using svn.
self.is_git_svn = RunGitWithCode(
['config', '--local', '--get-regexp', r'^svn-remote\.'])[0] == 0
return self.is_git_svn
def GetSVNBranch(self):
if self.svn_branch is None:
if not self.GetIsGitSvn():
DieWithError('Repo doesn\'t appear to be a git-svn repo.')
# Try to figure out which remote branch we're based on.
# Strategy:
# 1) iterate through our branch history and find the svn URL.
# 2) find the svn-remote that fetches from the URL.
# regexp matching the git-svn line that contains the URL.
git_svn_re = re.compile(r'^\s*git-svn-id: (\S+)@', re.MULTILINE)
# We don't want to go through all of history, so read a line from the
# pipe at a time.
# The -100 is an arbitrary limit so we don't search forever.
cmd = ['git', 'log', '-100', '--pretty=medium']
proc = subprocess2.Popen(cmd, stdout=subprocess2.PIPE,
env=GetNoGitPagerEnv())
url = None
for line in proc.stdout:
match = git_svn_re.match(line)
if match:
url = match.group(1)
proc.stdout.close() # Cut pipe.
break
if url:
svn_remote_re = re.compile(r'^svn-remote\.([^.]+)\.url (.*)$')
remotes = RunGit(['config', '--get-regexp',
r'^svn-remote\..*\.url']).splitlines()
for remote in remotes:
match = svn_remote_re.match(remote)
if match:
remote = match.group(1)
base_url = match.group(2)
rewrite_root = RunGit(
['config', 'svn-remote.%s.rewriteRoot' % remote],
error_ok=True).strip()
if rewrite_root:
base_url = rewrite_root
fetch_spec = RunGit(
['config', 'svn-remote.%s.fetch' % remote],
error_ok=True).strip()
if fetch_spec:
self.svn_branch = MatchSvnGlob(url, base_url, fetch_spec, False)
if self.svn_branch:
break
branch_spec = RunGit(
['config', 'svn-remote.%s.branches' % remote],
error_ok=True).strip()
if branch_spec:
self.svn_branch = MatchSvnGlob(url, base_url, branch_spec, True)
if self.svn_branch:
break
tag_spec = RunGit(
['config', 'svn-remote.%s.tags' % remote],
error_ok=True).strip()
if tag_spec:
self.svn_branch = MatchSvnGlob(url, base_url, tag_spec, True)
if self.svn_branch:
break
if not self.svn_branch:
DieWithError('Can\'t guess svn branch -- try specifying it on the '
'command line')
return self.svn_branch
def GetTreeStatusUrl(self, error_ok=False):
if not self.tree_status_url:
error_message = ('You must configure your tree status URL by running '
'"git cl config".')
self.tree_status_url = self._GetRietveldConfig(
'tree-status-url', error_ok=error_ok, error_message=error_message)
return self.tree_status_url
def GetViewVCUrl(self):
if not self.viewvc_url:
self.viewvc_url = self._GetRietveldConfig('viewvc-url', error_ok=True)
return self.viewvc_url
def GetBugPrefix(self):
return self._GetRietveldConfig('bug-prefix', error_ok=True)
def GetIsSkipDependencyUpload(self, branch_name):
"""Returns true if specified branch should skip dep uploads."""
return self._GetBranchConfig(branch_name, 'skip-deps-uploads',
error_ok=True)
def GetRunPostUploadHook(self):
run_post_upload_hook = self._GetRietveldConfig(
'run-post-upload-hook', error_ok=True)
return run_post_upload_hook == "True"
def GetDefaultCCList(self):
return self._GetRietveldConfig('cc', error_ok=True)
def GetDefaultPrivateFlag(self):
return self._GetRietveldConfig('private', error_ok=True)
def GetIsGerrit(self):
"""Return true if this repo is assosiated with gerrit code review system."""
if self.is_gerrit is None:
self.is_gerrit = self._GetConfig('gerrit.host', error_ok=True)
return self.is_gerrit
def GetSquashGerritUploads(self):
"""Return true if uploads to Gerrit should be squashed by default."""
if self.squash_gerrit_uploads is None:
self.squash_gerrit_uploads = self.GetSquashGerritUploadsOverride()
if self.squash_gerrit_uploads is None:
# Default is squash now (http://crbug.com/611892#c23).
self.squash_gerrit_uploads = not (
RunGit(['config', '--bool', 'gerrit.squash-uploads'],
error_ok=True).strip() == 'false')
return self.squash_gerrit_uploads
def GetSquashGerritUploadsOverride(self):
"""Return True or False if codereview.settings should be overridden.
Returns None if no override has been defined.
"""
# See also http://crbug.com/611892#c23
result = RunGit(['config', '--bool', 'gerrit.override-squash-uploads'],
error_ok=True).strip()
if result == 'true':
return True
if result == 'false':
return False
return None
def GetGerritSkipEnsureAuthenticated(self):
"""Return True if EnsureAuthenticated should not be done for Gerrit
uploads."""
if self.gerrit_skip_ensure_authenticated is None:
self.gerrit_skip_ensure_authenticated = (
RunGit(['config', '--bool', 'gerrit.skip-ensure-authenticated'],
error_ok=True).strip() == 'true')
return self.gerrit_skip_ensure_authenticated
def GetGitEditor(self):
"""Return the editor specified in the git config, or None if none is."""
if self.git_editor is None:
self.git_editor = self._GetConfig('core.editor', error_ok=True)
return self.git_editor or None
def GetLintRegex(self):
return (self._GetRietveldConfig('cpplint-regex', error_ok=True) or
DEFAULT_LINT_REGEX)
def GetLintIgnoreRegex(self):
return (self._GetRietveldConfig('cpplint-ignore-regex', error_ok=True) or
DEFAULT_LINT_IGNORE_REGEX)
def GetProject(self):
if not self.project:
self.project = self._GetRietveldConfig('project', error_ok=True)
return self.project
def GetForceHttpsCommitUrl(self):
if not self.force_https_commit_url:
self.force_https_commit_url = self._GetRietveldConfig(
'force-https-commit-url', error_ok=True)
return self.force_https_commit_url
def GetPendingRefPrefix(self):
if not self.pending_ref_prefix:
self.pending_ref_prefix = self._GetRietveldConfig(
'pending-ref-prefix', error_ok=True)
return self.pending_ref_prefix
def _GetRietveldConfig(self, param, **kwargs):
return self._GetConfig('rietveld.' + param, **kwargs)
def _GetBranchConfig(self, branch_name, param, **kwargs):
return self._GetConfig('branch.' + branch_name + '.' + param, **kwargs)
def _GetConfig(self, param, **kwargs):
self.LazyUpdateIfNeeded()
return RunGit(['config', param], **kwargs).strip()
def ShortBranchName(branch):
"""Convert a name like 'refs/heads/foo' to just 'foo'."""
return branch.replace('refs/heads/', '', 1)
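# Illustrative behavior sketch (not executed; values are examples only):
#   ShortBranchName('refs/heads/foo')  -> 'foo'
#   ShortBranchName('refs/tags/v1.0')  -> 'refs/tags/v1.0'
# Only the 'refs/heads/' prefix is stripped, and only its first occurrence.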
def GetCurrentBranchRef():
"""Returns branch ref (e.g., refs/heads/master) or None."""
return RunGit(['symbolic-ref', 'HEAD'],
stderr=subprocess2.VOID, error_ok=True).strip() or None
def GetCurrentBranch():
"""Returns current branch or None.
For refs/heads/* branches, returns just last part. For others, full ref.
"""
branchref = GetCurrentBranchRef()
if branchref:
return ShortBranchName(branchref)
return None
class _CQState(object):
"""Enum for states of CL with respect to Commit Queue."""
NONE = 'none'
DRY_RUN = 'dry_run'
COMMIT = 'commit'
ALL_STATES = [NONE, DRY_RUN, COMMIT]
class _ParsedIssueNumberArgument(object):
def __init__(self, issue=None, patchset=None, hostname=None):
self.issue = issue
self.patchset = patchset
self.hostname = hostname
@property
def valid(self):
return self.issue is not None
class _RietveldParsedIssueNumberArgument(_ParsedIssueNumberArgument):
def __init__(self, *args, **kwargs):
self.patch_url = kwargs.pop('patch_url', None)
super(_RietveldParsedIssueNumberArgument, self).__init__(*args, **kwargs)
def ParseIssueNumberArgument(arg):
"""Parses the issue argument and returns _ParsedIssueNumberArgument."""
fail_result = _ParsedIssueNumberArgument()
if arg.isdigit():
return _ParsedIssueNumberArgument(issue=int(arg))
if not arg.startswith('http'):
return fail_result
url = gclient_utils.UpgradeToHttps(arg)
try:
parsed_url = urlparse.urlparse(url)
except ValueError:
return fail_result
for cls in _CODEREVIEW_IMPLEMENTATIONS.itervalues():
tmp = cls.ParseIssueURL(parsed_url)
if tmp is not None:
return tmp
return fail_result
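# Illustrative dispatch sketch for ParseIssueNumberArgument (hostnames and
# issue numbers below are hypothetical):
#   ParseIssueNumberArgument('12345')
#       -> _ParsedIssueNumberArgument(issue=12345), valid == True
#   ParseIssueNumberArgument('https://review.example.com/12345')
#       -> whatever the first registered codereview implementation returns
#          from its ParseIssueURL() for that URL
#   ParseIssueNumberArgument('not-a-url')
#       -> _ParsedIssueNumberArgument() with valid == False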
class Changelist(object):
"""Changelist works with one changelist in local branch.
Supports two codereview backends: Rietveld or Gerrit, selected at object
creation.
Notes:
* Not safe for concurrent multi-{thread,process} use.
* Caches values from current branch. Therefore, re-use after branch change
with care.
"""
def __init__(self, branchref=None, issue=None, codereview=None, **kwargs):
"""Create a new ChangeList instance.
If issue is given, the codereview must be given too.
If `codereview` is given, it must be 'rietveld' or 'gerrit'.
Otherwise, it's decided based on current configuration of the local branch,
with default being 'rietveld' for backwards compatibility.
See _load_codereview_impl for more details.
**kwargs will be passed directly to codereview implementation.
"""
# Poke settings so we get the "configure your server" message if necessary.
global settings
if not settings:
# Happens when git_cl.py is used as a utility library.
settings = Settings()
if issue:
assert codereview, 'codereview must be known if issue is known'
self.branchref = branchref
if self.branchref:
assert branchref.startswith('refs/heads/')
self.branch = ShortBranchName(self.branchref)
else:
self.branch = None
self.upstream_branch = None
self.lookedup_issue = False
self.issue = issue or None
self.has_description = False
self.description = None
self.lookedup_patchset = False
self.patchset = None
self.cc = None
self.watchers = ()
self._remote = None
self._codereview_impl = None
self._codereview = None
self._load_codereview_impl(codereview, **kwargs)
assert self._codereview_impl
assert self._codereview in _CODEREVIEW_IMPLEMENTATIONS
def _load_codereview_impl(self, codereview=None, **kwargs):
if codereview:
assert codereview in _CODEREVIEW_IMPLEMENTATIONS
cls = _CODEREVIEW_IMPLEMENTATIONS[codereview]
self._codereview = codereview
self._codereview_impl = cls(self, **kwargs)
return
# Automatic selection based on issue number set for a current branch.
# Rietveld takes precedence over Gerrit.
assert not self.issue
# Whether we find issue or not, we are doing the lookup.
self.lookedup_issue = True
for codereview, cls in _CODEREVIEW_IMPLEMENTATIONS.iteritems():
setting = cls.IssueSetting(self.GetBranch())
issue = RunGit(['config', setting], error_ok=True).strip()
if issue:
self._codereview = codereview
self._codereview_impl = cls(self, **kwargs)
self.issue = int(issue)
return
# No issue is set for this branch, so decide based on repo-wide settings.
return self._load_codereview_impl(
codereview='gerrit' if settings.GetIsGerrit() else 'rietveld',
**kwargs)
def IsGerrit(self):
return self._codereview == 'gerrit'
def GetCCList(self):
"""Return the users cc'd on this CL.
Return is a string suitable for passing to gcl with the --cc flag.
"""
if self.cc is None:
base_cc = settings.GetDefaultCCList()
more_cc = ','.join(self.watchers)
self.cc = ','.join(filter(None, (base_cc, more_cc))) or ''
return self.cc
def GetCCListWithoutDefault(self):
"""Return the users cc'd on this CL excluding default ones."""
if self.cc is None:
self.cc = ','.join(self.watchers)
return self.cc
def SetWatchers(self, watchers):
"""Set the list of email addresses that should be cc'd based on the changed
files in this CL.
"""
self.watchers = watchers
def GetBranch(self):
"""Returns the short branch name, e.g. 'master'."""
if not self.branch:
branchref = GetCurrentBranchRef()
if not branchref:
return None
self.branchref = branchref
self.branch = ShortBranchName(self.branchref)
return self.branch
def GetBranchRef(self):
"""Returns the full branch name, e.g. 'refs/heads/master'."""
self.GetBranch() # Poke the lazy loader.
return self.branchref
def ClearBranch(self):
"""Clears cached branch data of this object."""
self.branch = self.branchref = None
@staticmethod
def FetchUpstreamTuple(branch):
"""Returns a tuple containing remote and remote ref,
e.g. 'origin', 'refs/heads/master'
"""
remote = '.'
upstream_branch = RunGit(['config', 'branch.%s.merge' % branch],
error_ok=True).strip()
if upstream_branch:
remote = RunGit(['config', 'branch.%s.remote' % branch]).strip()
else:
upstream_branch = RunGit(['config', 'rietveld.upstream-branch'],
error_ok=True).strip()
if upstream_branch:
remote = RunGit(['config', 'rietveld.upstream-remote']).strip()
else:
# Fall back on trying a git-svn upstream branch.
if settings.GetIsGitSvn():
upstream_branch = settings.GetSVNBranch()
else:
# Else, try to guess the origin remote.
remote_branches = RunGit(['branch', '-r']).split()
if 'origin/master' in remote_branches:
# Fall back on origin/master if it exists.
remote = 'origin'
upstream_branch = 'refs/heads/master'
elif 'origin/trunk' in remote_branches:
# Fall back on origin/trunk if it exists. Generally a shared
# git-svn clone
remote = 'origin'
upstream_branch = 'refs/heads/trunk'
else:
DieWithError(
'Unable to determine default branch to diff against.\n'
'Either pass complete "git diff"-style arguments, like\n'
' git cl upload origin/master\n'
'or verify this branch is set up to track another \n'
'(via the --track argument to "git checkout -b ...").')
return remote, upstream_branch
def GetCommonAncestorWithUpstream(self):
upstream_branch = self.GetUpstreamBranch()
if not BranchExists(upstream_branch):
DieWithError('The upstream for the current branch (%s) does not exist '
'anymore.\nPlease fix it and try again.' % self.GetBranch())
return git_common.get_or_create_merge_base(self.GetBranch(),
upstream_branch)
def GetUpstreamBranch(self):
if self.upstream_branch is None:
remote, upstream_branch = self.FetchUpstreamTuple(self.GetBranch())
if remote != '.':
upstream_branch = upstream_branch.replace('refs/heads/',
'refs/remotes/%s/' % remote)
upstream_branch = upstream_branch.replace('refs/branch-heads/',
'refs/remotes/branch-heads/')
self.upstream_branch = upstream_branch
return self.upstream_branch
def GetRemoteBranch(self):
if not self._remote:
remote, branch = None, self.GetBranch()
seen_branches = set()
while branch not in seen_branches:
seen_branches.add(branch)
remote, branch = self.FetchUpstreamTuple(branch)
branch = ShortBranchName(branch)
if remote != '.' or branch.startswith('refs/remotes'):
break
else:
remotes = RunGit(['remote'], error_ok=True).split()
if len(remotes) == 1:
remote, = remotes
elif 'origin' in remotes:
remote = 'origin'
logging.warning('Could not determine which remote this change is '
'associated with, so defaulting to "%s". This may '
'not be what you want. You may prevent this message '
'by running "git svn info" as documented here: %s',
remote,
GIT_INSTRUCTIONS_URL)
else:
logging.warning('Could not determine which remote this change is '
'associated with. You may prevent this message by '
'running "git svn info" as documented here: %s',
GIT_INSTRUCTIONS_URL)
branch = 'HEAD'
if branch.startswith('refs/remotes'):
self._remote = (remote, branch)
elif branch.startswith('refs/branch-heads/'):
self._remote = (remote, branch.replace('refs/', 'refs/remotes/'))
else:
self._remote = (remote, 'refs/remotes/%s/%s' % (remote, branch))
return self._remote
def GitSanityChecks(self, upstream_git_obj):
"""Checks git repo status and ensures diff is from local commits."""
if upstream_git_obj is None:
if self.GetBranch() is None:
print('ERROR: unable to determine current branch (detached HEAD?)',
file=sys.stderr)
else:
print('ERROR: no upstream branch', file=sys.stderr)
return False
# Verify the commit we're diffing against is in our current branch.
upstream_sha = RunGit(['rev-parse', '--verify', upstream_git_obj]).strip()
common_ancestor = RunGit(['merge-base', upstream_sha, 'HEAD']).strip()
if upstream_sha != common_ancestor:
print('ERROR: %s is not in the current branch. You may need to rebase '
'your tracking branch' % upstream_sha, file=sys.stderr)
return False
# List the commits inside the diff, and verify they are all local.
commits_in_diff = RunGit(
['rev-list', '^%s' % upstream_sha, 'HEAD']).splitlines()
code, remote_branch = RunGitWithCode(['config', 'gitcl.remotebranch'])
remote_branch = remote_branch.strip()
if code != 0:
_, remote_branch = self.GetRemoteBranch()
commits_in_remote = RunGit(
['rev-list', '^%s' % upstream_sha, remote_branch]).splitlines()
common_commits = set(commits_in_diff) & set(commits_in_remote)
if common_commits:
print('ERROR: Your diff contains %d commits already in %s.\n'
'Run "git log --oneline %s..HEAD" to get a list of commits in '
'the diff. If you are using a custom git flow, you can override'
' the reference used for this check with "git config '
'gitcl.remotebranch <git-ref>".' % (
len(common_commits), remote_branch, upstream_git_obj),
file=sys.stderr)
return False
return True
def GetGitBaseUrlFromConfig(self):
"""Return the configured base URL from branch.<branchname>.base-url.
Returns None if it is not set.
"""
return RunGit(['config', 'branch.%s.base-url' % self.GetBranch()],
error_ok=True).strip()
def GetGitSvnRemoteUrl(self):
"""Return the configured git-svn remote URL parsed from git svn info.
Returns None if it is not set.
"""
# URL is dependent on the current directory.
data = RunGit(['svn', 'info'], cwd=settings.GetRoot())
if data:
keys = dict(line.split(': ', 1) for line in data.splitlines()
if ': ' in line)
return keys.get('URL', None)
return None
def GetRemoteUrl(self):
"""Return the configured remote URL, e.g. 'git://example.org/foo.git/'.
Returns None if there is no remote.
"""
remote, _ = self.GetRemoteBranch()
url = RunGit(['config', 'remote.%s.url' % remote], error_ok=True).strip()
# If URL is pointing to a local directory, it is probably a git cache.
if os.path.isdir(url):
url = RunGit(['config', 'remote.%s.url' % remote],
error_ok=True,
cwd=url).strip()
return url
def GetIssue(self):
"""Returns the issue number as an int or None if not set."""
if self.issue is None and not self.lookedup_issue:
issue = RunGit(['config',
self._codereview_impl.IssueSetting(self.GetBranch())],
error_ok=True).strip()
self.issue = (int(issue) or None) if issue else None
self.lookedup_issue = True
return self.issue
def GetIssueURL(self):
"""Get the URL for a particular issue."""
issue = self.GetIssue()
if not issue:
return None
return '%s/%s' % (self._codereview_impl.GetCodereviewServer(), issue)
def GetDescription(self, pretty=False):
if not self.has_description:
if self.GetIssue():
self.description = self._codereview_impl.FetchDescription()
self.has_description = True
if pretty:
wrapper = textwrap.TextWrapper()
wrapper.initial_indent = wrapper.subsequent_indent = ' '
return wrapper.fill(self.description)
return self.description
def GetPatchset(self):
"""Returns the patchset number as an int or None if not set."""
if self.patchset is None and not self.lookedup_patchset:
patchset = RunGit(['config', self._codereview_impl.PatchsetSetting()],
error_ok=True).strip()
self.patchset = (int(patchset) or None) if patchset else None
self.lookedup_patchset = True
return self.patchset
def SetPatchset(self, patchset):
"""Set this branch's patchset. If patchset=0, clears the patchset."""
patchset_setting = self._codereview_impl.PatchsetSetting()
if patchset:
RunGit(['config', patchset_setting, str(patchset)])
self.patchset = patchset
else:
RunGit(['config', '--unset', patchset_setting],
stderr=subprocess2.PIPE, error_ok=True)
self.patchset = None
def SetIssue(self, issue=None):
"""Set this branch's issue. If issue isn't given, clears the issue."""
issue_setting = self._codereview_impl.IssueSetting(self.GetBranch())
codereview_setting = self._codereview_impl.GetCodereviewServerSetting()
if issue:
self.issue = issue
RunGit(['config', issue_setting, str(issue)])
codereview_server = self._codereview_impl.GetCodereviewServer()
if codereview_server:
RunGit(['config', codereview_setting, codereview_server])
else:
# Reset it regardless. It doesn't hurt.
config_settings = [issue_setting, self._codereview_impl.PatchsetSetting()]
for prop in (['last-upload-hash'] +
self._codereview_impl._PostUnsetIssueProperties()):
config_settings.append('branch.%s.%s' % (self.GetBranch(), prop))
for setting in config_settings:
RunGit(['config', '--unset', setting], error_ok=True)
self.issue = None
self.patchset = None
def GetChange(self, upstream_branch, author):
if not self.GitSanityChecks(upstream_branch):
DieWithError('\nGit sanity check failure')
root = settings.GetRelativeRoot()
if not root:
root = '.'
absroot = os.path.abspath(root)
# We use the sha1 of HEAD as a name of this change.
name = RunGitWithCode(['rev-parse', 'HEAD'])[1].strip()
# Need to pass a relative path for msysgit.
try:
files = scm.GIT.CaptureStatus([root], '.', upstream_branch)
except subprocess2.CalledProcessError:
DieWithError(
('\nFailed to diff against upstream branch %s\n\n'
'This branch probably doesn\'t exist anymore. To reset the\n'
'tracking branch, please run\n'
' git branch --set-upstream %s trunk\n'
'replacing trunk with origin/master or the relevant branch') %
(upstream_branch, self.GetBranch()))
issue = self.GetIssue()
patchset = self.GetPatchset()
if issue:
description = self.GetDescription()
else:
# If the change was never uploaded, use the log messages of all commits
# up to the branch point, as git cl upload will prefill the description
# with these log messages.
args = ['log', '--pretty=format:%s%n%n%b', '%s...' % (upstream_branch)]
description = RunGitWithCode(args)[1].strip()
if not author:
author = RunGit(['config', 'user.email']).strip() or None
return presubmit_support.GitChange(
name,
description,
absroot,
files,
issue,
patchset,
author,
upstream=upstream_branch)
def UpdateDescription(self, description):
self.description = description
return self._codereview_impl.UpdateDescriptionRemote(description)
def RunHook(self, committing, may_prompt, verbose, change):
"""Calls sys.exit() if the hook fails; returns a HookResults otherwise."""
try:
return presubmit_support.DoPresubmitChecks(change, committing,
verbose=verbose, output_stream=sys.stdout, input_stream=sys.stdin,
default_presubmit=None, may_prompt=may_prompt,
rietveld_obj=self._codereview_impl.GetRieveldObjForPresubmit(),
gerrit_obj=self._codereview_impl.GetGerritObjForPresubmit())
except presubmit_support.PresubmitFailure as e:
DieWithError(
('%s\nMaybe your depot_tools is out of date?\n'
'If all fails, contact maruel@') % e)
def CMDPatchIssue(self, issue_arg, reject, nocommit, directory):
"""Fetches and applies the issue patch from codereview to local branch."""
if isinstance(issue_arg, (int, long)) or issue_arg.isdigit():
parsed_issue_arg = _ParsedIssueNumberArgument(int(issue_arg))
else:
# Assume url.
parsed_issue_arg = self._codereview_impl.ParseIssueURL(
urlparse.urlparse(issue_arg))
if not parsed_issue_arg or not parsed_issue_arg.valid:
DieWithError('Failed to parse issue argument "%s". '
'Must be an issue number or a valid URL.' % issue_arg)
return self._codereview_impl.CMDPatchWithParsedIssue(
parsed_issue_arg, reject, nocommit, directory)
def CMDUpload(self, options, git_diff_args, orig_args):
"""Uploads a change to codereview."""
if git_diff_args:
# TODO(ukai): is it ok for gerrit case?
base_branch = git_diff_args[0]
else:
if self.GetBranch() is None:
DieWithError('Can\'t upload from detached HEAD state. Get on a branch!')
# Default to diffing against common ancestor of upstream branch
base_branch = self.GetCommonAncestorWithUpstream()
git_diff_args = [base_branch, 'HEAD']
# Make sure authenticated to codereview before running potentially expensive
# hooks. It is a fast, best efforts check. Codereview still can reject the
# authentication during the actual upload.
self._codereview_impl.EnsureAuthenticated(force=options.force)
# Apply watchlists on upload.
change = self.GetChange(base_branch, None)
watchlist = watchlists.Watchlists(change.RepositoryRoot())
files = [f.LocalPath() for f in change.AffectedFiles()]
if not options.bypass_watchlists:
self.SetWatchers(watchlist.GetWatchersForPaths(files))
if not options.bypass_hooks:
if options.reviewers or options.tbr_owners:
# Set the reviewer list now so that presubmit checks can access it.
change_description = ChangeDescription(change.FullDescriptionText())
change_description.update_reviewers(options.reviewers,
options.tbr_owners,
change)
change.SetDescriptionText(change_description.description)
hook_results = self.RunHook(committing=False,
may_prompt=not options.force,
verbose=options.verbose,
change=change)
if not hook_results.should_continue():
return 1
if not options.reviewers and hook_results.reviewers:
options.reviewers = hook_results.reviewers.split(',')
if self.GetIssue():
latest_patchset = self.GetMostRecentPatchset()
local_patchset = self.GetPatchset()
if (latest_patchset and local_patchset and
local_patchset != latest_patchset):
print('The last upload made from this repository was patchset #%d but '
'the most recent patchset on the server is #%d.'
% (local_patchset, latest_patchset))
print('Uploading will still work, but if you\'ve uploaded to this '
'issue from another machine or branch the patch you\'re '
'uploading now might not include those changes.')
ask_for_data('About to upload; enter to confirm.')
print_stats(options.similarity, options.find_copies, git_diff_args)
ret = self.CMDUploadChange(options, git_diff_args, change)
if not ret:
git_set_branch_value('last-upload-hash',
RunGit(['rev-parse', 'HEAD']).strip())
# Run post upload hooks, if specified.
if settings.GetRunPostUploadHook():
presubmit_support.DoPostUploadExecuter(
change,
self,
settings.GetRoot(),
options.verbose,
sys.stdout)
# Upload all dependencies if specified.
if options.dependencies:
print()
print('--dependencies has been specified.')
print('All dependent local branches will be re-uploaded.')
print()
# Remove the dependencies flag from args so that we do not end up in a
# loop.
orig_args.remove('--dependencies')
ret = upload_branch_deps(self, orig_args)
return ret
def SetCQState(self, new_state):
"""Update the CQ state for latest patchset.
Issue must have been already uploaded and known.
"""
assert new_state in _CQState.ALL_STATES
assert self.GetIssue()
return self._codereview_impl.SetCQState(new_state)
# Forward methods to codereview specific implementation.
def CloseIssue(self):
return self._codereview_impl.CloseIssue()
def GetStatus(self):
return self._codereview_impl.GetStatus()
def GetCodereviewServer(self):
return self._codereview_impl.GetCodereviewServer()
def GetApprovingReviewers(self):
return self._codereview_impl.GetApprovingReviewers()
def GetMostRecentPatchset(self):
return self._codereview_impl.GetMostRecentPatchset()
def __getattr__(self, attr):
# This is because lots of untested code accesses Rietveld-specific stuff
# directly, and it's hard to fix for sure. So, just let it work, and fix
# on a case by case basis.
return getattr(self._codereview_impl, attr)
class _ChangelistCodereviewBase(object):
"""Abstract base class encapsulating codereview specifics of a changelist."""
def __init__(self, changelist):
self._changelist = changelist # instance of Changelist
def __getattr__(self, attr):
# Forward methods to changelist.
# TODO(tandrii): maybe clean up _GerritChangelistImpl and
# _RietveldChangelistImpl to avoid this hack?
return getattr(self._changelist, attr)
def GetStatus(self):
"""Apply a rough heuristic to give a simple summary of an issue's review
or CQ status, assuming adherence to a common workflow.
Returns None if no issue for this branch, or specific string keywords.
"""
raise NotImplementedError()
def GetCodereviewServer(self):
"""Returns server URL without end slash, like "https://codereview.com"."""
raise NotImplementedError()
def FetchDescription(self):
"""Fetches and returns description from the codereview server."""
raise NotImplementedError()
def GetCodereviewServerSetting(self):
"""Returns git config setting for the codereview server."""
raise NotImplementedError()
@classmethod
def IssueSetting(cls, branch):
return 'branch.%s.%s' % (branch, cls.IssueSettingSuffix())
@classmethod
def IssueSettingSuffix(cls):
"""Returns name of git config setting which stores issue number for a given
branch."""
raise NotImplementedError()
def PatchsetSetting(self):
"""Returns name of git config setting which stores issue number."""
raise NotImplementedError()
def _PostUnsetIssueProperties(self):
"""Which branch-specific properties to erase when unsetting issue."""
raise NotImplementedError()
def GetRieveldObjForPresubmit(self):
# This is an unfortunate Rietveld-embeddedness in presubmit.
# For non-Rietveld codereviews, this probably should return a dummy object.
raise NotImplementedError()
def GetGerritObjForPresubmit(self):
# None is valid return value, otherwise presubmit_support.GerritAccessor.
return None
def UpdateDescriptionRemote(self, description):
"""Update the description on codereview site."""
raise NotImplementedError()
def CloseIssue(self):
"""Closes the issue."""
raise NotImplementedError()
def GetApprovingReviewers(self):
"""Returns a list of reviewers approving the change.
Note: not necessarily committers.
"""
raise NotImplementedError()
def GetMostRecentPatchset(self):
"""Returns the most recent patchset number from the codereview site."""
raise NotImplementedError()
def CMDPatchWithParsedIssue(self, parsed_issue_arg, reject, nocommit,
directory):
"""Fetches and applies the issue.
Arguments:
parsed_issue_arg: instance of _ParsedIssueNumberArgument.
reject: if True, reject the failed patch instead of switching to 3-way
merge. Rietveld only.
nocommit: do not commit the patch, thus leave the tree dirty. Rietveld
only.
directory: switch to directory before applying the patch. Rietveld only.
"""
raise NotImplementedError()
@staticmethod
def ParseIssueURL(parsed_url):
"""Parses url and returns instance of _ParsedIssueNumberArgument or None if
failed."""
raise NotImplementedError()
def EnsureAuthenticated(self, force):
"""Best effort check that user is authenticated with codereview server.
Arguments:
force: whether to skip confirmation questions.
"""
raise NotImplementedError()
def CMDUploadChange(self, options, args, change):
"""Uploads a change to codereview."""
raise NotImplementedError()
def SetCQState(self, new_state):
"""Update the CQ state for latest patchset.
Issue must have been already uploaded and known.
"""
raise NotImplementedError()
class _RietveldChangelistImpl(_ChangelistCodereviewBase):
def __init__(self, changelist, auth_config=None, rietveld_server=None):
super(_RietveldChangelistImpl, self).__init__(changelist)
assert settings, 'must be initialized in _ChangelistCodereviewBase'
settings.GetDefaultServerUrl()
self._rietveld_server = rietveld_server
self._auth_config = auth_config
self._props = None
self._rpc_server = None
def GetCodereviewServer(self):
if not self._rietveld_server:
# If we're on a branch then get the server potentially associated
# with that branch.
if self.GetIssue():
rietveld_server_setting = self.GetCodereviewServerSetting()
if rietveld_server_setting:
self._rietveld_server = gclient_utils.UpgradeToHttps(RunGit(
['config', rietveld_server_setting], error_ok=True).strip())
if not self._rietveld_server:
self._rietveld_server = settings.GetDefaultServerUrl()
return self._rietveld_server
def EnsureAuthenticated(self, force):
"""Best effort check that user is authenticated with Rietveld server."""
if self._auth_config.use_oauth2:
authenticator = auth.get_authenticator_for_host(
self.GetCodereviewServer(), self._auth_config)
if not authenticator.has_cached_credentials():
raise auth.LoginRequiredError(self.GetCodereviewServer())
def FetchDescription(self):
issue = self.GetIssue()
assert issue
try:
return self.RpcServer().get_description(issue).strip()
except urllib2.HTTPError as e:
if e.code == 404:
DieWithError(
('\nWhile fetching the description for issue %d, received a '
'404 (not found)\n'
'error. It is likely that you deleted this '
'issue on the server. If this is the\n'
'case, please run\n\n'
' git cl issue 0\n\n'
'to clear the association with the deleted issue. Then run '
'this command again.') % issue)
else:
DieWithError(
'\nFailed to fetch issue description. HTTP error %d' % e.code)
except urllib2.URLError:
print('Warning: Failed to retrieve CL description due to network '
'failure.', file=sys.stderr)
return ''
def GetMostRecentPatchset(self):
return self.GetIssueProperties()['patchsets'][-1]
def GetPatchSetDiff(self, issue, patchset):
return self.RpcServer().get(
'/download/issue%s_%s.diff' % (issue, patchset))
def GetIssueProperties(self):
if self._props is None:
issue = self.GetIssue()
if not issue:
self._props = {}
else:
self._props = self.RpcServer().get_issue_properties(issue, True)
return self._props
def GetApprovingReviewers(self):
return get_approving_reviewers(self.GetIssueProperties())
def AddComment(self, message):
return self.RpcServer().add_comment(self.GetIssue(), message)
def GetStatus(self):
"""Apply a rough heuristic to give a simple summary of an issue's review
or CQ status, assuming adherence to a common workflow.
Returns None if no issue for this branch, or one of the following keywords:
* 'error' - error from review tool (including deleted issues)
* 'unsent' - not sent for review
* 'waiting' - waiting for review
* 'reply' - waiting for owner to reply to review
* 'lgtm' - LGTM from at least one approved reviewer
* 'commit' - in the commit queue
* 'closed' - closed
"""
if not self.GetIssue():
return None
try:
props = self.GetIssueProperties()
except urllib2.HTTPError:
return 'error'
if props.get('closed'):
# Issue is closed.
return 'closed'
if props.get('commit') and not props.get('cq_dry_run', False):
# Issue is in the commit queue.
return 'commit'
try:
reviewers = self.GetApprovingReviewers()
except urllib2.HTTPError:
return 'error'
if reviewers:
# Was LGTM'ed.
return 'lgtm'
messages = props.get('messages') or []
# Skip CQ messages that don't require owner's action.
while messages and messages[-1]['sender'] == COMMIT_BOT_EMAIL:
if 'Dry run:' in messages[-1]['text']:
messages.pop()
elif 'The CQ bit was unchecked' in messages[-1]['text']:
# This message always follows prior messages from CQ,
# so skip this too.
messages.pop()
else:
# This is probably a CQ message warranting user attention.
break
if not messages:
# No message was sent.
return 'unsent'
if messages[-1]['sender'] != props.get('owner_email'):
# Non-LGTM reply from non-owner and not CQ bot.
return 'reply'
return 'waiting'
def UpdateDescriptionRemote(self, description):
return self.RpcServer().update_description(
self.GetIssue(), description)
def CloseIssue(self):
return self.RpcServer().close_issue(self.GetIssue())
def SetFlag(self, flag, value):
"""Patchset must match."""
if not self.GetPatchset():
DieWithError('The patchset needs to match. Send another patchset.')
try:
return self.RpcServer().set_flag(
self.GetIssue(), self.GetPatchset(), flag, value)
except urllib2.HTTPError as e:
if e.code == 404:
DieWithError('The issue %s doesn\'t exist.' % self.GetIssue())
if e.code == 403:
DieWithError(
('Access denied to issue %s. Maybe the patchset %s doesn\'t '
'match?') % (self.GetIssue(), self.GetPatchset()))
raise
def RpcServer(self):
"""Returns an upload.RpcServer() to access this review's rietveld instance.
"""
if not self._rpc_server:
self._rpc_server = rietveld.CachingRietveld(
self.GetCodereviewServer(),
self._auth_config or auth.make_auth_config())
return self._rpc_server
@classmethod
def IssueSettingSuffix(cls):
return 'rietveldissue'
def PatchsetSetting(self):
"""Return the git setting that stores this change's most recent patchset."""
return 'branch.%s.rietveldpatchset' % self.GetBranch()
def GetCodereviewServerSetting(self):
"""Returns the git setting that stores this change's rietveld server."""
branch = self.GetBranch()
if branch:
return 'branch.%s.rietveldserver' % branch
return None
def _PostUnsetIssueProperties(self):
"""Which branch-specific properties to erase when unsetting issue."""
return ['rietveldserver']
def GetRieveldObjForPresubmit(self):
return self.RpcServer()
def SetCQState(self, new_state):
props = self.GetIssueProperties()
if props.get('private'):
DieWithError('Cannot set-commit on private issue')
if new_state == _CQState.COMMIT:
self.SetFlag('commit', '1')
elif new_state == _CQState.NONE:
self.SetFlag('commit', '0')
else:
raise NotImplementedError()
def CMDPatchWithParsedIssue(self, parsed_issue_arg, reject, nocommit,
directory):
# TODO(maruel): Use apply_issue.py
# PatchIssue should never be called with a dirty tree. It is up to the
# caller to check this, but just in case we assert here since the
# consequences of the caller not checking this could be dire.
assert not git_common.is_dirty_git_tree('apply')
assert parsed_issue_arg.valid
self._changelist.issue = parsed_issue_arg.issue
if parsed_issue_arg.hostname:
self._rietveld_server = 'https://%s' % parsed_issue_arg.hostname
if (isinstance(parsed_issue_arg, _RietveldParsedIssueNumberArgument) and
parsed_issue_arg.patch_url):
assert parsed_issue_arg.patchset
patchset = parsed_issue_arg.patchset
patch_data = urllib2.urlopen(parsed_issue_arg.patch_url).read()
else:
patchset = parsed_issue_arg.patchset or self.GetMostRecentPatchset()
patch_data = self.GetPatchSetDiff(self.GetIssue(), patchset)
# Switch up to the top-level directory, if necessary, in preparation for
# applying the patch.
top = settings.GetRelativeRoot()
if top:
os.chdir(top)
# Git patches have a/ at the beginning of source paths. We strip that out
# with a sed script rather than the -p flag to patch so we can feed either
# Git or svn-style patches into the same apply command.
# re.sub() should be used but flags=re.MULTILINE is only in python 2.7.
try:
patch_data = subprocess2.check_output(
['sed', '-e', 's|^--- a/|--- |; s|^+++ b/|+++ |'], stdin=patch_data)
except subprocess2.CalledProcessError:
DieWithError('Git patch munging failed.')
logging.info(patch_data)
# We use "git apply" to apply the patch instead of "patch" so that we can
# pick up file adds.
# The --index flag means: also insert into the index (so we catch adds).
cmd = ['git', 'apply', '--index', '-p0']
if directory:
cmd.extend(('--directory', directory))
if reject:
cmd.append('--reject')
elif IsGitVersionAtLeast('1.7.12'):
cmd.append('--3way')
try:
subprocess2.check_call(cmd, env=GetNoGitPagerEnv(),
stdin=patch_data, stdout=subprocess2.VOID)
except subprocess2.CalledProcessError:
print('Failed to apply the patch')
return 1
# If we had an issue, commit the current state and register the issue.
if not nocommit:
RunGit(['commit', '-m', (self.GetDescription() + '\n\n' +
'patch from issue %(i)s at patchset '
'%(p)s (http://crrev.com/%(i)s#ps%(p)s)'
% {'i': self.GetIssue(), 'p': patchset})])
self.SetIssue(self.GetIssue())
self.SetPatchset(patchset)
print('Committed patch locally.')
else:
print('Patch applied to index.')
return 0
@staticmethod
def ParseIssueURL(parsed_url):
if not parsed_url.scheme or not parsed_url.scheme.startswith('http'):
return None
# Typical url: https://domain/<issue_number>[/[other]]
match = re.match(r'/(\d+)(/.*)?$', parsed_url.path)
if match:
return _RietveldParsedIssueNumberArgument(
issue=int(match.group(1)),
hostname=parsed_url.netloc)
# Rietveld patch: https://domain/download/issue<number>_<patchset>.diff
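# For example (hypothetical host), a patch URL such as
#   https://codereview.example.com/download/issue123_4.diff
# parses to issue=123, patchset=4.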
match = re.match(r'/download/issue(\d+)_(\d+).diff$', parsed_url.path)
if match:
return _RietveldParsedIssueNumberArgument(
issue=int(match.group(1)),
patchset=int(match.group(2)),
hostname=parsed_url.netloc,
patch_url=gclient_utils.UpgradeToHttps(parsed_url.geturl()))
return None
def CMDUploadChange(self, options, args, change):
"""Upload the patch to Rietveld."""
upload_args = ['--assume_yes'] # Don't ask about untracked files.
upload_args.extend(['--server', self.GetCodereviewServer()])
upload_args.extend(auth.auth_config_to_command_options(self._auth_config))
if options.emulate_svn_auto_props:
upload_args.append('--emulate_svn_auto_props')
change_desc = None
if options.email is not None:
upload_args.extend(['--email', options.email])
if self.GetIssue():
if options.title:
upload_args.extend(['--title', options.title])
if options.message:
upload_args.extend(['--message', options.message])
upload_args.extend(['--issue', str(self.GetIssue())])
print('This branch is associated with issue %s. '
'Adding patch to that issue.' % self.GetIssue())
else:
if options.title:
upload_args.extend(['--title', options.title])
message = (options.title or options.message or
CreateDescriptionFromLog(args))
change_desc = ChangeDescription(message)
if options.reviewers or options.tbr_owners:
change_desc.update_reviewers(options.reviewers,
options.tbr_owners,
change)
if not options.force:
change_desc.prompt()
if not change_desc.description:
print('Description is empty; aborting.')
return 1
upload_args.extend(['--message', change_desc.description])
if change_desc.get_reviewers():
upload_args.append('--reviewers=%s' % ','.join(
change_desc.get_reviewers()))
if options.send_mail:
if not change_desc.get_reviewers():
DieWithError("Must specify reviewers to send email.")
upload_args.append('--send_mail')
# We check this before applying rietveld.private assuming that in
# rietveld.cc only addresses which we can send private CLs to are listed
# if rietveld.private is set, and so we should ignore rietveld.cc only
# when --private is specified explicitly on the command line.
if options.private:
logging.warn('rietveld.cc is ignored since private flag is specified. '
'You need to review and add them manually if necessary.')
cc = self.GetCCListWithoutDefault()
else:
cc = self.GetCCList()
cc = ','.join(filter(None, (cc, ','.join(options.cc))))
if cc:
upload_args.extend(['--cc', cc])
if options.private or settings.GetDefaultPrivateFlag() == "True":
upload_args.append('--private')
upload_args.extend(['--git_similarity', str(options.similarity)])
if not options.find_copies:
upload_args.extend(['--git_no_find_copies'])
# Include the upstream repo's URL in the change -- this is useful for
# projects that have their source spread across multiple repos.
remote_url = self.GetGitBaseUrlFromConfig()
if not remote_url:
if settings.GetIsGitSvn():
remote_url = self.GetGitSvnRemoteUrl()
else:
if self.GetRemoteUrl() and '/' in self.GetUpstreamBranch():
remote_url = '%s@%s' % (self.GetRemoteUrl(),
self.GetUpstreamBranch().split('/')[-1])
if remote_url:
upload_args.extend(['--base_url', remote_url])
remote, remote_branch = self.GetRemoteBranch()
target_ref = GetTargetRef(remote, remote_branch, options.target_branch,
settings.GetPendingRefPrefix())
if target_ref:
upload_args.extend(['--target_ref', target_ref])
# Look for dependent patchsets. See crbug.com/480453 for more details.
remote, upstream_branch = self.FetchUpstreamTuple(self.GetBranch())
upstream_branch = ShortBranchName(upstream_branch)
if remote == '.':
# A local branch is being tracked.
local_branch = upstream_branch
if settings.GetIsSkipDependencyUpload(local_branch):
print()
print('Skipping dependency patchset upload because git config '
'branch.%s.skip-deps-uploads is set to True.' % local_branch)
print()
else:
auth_config = auth.extract_auth_config_from_options(options)
branch_cl = Changelist(branchref='refs/heads/'+local_branch,
auth_config=auth_config)
branch_cl_issue_url = branch_cl.GetIssueURL()
branch_cl_issue = branch_cl.GetIssue()
branch_cl_patchset = branch_cl.GetPatchset()
if branch_cl_issue_url and branch_cl_issue and branch_cl_patchset:
upload_args.extend(
['--depends_on_patchset', '%s:%s' % (
branch_cl_issue, branch_cl_patchset)])
print(
'\n'
'The current branch (%s) is tracking a local branch (%s) with '
'an associated CL.\n'
'Adding %s/#ps%s as a dependency patchset.\n'
'\n' % (self.GetBranch(), local_branch, branch_cl_issue_url,
branch_cl_patchset))
project = settings.GetProject()
if project:
upload_args.extend(['--project', project])
if options.cq_dry_run:
upload_args.extend(['--cq_dry_run'])
try:
upload_args = ['upload'] + upload_args + args
logging.info('upload.RealMain(%s)', upload_args)
issue, patchset = upload.RealMain(upload_args)
issue = int(issue)
patchset = int(patchset)
except KeyboardInterrupt:
sys.exit(1)
except:
# If we got an exception after the user typed a description for their
# change, back up the description before re-raising.
if change_desc:
backup_path = os.path.expanduser(DESCRIPTION_BACKUP_FILE)
print('\nGot exception while uploading -- saving description to %s\n' %
backup_path)
backup_file = open(backup_path, 'w')
backup_file.write(change_desc.description)
backup_file.close()
raise
if not self.GetIssue():
self.SetIssue(issue)
self.SetPatchset(patchset)
if options.use_commit_queue:
self.SetCQState(_CQState.COMMIT)
return 0
class _GerritChangelistImpl(_ChangelistCodereviewBase):
def __init__(self, changelist, auth_config=None):
# auth_config is Rietveld thing, kept here to preserve interface only.
super(_GerritChangelistImpl, self).__init__(changelist)
self._change_id = None
# Lazily cached values.
self._gerrit_server = None # e.g. https://chromium-review.googlesource.com
self._gerrit_host = None # e.g. chromium-review.googlesource.com
def _GetGerritHost(self):
# Lazy load of configs.
self.GetCodereviewServer()
if self._gerrit_host and '.' not in self._gerrit_host:
# Abbreviated domain like "chromium" instead of chromium.googlesource.com.
# This happens for internal stuff http://crbug.com/614312.
parsed = urlparse.urlparse(self.GetRemoteUrl())
if parsed.scheme == 'sso':
print('WARNING: using a non-https URL for the remote is likely broken\n'
' Your current remote is: %s' % self.GetRemoteUrl())
self._gerrit_host = '%s.googlesource.com' % self._gerrit_host
self._gerrit_server = 'https://%s' % self._gerrit_host
return self._gerrit_host
def _GetGitHost(self):
"""Returns git host to be used when uploading change to Gerrit."""
return urlparse.urlparse(self.GetRemoteUrl()).netloc
def GetCodereviewServer(self):
if not self._gerrit_server:
# If we're on a branch then get the server potentially associated
# with that branch.
if self.GetIssue():
gerrit_server_setting = self.GetCodereviewServerSetting()
if gerrit_server_setting:
self._gerrit_server = RunGit(['config', gerrit_server_setting],
error_ok=True).strip()
if self._gerrit_server:
self._gerrit_host = urlparse.urlparse(self._gerrit_server).netloc
if not self._gerrit_server:
# We assume repo to be hosted on Gerrit, and hence Gerrit server
# has "-review" suffix for lowest level subdomain.
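# For example, a remote on chromium.googlesource.com maps to the Gerrit
# server https://chromium-review.googlesource.com.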
parts = self._GetGitHost().split('.')
parts[0] = parts[0] + '-review'
self._gerrit_host = '.'.join(parts)
self._gerrit_server = 'https://%s' % self._gerrit_host
return self._gerrit_server
@classmethod
def IssueSettingSuffix(cls):
return 'gerritissue'
def EnsureAuthenticated(self, force):
"""Best effort check that user is authenticated with Gerrit server."""
if settings.GetGerritSkipEnsureAuthenticated():
# For projects with unusual authentication schemes.
# See http://crbug.com/603378.
return
if gerrit_util.GceAuthenticator.is_gce():
return
# Lazy-loader to identify Gerrit and Git hosts.
self.GetCodereviewServer()
git_host = self._GetGitHost()
assert self._gerrit_server and self._gerrit_host
cookie_auth = gerrit_util.CookiesAuthenticator()
gerrit_auth = cookie_auth.get_auth_header(self._gerrit_host)
git_auth = cookie_auth.get_auth_header(git_host)
if gerrit_auth and git_auth:
if gerrit_auth == git_auth:
return
print((
'WARNING: you have different credentials for Gerrit and git hosts.\n'
' Check your %s or %s file for credentials of hosts:\n'
' %s\n'
' %s\n'
' %s') %
(cookie_auth.get_gitcookies_path(), cookie_auth.get_netrc_path(),
git_host, self._gerrit_host,
cookie_auth.get_new_password_message(git_host)))
if not force:
ask_for_data('If you know what you are doing, press Enter to continue, '
'Ctrl+C to abort.')
return
else:
missing = (
([] if gerrit_auth else [self._gerrit_host]) +
([] if git_auth else [git_host]))
DieWithError('Credentials for the following hosts are required:\n'
' %s\n'
'These are read from %s (or legacy %s)\n'
'%s' % (
'\n '.join(missing),
cookie_auth.get_gitcookies_path(),
cookie_auth.get_netrc_path(),
cookie_auth.get_new_password_message(git_host)))
def PatchsetSetting(self):
"""Return the git setting that stores this change's most recent patchset."""
return 'branch.%s.gerritpatchset' % self.GetBranch()
def GetCodereviewServerSetting(self):
"""Returns the git setting that stores this change's Gerrit server."""
branch = self.GetBranch()
if branch:
return 'branch.%s.gerritserver' % branch
return None
def _PostUnsetIssueProperties(self):
"""Which branch-specific properties to erase when unsetting issue."""
return [
'gerritserver',
'gerritsquashhash',
]
def GetRieveldObjForPresubmit(self):
class ThisIsNotRietveldIssue(object):
def __nonzero__(self):
# This is a hack to make presubmit_support think that rietveld is not
# defined, yet still ensure that calls directly result in a decent
# exception message below.
return False
def __getattr__(self, attr):
print(
'You aren\'t using Rietveld at the moment, but Gerrit.\n'
'Using Rietveld in your PRESUBMIT scripts won\'t work.\n'
'Please, either change your PRESUBMIT to not use rietveld_obj.%s,\n'
'or use Rietveld for codereview.\n'
'See also http://crbug.com/579160.' % attr)
raise NotImplementedError()
return ThisIsNotRietveldIssue()
def GetGerritObjForPresubmit(self):
return presubmit_support.GerritAccessor(self._GetGerritHost())
def GetStatus(self):
"""Apply a rough heuristic to give a simple summary of an issue's review
or CQ status, assuming adherence to a common workflow.
Returns None if no issue for this branch, or one of the following keywords:
* 'error' - error from review tool (including deleted issues)
* 'unsent' - no reviewers added
* 'waiting' - waiting for review
* 'reply' - waiting for owner to reply to review
* 'not lgtm' - Code-Review -2 from at least one approved reviewer
* 'lgtm' - Code-Review +2 from at least one approved reviewer
* 'commit' - in the commit queue
* 'closed' - abandoned
"""
if not self.GetIssue():
return None
try:
data = self._GetChangeDetail(['DETAILED_LABELS', 'CURRENT_REVISION'])
except httplib.HTTPException:
return 'error'
if data['status'] in ('ABANDONED', 'MERGED'):
return 'closed'
cq_label = data['labels'].get('Commit-Queue', {})
if cq_label:
# Vote value is a stringified integer, which we expect from 0 to 2.
vote_value = cq_label.get('value', '0')
vote_text = cq_label.get('values', {}).get(vote_value, '')
if vote_text.lower() == 'commit':
return 'commit'
lgtm_label = data['labels'].get('Code-Review', {})
if lgtm_label:
if 'rejected' in lgtm_label:
return 'not lgtm'
if 'approved' in lgtm_label:
return 'lgtm'
if not data.get('reviewers', {}).get('REVIEWER', []):
return 'unsent'
messages = data.get('messages', [])
if messages:
owner = data['owner'].get('_account_id')
last_message_author = messages[-1].get('author', {}).get('_account_id')
if owner != last_message_author:
# Some reply from non-owner.
return 'reply'
return 'waiting'
def GetMostRecentPatchset(self):
data = self._GetChangeDetail(['CURRENT_REVISION'])
return data['revisions'][data['current_revision']]['_number']
def FetchDescription(self):
data = self._GetChangeDetail(['CURRENT_REVISION'])
current_rev = data['current_revision']
url = data['revisions'][current_rev]['fetch']['http']['url']
return gerrit_util.GetChangeDescriptionFromGitiles(url, current_rev)
def UpdateDescriptionRemote(self, description):
gerrit_util.SetCommitMessage(self._GetGerritHost(), self.GetIssue(),
description)
def CloseIssue(self):
gerrit_util.AbandonChange(self._GetGerritHost(), self.GetIssue(), msg='')
def GetApprovingReviewers(self):
"""Returns a list of reviewers approving the change.
Note: not necessarily committers.
"""
raise NotImplementedError()
def SubmitIssue(self, wait_for_merge=True):
gerrit_util.SubmitChange(self._GetGerritHost(), self.GetIssue(),
wait_for_merge=wait_for_merge)
def _GetChangeDetail(self, options=None, issue=None):
options = options or []
issue = issue or self.GetIssue()
assert issue, 'issue required to query Gerrit'
return gerrit_util.GetChangeDetail(self._GetGerritHost(), str(issue),
options)
def CMDLand(self, force, bypass_hooks, verbose):
if git_common.is_dirty_git_tree('land'):
return 1
detail = self._GetChangeDetail(['CURRENT_REVISION', 'LABELS'])
if u'Commit-Queue' in detail.get('labels', {}):
if not force:
ask_for_data('\nIt seems this repository has a Commit Queue, '
'which can test and land changes for you. '
'Are you sure you wish to bypass it?\n'
'Press Enter to continue, Ctrl+C to abort.')
differs = True
last_upload = RunGit(['config',
'branch.%s.gerritsquashhash' % self.GetBranch()],
error_ok=True).strip()
# Note: git diff outputs nothing if there is no diff.
if not last_upload or RunGit(['diff', last_upload]).strip():
print('WARNING: some changes from local branch haven\'t been uploaded')
else:
if detail['current_revision'] == last_upload:
differs = False
else:
print('WARNING: local branch contents differ from latest uploaded '
'patchset')
if differs:
if not force:
ask_for_data(
'Do you want to submit latest Gerrit patchset and bypass hooks?')
print('WARNING: bypassing hooks and submitting latest uploaded patchset')
elif not bypass_hooks:
hook_results = self.RunHook(
committing=True,
may_prompt=not force,
verbose=verbose,
change=self.GetChange(self.GetCommonAncestorWithUpstream(), None))
if not hook_results.should_continue():
return 1
self.SubmitIssue(wait_for_merge=True)
print('Issue %s has been submitted.' % self.GetIssueURL())
return 0
def CMDPatchWithParsedIssue(self, parsed_issue_arg, reject, nocommit,
directory):
assert not reject
assert not nocommit
assert not directory
assert parsed_issue_arg.valid
self._changelist.issue = parsed_issue_arg.issue
if parsed_issue_arg.hostname:
self._gerrit_host = parsed_issue_arg.hostname
self._gerrit_server = 'https://%s' % self._gerrit_host
detail = self._GetChangeDetail(['ALL_REVISIONS'])
if not parsed_issue_arg.patchset:
# Use current revision by default.
revision_info = detail['revisions'][detail['current_revision']]
patchset = int(revision_info['_number'])
else:
patchset = parsed_issue_arg.patchset
for revision_info in detail['revisions'].itervalues():
if int(revision_info['_number']) == parsed_issue_arg.patchset:
break
else:
DieWithError('Couldn\'t find patchset %i in issue %i' %
(parsed_issue_arg.patchset, self.GetIssue()))
fetch_info = revision_info['fetch']['http']
RunGit(['fetch', fetch_info['url'], fetch_info['ref']])
RunGit(['cherry-pick', 'FETCH_HEAD'])
self.SetIssue(self.GetIssue())
self.SetPatchset(patchset)
print('Committed patch for issue %i patchset %i locally' %
(self.GetIssue(), self.GetPatchset()))
return 0
@staticmethod
def ParseIssueURL(parsed_url):
if not parsed_url.scheme or not parsed_url.scheme.startswith('http'):
return None
# Gerrit's new UI is https://domain/c/<issue_number>[/[patchset]]
# But current GWT UI is https://domain/#/c/<issue_number>[/[patchset]]
# Short urls like https://domain/<issue_number> can be used, but don't allow
# specifying the patchset (you'd 404); we accept them here anyway.
if parsed_url.path == '/':
part = parsed_url.fragment
else:
part = parsed_url.path
match = re.match(r'(/c)?/(\d+)(/(\d+)?/?)?$', part)
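# For example (hypothetical host):
#   https://chromium-review.googlesource.com/c/1234     -> issue 1234
#   https://chromium-review.googlesource.com/#/c/1234/5 -> issue 1234, patchset 5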
if match:
return _ParsedIssueNumberArgument(
issue=int(match.group(2)),
patchset=int(match.group(4)) if match.group(4) else None,
hostname=parsed_url.netloc)
return None
def _GerritCommitMsgHookCheck(self, offer_removal):
hook = os.path.join(settings.GetRoot(), '.git', 'hooks', 'commit-msg')
if not os.path.exists(hook):
return
# Crude attempt to distinguish Gerrit Codereview hook from potentially
# custom developer made one.
data = gclient_utils.FileRead(hook)
if not('From Gerrit Code Review' in data and 'add_ChangeId()' in data):
return
print('Warning: you have Gerrit commit-msg hook installed.\n'
'It is not necessary for uploading with git cl in squash mode, '
'and may interfere with it in subtle ways.\n'
'We recommend you remove the commit-msg hook.')
if offer_removal:
reply = ask_for_data('Do you want to remove it now? [Yes/No]')
if reply.lower().startswith('y'):
gclient_utils.rm_file_or_tree(hook)
print('Gerrit commit-msg hook removed.')
else:
print('OK, will keep Gerrit commit-msg hook in place.')
def CMDUploadChange(self, options, args, change):
"""Upload the current branch to Gerrit."""
if options.squash and options.no_squash:
DieWithError('Can only use one of --squash or --no-squash')
if not options.squash and not options.no_squash:
# Load default for user, repo, squash=true, in this order.
options.squash = settings.GetSquashGerritUploads()
elif options.no_squash:
options.squash = False
# We assume the remote called "origin" is the one we want.
# It is probably not worthwhile to support different workflows.
gerrit_remote = 'origin'
remote, remote_branch = self.GetRemoteBranch()
branch = GetTargetRef(remote, remote_branch, options.target_branch,
pending_prefix='')
if options.squash:
self._GerritCommitMsgHookCheck(offer_removal=not options.force)
if not self.GetIssue():
# TODO(tandrii): deprecate this after 2016Q2. Backwards compatibility
# with shadow branch, which used to contain change-id for a given
# branch, using which we can fetch actual issue number and set it as the
# property of the branch, which is the new way.
message = RunGitSilent([
'show', '--format=%B', '-s',
'refs/heads/git_cl_uploads/%s' % self.GetBranch()])
if message:
change_ids = git_footers.get_footer_change_id(message.strip())
if change_ids and len(change_ids) == 1:
details = self._GetChangeDetail(issue=change_ids[0])
if details:
print('WARNING: found old upload in branch git_cl_uploads/%s '
'corresponding to issue %s' %
(self.GetBranch(), details['_number']))
self.SetIssue(details['_number'])
if not self.GetIssue():
DieWithError(
'\n' # For readability of the blob below.
'Found old upload in branch git_cl_uploads/%s, '
'but failed to find corresponding Gerrit issue.\n'
'If you know the issue number, set it manually first:\n'
' git cl issue 123456\n'
'If you intended to upload this CL as new issue, '
'just delete or rename the old upload branch:\n'
' git rename-branch git_cl_uploads/%s old_upload-%s\n'
'After that, please run git cl upload again.' %
tuple([self.GetBranch()] * 3))
# End of backwards compatibility.
if self.GetIssue():
# Try to get the message from a previous upload.
message = self.GetDescription()
if not message:
DieWithError(
'failed to fetch description from current Gerrit issue %d\n'
'%s' % (self.GetIssue(), self.GetIssueURL()))
change_id = self._GetChangeDetail()['change_id']
while True:
footer_change_ids = git_footers.get_footer_change_id(message)
if footer_change_ids == [change_id]:
break
if not footer_change_ids:
message = git_footers.add_footer_change_id(message, change_id)
print('WARNING: appended missing Change-Id to issue description')
continue
# There is already a valid footer but with different or several ids.
# Doing this automatically is non-trivial as we don't want to lose
# existing other footers, yet we want to append just 1 desired
# Change-Id. Thus, just create a new footer, but let user verify the
# new description.
message = '%s\n\nChange-Id: %s' % (message, change_id)
print(
'WARNING: issue %s has Change-Id footer(s):\n'
' %s\n'
'but issue has Change-Id %s, according to Gerrit.\n'
'Please, check the proposed correction to the description, '
'and edit it if necessary but keep the "Change-Id: %s" footer\n'
% (self.GetIssue(), '\n '.join(footer_change_ids), change_id,
change_id))
ask_for_data('Press enter to edit now, Ctrl+C to abort')
if not options.force:
change_desc = ChangeDescription(message)
change_desc.prompt()
message = change_desc.description
if not message:
DieWithError("Description is empty. Aborting...")
# Continue the while loop.
# Sanity check of this code - we should end up with proper message
# footer.
assert [change_id] == git_footers.get_footer_change_id(message)
change_desc = ChangeDescription(message)
else:
change_desc = ChangeDescription(
options.message or CreateDescriptionFromLog(args))
if not options.force:
change_desc.prompt()
if not change_desc.description:
DieWithError("Description is empty. Aborting...")
message = change_desc.description
change_ids = git_footers.get_footer_change_id(message)
if len(change_ids) > 1:
DieWithError('too many Change-Id footers, at most 1 allowed.')
if not change_ids:
# Generate the Change-Id automatically.
message = git_footers.add_footer_change_id(
message, GenerateGerritChangeId(message))
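# The generated footer has the form 'Change-Id: I' + 40 hex digits, e.g.
# (hypothetical): Change-Id: Ideadbeefdeadbeefdeadbeefdeadbeefdeadbeef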
change_desc.set_description(message)
change_ids = git_footers.get_footer_change_id(message)
assert len(change_ids) == 1
change_id = change_ids[0]
remote, upstream_branch = self.FetchUpstreamTuple(self.GetBranch())
if remote == '.':
# If our upstream branch is local, we base our squashed commit on its
# squashed version.
upstream_branch_name = scm.GIT.ShortBranchName(upstream_branch)
# Check the squashed hash of the parent.
parent = RunGit(['config',
'branch.%s.gerritsquashhash' % upstream_branch_name],
error_ok=True).strip()
# Verify that the upstream branch has been uploaded too, otherwise
# Gerrit will create additional CLs when uploading.
if not parent or (RunGitSilent(['rev-parse', upstream_branch + ':']) !=
RunGitSilent(['rev-parse', parent + ':'])):
# TODO(tandrii): remove "old depot_tools" part on April 12, 2016.
DieWithError(
'Upload upstream branch %s first.\n'
'Note: maybe you\'ve uploaded it with --no-squash or with an old '
'version of depot_tools. If so, then re-upload it with:\n'
' git cl upload --squash\n' % upstream_branch_name)
else:
parent = self.GetCommonAncestorWithUpstream()
tree = RunGit(['rev-parse', 'HEAD:']).strip()
ref_to_push = RunGit(['commit-tree', tree, '-p', parent,
'-m', message]).strip()
else:
change_desc = ChangeDescription(
options.message or CreateDescriptionFromLog(args))
if not change_desc.description:
DieWithError("Description is empty. Aborting...")
if not git_footers.get_footer_change_id(change_desc.description):
DownloadGerritHook(False)
change_desc.set_description(self._AddChangeIdToCommitMessage(options,
args))
ref_to_push = 'HEAD'
parent = '%s/%s' % (gerrit_remote, branch)
change_id = git_footers.get_footer_change_id(change_desc.description)[0]
assert change_desc
commits = RunGitSilent(['rev-list', '%s..%s' % (parent,
ref_to_push)]).splitlines()
if len(commits) > 1:
print('WARNING: This will upload %d commits. Run the following command '
'to see which commits will be uploaded: ' % len(commits))
print('git log %s..%s' % (parent, ref_to_push))
print('You can also use `git squash-branch` to squash these into a '
'single commit.')
ask_for_data('About to upload; enter to confirm.')
if options.reviewers or options.tbr_owners:
change_desc.update_reviewers(options.reviewers, options.tbr_owners,
change)
# Extra options that can be specified at push time. Doc:
# https://gerrit-review.googlesource.com/Documentation/user-upload.html
refspec_opts = []
if options.title:
# Per doc, spaces must be converted to underscores, and Gerrit will do the
# reverse on its side.
if '_' in options.title:
print('WARNING: underscores in title will be converted to spaces.')
refspec_opts.append('m=' + options.title.replace(' ', '_'))
if options.send_mail:
if not change_desc.get_reviewers():
DieWithError('Must specify reviewers to send email.')
refspec_opts.append('notify=ALL')
else:
refspec_opts.append('notify=NONE')
cc = self.GetCCList().split(',')
if options.cc:
cc.extend(options.cc)
cc = filter(None, cc)
if cc:
refspec_opts.extend('cc=' + email.strip() for email in cc)
if change_desc.get_reviewers():
refspec_opts.extend('r=' + email.strip()
for email in change_desc.get_reviewers())
refspec_suffix = ''
if refspec_opts:
refspec_suffix = '%' + ','.join(refspec_opts)
assert ' ' not in refspec_suffix, (
'spaces not allowed in refspec: "%s"' % refspec_suffix)
refspec = '%s:refs/for/%s%s' % (ref_to_push, branch, refspec_suffix)
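# With several push options set, refspec_suffix might look like
# (hypothetical values):
#   %m=Fix_build,notify=ALL,r=alice@example.com,cc=bob@example.com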
push_stdout = gclient_utils.CheckCallAndFilter(
['git', 'push', gerrit_remote, refspec],
print_stdout=True,
# Flush after every line: useful for seeing progress when running as
# recipe.
filter_fn=lambda _: sys.stdout.flush())
if options.squash:
regex = re.compile(r'remote:\s+https?://[\w\-\.\/]*/(\d+)\s.*')
change_numbers = [m.group(1)
for m in map(regex.match, push_stdout.splitlines())
if m]
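# The regex matches push output lines such as (hypothetical):
#   remote:   https://chromium-review.googlesource.com/1234 My change title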
if len(change_numbers) != 1:
DieWithError(
('Created|Updated %d issues on Gerrit, but only 1 expected.\n'
'Change-Id: %s') % (len(change_numbers), change_id))
self.SetIssue(change_numbers[0])
RunGit(['config', 'branch.%s.gerritsquashhash' % self.GetBranch(),
ref_to_push])
return 0
def _AddChangeIdToCommitMessage(self, options, args):
"""Re-commits using the current message, assumes the commit hook is in
place.
"""
log_desc = options.message or CreateDescriptionFromLog(args)
git_command = ['commit', '--amend', '-m', log_desc]
RunGit(git_command)
new_log_desc = CreateDescriptionFromLog(args)
if git_footers.get_footer_change_id(new_log_desc):
print('git-cl: Added Change-Id to commit message.')
return new_log_desc
else:
DieWithError('ERROR: Gerrit commit-msg hook not installed.')
def SetCQState(self, new_state):
"""Sets the Commit-Queue label assuming canonical CQ config for Gerrit."""
# TODO(tandrii): maybe allow configurability in codereview.settings or by
# self-discovery of label config for this CL using REST API.
vote_map = {
_CQState.NONE: 0,
_CQState.DRY_RUN: 1,
_CQState.COMMIT: 2,
}
gerrit_util.SetReview(self._GetGerritHost(), self.GetIssue(),
labels={'Commit-Queue': vote_map[new_state]})
_CODEREVIEW_IMPLEMENTATIONS = {
'rietveld': _RietveldChangelistImpl,
'gerrit': _GerritChangelistImpl,
}
def _add_codereview_select_options(parser):
"""Appends --gerrit and --rietveld options to force specific codereview."""
parser.codereview_group = optparse.OptionGroup(
parser, 'EXPERIMENTAL! Codereview override options')
parser.add_option_group(parser.codereview_group)
parser.codereview_group.add_option(
'--gerrit', action='store_true',
help='Force the use of Gerrit for codereview')
parser.codereview_group.add_option(
'--rietveld', action='store_true',
help='Force the use of Rietveld for codereview')
def _process_codereview_select_options(parser, options):
if options.gerrit and options.rietveld:
parser.error('Options --gerrit and --rietveld are mutually exclusive')
options.forced_codereview = None
if options.gerrit:
options.forced_codereview = 'gerrit'
elif options.rietveld:
options.forced_codereview = 'rietveld'
class ChangeDescription(object):
"""Contains a parsed form of the change description."""
R_LINE = r'^[ \t]*(TBR|R)[ \t]*=[ \t]*(.*?)[ \t]*$'
BUG_LINE = r'^[ \t]*(BUG)[ \t]*=[ \t]*(.*?)[ \t]*$'
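# These patterns match description lines such as (hypothetical addresses):
#   R=alice@example.com, bob@example.com
#   TBR=carol@example.com
#   BUG=123456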
def __init__(self, description):
self._description_lines = (description or '').strip().splitlines()
@property # www.logilab.org/ticket/89786
def description(self): # pylint: disable=E0202
return '\n'.join(self._description_lines)
def set_description(self, desc):
if isinstance(desc, basestring):
lines = desc.splitlines()
else:
lines = [line.rstrip() for line in desc]
while lines and not lines[0]:
lines.pop(0)
while lines and not lines[-1]:
lines.pop(-1)
self._description_lines = lines
def update_reviewers(self, reviewers, add_owners_tbr=False, change=None):
"""Rewrites the R=/TBR= line(s) as a single line each."""
assert isinstance(reviewers, list), reviewers
if not reviewers and not add_owners_tbr:
return
reviewers = reviewers[:]
# Get the set of R= and TBR= lines and remove them from the description.
regexp = re.compile(self.R_LINE)
matches = [regexp.match(line) for line in self._description_lines]
new_desc = [l for i, l in enumerate(self._description_lines)
if not matches[i]]
self.set_description(new_desc)
# Construct new unified R= and TBR= lines.
r_names = []
tbr_names = []
for match in matches:
if not match:
continue
people = cleanup_list([match.group(2).strip()])
if match.group(1) == 'TBR':
tbr_names.extend(people)
else:
r_names.extend(people)
for name in r_names:
if name not in reviewers:
reviewers.append(name)
if add_owners_tbr:
owners_db = owners.Database(change.RepositoryRoot(),
fopen=file, os_path=os.path, glob=glob.glob)
all_reviewers = set(tbr_names + reviewers)
missing_files = owners_db.files_not_covered_by(change.LocalPaths(),
all_reviewers)
tbr_names.extend(owners_db.reviewers_for(missing_files,
change.author_email))
new_r_line = 'R=' + ', '.join(reviewers) if reviewers else None
new_tbr_line = 'TBR=' + ', '.join(tbr_names) if tbr_names else None
# Put the new lines in the description where the old first R= line was.
line_loc = next((i for i, match in enumerate(matches) if match), -1)
if 0 <= line_loc < len(self._description_lines):
if new_tbr_line:
self._description_lines.insert(line_loc, new_tbr_line)
if new_r_line:
self._description_lines.insert(line_loc, new_r_line)
else:
if new_r_line:
self.append_footer(new_r_line)
if new_tbr_line:
self.append_footer(new_tbr_line)
def prompt(self):
"""Asks the user to update the description."""
self.set_description([
'# Enter a description of the change.',
'# This will be displayed on the codereview site.',
'# The first line will also be used as the subject of the review.',
'#--------------------This line is 72 characters long'
'--------------------',
] + self._description_lines)
regexp = re.compile(self.BUG_LINE)
if not any((regexp.match(line) for line in self._description_lines)):
self.append_footer('BUG=%s' % settings.GetBugPrefix())
content = gclient_utils.RunEditor(self.description, True,
git_editor=settings.GetGitEditor())
if not content:
DieWithError('Running editor failed')
lines = content.splitlines()
# Strip off comments.
clean_lines = [line.rstrip() for line in lines if not line.startswith('#')]
if not clean_lines:
DieWithError('No CL description, aborting')
self.set_description(clean_lines)
def append_footer(self, line):
"""Adds a footer line to the description.
Differentiates legacy "KEY=xxx" footers (used to be called tags) and
Gerrit's footers in the form of "Footer-Key: footer any value" and ensures
that Gerrit footers are always at the end.
"""
parsed_footer_line = git_footers.parse_footer(line)
if parsed_footer_line:
# Line is a gerrit footer in the form: Footer-Key: any value.
# Thus, must be appended observing Gerrit footer rules.
self.set_description(
git_footers.add_footer(self.description,
key=parsed_footer_line[0],
value=parsed_footer_line[1]))
return
if not self._description_lines:
self._description_lines.append(line)
return
top_lines, gerrit_footers, _ = git_footers.split_footers(self.description)
if gerrit_footers:
# git_footers.split_footers ensures that there is an empty line before
# actual (gerrit) footers, if any. We have to keep it that way.
assert top_lines and top_lines[-1] == ''
top_lines, separator = top_lines[:-1], top_lines[-1:]
else:
separator = [] # No need for separator if there are no gerrit_footers.
prev_line = top_lines[-1] if top_lines else ''
if (not presubmit_support.Change.TAG_LINE_RE.match(prev_line) or
not presubmit_support.Change.TAG_LINE_RE.match(line)):
top_lines.append('')
top_lines.append(line)
self._description_lines = top_lines + separator + gerrit_footers
def get_reviewers(self):
"""Retrieves the list of reviewers."""
matches = [re.match(self.R_LINE, line) for line in self._description_lines]
reviewers = [match.group(2).strip() for match in matches if match]
return cleanup_list(reviewers)
def get_approving_reviewers(props):
"""Retrieves the reviewers that approved a CL from the issue properties,
which must include the issue's messages.
Note that the list may contain reviewers that are not committers, and thus are
not considered by the CQ.
"""
return sorted(
set(
message['sender']
for message in props['messages']
if message['approval'] and message['sender'] in props['reviewers']
)
)
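# For illustration, the shape of the issue-properties payload this helper
# consumes can be sketched with fabricated data (a self-contained copy of the
# function above, so the snippet runs on its own; the emails are made up):

```python
def get_approving_reviewers(props):
    # Same logic as above: count only approvals from listed reviewers.
    return sorted(
        set(
            message['sender']
            for message in props['messages']
            if message['approval'] and message['sender'] in props['reviewers']
        )
    )

# Fabricated issue properties: one LGTM from a listed reviewer, one
# approval from a drive-by sender who is not in 'reviewers'.
props = {
    'reviewers': ['alice@chromium.org', 'bob@chromium.org'],
    'messages': [
        {'sender': 'alice@chromium.org', 'approval': True},
        {'sender': 'mallory@example.com', 'approval': True},
        {'sender': 'bob@chromium.org', 'approval': False},
    ],
}
print(get_approving_reviewers(props))  # ['alice@chromium.org']
```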
def FindCodereviewSettingsFile(filename='codereview.settings'):
"""Finds the given file starting in the cwd and going up.
Only looks up to the top of the repository unless an
'inherit-review-settings-ok' file exists in the root of the repository.
"""
inherit_ok_file = 'inherit-review-settings-ok'
cwd = os.getcwd()
root = settings.GetRoot()
if os.path.isfile(os.path.join(root, inherit_ok_file)):
root = '/'
while True:
if filename in os.listdir(cwd):
if os.path.isfile(os.path.join(cwd, filename)):
return open(os.path.join(cwd, filename))
if cwd == root:
break
cwd = os.path.dirname(cwd)
def LoadCodereviewSettingsFromFile(fileobj):
"""Parse a codereview.settings file and updates hooks."""
keyvals = gclient_utils.ParseCodereviewSettingsContent(fileobj.read())
def SetProperty(name, setting, unset_error_ok=False):
fullname = 'rietveld.' + name
if setting in keyvals:
RunGit(['config', fullname, keyvals[setting]])
else:
RunGit(['config', '--unset-all', fullname], error_ok=unset_error_ok)
SetProperty('server', 'CODE_REVIEW_SERVER')
# Only server setting is required. Other settings can be absent.
# In that case, we ignore errors raised during option deletion attempt.
SetProperty('cc', 'CC_LIST', unset_error_ok=True)
SetProperty('private', 'PRIVATE', unset_error_ok=True)
SetProperty('tree-status-url', 'STATUS', unset_error_ok=True)
SetProperty('viewvc-url', 'VIEW_VC', unset_error_ok=True)
SetProperty('bug-prefix', 'BUG_PREFIX', unset_error_ok=True)
SetProperty('cpplint-regex', 'LINT_REGEX', unset_error_ok=True)
SetProperty('force-https-commit-url', 'FORCE_HTTPS_COMMIT_URL',
unset_error_ok=True)
SetProperty('cpplint-ignore-regex', 'LINT_IGNORE_REGEX', unset_error_ok=True)
SetProperty('project', 'PROJECT', unset_error_ok=True)
SetProperty('pending-ref-prefix', 'PENDING_REF_PREFIX', unset_error_ok=True)
SetProperty('run-post-upload-hook', 'RUN_POST_UPLOAD_HOOK',
unset_error_ok=True)
if 'GERRIT_HOST' in keyvals:
RunGit(['config', 'gerrit.host', keyvals['GERRIT_HOST']])
if 'GERRIT_SQUASH_UPLOADS' in keyvals:
RunGit(['config', 'gerrit.squash-uploads',
keyvals['GERRIT_SQUASH_UPLOADS']])
if 'GERRIT_SKIP_ENSURE_AUTHENTICATED' in keyvals:
RunGit(['config', 'gerrit.skip-ensure-authenticated',
keyvals['GERRIT_SKIP_ENSURE_AUTHENTICATED']])
if 'PUSH_URL_CONFIG' in keyvals and 'ORIGIN_URL_CONFIG' in keyvals:
# Should be of the form:
# PUSH_URL_CONFIG: url.ssh://gitrw.chromium.org.pushinsteadof
# ORIGIN_URL_CONFIG: http://src.chromium.org/git
RunGit(['config', keyvals['PUSH_URL_CONFIG'],
keyvals['ORIGIN_URL_CONFIG']])
def urlretrieve(source, destination):
"""urllib is broken for SSL connections via a proxy, so we
can't use urllib.urlretrieve()."""
with open(destination, 'w') as f:
f.write(urllib2.urlopen(source).read())
def hasSheBang(fname):
"""Checks whether fname is a #! script."""
with open(fname) as f:
return f.read(2).startswith('#!')
# TODO(bpastene) Remove once a cleaner fix to crbug.com/600473 presents itself.
def DownloadHooks(*args, **kwargs):
pass
def DownloadGerritHook(force):
"""Download and install Gerrit commit-msg hook.
Args:
force: True to update hooks. False to install hooks if not present.
"""
if not settings.GetIsGerrit():
return
src = 'https://gerrit-review.googlesource.com/tools/hooks/commit-msg'
dst = os.path.join(settings.GetRoot(), '.git', 'hooks', 'commit-msg')
if not os.access(dst, os.X_OK):
if os.path.exists(dst):
if not force:
return
try:
urlretrieve(src, dst)
if not hasSheBang(dst):
DieWithError('Not a script: %s\n'
'You need to download from\n%s\n'
'into .git/hooks/commit-msg and '
'chmod +x .git/hooks/commit-msg' % (dst, src))
os.chmod(dst, stat.S_IRUSR | stat.S_IWUSR | stat.S_IXUSR)
except Exception:
if os.path.exists(dst):
os.remove(dst)
DieWithError('\nFailed to download hooks.\n'
'You need to download from\n%s\n'
'into .git/hooks/commit-msg and '
'chmod +x .git/hooks/commit-msg' % src)
def GetRietveldCodereviewSettingsInteractively():
"""Prompt the user for settings."""
server = settings.GetDefaultServerUrl(error_ok=True)
prompt = 'Rietveld server (host[:port])'
prompt += ' [%s]' % (server or DEFAULT_SERVER)
newserver = ask_for_data(prompt + ':')
if not server and not newserver:
newserver = DEFAULT_SERVER
if newserver:
newserver = gclient_utils.UpgradeToHttps(newserver)
if newserver != server:
RunGit(['config', 'rietveld.server', newserver])
def SetProperty(initial, caption, name, is_url):
prompt = caption
if initial:
prompt += ' ("x" to clear) [%s]' % initial
new_val = ask_for_data(prompt + ':')
if new_val == 'x':
RunGit(['config', '--unset-all', 'rietveld.' + name], error_ok=True)
elif new_val:
if is_url:
new_val = gclient_utils.UpgradeToHttps(new_val)
if new_val != initial:
RunGit(['config', 'rietveld.' + name, new_val])
SetProperty(settings.GetDefaultCCList(), 'CC list', 'cc', False)
SetProperty(settings.GetDefaultPrivateFlag(),
'Private flag (rietveld only)', 'private', False)
SetProperty(settings.GetTreeStatusUrl(error_ok=True), 'Tree status URL',
'tree-status-url', False)
SetProperty(settings.GetViewVCUrl(), 'ViewVC URL', 'viewvc-url', True)
SetProperty(settings.GetBugPrefix(), 'Bug Prefix', 'bug-prefix', False)
SetProperty(settings.GetRunPostUploadHook(), 'Run Post Upload Hook',
'run-post-upload-hook', False)
@subcommand.usage('[repo root containing codereview.settings]')
def CMDconfig(parser, args):
"""Edits configuration for this tree."""
print('WARNING: git cl config works for Rietveld only.\n'
'For Gerrit, see http://crbug.com/603116.')
# TODO(tandrii): add Gerrit support as part of http://crbug.com/603116.
parser.add_option('--activate-update', action='store_true',
help='activate auto-updating [rietveld] section in '
'.git/config')
parser.add_option('--deactivate-update', action='store_true',
help='deactivate auto-updating [rietveld] section in '
'.git/config')
options, args = parser.parse_args(args)
if options.deactivate_update:
RunGit(['config', 'rietveld.autoupdate', 'false'])
return
if options.activate_update:
RunGit(['config', '--unset', 'rietveld.autoupdate'])
return
if len(args) == 0:
GetRietveldCodereviewSettingsInteractively()
return 0
url = args[0]
if not url.endswith('codereview.settings'):
url = os.path.join(url, 'codereview.settings')
# Load code review settings and download hooks (if available).
LoadCodereviewSettingsFromFile(urllib2.urlopen(url))
return 0
def CMDbaseurl(parser, args):
"""Gets or sets base-url for this branch."""
branchref = RunGit(['symbolic-ref', 'HEAD']).strip()
branch = ShortBranchName(branchref)
_, args = parser.parse_args(args)
if not args:
print('Current base-url:')
return RunGit(['config', 'branch.%s.base-url' % branch],
error_ok=False).strip()
else:
print('Setting base-url to %s' % args[0])
return RunGit(['config', 'branch.%s.base-url' % branch, args[0]],
error_ok=False).strip()
def color_for_status(status):
"""Maps a Changelist status to color, for CMDstatus and other tools."""
return {
'unsent': Fore.RED,
'waiting': Fore.BLUE,
'reply': Fore.YELLOW,
'lgtm': Fore.GREEN,
'commit': Fore.MAGENTA,
'closed': Fore.CYAN,
'error': Fore.WHITE,
}.get(status, Fore.WHITE)
def get_cl_statuses(changes, fine_grained, max_processes=None):
"""Returns a blocking iterable of (cl, status) for given branches.
If fine_grained is true, this will fetch CL statuses from the server.
Otherwise, simply indicate if there's a matching url for the given branches.
If max_processes is specified, it is used as the maximum number of processes
to spawn to fetch CL status from the server. Otherwise 1 process per branch is
spawned.
See GetStatus() for a list of possible statuses.
"""
# Silence upload.py, otherwise it becomes unwieldy.
upload.verbosity = 0
if fine_grained:
# Process one branch synchronously to work through authentication, then
# spawn processes to process all the other branches in parallel.
if changes:
fetch = lambda cl: (cl, cl.GetStatus())
yield fetch(changes[0])
changes_to_fetch = changes[1:]
if not changes_to_fetch:
  # Exit early if there was only one branch to fetch.
  return
pool = ThreadPool(
min(max_processes, len(changes_to_fetch))
if max_processes is not None
else len(changes_to_fetch))
fetched_cls = set()
it = pool.imap_unordered(fetch, changes_to_fetch).__iter__()
while True:
try:
row = it.next(timeout=5)
except multiprocessing.TimeoutError:
break
fetched_cls.add(row[0])
yield row
# Add any branches that failed to fetch.
for cl in set(changes_to_fetch) - fetched_cls:
yield (cl, 'error')
else:
# Do not use GetApprovingReviewers(), since it requires an HTTP request.
for cl in changes:
yield (cl, 'waiting' if cl.GetIssueURL() else 'error')
def upload_branch_deps(cl, args):
"""Uploads CLs of local branches that are dependents of the current branch.
If the local branch dependency tree looks like:
test1 -> test2.1 -> test3.1
-> test3.2
-> test2.2 -> test3.3
and you run "git cl upload --dependencies" from test1 then "git cl upload" is
run on the dependent branches in this order:
test2.1, test3.1, test3.2, test2.2, test3.3
Note: This function does not rebase your local dependent branches. Use it when
you make a change to the parent branch that will not conflict with its
dependent branches, and you would like their dependencies updated in
Rietveld.
"""
if git_common.is_dirty_git_tree('upload-branch-deps'):
return 1
root_branch = cl.GetBranch()
if root_branch is None:
DieWithError('Can\'t find dependent branches from detached HEAD state. '
'Get on a branch!')
if not cl.GetIssue() or not cl.GetPatchset():
DieWithError('Current branch does not have an uploaded CL. We cannot set '
'patchset dependencies without an uploaded CL.')
branches = RunGit(['for-each-ref',
'--format=%(refname:short) %(upstream:short)',
'refs/heads'])
if not branches:
print('No local branches found.')
return 0
# Create a dictionary of all local branches to the branches that are dependent
# on it.
tracked_to_dependents = collections.defaultdict(list)
for b in branches.splitlines():
tokens = b.split()
if len(tokens) == 2:
branch_name, tracked = tokens
tracked_to_dependents[tracked].append(branch_name)
print()
print('The dependent local branches of %s are:' % root_branch)
dependents = []
def traverse_dependents_preorder(branch, padding=''):
dependents_to_process = tracked_to_dependents.get(branch, [])
padding += ' '
for dependent in dependents_to_process:
print('%s%s' % (padding, dependent))
dependents.append(dependent)
traverse_dependents_preorder(dependent, padding)
traverse_dependents_preorder(root_branch)
print()
if not dependents:
print('There are no dependent local branches for %s' % root_branch)
return 0
print('This command will checkout all dependent branches and run '
'"git cl upload".')
ask_for_data('[Press enter to continue or ctrl-C to quit]')
# Add a default patchset title to all upload calls in Rietveld.
if not cl.IsGerrit():
args.extend(['-t', 'Updated patchset dependency'])
# Record all dependents that failed to upload.
failures = {}
# Go through all dependents, checkout the branch and upload.
try:
for dependent_branch in dependents:
print()
print('--------------------------------------')
print('Running "git cl upload" from %s:' % dependent_branch)
RunGit(['checkout', '-q', dependent_branch])
print()
try:
if CMDupload(OptionParser(), args) != 0:
print('Upload failed for %s!' % dependent_branch)
failures[dependent_branch] = 1
except: # pylint: disable=W0702
failures[dependent_branch] = 1
print()
finally:
# Swap back to the original root branch.
RunGit(['checkout', '-q', root_branch])
print()
print('Upload complete for dependent branches!')
for dependent_branch in dependents:
upload_status = 'failed' if failures.get(dependent_branch) else 'succeeded'
print(' %s : %s' % (dependent_branch, upload_status))
print()
return 0
def CMDarchive(parser, args):
"""Archives and deletes branches associated with closed changelists."""
parser.add_option(
'-j', '--maxjobs', action='store', type=int,
help='The maximum number of jobs to use when retrieving review status')
parser.add_option(
'-f', '--force', action='store_true',
help='Bypasses the confirmation prompt.')
auth.add_auth_options(parser)
options, args = parser.parse_args(args)
if args:
parser.error('Unsupported args: %s' % ' '.join(args))
auth_config = auth.extract_auth_config_from_options(options)
branches = RunGit(['for-each-ref', '--format=%(refname)', 'refs/heads'])
if not branches:
return 0
print('Finding all branches associated with closed issues...')
changes = [Changelist(branchref=b, auth_config=auth_config)
for b in branches.splitlines()]
alignment = max(5, max(len(c.GetBranch()) for c in changes))
statuses = get_cl_statuses(changes,
fine_grained=True,
max_processes=options.maxjobs)
proposal = [(cl.GetBranch(),
'git-cl-archived-%s-%s' % (cl.GetIssue(), cl.GetBranch()))
for cl, status in statuses
if status == 'closed']
proposal.sort()
if not proposal:
print('No branches with closed codereview issues found.')
return 0
current_branch = GetCurrentBranch()
print('\nBranches with closed issues that will be archived:\n')
print('%*s | %s' % (alignment, 'Branch name', 'Archival tag name'))
for next_item in proposal:
print('%*s %s' % (alignment, next_item[0], next_item[1]))
if any(branch == current_branch for branch, _ in proposal):
print('You are currently on a branch \'%s\' which is associated with a '
'closed codereview issue, so archive cannot proceed. Please '
'checkout another branch and run this command again.' %
current_branch)
return 1
if not options.force:
answer = ask_for_data('\nProceed with deletion (Y/n)? ').lower()
if answer not in ('y', ''):
print('Aborted.')
return 1
for branch, tagname in proposal:
RunGit(['tag', tagname, branch])
RunGit(['branch', '-D', branch])
print('\nJob\'s done!')
return 0
def CMDstatus(parser, args):
"""Show status of changelists.
Colors are used to tell the state of the CL unless --fast is used:
- Red not sent for review or broken
- Blue waiting for review
- Yellow waiting for you to reply to review
- Green LGTM'ed
- Magenta in the commit queue
- Cyan was committed, branch can be deleted
Also see 'git cl comments'.
"""
parser.add_option('--field',
help='print only specific field (desc|id|patch|url)')
parser.add_option('-f', '--fast', action='store_true',
help='Do not retrieve review status')
parser.add_option(
'-j', '--maxjobs', action='store', type=int,
help='The maximum number of jobs to use when retrieving review status')
auth.add_auth_options(parser)
options, args = parser.parse_args(args)
if args:
parser.error('Unsupported args: %s' % args)
auth_config = auth.extract_auth_config_from_options(options)
if options.field:
cl = Changelist(auth_config=auth_config)
if options.field.startswith('desc'):
print(cl.GetDescription())
elif options.field == 'id':
issueid = cl.GetIssue()
if issueid:
print(issueid)
elif options.field == 'patch':
patchset = cl.GetPatchset()
if patchset:
print(patchset)
elif options.field == 'url':
url = cl.GetIssueURL()
if url:
print(url)
return 0
branches = RunGit(['for-each-ref', '--format=%(refname)', 'refs/heads'])
if not branches:
print('No local branch found.')
return 0
changes = [
Changelist(branchref=b, auth_config=auth_config)
for b in branches.splitlines()]
print('Branches associated with reviews:')
output = get_cl_statuses(changes,
fine_grained=not options.fast,
max_processes=options.maxjobs)
branch_statuses = {}
alignment = max(5, max(len(ShortBranchName(c.GetBranch())) for c in changes))
for cl in sorted(changes, key=lambda c: c.GetBranch()):
branch = cl.GetBranch()
while branch not in branch_statuses:
c, status = output.next()
branch_statuses[c.GetBranch()] = status
status = branch_statuses.pop(branch)
url = cl.GetIssueURL()
if url and (not status or status == 'error'):
# The issue probably doesn't exist anymore.
url += ' (broken)'
color = color_for_status(status)
reset = Fore.RESET
if not setup_color.IS_TTY:
color = ''
reset = ''
status_str = '(%s)' % status if status else ''
print(' %*s : %s%s %s%s' % (
alignment, ShortBranchName(branch), color, url,
status_str, reset))
cl = Changelist(auth_config=auth_config)
print()
print('Current branch:')
print(cl.GetBranch())
if not cl.GetIssue():
print('No issue assigned.')
return 0
print('Issue number: %s (%s)' % (cl.GetIssue(), cl.GetIssueURL()))
if not options.fast:
print('Issue description:')
print(cl.GetDescription(pretty=True))
return 0
def colorize_CMDstatus_doc():
"""To be called once in main() to add colors to git cl status help."""
colors = [i for i in dir(Fore) if i[0].isupper()]
def colorize_line(line):
for color in colors:
if color in line.upper():
# Extract whitespaces first and the leading '-'.
indent = len(line) - len(line.lstrip(' ')) + 1
return line[:indent] + getattr(Fore, color) + line[indent:] + Fore.RESET
return line
lines = CMDstatus.__doc__.splitlines()
CMDstatus.__doc__ = '\n'.join(colorize_line(l) for l in lines)
@subcommand.usage('[issue_number]')
def CMDissue(parser, args):
"""Sets or displays the current code review issue number.
Pass issue number 0 to clear the current issue.
"""
parser.add_option('-r', '--reverse', action='store_true',
help='Lookup the branch(es) for the specified issues. If '
'no issues are specified, all branches with mapped '
'issues will be listed.')
_add_codereview_select_options(parser)
options, args = parser.parse_args(args)
_process_codereview_select_options(parser, options)
if options.reverse:
branches = RunGit(['for-each-ref', 'refs/heads',
'--format=%(refname:short)']).splitlines()
# Reverse issue lookup.
issue_branch_map = {}
for branch in branches:
cl = Changelist(branchref=branch)
issue_branch_map.setdefault(cl.GetIssue(), []).append(branch)
if not args:
args = sorted(issue_branch_map.iterkeys())
for issue in args:
if not issue:
continue
print('Branch for issue number %s: %s' % (
issue, ', '.join(issue_branch_map.get(int(issue)) or ('None',))))
else:
cl = Changelist(codereview=options.forced_codereview)
if len(args) > 0:
try:
issue = int(args[0])
except ValueError:
DieWithError('Pass a number to set the issue or none to list it.\n'
'Maybe you want to run git cl status?')
cl.SetIssue(issue)
print('Issue number: %s (%s)' % (cl.GetIssue(), cl.GetIssueURL()))
return 0
def CMDcomments(parser, args):
"""Shows or posts review comments for any changelist."""
parser.add_option('-a', '--add-comment', dest='comment',
help='comment to add to an issue')
parser.add_option('-i', dest='issue',
help="review issue id (defaults to current issue)")
parser.add_option('-j', '--json-file',
help='File to write JSON summary to')
auth.add_auth_options(parser)
options, args = parser.parse_args(args)
auth_config = auth.extract_auth_config_from_options(options)
issue = None
if options.issue:
try:
issue = int(options.issue)
except ValueError:
DieWithError('A review issue id is expected to be a number')
cl = Changelist(issue=issue, codereview='rietveld', auth_config=auth_config)
if options.comment:
cl.AddComment(options.comment)
return 0
data = cl.GetIssueProperties()
summary = []
for message in sorted(data.get('messages', []), key=lambda x: x['date']):
summary.append({
'date': message['date'],
'lgtm': False,
'message': message['text'],
'not_lgtm': False,
'sender': message['sender'],
})
if message['disapproval']:
color = Fore.RED
summary[-1]['not_lgtm'] = True
elif message['approval']:
color = Fore.GREEN
summary[-1]['lgtm'] = True
elif message['sender'] == data['owner_email']:
color = Fore.MAGENTA
else:
color = Fore.BLUE
print('\n%s%s %s%s' % (
color, message['date'].split('.', 1)[0], message['sender'],
Fore.RESET))
if message['text'].strip():
print('\n'.join(' ' + l for l in message['text'].splitlines()))
if options.json_file:
with open(options.json_file, 'wb') as f:
json.dump(summary, f)
return 0
@subcommand.usage('[codereview url or issue id]')
def CMDdescription(parser, args):
"""Brings up the editor for the current CL's description."""
parser.add_option('-d', '--display', action='store_true',
help='Display the description instead of opening an editor')
parser.add_option('-n', '--new-description',
help='New description to set for this issue (- for stdin)')
_add_codereview_select_options(parser)
auth.add_auth_options(parser)
options, args = parser.parse_args(args)
_process_codereview_select_options(parser, options)
target_issue = None
if len(args) > 0:
issue_arg = ParseIssueNumberArgument(args[0])
if not issue_arg.valid:
parser.print_help()
return 1
target_issue = issue_arg.issue
auth_config = auth.extract_auth_config_from_options(options)
cl = Changelist(
auth_config=auth_config, issue=target_issue,
codereview=options.forced_codereview)
if not cl.GetIssue():
DieWithError('This branch has no associated changelist.')
description = ChangeDescription(cl.GetDescription())
if options.display:
print(description.description)
return 0
if options.new_description:
text = options.new_description
if text == '-':
text = '\n'.join(l.rstrip() for l in sys.stdin)
description.set_description(text)
else:
description.prompt()
if cl.GetDescription() != description.description:
cl.UpdateDescription(description.description)
return 0
def CreateDescriptionFromLog(args):
"""Pulls out the commit log to use as a base for the CL description."""
log_args = []
if len(args) == 1 and not args[0].endswith('.'):
log_args = [args[0] + '..']
elif len(args) == 1 and args[0].endswith('...'):
log_args = [args[0][:-1]]
elif len(args) == 2:
log_args = [args[0] + '..' + args[1]]
else:
log_args = args[:] # Hope for the best!
return RunGit(['log', '--pretty=format:%s\n\n%b'] + log_args)
def CMDlint(parser, args):
"""Runs cpplint on the current changelist."""
parser.add_option('--filter', action='append', metavar='-x,+y',
help='Comma-separated list of cpplint\'s category-filters')
auth.add_auth_options(parser)
options, args = parser.parse_args(args)
auth_config = auth.extract_auth_config_from_options(options)
# Access to a protected member _XX of a client class
# pylint: disable=W0212
try:
import cpplint
import cpplint_chromium
except ImportError:
print('Your depot_tools is missing cpplint.py and/or cpplint_chromium.py.')
return 1
# Change the current working directory before calling lint so that it
# shows the correct base.
previous_cwd = os.getcwd()
os.chdir(settings.GetRoot())
try:
cl = Changelist(auth_config=auth_config)
change = cl.GetChange(cl.GetCommonAncestorWithUpstream(), None)
files = [f.LocalPath() for f in change.AffectedFiles()]
if not files:
print('Cannot lint an empty CL')
return 1
# Process cpplints arguments if any.
command = args + files
if options.filter:
command = ['--filter=' + ','.join(options.filter)] + command
filenames = cpplint.ParseArguments(command)
white_regex = re.compile(settings.GetLintRegex())
black_regex = re.compile(settings.GetLintIgnoreRegex())
extra_check_functions = [cpplint_chromium.CheckPointerDeclarationWhitespace]
for filename in filenames:
if white_regex.match(filename):
if black_regex.match(filename):
print('Ignoring file %s' % filename)
else:
cpplint.ProcessFile(filename, cpplint._cpplint_state.verbose_level,
extra_check_functions)
else:
print('Skipping file %s' % filename)
finally:
os.chdir(previous_cwd)
print('Total errors found: %d\n' % cpplint._cpplint_state.error_count)
if cpplint._cpplint_state.error_count != 0:
return 1
return 0
def CMDpresubmit(parser, args):
"""Runs presubmit tests on the current changelist."""
parser.add_option('-u', '--upload', action='store_true',
help='Run upload hook instead of the push/dcommit hook')
parser.add_option('-f', '--force', action='store_true',
help='Run checks even if tree is dirty')
auth.add_auth_options(parser)
options, args = parser.parse_args(args)
auth_config = auth.extract_auth_config_from_options(options)
if not options.force and git_common.is_dirty_git_tree('presubmit'):
print('use --force to check even if tree is dirty.')
return 1
cl = Changelist(auth_config=auth_config)
if args:
base_branch = args[0]
else:
# Default to diffing against the common ancestor of the upstream branch.
base_branch = cl.GetCommonAncestorWithUpstream()
cl.RunHook(
committing=not options.upload,
may_prompt=False,
verbose=options.verbose,
change=cl.GetChange(base_branch, None))
return 0
def GenerateGerritChangeId(message):
"""Returns Ixxxxxx...xxx change id.
Works the same way as
https://gerrit-review.googlesource.com/tools/hooks/commit-msg
but can be called on demand on all platforms.
The basic idea is to generate git hash of a state of the tree, original commit
message, author/committer info and timestamps.
"""
lines = []
tree_hash = RunGitSilent(['write-tree'])
lines.append('tree %s' % tree_hash.strip())
code, parent = RunGitWithCode(['rev-parse', 'HEAD~0'], suppress_stderr=False)
if code == 0:
lines.append('parent %s' % parent.strip())
author = RunGitSilent(['var', 'GIT_AUTHOR_IDENT'])
lines.append('author %s' % author.strip())
committer = RunGitSilent(['var', 'GIT_COMMITTER_IDENT'])
lines.append('committer %s' % committer.strip())
lines.append('')
# Note: Gerrit's commit-msg hook actually cleans the message of some lines
# and whitespace. This code does not do that, but the difference clearly
# won't decrease entropy.
lines.append(message)
change_hash = RunCommand(['git', 'hash-object', '-t', 'commit', '--stdin'],
stdin='\n'.join(lines))
return 'I%s' % change_hash.strip()
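# The hashing step above can be reproduced standalone: `git hash-object -t
# commit --stdin` is SHA-1 over a "commit <size>\0" header followed by the
# payload. A minimal sketch, with fabricated tree/author lines (the real
# function gathers them from `git write-tree` and `git var` as shown above):

```python
import hashlib

def gerrit_change_id(lines):
    # Compute the object id the way `git hash-object -t commit --stdin`
    # does, and prefix the hex digest with 'I' to form a Change-Id.
    payload = '\n'.join(lines).encode('utf-8')
    header = ('commit %d\x00' % len(payload)).encode('ascii')
    return 'I%s' % hashlib.sha1(header + payload).hexdigest()

# Fabricated commit-object payload for illustration only.
lines = [
    'tree 4b825dc642cb6eb9a060e54bf8d69288fbee4904',
    'author A U Thor <author@example.com> 1234567890 +0000',
    'committer A U Thor <author@example.com> 1234567890 +0000',
    '',
    'Fix the widget frobnicator.',
]
change_id = gerrit_change_id(lines)
print(change_id)  # 'I' followed by 40 hex digits, stable for this input
```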
def GetTargetRef(remote, remote_branch, target_branch, pending_prefix):
"""Computes the remote branch ref to use for the CL.
Args:
remote (str): The git remote for the CL.
remote_branch (str): The git remote branch for the CL.
target_branch (str): The target branch specified by the user.
pending_prefix (str): The pending prefix from the settings.
"""
if not (remote and remote_branch):
return None
if target_branch:
# Canonicalize branch references to the equivalent local full symbolic
# refs, which are then translated into the remote full symbolic refs
# below.
if '/' not in target_branch:
remote_branch = 'refs/remotes/%s/%s' % (remote, target_branch)
else:
prefix_replacements = (
('^((refs/)?remotes/)?branch-heads/', 'refs/remotes/branch-heads/'),
('^((refs/)?remotes/)?%s/' % remote, 'refs/remotes/%s/' % remote),
('^(refs/)?heads/', 'refs/remotes/%s/' % remote),
)
match = None
for regex, replacement in prefix_replacements:
match = re.search(regex, target_branch)
if match:
remote_branch = target_branch.replace(match.group(0), replacement)
break
if not match:
# This is a branch path but not one we recognize; use as-is.
remote_branch = target_branch
elif remote_branch in REFS_THAT_ALIAS_TO_OTHER_REFS:
# Handle the refs that need to land in different refs.
remote_branch = REFS_THAT_ALIAS_TO_OTHER_REFS[remote_branch]
# Create the true path to the remote branch.
# Does the following translation:
# * refs/remotes/origin/refs/diff/test -> refs/diff/test
# * refs/remotes/origin/master -> refs/heads/master
# * refs/remotes/branch-heads/test -> refs/branch-heads/test
if remote_branch.startswith('refs/remotes/%s/refs/' % remote):
remote_branch = remote_branch.replace('refs/remotes/%s/' % remote, '')
elif remote_branch.startswith('refs/remotes/%s/' % remote):
remote_branch = remote_branch.replace('refs/remotes/%s/' % remote,
'refs/heads/')
elif remote_branch.startswith('refs/remotes/branch-heads'):
remote_branch = remote_branch.replace('refs/remotes/', 'refs/')
# If a pending prefix exists then replace refs/ with it.
if pending_prefix:
remote_branch = remote_branch.replace('refs/', pending_prefix)
return remote_branch
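# The three translations documented in the comment above can be exercised in
# isolation. This pure-string sketch copies only that final translation block
# (no target-branch canonicalization, aliasing, or pending prefix):

```python
def translate_remote_ref(remote_branch, remote='origin'):
    # Same rules as the final block of GetTargetRef:
    #   refs/remotes/origin/refs/diff/test -> refs/diff/test
    #   refs/remotes/origin/master         -> refs/heads/master
    #   refs/remotes/branch-heads/test     -> refs/branch-heads/test
    if remote_branch.startswith('refs/remotes/%s/refs/' % remote):
        return remote_branch.replace('refs/remotes/%s/' % remote, '')
    if remote_branch.startswith('refs/remotes/%s/' % remote):
        return remote_branch.replace('refs/remotes/%s/' % remote,
                                     'refs/heads/')
    if remote_branch.startswith('refs/remotes/branch-heads'):
        return remote_branch.replace('refs/remotes/', 'refs/')
    return remote_branch

print(translate_remote_ref('refs/remotes/origin/master'))
# refs/heads/master
```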
def cleanup_list(l):
"""Fixes a list so that comma separated items are put as individual items.
So that "--reviewers joe@c,john@c --reviewers joa@c" results in
options.reviewers == sorted(['joe@c', 'john@c', 'joa@c']).
"""
items = sum((i.split(',') for i in l), [])
stripped_items = (i.strip() for i in items)
return sorted(filter(None, stripped_items))
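# Concretely, the example from the docstring behaves like this (self-contained
# copy of the function so the snippet runs on its own):

```python
def cleanup_list(l):
    # Same flattening as above: split comma-joined entries, strip
    # whitespace, drop empties, and sort.
    items = sum((i.split(',') for i in l), [])
    stripped_items = (i.strip() for i in items)
    return sorted(filter(None, stripped_items))

print(cleanup_list(['joe@c,john@c', 'joa@c']))
# ['joa@c', 'joe@c', 'john@c']
```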
@subcommand.usage('[args to "git diff"]')
def CMDupload(parser, args):
"""Uploads the current changelist to codereview.
Can skip dependency patchset uploads for a branch by running:
git config branch.branch_name.skip-deps-uploads True
To unset run:
git config --unset branch.branch_name.skip-deps-uploads
Can also set the above globally by using the --global flag.
"""
parser.add_option('--bypass-hooks', action='store_true', dest='bypass_hooks',
help='bypass upload presubmit hook')
parser.add_option('--bypass-watchlists', action='store_true',
dest='bypass_watchlists',
help='bypass watchlists auto CC-ing reviewers')
parser.add_option('-f', action='store_true', dest='force',
help="force yes to questions (don't prompt)")
parser.add_option('-m', dest='message', help='message for patchset')
parser.add_option('--message-file', dest='message_file',
help='file which contains message for patchset')
parser.add_option('-t', dest='title',
help='title for patchset (Rietveld only)')
parser.add_option('-r', '--reviewers',
action='append', default=[],
help='reviewer email addresses')
parser.add_option('--cc',
action='append', default=[],
help='cc email addresses')
parser.add_option('-s', '--send-mail', action='store_true',
help='send email to reviewer immediately')
parser.add_option('--emulate_svn_auto_props',
'--emulate-svn-auto-props',
action="store_true",
dest="emulate_svn_auto_props",
help="Emulate Subversion's auto properties feature.")
parser.add_option('-c', '--use-commit-queue', action='store_true',
help='tell the commit queue to commit this patchset')
parser.add_option('--private', action='store_true',
help='set the review private (rietveld only)')
parser.add_option('--target_branch',
'--target-branch',
metavar='TARGET',
help='Apply CL to remote ref TARGET. ' +
'Default: remote branch head, or master')
parser.add_option('--squash', action='store_true',
help='Squash multiple commits into one (Gerrit only)')
parser.add_option('--no-squash', action='store_true',
help='Don\'t squash multiple commits into one ' +
'(Gerrit only)')
parser.add_option('--email', default=None,
help='email address to use to connect to Rietveld')
parser.add_option('--tbr-owners', dest='tbr_owners', action='store_true',
help='add a set of OWNERS to TBR')
parser.add_option('-d', '--cq-dry-run', dest='cq_dry_run',
action='store_true',
help='Send the patchset to do a CQ dry run right after '
'upload.')
parser.add_option('--dependencies', action='store_true',
help='Uploads CLs of all the local branches that depend on '
'the current branch')
orig_args = args
add_git_similarity(parser)
auth.add_auth_options(parser)
_add_codereview_select_options(parser)
(options, args) = parser.parse_args(args)
_process_codereview_select_options(parser, options)
auth_config = auth.extract_auth_config_from_options(options)
if git_common.is_dirty_git_tree('upload'):
return 1
options.reviewers = cleanup_list(options.reviewers)
options.cc = cleanup_list(options.cc)
if options.message_file:
if options.message:
parser.error('only one of --message and --message-file allowed.')
options.message = gclient_utils.FileRead(options.message_file)
options.message_file = None
# For sanity of test expectations, do this otherwise lazy-loaded lookup *now*.
settings.GetIsGerrit()
cl = Changelist(auth_config=auth_config, codereview=options.forced_codereview)
return cl.CMDUpload(options, args, orig_args)
def IsSubmoduleMergeCommit(ref):
# When submodules are added to the repo, we expect there to be a single
# non-git-svn merge commit at remote HEAD with a signature comment.
pattern = '^SVN changes up to revision [0-9]*$'
cmd = ['rev-list', '--merges', '--grep=%s' % pattern, '%s^!' % ref]
return RunGit(cmd) != ''
def SendUpstream(parser, args, cmd):
"""Common code for CMDland and CmdDCommit
In case of Gerrit, uses Gerrit REST api to "submit" the issue, which pushes
upstream and closes the issue automatically and atomically.
Otherwise (in case of Rietveld):
Squashes branch into a single commit.
Updates changelog with metadata (e.g. pointer to review).
Pushes/dcommits the code upstream.
Updates review and closes.
"""
parser.add_option('--bypass-hooks', action='store_true', dest='bypass_hooks',
help='bypass upload presubmit hook')
parser.add_option('-m', dest='message',
help="override review description")
parser.add_option('-f', action='store_true', dest='force',
help="force yes to questions (don't prompt)")
parser.add_option('-c', dest='contributor',
help="external contributor for patch (appended to " +
"description and used as author for git). Should be " +
"formatted as 'First Last <email@example.com>'")
add_git_similarity(parser)
auth.add_auth_options(parser)
(options, args) = parser.parse_args(args)
auth_config = auth.extract_auth_config_from_options(options)
cl = Changelist(auth_config=auth_config)
# TODO(tandrii): refactor this into _RietveldChangelistImpl method.
if cl.IsGerrit():
if options.message:
# This could be implemented, but it requires sending a new patch to
# Gerrit, as Gerrit unlike Rietveld versions messages with patchsets.
# Besides, Gerrit has the ability to change the commit message on submit
# automatically, thus there is no need to support this option (so far?).
parser.error('-m MESSAGE option is not supported for Gerrit.')
if options.contributor:
parser.error(
'-c CONTRIBUTOR option is not supported for Gerrit.\n'
          'Before uploading a commit to Gerrit, ensure its author field is '
'the contributor\'s "name <email>". If you can\'t upload such a '
          'commit for review, contact your repository admin and request '
'"Forge-Author" permission.')
return cl._codereview_impl.CMDLand(options.force, options.bypass_hooks,
options.verbose)
current = cl.GetBranch()
remote, upstream_branch = cl.FetchUpstreamTuple(cl.GetBranch())
if not settings.GetIsGitSvn() and remote == '.':
print()
print('Attempting to push branch %r into another local branch!' % current)
print()
print('Either reparent this branch on top of origin/master:')
print(' git reparent-branch --root')
print()
print('OR run `git rebase-update` if you think the parent branch is ')
print('already committed.')
print()
print(' Current parent: %r' % upstream_branch)
return 1
if not args or cmd == 'land':
# Default to merging against our best guess of the upstream branch.
args = [cl.GetUpstreamBranch()]
if options.contributor:
    if not re.match(r'^.*\s<\S+@\S+>$', options.contributor):
      print("Please provide contributor as 'First Last <email@example.com>'")
return 1
base_branch = args[0]
base_has_submodules = IsSubmoduleMergeCommit(base_branch)
if git_common.is_dirty_git_tree(cmd):
return 1
# This rev-list syntax means "show all commits not in my branch that
# are in base_branch".
upstream_commits = RunGit(['rev-list', '^' + cl.GetBranchRef(),
base_branch]).splitlines()
if upstream_commits:
print('Base branch "%s" has %d commits '
'not in this branch.' % (base_branch, len(upstream_commits)))
print('Run "git merge %s" before attempting to %s.' % (base_branch, cmd))
return 1
# This is the revision `svn dcommit` will commit on top of.
svn_head = None
if cmd == 'dcommit' or base_has_submodules:
svn_head = RunGit(['log', '--grep=^git-svn-id:', '-1',
'--pretty=format:%H'])
if cmd == 'dcommit':
# If the base_head is a submodule merge commit, the first parent of the
# base_head should be a git-svn commit, which is what we're interested in.
base_svn_head = base_branch
if base_has_submodules:
base_svn_head += '^1'
extra_commits = RunGit(['rev-list', '^' + svn_head, base_svn_head])
if extra_commits:
print('This branch has %d additional commits not upstreamed yet.'
% len(extra_commits.splitlines()))
print('Upstream "%s" or rebase this branch on top of the upstream trunk '
'before attempting to %s.' % (base_branch, cmd))
return 1
merge_base = RunGit(['merge-base', base_branch, 'HEAD']).strip()
if not options.bypass_hooks:
author = None
if options.contributor:
author = re.search(r'\<(.*)\>', options.contributor).group(1)
hook_results = cl.RunHook(
committing=True,
may_prompt=not options.force,
verbose=options.verbose,
change=cl.GetChange(merge_base, author))
if not hook_results.should_continue():
return 1
# Check the tree status if the tree status URL is set.
status = GetTreeStatus()
if 'closed' == status:
print('The tree is closed. Please wait for it to reopen. Use '
'"git cl %s --bypass-hooks" to commit on a closed tree.' % cmd)
return 1
elif 'unknown' == status:
print('Unable to determine tree status. Please verify manually and '
'use "git cl %s --bypass-hooks" to commit on a closed tree.' % cmd)
return 1
change_desc = ChangeDescription(options.message)
if not change_desc.description and cl.GetIssue():
change_desc = ChangeDescription(cl.GetDescription())
if not change_desc.description:
if not cl.GetIssue() and options.bypass_hooks:
change_desc = ChangeDescription(CreateDescriptionFromLog([merge_base]))
else:
print('No description set.')
print('Visit %s/edit to set it.' % (cl.GetIssueURL()))
return 1
# Keep a separate copy for the commit message, because the commit message
# contains the link to the Rietveld issue, while the Rietveld message contains
# the commit viewvc url.
if cl.GetIssue():
change_desc.update_reviewers(cl.GetApprovingReviewers())
commit_desc = ChangeDescription(change_desc.description)
if cl.GetIssue():
# Xcode won't linkify this URL unless there is a non-whitespace character
# after it. Add a period on a new line to circumvent this. Also add a space
# before the period to make sure that Gitiles continues to correctly resolve
# the URL.
commit_desc.append_footer('Review URL: %s .' % cl.GetIssueURL())
if options.contributor:
commit_desc.append_footer('Patch from %s.' % options.contributor)
print('Description:')
print(commit_desc.description)
branches = [merge_base, cl.GetBranchRef()]
if not options.force:
print_stats(options.similarity, options.find_copies, branches)
# We want to squash all this branch's commits into one commit with the proper
# description. We do this by doing a "reset --soft" to the base branch (which
# keeps the working copy the same), then dcommitting that. If origin/master
# has a submodule merge commit, we'll also need to cherry-pick the squashed
# commit onto a branch based on the git-svn head.
MERGE_BRANCH = 'git-cl-commit'
CHERRY_PICK_BRANCH = 'git-cl-cherry-pick'
# Delete the branches if they exist.
for branch in [MERGE_BRANCH, CHERRY_PICK_BRANCH]:
showref_cmd = ['show-ref', '--quiet', '--verify', 'refs/heads/%s' % branch]
result = RunGitWithCode(showref_cmd)
if result[0] == 0:
RunGit(['branch', '-D', branch])
# We might be in a directory that's present in this branch but not in the
# trunk. Move up to the top of the tree so that git commands that expect a
# valid CWD won't fail after we check out the merge branch.
rel_base_path = settings.GetRelativeRoot()
if rel_base_path:
os.chdir(rel_base_path)
# Stuff our change into the merge branch.
# We wrap in a try...finally block so if anything goes wrong,
# we clean up the branches.
retcode = -1
pushed_to_pending = False
pending_ref = None
revision = None
try:
RunGit(['checkout', '-q', '-b', MERGE_BRANCH])
RunGit(['reset', '--soft', merge_base])
if options.contributor:
RunGit(
[
'commit', '--author', options.contributor,
'-m', commit_desc.description,
])
else:
RunGit(['commit', '-m', commit_desc.description])
if base_has_submodules:
cherry_pick_commit = RunGit(['rev-list', 'HEAD^!']).rstrip()
RunGit(['branch', CHERRY_PICK_BRANCH, svn_head])
RunGit(['checkout', CHERRY_PICK_BRANCH])
RunGit(['cherry-pick', cherry_pick_commit])
if cmd == 'land':
remote, branch = cl.FetchUpstreamTuple(cl.GetBranch())
mirror = settings.GetGitMirror(remote)
pushurl = mirror.url if mirror else remote
pending_prefix = settings.GetPendingRefPrefix()
if not pending_prefix or branch.startswith(pending_prefix):
# If not using refs/pending/heads/* at all, or target ref is already set
# to pending, then push to the target ref directly.
retcode, output = RunGitWithCode(
['push', '--porcelain', pushurl, 'HEAD:%s' % branch])
pushed_to_pending = pending_prefix and branch.startswith(pending_prefix)
else:
# Cherry-pick the change on top of pending ref and then push it.
assert branch.startswith('refs/'), branch
assert pending_prefix[-1] == '/', pending_prefix
pending_ref = pending_prefix + branch[len('refs/'):]
retcode, output = PushToGitPending(pushurl, pending_ref, branch)
pushed_to_pending = (retcode == 0)
if retcode == 0:
revision = RunGit(['rev-parse', 'HEAD']).strip()
else:
# dcommit the merge branch.
cmd_args = [
'svn', 'dcommit',
'-C%s' % options.similarity,
'--no-rebase', '--rmdir',
]
if settings.GetForceHttpsCommitUrl():
# Allow forcing https commit URLs for some projects that don't allow
# committing to http URLs (like Google Code).
remote_url = cl.GetGitSvnRemoteUrl()
if urlparse.urlparse(remote_url).scheme == 'http':
remote_url = remote_url.replace('http://', 'https://')
cmd_args.append('--commit-url=%s' % remote_url)
_, output = RunGitWithCode(cmd_args)
if 'Committed r' in output:
revision = re.match(
'.*?\nCommitted r(\\d+)', output, re.DOTALL).group(1)
logging.debug(output)
finally:
# And then swap back to the original branch and clean up.
RunGit(['checkout', '-q', cl.GetBranch()])
RunGit(['branch', '-D', MERGE_BRANCH])
if base_has_submodules:
RunGit(['branch', '-D', CHERRY_PICK_BRANCH])
if not revision:
print('Failed to push. If this persists, please file a bug.')
return 1
killed = False
if pushed_to_pending:
try:
revision = WaitForRealCommit(remote, revision, base_branch, branch)
# We set pushed_to_pending to False, since it made it all the way to the
# real ref.
pushed_to_pending = False
except KeyboardInterrupt:
killed = True
if cl.GetIssue():
to_pending = ' to pending queue' if pushed_to_pending else ''
viewvc_url = settings.GetViewVCUrl()
if not to_pending:
if viewvc_url and revision:
change_desc.append_footer(
'Committed: %s%s' % (viewvc_url, revision))
elif revision:
change_desc.append_footer('Committed: %s' % (revision,))
print('Closing issue '
'(you may be prompted for your codereview password)...')
cl.UpdateDescription(change_desc.description)
cl.CloseIssue()
props = cl.GetIssueProperties()
patch_num = len(props['patchsets'])
comment = "Committed patchset #%d (id:%d)%s manually as %s" % (
patch_num, props['patchsets'][-1], to_pending, revision)
if options.bypass_hooks:
comment += ' (tree was closed).' if GetTreeStatus() == 'closed' else '.'
else:
comment += ' (presubmit successful).'
cl.RpcServer().add_comment(cl.GetIssue(), comment)
cl.SetIssue(None)
if pushed_to_pending:
_, branch = cl.FetchUpstreamTuple(cl.GetBranch())
print('The commit is in the pending queue (%s).' % pending_ref)
print('It will show up on %s in ~1 min, once it gets a Cr-Commit-Position '
'footer.' % branch)
hook = POSTUPSTREAM_HOOK_PATTERN % cmd
if os.path.isfile(hook):
RunCommand([hook, merge_base], error_ok=True)
return 1 if killed else 0
def WaitForRealCommit(remote, pushed_commit, local_base_ref, real_ref):
print()
print('Waiting for commit to be landed on %s...' % real_ref)
print('(If you are impatient, you may Ctrl-C once without harm)')
target_tree = RunGit(['rev-parse', '%s:' % pushed_commit]).strip()
current_rev = RunGit(['rev-parse', local_base_ref]).strip()
mirror = settings.GetGitMirror(remote)
loop = 0
while True:
sys.stdout.write('fetching (%d)... \r' % loop)
sys.stdout.flush()
loop += 1
if mirror:
mirror.populate()
RunGit(['retry', 'fetch', remote, real_ref], stderr=subprocess2.VOID)
to_rev = RunGit(['rev-parse', 'FETCH_HEAD']).strip()
commits = RunGit(['rev-list', '%s..%s' % (current_rev, to_rev)])
for commit in commits.splitlines():
if RunGit(['rev-parse', '%s:' % commit]).strip() == target_tree:
print('Found commit on %s' % real_ref)
return commit
current_rev = to_rev
def PushToGitPending(remote, pending_ref, upstream_ref):
"""Fetches pending_ref, cherry-picks current HEAD on top of it, pushes.
Returns:
(retcode of last operation, output log of last operation).
"""
assert pending_ref.startswith('refs/'), pending_ref
local_pending_ref = 'refs/git-cl/' + pending_ref[len('refs/'):]
cherry = RunGit(['rev-parse', 'HEAD']).strip()
code = 0
out = ''
max_attempts = 3
attempts_left = max_attempts
while attempts_left:
if attempts_left != max_attempts:
print('Retrying, %d attempts left...' % (attempts_left - 1,))
attempts_left -= 1
# Fetch. Retry fetch errors.
print('Fetching pending ref %s...' % pending_ref)
code, out = RunGitWithCode(
['retry', 'fetch', remote, '+%s:%s' % (pending_ref, local_pending_ref)])
if code:
print('Fetch failed with exit code %d.' % code)
if out.strip():
print(out.strip())
continue
# Try to cherry pick. Abort on merge conflicts.
print('Cherry-picking commit on top of pending ref...')
RunGitWithCode(['checkout', local_pending_ref], suppress_stderr=True)
code, out = RunGitWithCode(['cherry-pick', cherry])
if code:
print('Your patch doesn\'t apply cleanly to ref \'%s\', '
'the following files have merge conflicts:' % pending_ref)
print(RunGit(['diff', '--name-status', '--diff-filter=U']).strip())
print('Please rebase your patch and try again.')
RunGitWithCode(['cherry-pick', '--abort'])
return code, out
# Applied cleanly, try to push now. Retry on error (flake or non-ff push).
print('Pushing commit to %s... It can take a while.' % pending_ref)
code, out = RunGitWithCode(
['retry', 'push', '--porcelain', remote, 'HEAD:%s' % pending_ref])
if code == 0:
# Success.
print('Commit pushed to pending ref successfully!')
return code, out
print('Push failed with exit code %d.' % code)
if out.strip():
print(out.strip())
if IsFatalPushFailure(out):
print('Fatal push error. Make sure your .netrc credentials and git '
'user.email are correct and you have push access to the repo.')
return code, out
print('All attempts to push to pending ref failed.')
return code, out
def IsFatalPushFailure(push_stdout):
"""True if retrying push won't help."""
return '(prohibited by Gerrit)' in push_stdout
@subcommand.usage('[upstream branch to apply against]')
def CMDdcommit(parser, args):
"""Commits the current changelist via git-svn."""
if not settings.GetIsGitSvn():
if git_footers.get_footer_svn_id():
# If it looks like previous commits were mirrored with git-svn.
message = """This repository appears to be a git-svn mirror, but no
upstream SVN master is set. You probably need to run 'git auto-svn' once."""
else:
message = """This doesn't appear to be an SVN repository.
If your project has a true, writeable git repository, you probably want to run
'git cl land' instead.
If your project has a git mirror of an upstream SVN master, you probably need
to run 'git svn init'.
Using the wrong command might cause your commit to appear to succeed, and the
review to be closed, without actually landing upstream. If you choose to
proceed, please verify that the commit lands upstream as expected."""
print(message)
ask_for_data('[Press enter to dcommit or ctrl-C to quit]')
# TODO(tandrii): kill this post SVN migration with
# https://codereview.chromium.org/2076683002
print('WARNING: chrome infrastructure is migrating SVN repos to Git.\n'
        'Please let us know which project you are committing to:'
' http://crbug.com/600451')
return SendUpstream(parser, args, 'dcommit')
@subcommand.usage('[upstream branch to apply against]')
def CMDland(parser, args):
"""Commits the current changelist via git."""
if settings.GetIsGitSvn() or git_footers.get_footer_svn_id():
print('This appears to be an SVN repository.')
print('Are you sure you didn\'t mean \'git cl dcommit\'?')
print('(Ignore if this is the first commit after migrating from svn->git)')
ask_for_data('[Press enter to push or ctrl-C to quit]')
return SendUpstream(parser, args, 'land')
@subcommand.usage('<patch url or issue id or issue url>')
def CMDpatch(parser, args):
"""Patches in a code review."""
parser.add_option('-b', dest='newbranch',
help='create a new branch off trunk for the patch')
parser.add_option('-f', '--force', action='store_true',
help='with -b, clobber any existing branch')
parser.add_option('-d', '--directory', action='store', metavar='DIR',
help='Change to the directory DIR immediately, '
'before doing anything else. Rietveld only.')
parser.add_option('--reject', action='store_true',
help='failed patches spew .rej files rather than '
'attempting a 3-way merge. Rietveld only.')
parser.add_option('-n', '--no-commit', action='store_true', dest='nocommit',
help='don\'t commit after patch applies. Rietveld only.')
group = optparse.OptionGroup(
parser,
'Options for continuing work on the current issue uploaded from a '
'different clone (e.g. different machine). Must be used independently '
'from the other options. No issue number should be specified, and the '
'branch must have an issue number associated with it')
group.add_option('--reapply', action='store_true', dest='reapply',
help='Reset the branch and reapply the issue.\n'
'CAUTION: This will undo any local changes in this '
'branch')
group.add_option('--pull', action='store_true', dest='pull',
help='Performs a pull before reapplying.')
parser.add_option_group(group)
auth.add_auth_options(parser)
_add_codereview_select_options(parser)
(options, args) = parser.parse_args(args)
_process_codereview_select_options(parser, options)
auth_config = auth.extract_auth_config_from_options(options)
  if options.reapply:
if options.newbranch:
parser.error('--reapply works on the current branch only')
if len(args) > 0:
parser.error('--reapply implies no additional arguments')
cl = Changelist(auth_config=auth_config,
codereview=options.forced_codereview)
if not cl.GetIssue():
parser.error('current branch must have an associated issue')
upstream = cl.GetUpstreamBranch()
    if upstream is None:
parser.error('No upstream branch specified. Cannot reset branch')
RunGit(['reset', '--hard', upstream])
if options.pull:
RunGit(['pull'])
return cl.CMDPatchIssue(cl.GetIssue(), options.reject, options.nocommit,
options.directory)
if len(args) != 1 or not args[0]:
parser.error('Must specify issue number or url')
# We don't want uncommitted changes mixed up with the patch.
if git_common.is_dirty_git_tree('patch'):
return 1
if options.newbranch:
if options.force:
RunGit(['branch', '-D', options.newbranch],
stderr=subprocess2.PIPE, error_ok=True)
RunGit(['new-branch', options.newbranch])
cl = Changelist(auth_config=auth_config, codereview=options.forced_codereview)
if cl.IsGerrit():
if options.reject:
parser.error('--reject is not supported with Gerrit codereview.')
if options.nocommit:
parser.error('--nocommit is not supported with Gerrit codereview.')
if options.directory:
parser.error('--directory is not supported with Gerrit codereview.')
return cl.CMDPatchIssue(args[0], options.reject, options.nocommit,
options.directory)
def CMDrebase(parser, args):
"""Rebases current branch on top of svn repo."""
# Provide a wrapper for git svn rebase to help avoid accidental
# git svn dcommit.
# It's the only command that doesn't use parser at all since we just defer
# execution to git-svn.
return RunGitWithCode(['svn', 'rebase'] + args)[1]
def GetTreeStatus(url=None):
"""Fetches the tree status and returns either 'open', 'closed',
'unknown' or 'unset'."""
url = url or settings.GetTreeStatusUrl(error_ok=True)
if url:
status = urllib2.urlopen(url).read().lower()
if status.find('closed') != -1 or status == '0':
return 'closed'
elif status.find('open') != -1 or status == '1':
return 'open'
return 'unknown'
return 'unset'
def GetTreeStatusReason():
"""Fetches the tree status from a json url and returns the message
with the reason for the tree to be opened or closed."""
url = settings.GetTreeStatusUrl()
json_url = urlparse.urljoin(url, '/current?format=json')
connection = urllib2.urlopen(json_url)
status = json.loads(connection.read())
connection.close()
return status['message']
def GetBuilderMaster(bot_list):
"""For a given builder, fetch the master from AE if available."""
map_url = 'https://builders-map.appspot.com/'
try:
master_map = json.load(urllib2.urlopen(map_url))
except urllib2.URLError as e:
return None, ('Failed to fetch builder-to-master map from %s. Error: %s.' %
(map_url, e))
except ValueError as e:
return None, ('Invalid json string from %s. Error: %s.' % (map_url, e))
if not master_map:
return None, 'Failed to build master map.'
result_master = ''
for bot in bot_list:
builder = bot.split(':', 1)[0]
master_list = master_map.get(builder, [])
if not master_list:
return None, ('No matching master for builder %s.' % builder)
elif len(master_list) > 1:
return None, ('The builder name %s exists in multiple masters %s.' %
(builder, master_list))
else:
cur_master = master_list[0]
if not result_master:
result_master = cur_master
elif result_master != cur_master:
return None, 'The builders do not belong to the same master.'
return result_master, None
def CMDtree(parser, args):
"""Shows the status of the tree."""
_, args = parser.parse_args(args)
status = GetTreeStatus()
if 'unset' == status:
print('You must configure your tree status URL by running "git cl config".')
return 2
print('The tree is %s' % status)
print()
print(GetTreeStatusReason())
if status != 'open':
return 1
return 0
def CMDtry(parser, args):
"""Triggers try jobs through BuildBucket."""
group = optparse.OptionGroup(parser, "Try job options")
group.add_option(
"-b", "--bot", action="append",
help=("IMPORTANT: specify ONE builder per --bot flag. Use it multiple "
"times to specify multiple builders. ex: "
"'-b win_rel -b win_layout'. See "
"the try server waterfall for the builders name and the tests "
"available."))
group.add_option(
"-m", "--master", default='',
help=("Specify a try master where to run the tries."))
group.add_option( "--luci", action='store_true')
group.add_option(
"-r", "--revision",
help="Revision to use for the try job; default: the "
"revision will be determined by the try server; see "
"its waterfall for more info")
group.add_option(
"-c", "--clobber", action="store_true", default=False,
help="Force a clobber before building; e.g. don't do an "
"incremental build")
group.add_option(
"--project",
help="Override which project to use. Projects are defined "
"server-side to define what default bot set to use")
group.add_option(
"-p", "--property", dest="properties", action="append", default=[],
help="Specify generic properties in the form -p key1=value1 -p "
"key2=value2 etc (buildbucket only). The value will be treated as "
"json if decodable, or as string otherwise.")
group.add_option(
"-n", "--name", help="Try job name; default to current branch name")
group.add_option(
"--use-rietveld", action="store_true", default=False,
help="Use Rietveld to trigger try jobs.")
group.add_option(
"--buildbucket-host", default='cr-buildbucket.appspot.com',
help="Host of buildbucket. The default host is %default.")
parser.add_option_group(group)
auth.add_auth_options(parser)
options, args = parser.parse_args(args)
auth_config = auth.extract_auth_config_from_options(options)
if options.use_rietveld and options.properties:
parser.error('Properties can only be specified with buildbucket')
# Make sure that all properties are prop=value pairs.
bad_params = [x for x in options.properties if '=' not in x]
if bad_params:
parser.error('Got properties with missing "=": %s' % bad_params)
if args:
parser.error('Unknown arguments: %s' % args)
cl = Changelist(auth_config=auth_config)
if not cl.GetIssue():
parser.error('Need to upload first')
if cl.IsGerrit():
parser.error(
'Not yet supported for Gerrit (http://crbug.com/599931).\n'
'If your project has Commit Queue, dry run is a workaround:\n'
' git cl set-commit --dry-run')
# Code below assumes Rietveld issue.
# TODO(tandrii): actually implement for Gerrit http://crbug.com/599931.
props = cl.GetIssueProperties()
if props.get('closed'):
parser.error('Cannot send tryjobs for a closed CL')
if props.get('private'):
parser.error('Cannot use trybots with private issue')
if not options.name:
options.name = cl.GetBranch()
if options.bot and not options.master:
options.master, err_msg = GetBuilderMaster(options.bot)
if err_msg:
parser.error('Tryserver master cannot be found because: %s\n'
'Please manually specify the tryserver master'
', e.g. "-m tryserver.chromium.linux".' % err_msg)
def GetMasterMap():
# Process --bot.
if not options.bot:
change = cl.GetChange(cl.GetCommonAncestorWithUpstream(), None)
# Get try masters from PRESUBMIT.py files.
masters = presubmit_support.DoGetTryMasters(
change,
change.LocalPaths(),
settings.GetRoot(),
None,
None,
options.verbose,
sys.stdout)
if masters:
return masters
# Fall back to deprecated method: get try slaves from PRESUBMIT.py files.
options.bot = presubmit_support.DoGetTrySlaves(
change,
change.LocalPaths(),
settings.GetRoot(),
None,
None,
options.verbose,
sys.stdout)
if not options.bot:
# Get try masters from cq.cfg if any.
        # TODO(tandrii): some (but very few) projects store cq.cfg in a
        # different location.
cq_cfg = os.path.join(change.RepositoryRoot(),
'infra', 'config', 'cq.cfg')
if os.path.exists(cq_cfg):
masters = {}
cq_masters = commit_queue.get_master_builder_map(
cq_cfg, include_experimental=False, include_triggered=False)
for master, builders in cq_masters.iteritems():
for builder in builders:
# Skip presubmit builders, because these will fail without LGTM.
masters.setdefault(master, {})[builder] = ['defaulttests']
if masters:
print('Loaded default bots from CQ config (%s)' % cq_cfg)
return masters
else:
print('CQ config exists (%s) but has no try bots listed' % cq_cfg)
if not options.bot:
parser.error('No default try builder to try, use --bot')
builders_and_tests = {}
# TODO(machenbach): The old style command-line options don't support
# multiple try masters yet.
old_style = filter(lambda x: isinstance(x, basestring), options.bot)
new_style = filter(lambda x: isinstance(x, tuple), options.bot)
for bot in old_style:
if ':' in bot:
parser.error('Specifying testfilter is no longer supported')
elif ',' in bot:
parser.error('Specify one bot per --bot flag')
else:
builders_and_tests.setdefault(bot, [])
for bot, tests in new_style:
builders_and_tests.setdefault(bot, []).extend(tests)
# Return a master map with one master to be backwards compatible. The
# master name defaults to an empty string, which will cause the master
# not to be set on rietveld (deprecated).
return {options.master: builders_and_tests}
masters = GetMasterMap()
for builders in masters.itervalues():
if any('triggered' in b for b in builders):
print('ERROR You are trying to send a job to a triggered bot. This type '
'of bot requires an\ninitial job from a parent (usually a builder).'
' Instead send your job to the parent.\n'
'Bot list: %s' % builders, file=sys.stderr)
return 1
patchset = cl.GetMostRecentPatchset()
if patchset and patchset != cl.GetPatchset():
print(
'\nWARNING Mismatch between local config and server. Did a previous '
'upload fail?\ngit-cl try always uses latest patchset from rietveld. '
'Continuing using\npatchset %s.\n' % patchset)
if options.luci:
trigger_luci_job(cl, masters, options)
elif not options.use_rietveld:
try:
trigger_try_jobs(auth_config, cl, options, masters, 'git_cl_try')
except BuildbucketResponseException as ex:
print('ERROR: %s' % ex)
return 1
except Exception as e:
stacktrace = (''.join(traceback.format_stack()) + traceback.format_exc())
print('ERROR: Exception when trying to trigger tryjobs: %s\n%s' %
(e, stacktrace))
return 1
else:
try:
cl.RpcServer().trigger_distributed_try_jobs(
cl.GetIssue(), patchset, options.name, options.clobber,
options.revision, masters)
except urllib2.HTTPError as e:
if e.code == 404:
print('404 from rietveld; '
'did you mean to use "git try" instead of "git cl try"?')
return 1
print('Tried jobs on:')
for (master, builders) in sorted(masters.iteritems()):
if master:
print('Master: %s' % master)
length = max(len(builder) for builder in builders)
for builder in sorted(builders):
print(' %*s: %s' % (length, builder, ','.join(builders[builder])))
return 0
def CMDtry_results(parser, args):
  """Prints info about try jobs associated with the current CL."""
group = optparse.OptionGroup(parser, "Try job results options")
group.add_option(
"-p", "--patchset", type=int, help="patchset number if not current.")
group.add_option(
"--print-master", action='store_true', help="print master name as well.")
group.add_option(
"--color", action='store_true', default=setup_color.IS_TTY,
help="force color output, useful when piping output.")
group.add_option(
"--buildbucket-host", default='cr-buildbucket.appspot.com',
help="Host of buildbucket. The default host is %default.")
parser.add_option_group(group)
auth.add_auth_options(parser)
options, args = parser.parse_args(args)
if args:
parser.error('Unrecognized args: %s' % ' '.join(args))
auth_config = auth.extract_auth_config_from_options(options)
cl = Changelist(auth_config=auth_config)
if not cl.GetIssue():
parser.error('Need to upload first')
if not options.patchset:
options.patchset = cl.GetMostRecentPatchset()
if options.patchset and options.patchset != cl.GetPatchset():
print(
'\nWARNING Mismatch between local config and server. Did a previous '
'upload fail?\ngit-cl try always uses latest patchset from rietveld. '
'Continuing using\npatchset %s.\n' % options.patchset)
try:
jobs = fetch_try_jobs(auth_config, cl, options)
except BuildbucketResponseException as ex:
print('Buildbucket error: %s' % ex)
return 1
except Exception as e:
stacktrace = (''.join(traceback.format_stack()) + traceback.format_exc())
print('ERROR: Exception when trying to fetch tryjobs: %s\n%s' %
(e, stacktrace))
return 1
print_tryjobs(options, jobs)
return 0
@subcommand.usage('[new upstream branch]')
def CMDupstream(parser, args):
"""Prints or sets the name of the upstream branch, if any."""
_, args = parser.parse_args(args)
if len(args) > 1:
parser.error('Unrecognized args: %s' % ' '.join(args))
cl = Changelist()
if args:
# One arg means set upstream branch.
branch = cl.GetBranch()
RunGit(['branch', '--set-upstream', branch, args[0]])
cl = Changelist()
print('Upstream branch set to %s' % (cl.GetUpstreamBranch(),))
# Clear configured merge-base, if there is one.
git_common.remove_merge_base(branch)
else:
print(cl.GetUpstreamBranch())
return 0
def CMDweb(parser, args):
"""Opens the current CL in the web browser."""
_, args = parser.parse_args(args)
if args:
parser.error('Unrecognized args: %s' % ' '.join(args))
issue_url = Changelist().GetIssueURL()
if not issue_url:
print('ERROR No issue to open', file=sys.stderr)
return 1
webbrowser.open(issue_url)
return 0
def CMDset_commit(parser, args):
"""Sets the commit bit to trigger the Commit Queue."""
parser.add_option('-d', '--dry-run', action='store_true',
help='trigger in dry run mode')
parser.add_option('-c', '--clear', action='store_true',
help='stop CQ run, if any')
auth.add_auth_options(parser)
options, args = parser.parse_args(args)
auth_config = auth.extract_auth_config_from_options(options)
if args:
parser.error('Unrecognized args: %s' % ' '.join(args))
if options.dry_run and options.clear:
parser.error('Make up your mind: both --dry-run and --clear not allowed')
cl = Changelist(auth_config=auth_config)
if options.clear:
state = _CQState.CLEAR
elif options.dry_run:
state = _CQState.DRY_RUN
else:
state = _CQState.COMMIT
if not cl.GetIssue():
parser.error('Must upload the issue first')
cl.SetCQState(state)
return 0

def CMDset_close(parser, args):
  """Closes the issue."""
  auth.add_auth_options(parser)
  options, args = parser.parse_args(args)
  auth_config = auth.extract_auth_config_from_options(options)
  if args:
    parser.error('Unrecognized args: %s' % ' '.join(args))
  cl = Changelist(auth_config=auth_config)
  # Ensure there actually is an issue to close.
  cl.GetDescription()
  cl.CloseIssue()
  return 0

def CMDdiff(parser, args):
  """Shows differences between local tree and last upload."""
  auth.add_auth_options(parser)
  options, args = parser.parse_args(args)
  auth_config = auth.extract_auth_config_from_options(options)
  if args:
    parser.error('Unrecognized args: %s' % ' '.join(args))

  # Uncommitted (staged and unstaged) changes will be destroyed by
  # "git reset --hard" if there are merging conflicts in CMDPatchIssue().
  # Staged changes would be committed along with the patch from last
  # upload, hence counted toward the "last upload" side in the final
  # diff output, and this is not what we want.
  if git_common.is_dirty_git_tree('diff'):
    return 1

  cl = Changelist(auth_config=auth_config)
  issue = cl.GetIssue()
  branch = cl.GetBranch()
  if not issue:
    DieWithError('No issue found for current branch (%s)' % branch)
  TMP_BRANCH = 'git-cl-diff'
  base_branch = cl.GetCommonAncestorWithUpstream()

  # Create a new branch based on the merge-base
  RunGit(['checkout', '-q', '-b', TMP_BRANCH, base_branch])
  # Clear cached branch in cl object, to avoid overwriting original CL branch
  # properties.
  cl.ClearBranch()
  try:
    rtn = cl.CMDPatchIssue(issue, reject=False, nocommit=False, directory=None)
    if rtn != 0:
      RunGit(['reset', '--hard'])
      return rtn

    # Switch back to starting branch and diff against the temporary
    # branch containing the latest rietveld patch.
    subprocess2.check_call(['git', 'diff', TMP_BRANCH, branch, '--'])
  finally:
    RunGit(['checkout', '-q', branch])
    RunGit(['branch', '-D', TMP_BRANCH])

  return 0

def CMDowners(parser, args):
  """Interactively find the owners for reviewing."""
  parser.add_option(
      '--no-color',
      action='store_true',
      help='Use this option to disable color output')
  auth.add_auth_options(parser)
  options, args = parser.parse_args(args)
  auth_config = auth.extract_auth_config_from_options(options)

  author = RunGit(['config', 'user.email']).strip() or None

  cl = Changelist(auth_config=auth_config)

  if args:
    if len(args) > 1:
      parser.error('Unknown args')
    base_branch = args[0]
  else:
    # Default to diffing against the common ancestor of the upstream branch.
    base_branch = cl.GetCommonAncestorWithUpstream()

  change = cl.GetChange(base_branch, None)
  return owners_finder.OwnersFinder(
      [f.LocalPath() for f in
          cl.GetChange(base_branch, None).AffectedFiles()],
      change.RepositoryRoot(), author,
      fopen=file, os_path=os.path, glob=glob.glob,
      disable_color=options.no_color).run()

def BuildGitDiffCmd(diff_type, upstream_commit, args):
  """Generates a diff command."""
  # Generate diff for the current branch's changes.
  diff_cmd = ['diff', '--no-ext-diff', '--no-prefix', diff_type,
              upstream_commit, '--']

  if args:
    for arg in args:
      if os.path.isdir(arg) or os.path.isfile(arg):
        diff_cmd.append(arg)
      else:
        DieWithError('Argument "%s" is not a file or a directory' % arg)

  return diff_cmd

def MatchingFileType(file_name, extensions):
  """Returns true if the file name ends with one of the given extensions."""
  return bool([ext for ext in extensions if file_name.lower().endswith(ext)])
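# The suffix match above lowercases the file name before comparing, so the
# check is case-insensitive. A standalone sketch of the same logic (the name
# matching_file_type is local to this example; any() is equivalent to the
# bool-of-list-comprehension idiom used above):

```python
def matching_file_type(file_name, extensions):
    # True if the lowercased name ends with any of the given extensions.
    return any(file_name.lower().endswith(ext) for ext in extensions)


CLANG_EXTS = ['.cc', '.cpp', '.h', '.mm', '.proto', '.java']
# matching_file_type('Foo.CC', CLANG_EXTS) is True (case-insensitive).
# matching_file_type('foo.py', CLANG_EXTS) is False.
```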

@subcommand.usage('[files or directories to diff]')
def CMDformat(parser, args):
  """Runs auto-formatting tools (clang-format etc.) on the diff."""
  CLANG_EXTS = ['.cc', '.cpp', '.h', '.mm', '.proto', '.java']
  GN_EXTS = ['.gn', '.gni', '.typemap']
  parser.add_option('--full', action='store_true',
                    help='Reformat the full content of all touched files')
  parser.add_option('--dry-run', action='store_true',
                    help='Don\'t modify any file on disk.')
  parser.add_option('--python', action='store_true',
                    help='Format python code with yapf (experimental).')
  parser.add_option('--diff', action='store_true',
                    help='Print diff to stdout rather than modifying files.')
  opts, args = parser.parse_args(args)

  # git diff generates paths against the root of the repository. Change
  # to that directory so clang-format can find files even within subdirs.
  rel_base_path = settings.GetRelativeRoot()
  if rel_base_path:
    os.chdir(rel_base_path)

  # Grab the merge-base commit, i.e. the upstream commit of the current
  # branch when it was created or the last time it was rebased. This is
  # to cover the case where the user may have called "git fetch origin",
  # moving the origin branch to a newer commit, but hasn't rebased yet.
  upstream_commit = None
  cl = Changelist()
  upstream_branch = cl.GetUpstreamBranch()
  if upstream_branch:
    upstream_commit = RunGit(['merge-base', 'HEAD', upstream_branch])
    upstream_commit = upstream_commit.strip()

  if not upstream_commit:
    DieWithError('Could not find base commit for this branch. '
                 'Are you in detached state?')

  changed_files_cmd = BuildGitDiffCmd('--name-only', upstream_commit, args)
  diff_output = RunGit(changed_files_cmd)
  diff_files = diff_output.splitlines()
  # Filter out files deleted by this CL
  diff_files = [x for x in diff_files if os.path.isfile(x)]

  clang_diff_files = [x for x in diff_files if MatchingFileType(x, CLANG_EXTS)]
  python_diff_files = [x for x in diff_files if MatchingFileType(x, ['.py'])]
  dart_diff_files = [x for x in diff_files if MatchingFileType(x, ['.dart'])]
  gn_diff_files = [x for x in diff_files if MatchingFileType(x, GN_EXTS)]

  top_dir = os.path.normpath(
      RunGit(["rev-parse", "--show-toplevel"]).rstrip('\n'))

  # Set to 2 to signal to CheckPatchFormatted() that this patch isn't
  # formatted. This is used to block during the presubmit.
  return_value = 0

  if clang_diff_files:
    # Locate the clang-format binary in the checkout
    try:
      clang_format_tool = clang_format.FindClangFormatToolInChromiumTree()
    except clang_format.NotFoundError as e:
      DieWithError(e)

    if opts.full:
      cmd = [clang_format_tool]
      if not opts.dry_run and not opts.diff:
        cmd.append('-i')
      stdout = RunCommand(cmd + clang_diff_files, cwd=top_dir)
      if opts.diff:
        sys.stdout.write(stdout)
    else:
      env = os.environ.copy()
      env['PATH'] = str(os.path.dirname(clang_format_tool))
      try:
        script = clang_format.FindClangFormatScriptInChromiumTree(
            'clang-format-diff.py')
      except clang_format.NotFoundError as e:
        DieWithError(e)

      cmd = [sys.executable, script, '-p0']
      if not opts.dry_run and not opts.diff:
        cmd.append('-i')

      diff_cmd = BuildGitDiffCmd('-U0', upstream_commit, clang_diff_files)
      diff_output = RunGit(diff_cmd)

      stdout = RunCommand(cmd, stdin=diff_output, cwd=top_dir, env=env)
      if opts.diff:
        sys.stdout.write(stdout)
      if opts.dry_run and len(stdout) > 0:
        return_value = 2

  # Similar code to above, but using yapf on .py files rather than clang-format
  # on C/C++ files
  if opts.python:
    yapf_tool = gclient_utils.FindExecutable('yapf')
    if yapf_tool is None:
      DieWithError('yapf not found in PATH')

    if opts.full:
      if python_diff_files:
        cmd = [yapf_tool]
        if not opts.dry_run and not opts.diff:
          cmd.append('-i')
        stdout = RunCommand(cmd + python_diff_files, cwd=top_dir)
        if opts.diff:
          sys.stdout.write(stdout)
    else:
      # TODO(sbc): yapf --lines mode still has some issues.
      # https://github.com/google/yapf/issues/154
      DieWithError('--python currently only works with --full')

  # Dart's formatter does not have the nice property of only operating on
  # modified chunks, so hard code full.
  if dart_diff_files:
    try:
      command = [dart_format.FindDartFmtToolInChromiumTree()]
      if not opts.dry_run and not opts.diff:
        command.append('-w')
      command.extend(dart_diff_files)

      stdout = RunCommand(command, cwd=top_dir)
      if opts.dry_run and stdout:
        return_value = 2
    except dart_format.NotFoundError as e:
      print('Warning: Unable to check Dart code formatting. Dart SDK not '
            'found in this checkout. Files in other languages are still '
            'formatted.')

  # Format GN build files. Always run on full build files for canonical form.
  if gn_diff_files:
    cmd = ['gn', 'format']
    if not opts.dry_run and not opts.diff:
      cmd.append('--in-place')
    for gn_diff_file in gn_diff_files:
      stdout = RunCommand(cmd + [gn_diff_file],
                          shell=sys.platform == 'win32',
                          cwd=top_dir)
      if opts.diff:
        sys.stdout.write(stdout)

  return return_value

@subcommand.usage('<codereview url or issue id>')
def CMDcheckout(parser, args):
  """Checks out a branch associated with a given Rietveld or Gerrit issue."""
  _, args = parser.parse_args(args)
  if len(args) != 1:
    parser.print_help()
    return 1

  issue_arg = ParseIssueNumberArgument(args[0])
  if not issue_arg.valid:
    parser.print_help()
    return 1
  target_issue = str(issue_arg.issue)

  def find_issues(issueprefix):
    output = RunGit(['config', '--local', '--get-regexp',
                     r'branch\..*\.%s' % issueprefix],
                    error_ok=True)
    for key, issue in [x.split() for x in output.splitlines()]:
      if issue == target_issue:
        yield re.sub(r'branch\.(.*)\.%s' % issueprefix, r'\1', key)

  branches = []
  for cls in _CODEREVIEW_IMPLEMENTATIONS.values():
    branches.extend(find_issues(cls.IssueSettingSuffix()))
  if len(branches) == 0:
    print('No branch found for issue %s.' % target_issue)
    return 1
  if len(branches) == 1:
    RunGit(['checkout', branches[0]])
  else:
    print('Multiple branches match issue %s:' % target_issue)
    for i in range(len(branches)):
      print('%d: %s' % (i, branches[i]))
    which = raw_input('Choose by index: ')
    try:
      RunGit(['checkout', branches[int(which)]])
    except (IndexError, ValueError):
      print('Invalid selection, not checking out any branch.')
      return 1

  return 0
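# The branch lookup in find_issues parses `git config --get-regexp` output of
# the form "branch.<name>.<issueprefix> <issue>" and recovers the branch name
# with a regex substitution. A standalone sketch of that parsing step, decoupled
# from RunGit (the function name and the sample config output below are
# invented for illustration):

```python
import re


def branches_for_issue(config_output, issueprefix, target_issue):
    # Each line looks like "branch.<name>.<issueprefix> <issue>"; yield the
    # <name> part of every line whose issue matches target_issue.
    for line in config_output.splitlines():
        key, issue = line.split()
        if issue == target_issue:
            yield re.sub(r'branch\.(.*)\.%s' % issueprefix, r'\1', key)


sample = ('branch.feature-x.rietveldissue 12345\n'
          'branch.fix-y.rietveldissue 999')
# list(branches_for_issue(sample, 'rietveldissue', '12345')) -> ['feature-x']
```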

def CMDlol(parser, args):
  # This command is intentionally undocumented.
  print(zlib.decompress(base64.b64decode(
      'eNptkLEOwyAMRHe+wupCIqW57v0Vq84WqWtXyrcXnCBsmgMJ+/SSAxMZgRB6NzE'
      'E2ObgCKJooYdu4uAQVffUEoE1sRQLxAcqzd7uK2gmStrll1ucV3uZyaY5sXyDd9'
      'JAnN+lAXsOMJ90GANAi43mq5/VeeacylKVgi8o6F1SC63FxnagHfJUTfUYdCR/W'
      'Ofe+0dHL7PicpytKP750Fh1q2qnLVof4w8OZWNY')))
  return 0

class OptionParser(optparse.OptionParser):
  """Creates the option parser and adds --verbose support."""
  def __init__(self, *args, **kwargs):
    optparse.OptionParser.__init__(
        self, *args, prog='git cl', version=__version__, **kwargs)
    self.add_option(
        '-v', '--verbose', action='count', default=0,
        help='Use 2 times for more debugging info')

  def parse_args(self, args=None, values=None):
    options, args = optparse.OptionParser.parse_args(self, args, values)
    levels = [logging.WARNING, logging.INFO, logging.DEBUG]
    logging.basicConfig(level=levels[min(options.verbose, len(levels) - 1)])
    return options, args
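# parse_args above maps a repeatable -v flag (action='count') to a log level,
# clamping counts beyond the list length to the most verbose level. A minimal
# standalone sketch of that mapping (the name verbosity_to_level is invented
# for this example):

```python
import logging


def verbosity_to_level(verbose_count):
    # 0 -> WARNING, 1 -> INFO, 2 or more -> DEBUG, mirroring
    # levels[min(options.verbose, len(levels) - 1)] above.
    levels = [logging.WARNING, logging.INFO, logging.DEBUG]
    return levels[min(verbose_count, len(levels) - 1)]


# e.g. `git cl -v -v status` would count verbose=2 and select DEBUG;
# any larger count clamps to DEBUG rather than indexing out of range.
```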

def main(argv):
  if sys.hexversion < 0x02060000:
    print('\nYour python version %s is unsupported, please upgrade.\n' %
          (sys.version.split(' ', 1)[0],), file=sys.stderr)
    return 2

  # Reload settings.
  global settings
  settings = Settings()

  colorize_CMDstatus_doc()
  dispatcher = subcommand.CommandDispatcher(__name__)
  try:
    return dispatcher.execute(OptionParser(), argv)
  except auth.AuthenticationError as e:
    DieWithError(str(e))
  except urllib2.HTTPError as e:
    if e.code != 500:
      raise
    DieWithError(
        ('AppEngine is misbehaving and returned HTTP %d, again. Keep faith '
         'and retry or visit go/isgaeup.\n%s') % (e.code, str(e)))
  return 0

if __name__ == '__main__':
  # These affect sys.stdout so do it outside of main() to simplify mocks in
  # unit testing.
  fix_encoding.fix_encoding()
  setup_color.init()
  try:
    sys.exit(main(sys.argv[1:]))
  except KeyboardInterrupt:
    sys.stderr.write('interrupted\n')
    sys.exit(1)
| git_cl.py | 190,006 | Contains a parsed form of the change description.
Cannonicalize branch references to the equivalent local full symbolic refs, which are then translated into the remote full symbolic refs below. This is a branch path but not one we recognize; use as-is. Handle the refs that need to land in different refs. Create the true path to the remote branch. Does the following translation: * refs/remotes/origin/refs/diff/test -> refs/diff/test * refs/remotes/origin/master -> refs/heads/master * refs/remotes/branch-heads/test -> refs/branch-heads/test If a pending prefix exists then replace refs/ with it. For sanity of test expectations, do this otherwise lazy-loading *now*. When submodules are added to the repo, we expect there to be a single non-git-svn merge commit at remote HEAD with a signature comment. TODO(tandrii): refactor this into _RietveldChangelistImpl method. This could be implemented, but it requires sending a new patch to Gerrit, as Gerrit unlike Rietveld versions messages with patchsets. Besides, Gerrit has the ability to change the commit message on submit automatically, thus there is no need to support this option (so far?). Default to merging against our best guess of the upstream branch. This rev-list syntax means "show all commits not in my branch that are in base_branch". This is the revision `svn dcommit` will commit on top of. If the base_head is a submodule merge commit, the first parent of the base_head should be a git-svn commit, which is what we're interested in. Check the tree status if the tree status URL is set. Keep a separate copy for the commit message, because the commit message contains the link to the Rietveld issue, while the Rietveld message contains the commit viewvc url. Keep a separate copy for the commit message. Xcode won't linkify this URL unless there is a non-whitespace character after it. Add a period on a new line to circumvent this. Also add a space before the period to make sure that Gitiles continues to correctly resolve the URL. 
We want to squash all this branch's commits into one commit with the proper description. We do this by doing a "reset --soft" to the base branch (which keeps the working copy the same), then dcommitting that. If origin/master has a submodule merge commit, we'll also need to cherry-pick the squashed commit onto a branch based on the git-svn head. Delete the branches if they exist. We might be in a directory that's present in this branch but not in the trunk. Move up to the top of the tree so that git commands that expect a valid CWD won't fail after we check out the merge branch. Stuff our change into the merge branch. We wrap in a try...finally block so if anything goes wrong, we clean up the branches. If not using refs/pending/heads/* at all, or target ref is already set to pending, then push to the target ref directly. Cherry-pick the change on top of pending ref and then push it. dcommit the merge branch. Allow forcing https commit URLs for some projects that don't allow committing to http URLs (like Google Code). And then swap back to the original branch and clean up. We set pushed_to_pending to False, since it made it all the way to the real ref. Fetch. Retry fetch errors. Try to cherry pick. Abort on merge conflicts. Applied cleanly, try to push now. Retry on error (flake or non-ff push). Success. If it looks like previous commits were mirrored with git-svn. TODO(tandrii): kill this post SVN migration with https://codereview.chromium.org/2076683002 We don't want uncommitted changes mixed up with the patch. Provide a wrapper for git svn rebase to help avoid accidental git svn dcommit. It's the only command that doesn't use parser at all since we just defer execution to git-svn. Make sure that all properties are prop=value pairs. Code below assumes Rietveld issue. TODO(tandrii): actually implement for Gerrit http://crbug.com/599931. Process --bot. Get try masters from PRESUBMIT.py files. Fall back to deprecated method: get try slaves from PRESUBMIT.py files. 
Get try masters from cq.cfg if any. TODO(tandrii): some (but very few) projects store cq.cfg in different location. Skip presubmit builders, because these will fail without LGTM. TODO(machenbach): The old style command-line options don't support multiple try masters yet. Return a master map with one master to be backwards compatible. The master name defaults to an empty string, which will cause the master not to be set on rietveld (deprecated). One arg means set upstream branch. Clear configured merge-base, if there is one. Ensure there actually is an issue to close. Uncommitted (staged and unstaged) changes will be destroyed by "git reset --hard" if there are merging conflicts in CMDPatchIssue(). Staged changes would be committed along with the patch from last upload, hence counted toward the "last upload" side in the final diff output, and this is not what we want. Create a new branch based on the merge-base Clear cached branch in cl object, to avoid overwriting original CL branch properties. Switch back to starting branch and diff against the temporary branch containing the latest rietveld patch. Default to diffing against the common ancestor of the upstream branch. Generate diff for the current branch's changes. git diff generates paths against the root of the repository. Change to that directory so clang-format can find files even within subdirs. Grab the merge-base commit, i.e. the upstream commit of the current branch when it was created or the last time it was rebased. This is to cover the case where the user may have called "git fetch origin", moving the origin branch to a newer commit, but hasn't rebased yet. Filter out files deleted by this CL Set to 2 to signal to CheckPatchFormatted() that this patch isn't formatted. This is used to block during the presubmit. Locate the clang-format binary in the checkout Similar code to above, but using yapf on .py files rather than clang-format on C/C++ files TODO(sbc): yapf --lines mode still has some issues. 
https://github.com/google/yapf/issues/154 Dart's formatter does not have the nice property of only operating on modified chunks, so hard code full. Format GN build files. Always run on full build files for canonical form. This command is intentionally undocumented. Reload settings. These affect sys.stdout so do it outside of main() to simplify mocks in unit testing. | 31,633 | en | 0.854541 |
# Copyright 2010 United States Government as represented by the
# Administrator of the National Aeronautics and Space Administration.
# All Rights Reserved.
# Copyright (c) 2010 Citrix Systems, Inc.
# Copyright (c) 2011 Piston Cloud Computing, Inc
# Copyright (c) 2012 University Of Minho
# (c) Copyright 2013 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""
A connection to a hypervisor through libvirt.
Supports KVM, LXC, QEMU, UML, and XEN.
"""
import errno
import functools
import glob
import mmap
import os
import shutil
import socket
import sys
import tempfile
import threading
import time
import uuid
import eventlet
from eventlet import greenio
from eventlet import greenthread
from eventlet import patcher
from eventlet import tpool
from eventlet import util as eventlet_util
from lxml import etree
from oslo.config import cfg
import six
from nova.api.metadata import base as instance_metadata
from nova import block_device
from nova.compute import flavors
from nova.compute import power_state
from nova.compute import task_states
from nova.compute import utils as compute_utils
from nova.compute import vm_mode
from nova import context as nova_context
from nova import exception
from nova.i18n import _
from nova.i18n import _LE
from nova.i18n import _LI
from nova.i18n import _LW
from nova import image
from nova import objects
from nova.openstack.common import excutils
from nova.openstack.common import fileutils
from nova.openstack.common import importutils
from nova.openstack.common import jsonutils
from nova.openstack.common import log as logging
from nova.openstack.common import loopingcall
from nova.openstack.common import processutils
from nova.openstack.common import units
from nova.openstack.common import xmlutils
from nova.pci import pci_manager
from nova.pci import pci_utils
from nova.pci import pci_whitelist
from nova import rpc
from nova import utils
from nova import version
from nova.virt import block_device as driver_block_device
from nova.virt import configdrive
from nova.virt.disk import api as disk
from nova.virt.disk.vfs import guestfs
from nova.virt import driver
from nova.virt import event as virtevent
from nova.virt import firewall
from nova.virt import hardware
from nova.virt.libvirt import blockinfo
from nova.virt.libvirt import config as vconfig
from nova.virt.libvirt import firewall as libvirt_firewall
from nova.virt.libvirt import imagebackend
from nova.virt.libvirt import imagecache
from nova.virt.libvirt import lvm
from nova.virt.libvirt import rbd
from nova.virt.libvirt import utils as libvirt_utils
from nova.virt.libvirt import vif as libvirt_vif
from nova.virt import netutils
from nova.virt import watchdog_actions
from nova import volume
from nova.volume import encryptors
native_threading = patcher.original("threading")
native_Queue = patcher.original("Queue")
libvirt = None
LOG = logging.getLogger(__name__)
libvirt_opts = [
cfg.StrOpt('rescue_image_id',
help='Rescue ami image. This will not be used if an image id '
'is provided by the user.'),
cfg.StrOpt('rescue_kernel_id',
help='Rescue aki image'),
cfg.StrOpt('rescue_ramdisk_id',
help='Rescue ari image'),
cfg.StrOpt('virt_type',
default='kvm',
help='Libvirt domain type (valid options are: '
'kvm, lxc, qemu, uml, xen)'),
cfg.StrOpt('connection_uri',
default='',
help='Override the default libvirt URI '
'(which is dependent on virt_type)'),
cfg.BoolOpt('inject_password',
default=False,
help='Inject the admin password at boot time, '
'without an agent.'),
cfg.BoolOpt('inject_key',
default=False,
help='Inject the ssh public key at boot time'),
cfg.IntOpt('inject_partition',
default=-2,
help='The partition to inject to : '
'-2 => disable, -1 => inspect (libguestfs only), '
'0 => not partitioned, >0 => partition number'),
cfg.BoolOpt('use_usb_tablet',
default=True,
help='Sync virtual and real mouse cursors in Windows VMs'),
cfg.StrOpt('live_migration_uri',
default="qemu+tcp://%s/system",
help='Migration target URI '
'(any included "%s" is replaced with '
'the migration target hostname)'),
cfg.StrOpt('live_migration_flag',
default='VIR_MIGRATE_UNDEFINE_SOURCE, VIR_MIGRATE_PEER2PEER',
help='Migration flags to be set for live migration'),
cfg.StrOpt('block_migration_flag',
default='VIR_MIGRATE_UNDEFINE_SOURCE, VIR_MIGRATE_PEER2PEER, '
'VIR_MIGRATE_NON_SHARED_INC',
help='Migration flags to be set for block migration'),
cfg.IntOpt('live_migration_bandwidth',
default=0,
help='Maximum bandwidth to be used during migration, in Mbps'),
cfg.StrOpt('snapshot_image_format',
help='Snapshot image format (valid options are : '
'raw, qcow2, vmdk, vdi). '
'Defaults to same as source image'),
cfg.ListOpt('volume_drivers',
default=[
'iscsi=nova.virt.libvirt.volume.LibvirtISCSIVolumeDriver',
'iser=nova.virt.libvirt.volume.LibvirtISERVolumeDriver',
'local=nova.virt.libvirt.volume.LibvirtVolumeDriver',
'fake=nova.virt.libvirt.volume.LibvirtFakeVolumeDriver',
'rbd=nova.virt.libvirt.volume.LibvirtNetVolumeDriver',
'sheepdog=nova.virt.libvirt.volume.LibvirtNetVolumeDriver',
'nfs=nova.virt.libvirt.volume.LibvirtNFSVolumeDriver',
'aoe=nova.virt.libvirt.volume.LibvirtAOEVolumeDriver',
'glusterfs='
'nova.virt.libvirt.volume.LibvirtGlusterfsVolumeDriver',
'fibre_channel=nova.virt.libvirt.volume.'
'LibvirtFibreChannelVolumeDriver',
'scality='
'nova.virt.libvirt.volume.LibvirtScalityVolumeDriver',
],
help='Libvirt handlers for remote volumes.'),
cfg.StrOpt('disk_prefix',
help='Override the default disk prefix for the devices attached'
' to a server, which is dependent on virt_type. '
'(valid options are: sd, xvd, uvd, vd)'),
cfg.IntOpt('wait_soft_reboot_seconds',
default=120,
help='Number of seconds to wait for instance to shut down after'
' soft reboot request is made. We fall back to hard reboot'
' if instance does not shutdown within this window.'),
cfg.StrOpt('cpu_mode',
help='Set to "host-model" to clone the host CPU feature flags; '
'to "host-passthrough" to use the host CPU model exactly; '
'to "custom" to use a named CPU model; '
'to "none" to not set any CPU model. '
'If virt_type="kvm|qemu", it will default to '
'"host-model", otherwise it will default to "none"'),
cfg.StrOpt('cpu_model',
help='Set to a named libvirt CPU model (see names listed '
'in /usr/share/libvirt/cpu_map.xml). Only has effect if '
'cpu_mode="custom" and virt_type="kvm|qemu"'),
cfg.StrOpt('snapshots_directory',
default='$instances_path/snapshots',
help='Location where libvirt driver will store snapshots '
'before uploading them to image service'),
cfg.StrOpt('xen_hvmloader_path',
default='/usr/lib/xen/boot/hvmloader',
help='Location where the Xen hvmloader is kept'),
cfg.ListOpt('disk_cachemodes',
default=[],
help='Specific cachemodes to use for different disk types '
'e.g: file=directsync,block=none'),
cfg.StrOpt('rng_dev_path',
help='A path to a device that will be used as source of '
'entropy on the host. Permitted options are: '
'/dev/random or /dev/hwrng'),
]
CONF = cfg.CONF
CONF.register_opts(libvirt_opts, 'libvirt')
CONF.import_opt('host', 'nova.netconf')
CONF.import_opt('my_ip', 'nova.netconf')
CONF.import_opt('default_ephemeral_format', 'nova.virt.driver')
CONF.import_opt('use_cow_images', 'nova.virt.driver')
CONF.import_opt('live_migration_retry_count', 'nova.compute.manager')
CONF.import_opt('vncserver_proxyclient_address', 'nova.vnc')
CONF.import_opt('server_proxyclient_address', 'nova.spice', group='spice')
CONF.import_opt('vcpu_pin_set', 'nova.virt.hardware')
CONF.import_opt('vif_plugging_is_fatal', 'nova.virt.driver')
CONF.import_opt('vif_plugging_timeout', 'nova.virt.driver')
DEFAULT_FIREWALL_DRIVER = "%s.%s" % (
libvirt_firewall.__name__,
libvirt_firewall.IptablesFirewallDriver.__name__)
MAX_CONSOLE_BYTES = 100 * units.Ki
# The libvirt driver will prefix any disable reason codes with this string.
DISABLE_PREFIX = 'AUTO: '
# Disable reason recorded when a service was enabled or disabled without a reason
DISABLE_REASON_UNDEFINED = 'None'
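The two constants above define a small convention: disable reasons the driver generates itself are tagged with a prefix so they can be told apart from reasons entered manually by an operator. A minimal sketch of that convention (the helper name is illustrative, not part of this driver):

```python
DISABLE_PREFIX = 'AUTO: '


def is_auto_disabled(disable_reason):
    # A reason the driver set itself always carries the AUTO: prefix;
    # anything else was entered manually by an operator.
    return (disable_reason is not None and
            disable_reason.startswith(DISABLE_PREFIX))


assert is_auto_disabled('AUTO: Failed to connect to libvirt')
assert not is_auto_disabled('maintenance window')
```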
def patch_tpool_proxy():
"""eventlet.tpool.Proxy doesn't work with old-style class in __str__()
or __repr__() calls. See bug #962840 for details.
We perform a monkey patch to replace those two instance methods.
"""
def str_method(self):
return str(self._obj)
def repr_method(self):
return repr(self._obj)
tpool.Proxy.__str__ = str_method
tpool.Proxy.__repr__ = repr_method
patch_tpool_proxy()
VIR_DOMAIN_NOSTATE = 0
VIR_DOMAIN_RUNNING = 1
VIR_DOMAIN_BLOCKED = 2
VIR_DOMAIN_PAUSED = 3
VIR_DOMAIN_SHUTDOWN = 4
VIR_DOMAIN_SHUTOFF = 5
VIR_DOMAIN_CRASHED = 6
VIR_DOMAIN_PMSUSPENDED = 7
LIBVIRT_POWER_STATE = {
VIR_DOMAIN_NOSTATE: power_state.NOSTATE,
VIR_DOMAIN_RUNNING: power_state.RUNNING,
# NOTE(maoy): The DOMAIN_BLOCKED state is only valid in Xen.
# It means that the VM is running and the vCPU is idle. So,
# we map it to RUNNING
VIR_DOMAIN_BLOCKED: power_state.RUNNING,
VIR_DOMAIN_PAUSED: power_state.PAUSED,
# NOTE(maoy): The libvirt API doc says that DOMAIN_SHUTDOWN
# means the domain is being shut down. So technically the domain
# is still running. SHUTOFF is the real powered off state.
# But we will map both to SHUTDOWN anyway.
# http://libvirt.org/html/libvirt-libvirt.html
VIR_DOMAIN_SHUTDOWN: power_state.SHUTDOWN,
VIR_DOMAIN_SHUTOFF: power_state.SHUTDOWN,
VIR_DOMAIN_CRASHED: power_state.CRASHED,
VIR_DOMAIN_PMSUSPENDED: power_state.SUSPENDED,
}
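Translating a raw libvirt state into a Nova power state is then a plain dict lookup. A self-contained sketch of the two non-obvious rows above, with placeholder values standing in for the `nova.compute.power_state` constants (an assumption for illustration, not Nova's actual values):

```python
VIR_DOMAIN_BLOCKED = 2
VIR_DOMAIN_SHUTDOWN = 4
VIR_DOMAIN_SHUTOFF = 5

# Placeholders standing in for nova.compute.power_state constants.
RUNNING, SHUTDOWN = 'running', 'shutdown'

STATE_MAP = {
    VIR_DOMAIN_BLOCKED: RUNNING,    # Xen-only: vCPU idle, VM still running
    VIR_DOMAIN_SHUTDOWN: SHUTDOWN,  # being shut down -> mapped to SHUTDOWN
    VIR_DOMAIN_SHUTOFF: SHUTDOWN,   # truly powered off
}

assert STATE_MAP[VIR_DOMAIN_BLOCKED] == RUNNING
assert STATE_MAP[VIR_DOMAIN_SHUTDOWN] == STATE_MAP[VIR_DOMAIN_SHUTOFF]
```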
MIN_LIBVIRT_VERSION = (0, 9, 11)
# When the above version matches/exceeds this version
# delete it & corresponding code using it
MIN_LIBVIRT_DEVICE_CALLBACK_VERSION = (1, 1, 1)
# Live snapshot requirements
REQ_HYPERVISOR_LIVESNAPSHOT = "QEMU"
# TODO(sdague): this should be 1.0.0, but hacked to set 1.3.0 until
# https://bugs.launchpad.net/nova/+bug/1334398
# can be diagnosed & resolved
MIN_LIBVIRT_LIVESNAPSHOT_VERSION = (1, 3, 0)
MIN_QEMU_LIVESNAPSHOT_VERSION = (1, 3, 0)
# block size tuning requirements
MIN_LIBVIRT_BLOCKIO_VERSION = (0, 10, 2)
# BlockJobInfo management requirement
MIN_LIBVIRT_BLOCKJOBINFO_VERSION = (1, 1, 1)
# Relative block commit (feature is detected,
# this version is only used for messaging)
MIN_LIBVIRT_BLOCKCOMMIT_RELATIVE_VERSION = (1, 2, 7)
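The version tuples above are compared by `_conn_has_min_version()` after packing them into single integers with `utils.convert_version_to_int`. A sketch of that packing, assuming the base-1000 scheme used by `nova.utils`:

```python
from functools import reduce


def convert_version_to_int(version):
    # Pack (major, minor, micro) into one comparable integer,
    # e.g. (1, 1, 1) -> 1001001 (assumed base-1000 packing).
    return reduce(lambda x, y: x * 1000 + y, version)


MIN_LIBVIRT_VERSION = (0, 9, 11)

# libvirt 1.1.1 satisfies the 0.9.11 minimum; 0.9.10 does not:
assert convert_version_to_int((1, 1, 1)) >= \
    convert_version_to_int(MIN_LIBVIRT_VERSION)
assert convert_version_to_int((0, 9, 10)) < \
    convert_version_to_int(MIN_LIBVIRT_VERSION)
```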
def libvirt_error_handler(context, err):
    # Ignore the error instead of the default behaviour of printing it
    # to stderr.
pass
class LibvirtDriver(driver.ComputeDriver):
capabilities = {
"has_imagecache": True,
"supports_recreate": True,
}
def __init__(self, virtapi, read_only=False):
super(LibvirtDriver, self).__init__(virtapi)
global libvirt
if libvirt is None:
libvirt = importutils.import_module('libvirt')
self._skip_list_all_domains = False
self._host_state = None
self._initiator = None
self._fc_wwnns = None
self._fc_wwpns = None
self._wrapped_conn = None
self._wrapped_conn_lock = threading.Lock()
self._caps = None
self._vcpu_total = 0
self.read_only = read_only
self.firewall_driver = firewall.load_driver(
DEFAULT_FIREWALL_DRIVER,
self.virtapi,
get_connection=self._get_connection)
self.vif_driver = libvirt_vif.LibvirtGenericVIFDriver(
self._get_connection)
self.volume_drivers = driver.driver_dict_from_config(
CONF.libvirt.volume_drivers, self)
self.dev_filter = pci_whitelist.get_pci_devices_filter()
self._event_queue = None
self._disk_cachemode = None
self.image_cache_manager = imagecache.ImageCacheManager()
self.image_backend = imagebackend.Backend(CONF.use_cow_images)
self.disk_cachemodes = {}
self.valid_cachemodes = ["default",
"none",
"writethrough",
"writeback",
"directsync",
"unsafe",
]
self._conn_supports_start_paused = CONF.libvirt.virt_type in ('kvm',
'qemu')
for mode_str in CONF.libvirt.disk_cachemodes:
disk_type, sep, cache_mode = mode_str.partition('=')
if cache_mode not in self.valid_cachemodes:
LOG.warn(_LW('Invalid cachemode %(cache_mode)s specified '
'for disk type %(disk_type)s.'),
{'cache_mode': cache_mode, 'disk_type': disk_type})
continue
self.disk_cachemodes[disk_type] = cache_mode
self._volume_api = volume.API()
self._image_api = image.API()
@property
def disk_cachemode(self):
if self._disk_cachemode is None:
# We prefer 'none' for consistent performance, host crash
# safety & migration correctness by avoiding host page cache.
# Some filesystems (eg GlusterFS via FUSE) don't support
            # O_DIRECT though. For those we fall back to 'writethrough'
# which gives host crash safety, and is safe for migration
# provided the filesystem is cache coherent (cluster filesystems
# typically are, but things like NFS are not).
self._disk_cachemode = "none"
if not self._supports_direct_io(CONF.instances_path):
self._disk_cachemode = "writethrough"
return self._disk_cachemode
@property
def host_state(self):
if not self._host_state:
self._host_state = HostState(self)
return self._host_state
def _set_cache_mode(self, conf):
"""Set cache mode on LibvirtConfigGuestDisk object."""
try:
source_type = conf.source_type
driver_cache = conf.driver_cache
except AttributeError:
return
cache_mode = self.disk_cachemodes.get(source_type,
driver_cache)
conf.driver_cache = cache_mode
@staticmethod
def _conn_has_min_version(conn, lv_ver=None, hv_ver=None, hv_type=None):
try:
if lv_ver is not None:
libvirt_version = conn.getLibVersion()
if libvirt_version < utils.convert_version_to_int(lv_ver):
return False
if hv_ver is not None:
hypervisor_version = conn.getVersion()
if hypervisor_version < utils.convert_version_to_int(hv_ver):
return False
if hv_type is not None:
hypervisor_type = conn.getType()
if hypervisor_type != hv_type:
return False
return True
except Exception:
return False
def _has_min_version(self, lv_ver=None, hv_ver=None, hv_type=None):
return self._conn_has_min_version(self._conn, lv_ver, hv_ver, hv_type)
def _native_thread(self):
"""Receives async events coming in from libvirtd.
This is a native thread which runs the default
libvirt event loop implementation. This processes
any incoming async events from libvirtd and queues
them for later dispatch. This thread is only
permitted to use libvirt python APIs, and the
driver.queue_event method. In particular any use
of logging is forbidden, since it will confuse
eventlet's greenthread integration
"""
while True:
libvirt.virEventRunDefaultImpl()
def _dispatch_thread(self):
"""Dispatches async events coming in from libvirtd.
This is a green thread which waits for events to
arrive from the libvirt event loop thread. This
then dispatches the events to the compute manager.
"""
while True:
self._dispatch_events()
@staticmethod
def _event_lifecycle_callback(conn, dom, event, detail, opaque):
"""Receives lifecycle events from libvirt.
NB: this method is executing in a native thread, not
an eventlet coroutine. It can only invoke other libvirt
APIs, or use self.queue_event(). Any use of logging APIs
in particular is forbidden.
"""
self = opaque
uuid = dom.UUIDString()
transition = None
if event == libvirt.VIR_DOMAIN_EVENT_STOPPED:
transition = virtevent.EVENT_LIFECYCLE_STOPPED
elif event == libvirt.VIR_DOMAIN_EVENT_STARTED:
transition = virtevent.EVENT_LIFECYCLE_STARTED
elif event == libvirt.VIR_DOMAIN_EVENT_SUSPENDED:
transition = virtevent.EVENT_LIFECYCLE_PAUSED
elif event == libvirt.VIR_DOMAIN_EVENT_RESUMED:
transition = virtevent.EVENT_LIFECYCLE_RESUMED
if transition is not None:
self._queue_event(virtevent.LifecycleEvent(uuid, transition))
def _queue_event(self, event):
"""Puts an event on the queue for dispatch.
This method is called by the native event thread to
put events on the queue for later dispatch by the
green thread. Any use of logging APIs is forbidden.
"""
if self._event_queue is None:
return
# Queue the event...
self._event_queue.put(event)
# ...then wakeup the green thread to dispatch it
c = ' '.encode()
self._event_notify_send.write(c)
self._event_notify_send.flush()
def _dispatch_events(self):
"""Wait for & dispatch events from native thread
Blocks until native thread indicates some events
are ready. Then dispatches all queued events.
"""
# Wait to be notified that there are some
# events pending
try:
_c = self._event_notify_recv.read(1)
assert _c
except ValueError:
return # will be raised when pipe is closed
# Process as many events as possible without
# blocking
last_close_event = None
while not self._event_queue.empty():
try:
event = self._event_queue.get(block=False)
if isinstance(event, virtevent.LifecycleEvent):
self.emit_event(event)
elif 'conn' in event and 'reason' in event:
last_close_event = event
except native_Queue.Empty:
pass
if last_close_event is None:
return
conn = last_close_event['conn']
# get_new_connection may already have disabled the host,
# in which case _wrapped_conn is None.
with self._wrapped_conn_lock:
if conn == self._wrapped_conn:
reason = last_close_event['reason']
_error = _("Connection to libvirt lost: %s") % reason
LOG.warn(_error)
self._wrapped_conn = None
                # Disable the compute service to avoid having
                # new instances scheduled on this host.
self._set_host_enabled(False, disable_reason=_error)
def _init_events_pipe(self):
"""Create a self-pipe for the native thread to synchronize on.
This code is taken from the eventlet tpool module, under terms
of the Apache License v2.0.
"""
self._event_queue = native_Queue.Queue()
try:
rpipe, wpipe = os.pipe()
self._event_notify_send = greenio.GreenPipe(wpipe, 'wb', 0)
self._event_notify_recv = greenio.GreenPipe(rpipe, 'rb', 0)
except (ImportError, NotImplementedError):
# This is Windows compatibility -- use a socket instead
# of a pipe because pipes don't really exist on Windows.
sock = eventlet_util.__original_socket__(socket.AF_INET,
socket.SOCK_STREAM)
sock.bind(('localhost', 0))
sock.listen(50)
csock = eventlet_util.__original_socket__(socket.AF_INET,
socket.SOCK_STREAM)
csock.connect(('localhost', sock.getsockname()[1]))
nsock, addr = sock.accept()
self._event_notify_send = nsock.makefile('wb', 0)
gsock = greenio.GreenSocket(csock)
self._event_notify_recv = gsock.makefile('rb', 0)
def _init_events(self):
"""Initializes the libvirt events subsystem.
This requires running a native thread to provide the
libvirt event loop integration. This forwards events
to a green thread which does the actual dispatching.
"""
self._init_events_pipe()
LOG.debug("Starting native event thread")
event_thread = native_threading.Thread(target=self._native_thread)
event_thread.setDaemon(True)
event_thread.start()
LOG.debug("Starting green dispatch thread")
eventlet.spawn(self._dispatch_thread)
def _do_quality_warnings(self):
"""Warn about untested driver configurations.
This will log a warning message about untested driver or host arch
configurations to indicate to administrators that the quality is
unknown. Currently, only qemu or kvm on intel 32- or 64-bit systems
is tested upstream.
"""
caps = self._get_host_capabilities()
arch = caps.host.cpu.arch
if (CONF.libvirt.virt_type not in ('qemu', 'kvm') or
arch not in ('i686', 'x86_64')):
LOG.warn(_LW('The libvirt driver is not tested on '
'%(type)s/%(arch)s by the OpenStack project and '
'thus its quality can not be ensured. For more '
'information, see: https://wiki.openstack.org/wiki/'
'HypervisorSupportMatrix'),
{'type': CONF.libvirt.virt_type, 'arch': arch})
def init_host(self, host):
# NOTE(dkliban): Error handler needs to be registered before libvirt
# connection is used for the first time. Otherwise, the
# handler does not get registered.
libvirt.registerErrorHandler(libvirt_error_handler, None)
libvirt.virEventRegisterDefaultImpl()
self._do_quality_warnings()
        # Stop libguestfs from using KVM unless we're also configured
        # to use it. This solves the problem where people need to
        # stop Nova from using KVM because nested-virt is broken.
if CONF.libvirt.virt_type != "kvm":
guestfs.force_tcg()
if not self._has_min_version(MIN_LIBVIRT_VERSION):
major = MIN_LIBVIRT_VERSION[0]
minor = MIN_LIBVIRT_VERSION[1]
micro = MIN_LIBVIRT_VERSION[2]
LOG.error(_LE('Nova requires libvirt version '
'%(major)i.%(minor)i.%(micro)i or greater.'),
{'major': major, 'minor': minor, 'micro': micro})
self._init_events()
def _get_new_connection(self):
# call with _wrapped_conn_lock held
LOG.debug('Connecting to libvirt: %s', self.uri())
wrapped_conn = None
try:
wrapped_conn = self._connect(self.uri(), self.read_only)
finally:
            # Re-enable the compute service now that the connection
            # succeeded (it may have been disabled after an earlier
            # failure); if it failed, disable it with a reason instead.
disable_reason = DISABLE_REASON_UNDEFINED
if not wrapped_conn:
disable_reason = 'Failed to connect to libvirt'
self._set_host_enabled(bool(wrapped_conn), disable_reason)
self._wrapped_conn = wrapped_conn
self._skip_list_all_domains = False
try:
LOG.debug("Registering for lifecycle events %s", self)
wrapped_conn.domainEventRegisterAny(
None,
libvirt.VIR_DOMAIN_EVENT_ID_LIFECYCLE,
self._event_lifecycle_callback,
self)
except Exception as e:
LOG.warn(_LW("URI %(uri)s does not support events: %(error)s"),
{'uri': self.uri(), 'error': e})
try:
LOG.debug("Registering for connection events: %s", str(self))
wrapped_conn.registerCloseCallback(self._close_callback, None)
except (TypeError, AttributeError) as e:
            # NOTE: registerCloseCallback in python-libvirt 1.0.1+
            # takes 3 arguments, so the call above succeeds. In
            # python-libvirt 1.0.0 it takes 4 arguments, which raises
            # TypeError here, and python-libvirt 0.9 does not define
            # registerCloseCallback at all.
LOG.debug("The version of python-libvirt does not support "
"registerCloseCallback or is too old: %s", e)
except libvirt.libvirtError as e:
LOG.warn(_LW("URI %(uri)s does not support connection"
" events: %(error)s"),
{'uri': self.uri(), 'error': e})
return wrapped_conn
def _get_connection(self):
# multiple concurrent connections are protected by _wrapped_conn_lock
with self._wrapped_conn_lock:
wrapped_conn = self._wrapped_conn
if not wrapped_conn or not self._test_connection(wrapped_conn):
wrapped_conn = self._get_new_connection()
return wrapped_conn
_conn = property(_get_connection)
def _close_callback(self, conn, reason, opaque):
close_info = {'conn': conn, 'reason': reason}
self._queue_event(close_info)
@staticmethod
def _test_connection(conn):
try:
conn.getLibVersion()
return True
except libvirt.libvirtError as e:
if (e.get_error_code() in (libvirt.VIR_ERR_SYSTEM_ERROR,
libvirt.VIR_ERR_INTERNAL_ERROR) and
e.get_error_domain() in (libvirt.VIR_FROM_REMOTE,
libvirt.VIR_FROM_RPC)):
LOG.debug('Connection to libvirt broke')
return False
raise
@staticmethod
def uri():
if CONF.libvirt.virt_type == 'uml':
uri = CONF.libvirt.connection_uri or 'uml:///system'
elif CONF.libvirt.virt_type == 'xen':
uri = CONF.libvirt.connection_uri or 'xen:///'
elif CONF.libvirt.virt_type == 'lxc':
uri = CONF.libvirt.connection_uri or 'lxc:///'
else:
uri = CONF.libvirt.connection_uri or 'qemu:///system'
return uri
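    # A quick sketch of the mapping implemented by uri() above (values are
    # illustrative defaults, used only when CONF.libvirt.connection_uri is
    # unset):
    #
    #   virt_type 'uml'              -> 'uml:///system'
    #   virt_type 'xen'              -> 'xen:///'
    #   virt_type 'lxc'              -> 'lxc:///'
    #   anything else (kvm, qemu...) -> 'qemu:///system'
    #
    # An explicit connection_uri always wins over the virt_type default.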
@staticmethod
def _connect_auth_cb(creds, opaque):
if len(creds) == 0:
return 0
raise exception.NovaException(
_("Can not handle authentication request for %d credentials")
% len(creds))
@staticmethod
def _connect(uri, read_only):
auth = [[libvirt.VIR_CRED_AUTHNAME,
libvirt.VIR_CRED_ECHOPROMPT,
libvirt.VIR_CRED_REALM,
libvirt.VIR_CRED_PASSPHRASE,
libvirt.VIR_CRED_NOECHOPROMPT,
libvirt.VIR_CRED_EXTERNAL],
LibvirtDriver._connect_auth_cb,
None]
try:
flags = 0
if read_only:
flags = libvirt.VIR_CONNECT_RO
# tpool.proxy_call creates a native thread. Due to limitations
# with eventlet locking we cannot use the logging API inside
# the called function.
return tpool.proxy_call(
(libvirt.virDomain, libvirt.virConnect),
libvirt.openAuth, uri, auth, flags)
except libvirt.libvirtError as ex:
LOG.exception(_LE("Connection to libvirt failed: %s"), ex)
payload = dict(ip=LibvirtDriver.get_host_ip_addr(),
method='_connect',
reason=ex)
rpc.get_notifier('compute').error(nova_context.get_admin_context(),
'compute.libvirt.error',
payload)
raise exception.HypervisorUnavailable(host=CONF.host)
def instance_exists(self, instance):
"""Efficient override of base instance_exists method."""
try:
self._lookup_by_name(instance.name)
return True
except exception.NovaException:
return False
def _list_instance_domains_fast(self, only_running=True):
# The modern (>= 0.9.13) fast way - 1 single API call for all domains
flags = libvirt.VIR_CONNECT_LIST_DOMAINS_ACTIVE
if not only_running:
flags = flags | libvirt.VIR_CONNECT_LIST_DOMAINS_INACTIVE
return self._conn.listAllDomains(flags)
def _list_instance_domains_slow(self, only_running=True):
# The legacy (< 0.9.13) slow way - O(n) API call for n domains
uuids = []
doms = []
# Redundant numOfDomains check is for libvirt bz #836647
if self._conn.numOfDomains() > 0:
for id in self._conn.listDomainsID():
try:
dom = self._lookup_by_id(id)
doms.append(dom)
uuids.append(dom.UUIDString())
except exception.InstanceNotFound:
continue
if only_running:
return doms
for name in self._conn.listDefinedDomains():
try:
dom = self._lookup_by_name(name)
if dom.UUIDString() not in uuids:
doms.append(dom)
except exception.InstanceNotFound:
continue
return doms
def _list_instance_domains(self, only_running=True, only_guests=True):
"""Get a list of libvirt.Domain objects for nova instances
:param only_running: True to only return running instances
:param only_guests: True to filter out any host domain (eg Dom-0)
        Query libvirt to get a list of all libvirt.Domain objects
that correspond to nova instances. If the only_running parameter
is true this list will only include active domains, otherwise
inactive domains will be included too. If the only_guests parameter
is true the list will have any "host" domain (aka Xen Domain-0)
filtered out.
:returns: list of libvirt.Domain objects
"""
if not self._skip_list_all_domains:
try:
alldoms = self._list_instance_domains_fast(only_running)
except (libvirt.libvirtError, AttributeError) as ex:
LOG.info(_LI("Unable to use bulk domain list APIs, "
"falling back to slow code path: %(ex)s"),
{'ex': ex})
self._skip_list_all_domains = True
if self._skip_list_all_domains:
# Old libvirt, or a libvirt driver which doesn't
# implement the new API
alldoms = self._list_instance_domains_slow(only_running)
doms = []
for dom in alldoms:
if only_guests and dom.ID() == 0:
continue
doms.append(dom)
return doms
def list_instances(self):
names = []
for dom in self._list_instance_domains(only_running=False):
names.append(dom.name())
return names
def list_instance_uuids(self):
uuids = []
for dom in self._list_instance_domains(only_running=False):
uuids.append(dom.UUIDString())
return uuids
def plug_vifs(self, instance, network_info):
"""Plug VIFs into networks."""
for vif in network_info:
self.vif_driver.plug(instance, vif)
def _unplug_vifs(self, instance, network_info, ignore_errors):
"""Unplug VIFs from networks."""
for vif in network_info:
try:
self.vif_driver.unplug(instance, vif)
except exception.NovaException:
if not ignore_errors:
raise
def unplug_vifs(self, instance, network_info):
self._unplug_vifs(instance, network_info, False)
def _teardown_container(self, instance):
inst_path = libvirt_utils.get_instance_path(instance)
container_dir = os.path.join(inst_path, 'rootfs')
rootfs_dev = instance.system_metadata.get('rootfs_device_name')
disk.teardown_container(container_dir, rootfs_dev)
def _destroy(self, instance):
try:
virt_dom = self._lookup_by_name(instance['name'])
except exception.InstanceNotFound:
virt_dom = None
# If the instance is already terminated, we're still happy
# Otherwise, destroy it
old_domid = -1
if virt_dom is not None:
try:
old_domid = virt_dom.ID()
virt_dom.destroy()
# NOTE(GuanQiang): teardown container to avoid resource leak
if CONF.libvirt.virt_type == 'lxc':
self._teardown_container(instance)
except libvirt.libvirtError as e:
is_okay = False
errcode = e.get_error_code()
if errcode == libvirt.VIR_ERR_OPERATION_INVALID:
# If the instance is already shut off, we get this:
# Code=55 Error=Requested operation is not valid:
# domain is not running
state = LIBVIRT_POWER_STATE[virt_dom.info()[0]]
if state == power_state.SHUTDOWN:
is_okay = True
elif errcode == libvirt.VIR_ERR_OPERATION_TIMEOUT:
                LOG.warn(_LW("Cannot destroy instance, operation timed "
                             "out"),
                         instance=instance)
                reason = _("operation timed out")
raise exception.InstancePowerOffFailure(reason=reason)
if not is_okay:
with excutils.save_and_reraise_exception():
LOG.error(_LE('Error from libvirt during destroy. '
'Code=%(errcode)s Error=%(e)s'),
{'errcode': errcode, 'e': e},
instance=instance)
def _wait_for_destroy(expected_domid):
"""Called at an interval until the VM is gone."""
# NOTE(vish): If the instance disappears during the destroy
# we ignore it so the cleanup can still be
# attempted because we would prefer destroy to
# never fail.
try:
dom_info = self.get_info(instance)
state = dom_info['state']
new_domid = dom_info['id']
except exception.InstanceNotFound:
LOG.warning(_LW("During wait destroy, instance disappeared."),
instance=instance)
raise loopingcall.LoopingCallDone()
if state == power_state.SHUTDOWN:
LOG.info(_LI("Instance destroyed successfully."),
instance=instance)
raise loopingcall.LoopingCallDone()
# NOTE(wangpan): If the instance was booted again after destroy,
            #                 this may be an endless loop, so check the id of
            #                 the domain here; if it changed and the instance
            #                 is still running, we should destroy it again.
# see https://bugs.launchpad.net/nova/+bug/1111213 for more details
if new_domid != expected_domid:
LOG.info(_LI("Instance may be started again."),
instance=instance)
kwargs['is_running'] = True
raise loopingcall.LoopingCallDone()
kwargs = {'is_running': False}
timer = loopingcall.FixedIntervalLoopingCall(_wait_for_destroy,
old_domid)
timer.start(interval=0.5).wait()
if kwargs['is_running']:
LOG.info(_LI("Going to destroy instance again."),
instance=instance)
self._destroy(instance)
def destroy(self, context, instance, network_info, block_device_info=None,
destroy_disks=True, migrate_data=None):
self._destroy(instance)
self.cleanup(context, instance, network_info, block_device_info,
destroy_disks, migrate_data)
def _undefine_domain(self, instance):
try:
virt_dom = self._lookup_by_name(instance['name'])
except exception.InstanceNotFound:
virt_dom = None
if virt_dom:
try:
try:
virt_dom.undefineFlags(
libvirt.VIR_DOMAIN_UNDEFINE_MANAGED_SAVE)
except libvirt.libvirtError:
LOG.debug("Error from libvirt during undefineFlags."
" Retrying with undefine", instance=instance)
virt_dom.undefine()
except AttributeError:
# NOTE(vish): Older versions of libvirt don't support
# undefine flags, so attempt to do the
# right thing.
try:
if virt_dom.hasManagedSaveImage(0):
virt_dom.managedSaveRemove(0)
except AttributeError:
pass
virt_dom.undefine()
except libvirt.libvirtError as e:
with excutils.save_and_reraise_exception():
errcode = e.get_error_code()
LOG.error(_LE('Error from libvirt during undefine. '
'Code=%(errcode)s Error=%(e)s'),
{'errcode': errcode, 'e': e}, instance=instance)
def cleanup(self, context, instance, network_info, block_device_info=None,
destroy_disks=True, migrate_data=None, destroy_vifs=True):
self._undefine_domain(instance)
if destroy_vifs:
self._unplug_vifs(instance, network_info, True)
retry = True
while retry:
try:
self.firewall_driver.unfilter_instance(instance,
network_info=network_info)
except libvirt.libvirtError as e:
try:
state = self.get_info(instance)['state']
except exception.InstanceNotFound:
state = power_state.SHUTDOWN
if state != power_state.SHUTDOWN:
LOG.warn(_LW("Instance may be still running, destroy "
"it again."), instance=instance)
self._destroy(instance)
else:
retry = False
errcode = e.get_error_code()
LOG.exception(_LE('Error from libvirt during unfilter. '
'Code=%(errcode)s Error=%(e)s'),
{'errcode': errcode, 'e': e},
instance=instance)
reason = "Error unfiltering instance."
raise exception.InstanceTerminationFailure(reason=reason)
except Exception:
retry = False
raise
else:
retry = False
        # FIXME(wangpan): if the instance is booted again here, e.g. by a
        #                 soft reboot operation, it will become
        #                 "running deleted"; should we check and destroy
        #                 it at the end of this method?
# NOTE(vish): we disconnect from volumes regardless
block_device_mapping = driver.block_device_info_get_mapping(
block_device_info)
for vol in block_device_mapping:
connection_info = vol['connection_info']
disk_dev = vol['mount_device'].rpartition("/")[2]
if ('data' in connection_info and
'volume_id' in connection_info['data']):
volume_id = connection_info['data']['volume_id']
encryption = encryptors.get_encryption_metadata(
context, self._volume_api, volume_id, connection_info)
if encryption:
# The volume must be detached from the VM before
# disconnecting it from its encryptor. Otherwise, the
# encryptor may report that the volume is still in use.
encryptor = self._get_volume_encryptor(connection_info,
encryption)
encryptor.detach_volume(**encryption)
try:
self._disconnect_volume(connection_info, disk_dev)
except Exception as exc:
with excutils.save_and_reraise_exception() as ctxt:
if destroy_disks:
# Don't block on Volume errors if we're trying to
                        # delete the instance as we may be partially created
# or deleted
ctxt.reraise = False
LOG.warn(_LW("Ignoring Volume Error on vol %(vol_id)s "
"during delete %(exc)s"),
{'vol_id': vol.get('volume_id'), 'exc': exc},
instance=instance)
if destroy_disks or (
migrate_data and migrate_data.get('is_shared_block_storage',
False)):
self._delete_instance_files(instance)
if destroy_disks:
self._cleanup_lvm(instance)
# NOTE(haomai): destroy volumes if needed
if CONF.libvirt.images_type == 'rbd':
self._cleanup_rbd(instance)
@staticmethod
def _get_rbd_driver():
return rbd.RBDDriver(
pool=CONF.libvirt.images_rbd_pool,
ceph_conf=CONF.libvirt.images_rbd_ceph_conf,
rbd_user=CONF.libvirt.rbd_user)
def _cleanup_rbd(self, instance):
LibvirtDriver._get_rbd_driver().cleanup_volumes(instance)
def _cleanup_lvm(self, instance):
"""Delete all LVM disks for given instance object."""
disks = self._lvm_disks(instance)
if disks:
lvm.remove_volumes(disks)
def _lvm_disks(self, instance):
"""Returns all LVM disks for given instance object."""
if CONF.libvirt.images_volume_group:
vg = os.path.join('/dev', CONF.libvirt.images_volume_group)
if not os.path.exists(vg):
return []
pattern = '%s_' % instance['uuid']
# TODO(sdague): remove in Juno
def belongs_to_instance_legacy(disk):
# We don't want to leak old disks, but at the same time, we
# don't want to do an unsafe thing. So we will only handle
# the old filter if it's the system default still.
pattern = '%s_' % instance['name']
if disk.startswith(pattern):
if CONF.instance_name_template == 'instance-%08x':
return True
else:
LOG.warn(_LW('Volume %(disk)s possibly unsafe to '
'remove, please clean up manually'),
{'disk': disk})
return False
def belongs_to_instance(disk):
return disk.startswith(pattern)
def fullpath(name):
return os.path.join(vg, name)
            logical_volumes = lvm.list_volumes(vg)
            # Build concrete lists (not lazy filter/map iterators) so the
            # result can be extended here and truthiness-tested by callers.
            disk_names = [lv for lv in logical_volumes
                          if belongs_to_instance(lv)]
            # TODO(sdague): remove in Juno
            disk_names.extend(lv for lv in logical_volumes
                              if belongs_to_instance_legacy(lv))
            return [fullpath(name) for name in disk_names]
return []
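    # For reference, the LVM names matched by _lvm_disks() above follow the
    # '<uuid>_' prefix convention (example names are illustrative, not taken
    # from a live system):
    #
    #   /dev/<images_volume_group>/<instance_uuid>_disk
    #   /dev/<images_volume_group>/<instance_uuid>_disk.swap
    #
    # plus the legacy '<instance_name>_' prefix (e.g. 'instance-0000002a_'),
    # which is only trusted for removal when instance_name_template is still
    # the system default 'instance-%08x'.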
def get_volume_connector(self, instance):
if not self._initiator:
self._initiator = libvirt_utils.get_iscsi_initiator()
if not self._initiator:
LOG.debug('Could not determine iscsi initiator name',
instance=instance)
if not self._fc_wwnns:
self._fc_wwnns = libvirt_utils.get_fc_wwnns()
            if not self._fc_wwnns:
LOG.debug('Could not determine fibre channel '
'world wide node names',
instance=instance)
if not self._fc_wwpns:
self._fc_wwpns = libvirt_utils.get_fc_wwpns()
            if not self._fc_wwpns:
LOG.debug('Could not determine fibre channel '
'world wide port names',
instance=instance)
connector = {'ip': CONF.my_ip,
'host': CONF.host}
if self._initiator:
connector['initiator'] = self._initiator
if self._fc_wwnns and self._fc_wwpns:
connector["wwnns"] = self._fc_wwnns
connector["wwpns"] = self._fc_wwpns
return connector
def _cleanup_resize(self, instance, network_info):
        # NOTE(wangpan): we get the pre-grizzly instance path first,
        #                so the backup dir of a pre-grizzly instance can
        #                be deleted correctly with grizzly or later nova.
pre_grizzly_name = libvirt_utils.get_instance_path(instance,
forceold=True)
target = pre_grizzly_name + '_resize'
if not os.path.exists(target):
target = libvirt_utils.get_instance_path(instance) + '_resize'
if os.path.exists(target):
            # Deletion can fail over NFS, so retry the deletion as required.
            # Set the maximum number of attempts to 5; in most cases the
            # directory can be removed on the second attempt.
utils.execute('rm', '-rf', target, delay_on_retry=True,
attempts=5)
if instance['host'] != CONF.host:
self._undefine_domain(instance)
self.unplug_vifs(instance, network_info)
self.firewall_driver.unfilter_instance(instance, network_info)
def _connect_volume(self, connection_info, disk_info):
driver_type = connection_info.get('driver_volume_type')
if driver_type not in self.volume_drivers:
raise exception.VolumeDriverNotFound(driver_type=driver_type)
driver = self.volume_drivers[driver_type]
return driver.connect_volume(connection_info, disk_info)
def _disconnect_volume(self, connection_info, disk_dev):
driver_type = connection_info.get('driver_volume_type')
if driver_type not in self.volume_drivers:
raise exception.VolumeDriverNotFound(driver_type=driver_type)
driver = self.volume_drivers[driver_type]
return driver.disconnect_volume(connection_info, disk_dev)
def _get_volume_encryptor(self, connection_info, encryption):
encryptor = encryptors.get_volume_encryptor(connection_info,
**encryption)
return encryptor
def attach_volume(self, context, connection_info, instance, mountpoint,
disk_bus=None, device_type=None, encryption=None):
instance_name = instance['name']
virt_dom = self._lookup_by_name(instance_name)
disk_dev = mountpoint.rpartition("/")[2]
bdm = {
'device_name': disk_dev,
'disk_bus': disk_bus,
'device_type': device_type}
        # Note(cfb): If the volume has a custom block size, check that
        #            we are using QEMU/KVM and libvirt >= 0.10.2. The
# presence of a block size is considered mandatory by
# cinder so we fail if we can't honor the request.
data = {}
        if 'data' in connection_info:
            data = connection_info['data']
        if 'logical_block_size' in data or 'physical_block_size' in data:
            if CONF.libvirt.virt_type not in ("kvm", "qemu"):
msg = _("Volume sets block size, but the current "
"libvirt hypervisor '%s' does not support custom "
"block size") % CONF.libvirt.virt_type
raise exception.InvalidHypervisorType(msg)
if not self._has_min_version(MIN_LIBVIRT_BLOCKIO_VERSION):
ver = ".".join([str(x) for x in MIN_LIBVIRT_BLOCKIO_VERSION])
msg = _("Volume sets block size, but libvirt '%s' or later is "
"required.") % ver
raise exception.Invalid(msg)
disk_info = blockinfo.get_info_from_bdm(CONF.libvirt.virt_type, bdm)
conf = self._connect_volume(connection_info, disk_info)
self._set_cache_mode(conf)
try:
# NOTE(vish): We can always affect config because our
# domains are persistent, but we should only
# affect live if the domain is running.
flags = libvirt.VIR_DOMAIN_AFFECT_CONFIG
state = LIBVIRT_POWER_STATE[virt_dom.info()[0]]
if state in (power_state.RUNNING, power_state.PAUSED):
flags |= libvirt.VIR_DOMAIN_AFFECT_LIVE
# cache device_path in connection_info -- required by encryptors
if 'data' in connection_info:
connection_info['data']['device_path'] = conf.source_path
if encryption:
encryptor = self._get_volume_encryptor(connection_info,
encryption)
encryptor.attach_volume(context, **encryption)
virt_dom.attachDeviceFlags(conf.to_xml(), flags)
except Exception as ex:
if isinstance(ex, libvirt.libvirtError):
errcode = ex.get_error_code()
if errcode == libvirt.VIR_ERR_OPERATION_FAILED:
self._disconnect_volume(connection_info, disk_dev)
raise exception.DeviceIsBusy(device=disk_dev)
with excutils.save_and_reraise_exception():
self._disconnect_volume(connection_info, disk_dev)
def _swap_volume(self, domain, disk_path, new_path, resize_to):
"""Swap existing disk with a new block device."""
# Save a copy of the domain's persistent XML file
xml = domain.XMLDesc(
libvirt.VIR_DOMAIN_XML_INACTIVE |
libvirt.VIR_DOMAIN_XML_SECURE)
# Abort is an idempotent operation, so make sure any block
# jobs which may have failed are ended.
try:
domain.blockJobAbort(disk_path, 0)
except Exception:
pass
try:
# NOTE (rmk): blockRebase cannot be executed on persistent
# domains, so we need to temporarily undefine it.
# If any part of this block fails, the domain is
# re-defined regardless.
if domain.isPersistent():
domain.undefine()
# Start copy with VIR_DOMAIN_REBASE_REUSE_EXT flag to
# allow writing to existing external volume file
domain.blockRebase(disk_path, new_path, 0,
libvirt.VIR_DOMAIN_BLOCK_REBASE_COPY |
libvirt.VIR_DOMAIN_BLOCK_REBASE_REUSE_EXT)
while self._wait_for_block_job(domain, disk_path):
time.sleep(0.5)
domain.blockJobAbort(disk_path,
libvirt.VIR_DOMAIN_BLOCK_JOB_ABORT_PIVOT)
if resize_to:
                # NOTE(alex_xu): domain.blockJobAbort isn't a sync call. This
                # is a bug in libvirt, so we need to wait until the pivot
                # has finished. libvirt bug #1119173
while self._wait_for_block_job(domain, disk_path,
wait_for_job_clean=True):
time.sleep(0.5)
domain.blockResize(disk_path, resize_to * units.Gi / units.Ki)
finally:
self._conn.defineXML(xml)
def swap_volume(self, old_connection_info,
new_connection_info, instance, mountpoint, resize_to):
instance_name = instance['name']
virt_dom = self._lookup_by_name(instance_name)
disk_dev = mountpoint.rpartition("/")[2]
xml = self._get_disk_xml(virt_dom.XMLDesc(0), disk_dev)
if not xml:
raise exception.DiskNotFound(location=disk_dev)
disk_info = {
'dev': disk_dev,
'bus': blockinfo.get_disk_bus_for_disk_dev(
CONF.libvirt.virt_type, disk_dev),
'type': 'disk',
}
conf = self._connect_volume(new_connection_info, disk_info)
if not conf.source_path:
self._disconnect_volume(new_connection_info, disk_dev)
raise NotImplementedError(_("Swap only supports host devices"))
self._swap_volume(virt_dom, disk_dev, conf.source_path, resize_to)
self._disconnect_volume(old_connection_info, disk_dev)
@staticmethod
def _get_disk_xml(xml, device):
"""Returns the xml for the disk mounted at device."""
try:
doc = etree.fromstring(xml)
except Exception:
return None
ret = doc.findall('./devices/disk')
for node in ret:
            for child in node:  # getchildren() is deprecated; iterate directly
if child.tag == 'target':
if child.get('dev') == device:
return etree.tostring(node)
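    # _get_disk_xml() above matches a <disk> element of the domain XML by its
    # <target dev=...> attribute. A minimal input (sketch only; the file path
    # is hypothetical) looks roughly like:
    #
    #   <domain>
    #     <devices>
    #       <disk type='file' device='disk'>
    #         <source file='/var/lib/nova/instances/some-uuid/disk'/>
    #         <target dev='vda' bus='virtio'/>
    #       </disk>
    #     </devices>
    #   </domain>
    #
    # _get_disk_xml(xml, 'vda') returns the serialized <disk> element; an
    # unknown device name (or unparseable XML) falls through and yields None.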
def _get_existing_domain_xml(self, instance, network_info,
block_device_info=None):
try:
virt_dom = self._lookup_by_name(instance['name'])
xml = virt_dom.XMLDesc(0)
except exception.InstanceNotFound:
disk_info = blockinfo.get_disk_info(CONF.libvirt.virt_type,
instance,
block_device_info)
xml = self._get_guest_xml(nova_context.get_admin_context(),
instance, network_info, disk_info,
block_device_info=block_device_info)
return xml
def detach_volume(self, connection_info, instance, mountpoint,
encryption=None):
instance_name = instance['name']
disk_dev = mountpoint.rpartition("/")[2]
try:
virt_dom = self._lookup_by_name(instance_name)
xml = self._get_disk_xml(virt_dom.XMLDesc(0), disk_dev)
if not xml:
raise exception.DiskNotFound(location=disk_dev)
else:
# NOTE(vish): We can always affect config because our
# domains are persistent, but we should only
# affect live if the domain is running.
flags = libvirt.VIR_DOMAIN_AFFECT_CONFIG
state = LIBVIRT_POWER_STATE[virt_dom.info()[0]]
if state in (power_state.RUNNING, power_state.PAUSED):
flags |= libvirt.VIR_DOMAIN_AFFECT_LIVE
virt_dom.detachDeviceFlags(xml, flags)
if encryption:
# The volume must be detached from the VM before
# disconnecting it from its encryptor. Otherwise, the
# encryptor may report that the volume is still in use.
encryptor = self._get_volume_encryptor(connection_info,
encryption)
encryptor.detach_volume(**encryption)
except exception.InstanceNotFound:
# NOTE(zhaoqin): If the instance does not exist, _lookup_by_name()
# will throw InstanceNotFound exception. Need to
# disconnect volume under this circumstance.
LOG.warn(_LW("During detach_volume, instance disappeared."))
except libvirt.libvirtError as ex:
# NOTE(vish): This is called to cleanup volumes after live
# migration, so we should still disconnect even if
# the instance doesn't exist here anymore.
error_code = ex.get_error_code()
if error_code == libvirt.VIR_ERR_NO_DOMAIN:
LOG.warn(_LW("During detach_volume, instance disappeared."))
else:
raise
self._disconnect_volume(connection_info, disk_dev)
def attach_interface(self, instance, image_meta, vif):
virt_dom = self._lookup_by_name(instance['name'])
flavor = objects.Flavor.get_by_id(
nova_context.get_admin_context(read_deleted='yes'),
instance['instance_type_id'])
self.vif_driver.plug(instance, vif)
self.firewall_driver.setup_basic_filtering(instance, [vif])
cfg = self.vif_driver.get_config(instance, vif, image_meta,
flavor, CONF.libvirt.virt_type)
try:
flags = libvirt.VIR_DOMAIN_AFFECT_CONFIG
state = LIBVIRT_POWER_STATE[virt_dom.info()[0]]
            if state in (power_state.RUNNING, power_state.PAUSED):
flags |= libvirt.VIR_DOMAIN_AFFECT_LIVE
virt_dom.attachDeviceFlags(cfg.to_xml(), flags)
except libvirt.libvirtError:
LOG.error(_LE('attaching network adapter failed.'),
instance=instance)
self.vif_driver.unplug(instance, vif)
raise exception.InterfaceAttachFailed(
instance_uuid=instance['uuid'])
def detach_interface(self, instance, vif):
virt_dom = self._lookup_by_name(instance['name'])
flavor = objects.Flavor.get_by_id(
nova_context.get_admin_context(read_deleted='yes'),
instance['instance_type_id'])
cfg = self.vif_driver.get_config(instance, vif, None, flavor,
CONF.libvirt.virt_type)
try:
self.vif_driver.unplug(instance, vif)
flags = libvirt.VIR_DOMAIN_AFFECT_CONFIG
state = LIBVIRT_POWER_STATE[virt_dom.info()[0]]
            if state in (power_state.RUNNING, power_state.PAUSED):
flags |= libvirt.VIR_DOMAIN_AFFECT_LIVE
virt_dom.detachDeviceFlags(cfg.to_xml(), flags)
except libvirt.libvirtError as ex:
error_code = ex.get_error_code()
if error_code == libvirt.VIR_ERR_NO_DOMAIN:
LOG.warn(_LW("During detach_interface, "
"instance disappeared."),
instance=instance)
else:
LOG.error(_LE('detaching network adapter failed.'),
instance=instance)
raise exception.InterfaceDetachFailed(
instance_uuid=instance['uuid'])
def _create_snapshot_metadata(self, base, instance, img_fmt, snp_name):
metadata = {'is_public': False,
'status': 'active',
'name': snp_name,
'properties': {
'kernel_id': instance['kernel_id'],
'image_location': 'snapshot',
'image_state': 'available',
'owner_id': instance['project_id'],
'ramdisk_id': instance['ramdisk_id'],
}
}
if instance['os_type']:
metadata['properties']['os_type'] = instance['os_type']
# NOTE(vish): glance forces ami disk format to be ami
if base.get('disk_format') == 'ami':
metadata['disk_format'] = 'ami'
else:
metadata['disk_format'] = img_fmt
metadata['container_format'] = base.get('container_format', 'bare')
return metadata
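    # Shape of the dict returned by _create_snapshot_metadata() above (field
    # values are illustrative):
    #
    #   {'is_public': False,
    #    'status': 'active',
    #    'name': 'my-snap',
    #    'disk_format': 'qcow2',      # forced to 'ami' if the base is ami
    #    'container_format': 'bare',  # inherited from the base image
    #    'properties': {'kernel_id': ..., 'image_location': 'snapshot',
    #                   'image_state': 'available', 'owner_id': ...,
    #                   'ramdisk_id': ..., 'os_type': 'linux'}}
    #
    # 'os_type' is only present when set on the instance.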
def snapshot(self, context, instance, image_id, update_task_state):
"""Create snapshot from a running VM instance.
This command only works with qemu 0.14+
"""
try:
virt_dom = self._lookup_by_name(instance['name'])
except exception.InstanceNotFound:
raise exception.InstanceNotRunning(instance_id=instance['uuid'])
base_image_ref = instance['image_ref']
base = compute_utils.get_image_metadata(
context, self._image_api, base_image_ref, instance)
snapshot = self._image_api.get(context, image_id)
disk_path = libvirt_utils.find_disk(virt_dom)
source_format = libvirt_utils.get_disk_type(disk_path)
image_format = CONF.libvirt.snapshot_image_format or source_format
# NOTE(bfilippov): save lvm and rbd as raw
        if image_format in ('lvm', 'rbd'):
image_format = 'raw'
metadata = self._create_snapshot_metadata(base,
instance,
image_format,
snapshot['name'])
snapshot_name = uuid.uuid4().hex
state = LIBVIRT_POWER_STATE[virt_dom.info()[0]]
# NOTE(rmk): Live snapshots require QEMU 1.3 and Libvirt 1.0.0.
# These restrictions can be relaxed as other configurations
# can be validated.
        if (self._has_min_version(MIN_LIBVIRT_LIVESNAPSHOT_VERSION,
                                  MIN_QEMU_LIVESNAPSHOT_VERSION,
                                  REQ_HYPERVISOR_LIVESNAPSHOT)
                and source_format not in ('lvm', 'rbd')):
live_snapshot = True
# Abort is an idempotent operation, so make sure any block
# jobs which may have failed are ended. This operation also
# confirms the running instance, as opposed to the system as a
# whole, has a new enough version of the hypervisor (bug 1193146).
try:
virt_dom.blockJobAbort(disk_path, 0)
except libvirt.libvirtError as ex:
error_code = ex.get_error_code()
if error_code == libvirt.VIR_ERR_CONFIG_UNSUPPORTED:
live_snapshot = False
else:
live_snapshot = False
# NOTE(rmk): We cannot perform live snapshots when a managedSave
# file is present, so we will use the cold/legacy method
# for instances which are shutdown.
if state == power_state.SHUTDOWN:
live_snapshot = False
# NOTE(dkang): managedSave does not work for LXC
if CONF.libvirt.virt_type != 'lxc' and not live_snapshot:
            if state in (power_state.RUNNING, power_state.PAUSED):
self._detach_pci_devices(virt_dom,
pci_manager.get_instance_pci_devs(instance))
virt_dom.managedSave(0)
snapshot_backend = self.image_backend.snapshot(disk_path,
image_type=source_format)
if live_snapshot:
LOG.info(_LI("Beginning live snapshot process"),
instance=instance)
else:
LOG.info(_LI("Beginning cold snapshot process"),
instance=instance)
update_task_state(task_state=task_states.IMAGE_PENDING_UPLOAD)
snapshot_directory = CONF.libvirt.snapshots_directory
fileutils.ensure_tree(snapshot_directory)
with utils.tempdir(dir=snapshot_directory) as tmpdir:
try:
out_path = os.path.join(tmpdir, snapshot_name)
if live_snapshot:
# NOTE(xqueralt): libvirt needs o+x in the temp directory
os.chmod(tmpdir, 0o701)
self._live_snapshot(virt_dom, disk_path, out_path,
image_format)
else:
snapshot_backend.snapshot_extract(out_path, image_format)
finally:
new_dom = None
# NOTE(dkang): because previous managedSave is not called
# for LXC, _create_domain must not be called.
if CONF.libvirt.virt_type != 'lxc' and not live_snapshot:
if state == power_state.RUNNING:
new_dom = self._create_domain(domain=virt_dom)
elif state == power_state.PAUSED:
new_dom = self._create_domain(domain=virt_dom,
launch_flags=libvirt.VIR_DOMAIN_START_PAUSED)
if new_dom is not None:
self._attach_pci_devices(new_dom,
pci_manager.get_instance_pci_devs(instance))
LOG.info(_LI("Snapshot extracted, beginning image upload"),
instance=instance)
# Upload that image to the image service
update_task_state(task_state=task_states.IMAGE_UPLOADING,
expected_state=task_states.IMAGE_PENDING_UPLOAD)
with libvirt_utils.file_open(out_path) as image_file:
self._image_api.update(context,
image_id,
metadata,
image_file)
LOG.info(_LI("Snapshot image upload complete"),
instance=instance)
@staticmethod
def _wait_for_block_job(domain, disk_path, abort_on_error=False,
wait_for_job_clean=False):
"""Wait for libvirt block job to complete.
Libvirt may return either cur==end or an empty dict when
the job is complete, depending on whether the job has been
cleaned up by libvirt yet, or not.
:returns: True if still in progress
False if completed
"""
status = domain.blockJobInfo(disk_path, 0)
if status == -1 and abort_on_error:
msg = _('libvirt error while requesting blockjob info.')
raise exception.NovaException(msg)
try:
cur = status.get('cur', 0)
end = status.get('end', 0)
except Exception:
return False
if wait_for_job_clean:
job_ended = not status
else:
job_ended = cur == end
return not job_ended
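    # _wait_for_block_job() semantics, sketched with hypothetical return
    # values from domain.blockJobInfo():
    #
    #   {'cur': 100, 'end': 512} -> still copying, returns True
    #   {'cur': 512, 'end': 512} -> copy finished, returns False
    #   {}                       -> job already cleaned up, returns False
    #   -1 (error)               -> raises NovaException if abort_on_error,
    #                               otherwise returns False
    #
    # With wait_for_job_clean=True, only the empty dict counts as done, which
    # is what _swap_volume relies on after a blockJobAbort pivot.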
def _live_snapshot(self, domain, disk_path, out_path, image_format):
"""Snapshot an instance without downtime."""
# Save a copy of the domain's persistent XML file
xml = domain.XMLDesc(
libvirt.VIR_DOMAIN_XML_INACTIVE |
libvirt.VIR_DOMAIN_XML_SECURE)
# Abort is an idempotent operation, so make sure any block
# jobs which may have failed are ended.
try:
domain.blockJobAbort(disk_path, 0)
except Exception:
pass
# NOTE (rmk): We are using shallow rebases as a workaround to a bug
# in QEMU 1.3. In order to do this, we need to create
# a destination image with the original backing file
# and matching size of the instance root disk.
src_disk_size = libvirt_utils.get_disk_size(disk_path)
src_back_path = libvirt_utils.get_disk_backing_file(disk_path,
basename=False)
disk_delta = out_path + '.delta'
libvirt_utils.create_cow_image(src_back_path, disk_delta,
src_disk_size)
try:
# NOTE (rmk): blockRebase cannot be executed on persistent
# domains, so we need to temporarily undefine it.
# If any part of this block fails, the domain is
# re-defined regardless.
if domain.isPersistent():
domain.undefine()
# NOTE (rmk): Establish a temporary mirror of our root disk and
# issue an abort once we have a complete copy.
domain.blockRebase(disk_path, disk_delta, 0,
libvirt.VIR_DOMAIN_BLOCK_REBASE_COPY |
libvirt.VIR_DOMAIN_BLOCK_REBASE_REUSE_EXT |
libvirt.VIR_DOMAIN_BLOCK_REBASE_SHALLOW)
while self._wait_for_block_job(domain, disk_path):
time.sleep(0.5)
domain.blockJobAbort(disk_path, 0)
libvirt_utils.chown(disk_delta, os.getuid())
finally:
self._conn.defineXML(xml)
# Convert the delta (CoW) image with a backing file to a flat
# image with no backing file.
libvirt_utils.extract_snapshot(disk_delta, 'qcow2',
out_path, image_format)
def _volume_snapshot_update_status(self, context, snapshot_id, status):
"""Send a snapshot status update to Cinder.
This method captures and logs exceptions that occur
since callers cannot do anything useful with these exceptions.
Operations on the Cinder side waiting for this will time out if
a failure occurs sending the update.
:param context: security context
:param snapshot_id: id of snapshot being updated
:param status: new status value
"""
try:
self._volume_api.update_snapshot_status(context,
snapshot_id,
status)
except Exception:
LOG.exception(_LE('Failed to send updated snapshot status '
'to volume service.'))
def _volume_snapshot_create(self, context, instance, domain,
volume_id, snapshot_id, new_file):
"""Perform volume snapshot.
:param domain: VM that volume is attached to
:param volume_id: volume UUID to snapshot
:param snapshot_id: UUID of snapshot being created
:param new_file: relative path to new qcow2 file present on share
"""
xml = domain.XMLDesc(0)
xml_doc = etree.fromstring(xml)
device_info = vconfig.LibvirtConfigGuest()
device_info.parse_dom(xml_doc)
disks_to_snap = [] # to be snapshotted by libvirt
network_disks_to_snap = [] # network disks (netfs, gluster, etc.)
disks_to_skip = [] # local disks not snapshotted
for guest_disk in device_info.devices:
if (guest_disk.root_name != 'disk'):
continue
if (guest_disk.target_dev is None):
continue
if (guest_disk.serial is None or guest_disk.serial != volume_id):
disks_to_skip.append(guest_disk.target_dev)
continue
# disk is a Cinder volume with the correct volume_id
disk_info = {
'dev': guest_disk.target_dev,
'serial': guest_disk.serial,
'current_file': guest_disk.source_path,
'source_protocol': guest_disk.source_protocol,
'source_name': guest_disk.source_name,
'source_hosts': guest_disk.source_hosts,
'source_ports': guest_disk.source_ports
}
# Determine path for new_file based on current path
if disk_info['current_file'] is not None:
current_file = disk_info['current_file']
new_file_path = os.path.join(os.path.dirname(current_file),
new_file)
disks_to_snap.append((current_file, new_file_path))
elif disk_info['source_protocol'] in ('gluster', 'netfs'):
network_disks_to_snap.append((disk_info, new_file))
if not disks_to_snap and not network_disks_to_snap:
msg = _('Found no disk to snapshot.')
raise exception.NovaException(msg)
snapshot = vconfig.LibvirtConfigGuestSnapshot()
for current_name, new_filename in disks_to_snap:
snap_disk = vconfig.LibvirtConfigGuestSnapshotDisk()
snap_disk.name = current_name
snap_disk.source_path = new_filename
snap_disk.source_type = 'file'
snap_disk.snapshot = 'external'
snap_disk.driver_name = 'qcow2'
snapshot.add_disk(snap_disk)
for disk_info, new_filename in network_disks_to_snap:
snap_disk = vconfig.LibvirtConfigGuestSnapshotDisk()
snap_disk.name = disk_info['dev']
snap_disk.source_type = 'network'
snap_disk.source_protocol = disk_info['source_protocol']
snap_disk.snapshot = 'external'
snap_disk.source_path = new_filename
old_dir = disk_info['source_name'].split('/')[0]
snap_disk.source_name = '%s/%s' % (old_dir, new_filename)
snap_disk.source_hosts = disk_info['source_hosts']
snap_disk.source_ports = disk_info['source_ports']
snapshot.add_disk(snap_disk)
for dev in disks_to_skip:
snap_disk = vconfig.LibvirtConfigGuestSnapshotDisk()
snap_disk.name = dev
snap_disk.snapshot = 'no'
snapshot.add_disk(snap_disk)
snapshot_xml = snapshot.to_xml()
LOG.debug("snap xml: %s", snapshot_xml)
snap_flags = (libvirt.VIR_DOMAIN_SNAPSHOT_CREATE_DISK_ONLY |
libvirt.VIR_DOMAIN_SNAPSHOT_CREATE_NO_METADATA |
libvirt.VIR_DOMAIN_SNAPSHOT_CREATE_REUSE_EXT)
QUIESCE = libvirt.VIR_DOMAIN_SNAPSHOT_CREATE_QUIESCE
try:
domain.snapshotCreateXML(snapshot_xml,
snap_flags | QUIESCE)
return
except libvirt.libvirtError:
LOG.exception(_LE('Unable to create quiesced VM snapshot, '
'attempting again with quiescing disabled.'))
try:
domain.snapshotCreateXML(snapshot_xml, snap_flags)
except libvirt.libvirtError:
LOG.exception(_LE('Unable to create VM snapshot, '
'failing volume_snapshot operation.'))
raise
def _volume_refresh_connection_info(self, context, instance, volume_id):
bdm = objects.BlockDeviceMapping.get_by_volume_id(context,
volume_id)
driver_bdm = driver_block_device.DriverVolumeBlockDevice(bdm)
driver_bdm.refresh_connection_info(context, instance,
self._volume_api, self)
def volume_snapshot_create(self, context, instance, volume_id,
create_info):
"""Create snapshots of a Cinder volume via libvirt.
:param instance: VM instance object reference
:param volume_id: id of volume being snapshotted
:param create_info: dict of information used to create snapshots
- snapshot_id : ID of snapshot
- type : qcow2 / <other>
- new_file : qcow2 file created by Cinder which
becomes the VM's active image after
the snapshot is complete
"""
LOG.debug("volume_snapshot_create: create_info: %(c_info)s",
{'c_info': create_info}, instance=instance)
try:
virt_dom = self._lookup_by_name(instance.name)
except exception.InstanceNotFound:
raise exception.InstanceNotRunning(instance_id=instance.uuid)
if create_info['type'] != 'qcow2':
raise exception.NovaException(_('Unknown type: %s') %
create_info['type'])
snapshot_id = create_info.get('snapshot_id', None)
if snapshot_id is None:
raise exception.NovaException(_('snapshot_id required '
'in create_info'))
try:
self._volume_snapshot_create(context, instance, virt_dom,
volume_id, snapshot_id,
create_info['new_file'])
except Exception:
with excutils.save_and_reraise_exception():
LOG.exception(_LE('Error occurred during '
'volume_snapshot_create, '
'sending error status to Cinder.'))
self._volume_snapshot_update_status(
context, snapshot_id, 'error')
self._volume_snapshot_update_status(
context, snapshot_id, 'creating')
def _wait_for_snapshot():
snapshot = self._volume_api.get_snapshot(context, snapshot_id)
if snapshot.get('status') != 'creating':
self._volume_refresh_connection_info(context, instance,
volume_id)
raise loopingcall.LoopingCallDone()
timer = loopingcall.FixedIntervalLoopingCall(_wait_for_snapshot)
timer.start(interval=0.5).wait()
def _volume_snapshot_delete(self, context, instance, volume_id,
snapshot_id, delete_info=None):
"""Note:
if file being merged into == active image:
do a blockRebase (pull) operation
else:
do a blockCommit operation
Files must be adjacent in snap chain.
:param instance: instance object reference
:param volume_id: volume UUID
:param snapshot_id: snapshot UUID (unused currently)
:param delete_info: {
'type': 'qcow2',
'file_to_merge': 'a.img',
'merge_target_file': 'b.img' or None (if merging file_to_merge into
active image)
}
Libvirt blockjob handling required for this method is broken
in versions of libvirt that do not contain:
http://libvirt.org/git/?p=libvirt.git;h=0f9e67bfad (1.1.1)
(Patch is pending in 1.0.5-maint branch as well, but we cannot detect
libvirt 1.0.5.5 vs. 1.0.5.6 here.)
"""
if not self._has_min_version(MIN_LIBVIRT_BLOCKJOBINFO_VERSION):
ver = '.'.join([str(x) for x in MIN_LIBVIRT_BLOCKJOBINFO_VERSION])
msg = _("Libvirt '%s' or later is required for online deletion "
"of volume snapshots.") % ver
raise exception.Invalid(msg)
LOG.debug('volume_snapshot_delete: delete_info: %s', delete_info)
if delete_info['type'] != 'qcow2':
msg = _('Unknown delete_info type %s') % delete_info['type']
raise exception.NovaException(msg)
try:
virt_dom = self._lookup_by_name(instance.name)
except exception.InstanceNotFound:
raise exception.InstanceNotRunning(instance_id=instance.uuid)
# Find dev name
my_dev = None
active_disk = None
xml = virt_dom.XMLDesc(0)
xml_doc = etree.fromstring(xml)
device_info = vconfig.LibvirtConfigGuest()
device_info.parse_dom(xml_doc)
active_disk_object = None
for guest_disk in device_info.devices:
if (guest_disk.root_name != 'disk'):
continue
if (guest_disk.target_dev is None or guest_disk.serial is None):
continue
if guest_disk.serial == volume_id:
my_dev = guest_disk.target_dev
active_disk = guest_disk.source_path
active_protocol = guest_disk.source_protocol
active_disk_object = guest_disk
break
if my_dev is None or (active_disk is None and active_protocol is None):
msg = _('Disk with id: %s '
'not found attached to instance.') % volume_id
LOG.debug('Domain XML: %s', xml)
raise exception.NovaException(msg)
LOG.debug("found device at %s", my_dev)
def _get_snap_dev(filename, backing_store):
if filename is None:
msg = _('filename cannot be None')
raise exception.NovaException(msg)
# libgfapi delete
LOG.debug("XML: %s" % xml)
LOG.debug("active disk object: %s" % active_disk_object)
# determine reference within backing store for desired image
filename_to_merge = filename
matched_name = None
b = backing_store
index = None
current_filename = active_disk_object.source_name.split('/')[1]
if current_filename == filename_to_merge:
return my_dev + '[0]'
while b is not None:
source_filename = b.source_name.split('/')[1]
if source_filename == filename_to_merge:
                    LOG.debug('found match: %s', b.source_name)
matched_name = b.source_name
index = b.index
break
b = b.backing_store
if matched_name is None:
msg = _('no match found for %s') % (filename_to_merge)
raise exception.NovaException(msg)
            LOG.debug('index of match (%s) is %s', b.source_name, index)
my_snap_dev = '%s[%s]' % (my_dev, index)
return my_snap_dev
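        # Sketch of what _get_snap_dev returns (hypothetical names): with
        # my_dev = 'vda' and a backing chain whose index-2 entry is named
        # 'vol-uuid/volume-1.snap1', asking for 'volume-1.snap1' yields
        # 'vda[2]' -- the indexed device spec libvirt accepts for
        # blockRebase()/blockCommit() against network-backed disks.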
if delete_info['merge_target_file'] is None:
# pull via blockRebase()
# Merge the most recent snapshot into the active image
rebase_disk = my_dev
rebase_flags = 0
rebase_base = delete_info['file_to_merge'] # often None
if active_protocol is not None:
rebase_base = _get_snap_dev(delete_info['file_to_merge'],
active_disk_object.backing_store)
rebase_bw = 0
LOG.debug('disk: %(disk)s, base: %(base)s, '
'bw: %(bw)s, flags: %(flags)s',
{'disk': rebase_disk,
'base': rebase_base,
'bw': rebase_bw,
'flags': rebase_flags})
result = virt_dom.blockRebase(rebase_disk, rebase_base,
rebase_bw, rebase_flags)
if result == 0:
LOG.debug('blockRebase started successfully')
while self._wait_for_block_job(virt_dom, my_dev,
abort_on_error=True):
LOG.debug('waiting for blockRebase job completion')
time.sleep(0.5)
else:
# commit with blockCommit()
my_snap_base = None
my_snap_top = None
commit_disk = my_dev
commit_flags = 0
if active_protocol is not None:
my_snap_base = _get_snap_dev(delete_info['merge_target_file'],
active_disk_object.backing_store)
my_snap_top = _get_snap_dev(delete_info['file_to_merge'],
active_disk_object.backing_store)
try:
commit_flags |= libvirt.VIR_DOMAIN_BLOCK_COMMIT_RELATIVE
except AttributeError:
ver = '.'.join(
[str(x) for x in
MIN_LIBVIRT_BLOCKCOMMIT_RELATIVE_VERSION])
msg = _("Relative blockcommit support was not detected. "
"Libvirt '%s' or later is required for online "
"deletion of network storage-backed volume "
"snapshots.") % ver
raise exception.Invalid(msg)
commit_base = my_snap_base or delete_info['merge_target_file']
commit_top = my_snap_top or delete_info['file_to_merge']
bandwidth = 0
LOG.debug('will call blockCommit with commit_disk=%(commit_disk)s '
'commit_base=%(commit_base)s '
'commit_top=%(commit_top)s '
% {'commit_disk': commit_disk,
'commit_base': commit_base,
'commit_top': commit_top})
result = virt_dom.blockCommit(commit_disk, commit_base, commit_top,
bandwidth, commit_flags)
if result == 0:
LOG.debug('blockCommit started successfully')
while self._wait_for_block_job(virt_dom, my_dev,
abort_on_error=True):
LOG.debug('waiting for blockCommit job completion')
time.sleep(0.5)
def volume_snapshot_delete(self, context, instance, volume_id, snapshot_id,
delete_info):
try:
self._volume_snapshot_delete(context, instance, volume_id,
snapshot_id, delete_info=delete_info)
except Exception:
with excutils.save_and_reraise_exception():
LOG.exception(_LE('Error occurred during '
'volume_snapshot_delete, '
'sending error status to Cinder.'))
self._volume_snapshot_update_status(
context, snapshot_id, 'error_deleting')
self._volume_snapshot_update_status(context, snapshot_id, 'deleting')
self._volume_refresh_connection_info(context, instance, volume_id)
def reboot(self, context, instance, network_info, reboot_type,
block_device_info=None, bad_volumes_callback=None):
"""Reboot a virtual machine, given an instance reference."""
if reboot_type == 'SOFT':
# NOTE(vish): This will attempt to do a graceful shutdown/restart.
try:
soft_reboot_success = self._soft_reboot(instance)
except libvirt.libvirtError as e:
LOG.debug("Instance soft reboot failed: %s", e)
soft_reboot_success = False
if soft_reboot_success:
LOG.info(_LI("Instance soft rebooted successfully."),
instance=instance)
return
else:
LOG.warn(_LW("Failed to soft reboot instance. "
"Trying hard reboot."),
instance=instance)
return self._hard_reboot(context, instance, network_info,
block_device_info)
def _soft_reboot(self, instance):
"""Attempt to shutdown and restart the instance gracefully.
We use shutdown and create here so we can return if the guest
responded and actually rebooted. Note that this method only
        succeeds if the guest responds to ACPI. Therefore we return
success or failure so we can fall back to a hard reboot if
necessary.
:returns: True if the reboot succeeded
"""
dom = self._lookup_by_name(instance["name"])
state = LIBVIRT_POWER_STATE[dom.info()[0]]
old_domid = dom.ID()
# NOTE(vish): This check allows us to reboot an instance that
# is already shutdown.
if state == power_state.RUNNING:
dom.shutdown()
# NOTE(vish): This actually could take slightly longer than the
# FLAG defines depending on how long the get_info
# call takes to return.
self._prepare_pci_devices_for_use(
pci_manager.get_instance_pci_devs(instance))
for x in xrange(CONF.libvirt.wait_soft_reboot_seconds):
dom = self._lookup_by_name(instance["name"])
state = LIBVIRT_POWER_STATE[dom.info()[0]]
new_domid = dom.ID()
# NOTE(ivoks): By checking domain IDs, we make sure we are
# not recreating domain that's already running.
if old_domid != new_domid:
if state in [power_state.SHUTDOWN,
power_state.CRASHED]:
LOG.info(_LI("Instance shutdown successfully."),
instance=instance)
self._create_domain(domain=dom)
timer = loopingcall.FixedIntervalLoopingCall(
self._wait_for_running, instance)
timer.start(interval=0.5).wait()
return True
else:
LOG.info(_LI("Instance may have been rebooted during soft "
"reboot, so return now."), instance=instance)
return True
greenthread.sleep(1)
return False
def _hard_reboot(self, context, instance, network_info,
block_device_info=None):
"""Reboot a virtual machine, given an instance reference.
Performs a Libvirt reset (if supported) on the domain.
If Libvirt reset is unavailable this method actually destroys and
re-creates the domain to ensure the reboot happens, as the guest
OS cannot ignore this action.
If xml is set, it uses the passed in xml in place of the xml from the
existing domain.
"""
self._destroy(instance)
# Get the system metadata from the instance
system_meta = utils.instance_sys_meta(instance)
# Convert the system metadata to image metadata
image_meta = utils.get_image_from_system_metadata(system_meta)
if not image_meta:
image_ref = instance.get('image_ref')
image_meta = compute_utils.get_image_metadata(context,
self._image_api,
image_ref,
instance)
disk_info = blockinfo.get_disk_info(CONF.libvirt.virt_type,
instance,
block_device_info,
image_meta)
# NOTE(vish): This could generate the wrong device_format if we are
# using the raw backend and the images don't exist yet.
# The create_images_and_backing below doesn't properly
# regenerate raw backend images, however, so when it
# does we need to (re)generate the xml after the images
# are in place.
xml = self._get_guest_xml(context, instance, network_info, disk_info,
image_meta=image_meta,
block_device_info=block_device_info,
write_to_disk=True)
# NOTE (rmk): Re-populate any missing backing files.
disk_info_json = self._get_instance_disk_info(instance['name'], xml,
block_device_info)
instance_dir = libvirt_utils.get_instance_path(instance)
self._create_images_and_backing(context, instance, instance_dir,
disk_info_json)
# Initialize all the necessary networking, block devices and
# start the instance.
self._create_domain_and_network(context, xml, instance, network_info,
block_device_info, reboot=True,
vifs_already_plugged=True)
self._prepare_pci_devices_for_use(
pci_manager.get_instance_pci_devs(instance))
def _wait_for_reboot():
"""Called at an interval until the VM is running again."""
state = self.get_info(instance)['state']
if state == power_state.RUNNING:
LOG.info(_LI("Instance rebooted successfully."),
instance=instance)
raise loopingcall.LoopingCallDone()
timer = loopingcall.FixedIntervalLoopingCall(_wait_for_reboot)
timer.start(interval=0.5).wait()
def pause(self, instance):
"""Pause VM instance."""
dom = self._lookup_by_name(instance['name'])
dom.suspend()
def unpause(self, instance):
"""Unpause paused VM instance."""
dom = self._lookup_by_name(instance['name'])
dom.resume()
def power_off(self, instance):
"""Power off the specified instance."""
self._destroy(instance)
def power_on(self, context, instance, network_info,
block_device_info=None):
"""Power on the specified instance."""
# We use _hard_reboot here to ensure that all backing files,
# network, and block device connections, etc. are established
# and available before we attempt to start the instance.
self._hard_reboot(context, instance, network_info, block_device_info)
def suspend(self, instance):
"""Suspend the specified instance."""
dom = self._lookup_by_name(instance['name'])
self._detach_pci_devices(dom,
pci_manager.get_instance_pci_devs(instance))
dom.managedSave(0)
def resume(self, context, instance, network_info, block_device_info=None):
"""resume the specified instance."""
xml = self._get_existing_domain_xml(instance, network_info,
block_device_info)
dom = self._create_domain_and_network(context, xml, instance,
network_info, block_device_info=block_device_info,
vifs_already_plugged=True)
self._attach_pci_devices(dom,
pci_manager.get_instance_pci_devs(instance))
def resume_state_on_host_boot(self, context, instance, network_info,
block_device_info=None):
"""resume guest state when a host is booted."""
# Check if the instance is running already and avoid doing
# anything if it is.
try:
domain = self._lookup_by_name(instance['name'])
state = LIBVIRT_POWER_STATE[domain.info()[0]]
ignored_states = (power_state.RUNNING,
power_state.SUSPENDED,
power_state.NOSTATE,
power_state.PAUSED)
if state in ignored_states:
return
except exception.NovaException:
pass
# Instance is not up and could be in an unknown state.
# Be as absolute as possible about getting it back into
# a known and running state.
self._hard_reboot(context, instance, network_info, block_device_info)
def rescue(self, context, instance, network_info, image_meta,
rescue_password):
"""Loads a VM using rescue images.
A rescue is normally performed when something goes wrong with the
primary images and data needs to be corrected/recovered. Rescuing
        should not edit or override the original image, only allow for
data recovery.
"""
instance_dir = libvirt_utils.get_instance_path(instance)
unrescue_xml = self._get_existing_domain_xml(instance, network_info)
unrescue_xml_path = os.path.join(instance_dir, 'unrescue.xml')
libvirt_utils.write_to_file(unrescue_xml_path, unrescue_xml)
if image_meta is not None:
rescue_image_id = image_meta.get('id')
else:
rescue_image_id = None
rescue_images = {
'image_id': (rescue_image_id or
CONF.libvirt.rescue_image_id or instance.image_ref),
'kernel_id': (CONF.libvirt.rescue_kernel_id or
instance.kernel_id),
'ramdisk_id': (CONF.libvirt.rescue_ramdisk_id or
instance.ramdisk_id),
}
disk_info = blockinfo.get_disk_info(CONF.libvirt.virt_type,
instance,
None,
image_meta,
rescue=True)
self._create_image(context, instance,
disk_info['mapping'],
'.rescue', rescue_images,
network_info=network_info,
admin_pass=rescue_password)
xml = self._get_guest_xml(context, instance, network_info, disk_info,
image_meta, rescue=rescue_images,
write_to_disk=True)
self._destroy(instance)
self._create_domain(xml)
def unrescue(self, instance, network_info):
"""Reboot the VM which is being rescued back into primary images.
"""
instance_dir = libvirt_utils.get_instance_path(instance)
unrescue_xml_path = os.path.join(instance_dir, 'unrescue.xml')
xml = libvirt_utils.load_file(unrescue_xml_path)
virt_dom = self._lookup_by_name(instance.name)
self._destroy(instance)
self._create_domain(xml, virt_dom)
libvirt_utils.file_delete(unrescue_xml_path)
rescue_files = os.path.join(instance_dir, "*.rescue")
for rescue_file in glob.iglob(rescue_files):
libvirt_utils.file_delete(rescue_file)
def poll_rebooting_instances(self, timeout, instances):
pass
def _enable_hairpin(self, xml):
interfaces = self._get_interfaces(xml)
for interface in interfaces:
utils.execute('tee',
'/sys/class/net/%s/brport/hairpin_mode' % interface,
process_input='1',
run_as_root=True,
check_exit_code=[0, 1])
# NOTE(ilyaalekseyev): Implementation like in multinics
# for xenapi(tr3buchet)
def spawn(self, context, instance, image_meta, injected_files,
admin_password, network_info=None, block_device_info=None):
disk_info = blockinfo.get_disk_info(CONF.libvirt.virt_type,
instance,
block_device_info,
image_meta)
self._create_image(context, instance,
disk_info['mapping'],
network_info=network_info,
block_device_info=block_device_info,
files=injected_files,
admin_pass=admin_password)
xml = self._get_guest_xml(context, instance, network_info,
disk_info, image_meta,
block_device_info=block_device_info,
write_to_disk=True)
self._create_domain_and_network(context, xml, instance, network_info,
block_device_info)
LOG.debug("Instance is running", instance=instance)
def _wait_for_boot():
"""Called at an interval until the VM is running."""
state = self.get_info(instance)['state']
if state == power_state.RUNNING:
LOG.info(_LI("Instance spawned successfully."),
instance=instance)
raise loopingcall.LoopingCallDone()
timer = loopingcall.FixedIntervalLoopingCall(_wait_for_boot)
timer.start(interval=0.5).wait()
def _flush_libvirt_console(self, pty):
out, err = utils.execute('dd',
'if=%s' % pty,
'iflag=nonblock',
run_as_root=True,
check_exit_code=False)
return out
def _append_to_file(self, data, fpath):
LOG.info(_LI('data: %(data)r, fpath: %(fpath)r'),
{'data': data, 'fpath': fpath})
with open(fpath, 'a+') as fp:
fp.write(data)
return fpath
def get_console_output(self, context, instance):
virt_dom = self._lookup_by_name(instance.name)
xml = virt_dom.XMLDesc(0)
tree = etree.fromstring(xml)
console_types = {}
# NOTE(comstud): We want to try 'file' types first, then try 'pty'
# types. We can't use Python 2.7 syntax of:
# tree.find("./devices/console[@type='file']/source")
# because we need to support 2.6.
console_nodes = tree.findall('./devices/console')
for console_node in console_nodes:
console_type = console_node.get('type')
console_types.setdefault(console_type, [])
console_types[console_type].append(console_node)
# If the guest has a console logging to a file prefer to use that
if console_types.get('file'):
for file_console in console_types.get('file'):
source_node = file_console.find('./source')
if source_node is None:
continue
path = source_node.get("path")
if not path:
continue
libvirt_utils.chown(path, os.getuid())
with libvirt_utils.file_open(path, 'rb') as fp:
log_data, remaining = utils.last_bytes(fp,
MAX_CONSOLE_BYTES)
if remaining > 0:
LOG.info(_LI('Truncated console log returned, '
'%d bytes ignored'), remaining,
instance=instance)
return log_data
# Try 'pty' types
if console_types.get('pty'):
for pty_console in console_types.get('pty'):
source_node = pty_console.find('./source')
if source_node is None:
continue
pty = source_node.get("path")
if not pty:
continue
break
else:
msg = _("Guest does not have a console available")
raise exception.NovaException(msg)
self._chown_console_log_for_instance(instance)
data = self._flush_libvirt_console(pty)
console_log = self._get_console_log_path(instance)
fpath = self._append_to_file(data, console_log)
with libvirt_utils.file_open(fpath, 'rb') as fp:
log_data, remaining = utils.last_bytes(fp, MAX_CONSOLE_BYTES)
if remaining > 0:
LOG.info(_LI('Truncated console log returned, '
'%d bytes ignored'),
remaining, instance=instance)
return log_data
@staticmethod
def get_host_ip_addr():
return CONF.my_ip
def get_vnc_console(self, context, instance):
def get_vnc_port_for_instance(instance_name):
virt_dom = self._lookup_by_name(instance_name)
xml = virt_dom.XMLDesc(0)
dom = xmlutils.safe_minidom_parse_string(xml)
for graphic in dom.getElementsByTagName('graphics'):
if graphic.getAttribute('type') == 'vnc':
return graphic.getAttribute('port')
# NOTE(rmk): We had VNC consoles enabled but the instance in
# question is not actually listening for connections.
raise exception.ConsoleTypeUnavailable(console_type='vnc')
port = get_vnc_port_for_instance(instance.name)
host = CONF.vncserver_proxyclient_address
return {'host': host, 'port': port, 'internal_access_path': None}
def get_spice_console(self, context, instance):
def get_spice_ports_for_instance(instance_name):
virt_dom = self._lookup_by_name(instance_name)
xml = virt_dom.XMLDesc(0)
# TODO(sleepsonthefloor): use etree instead of minidom
dom = xmlutils.safe_minidom_parse_string(xml)
for graphic in dom.getElementsByTagName('graphics'):
if graphic.getAttribute('type') == 'spice':
return (graphic.getAttribute('port'),
graphic.getAttribute('tlsPort'))
# NOTE(rmk): We had Spice consoles enabled but the instance in
# question is not actually listening for connections.
raise exception.ConsoleTypeUnavailable(console_type='spice')
ports = get_spice_ports_for_instance(instance['name'])
host = CONF.spice.server_proxyclient_address
return {'host': host, 'port': ports[0],
'tlsPort': ports[1], 'internal_access_path': None}
@staticmethod
def _supports_direct_io(dirpath):
if not hasattr(os, 'O_DIRECT'):
LOG.debug("This python runtime does not support direct I/O")
return False
testfile = os.path.join(dirpath, ".directio.test")
hasDirectIO = True
try:
f = os.open(testfile, os.O_CREAT | os.O_WRONLY | os.O_DIRECT)
            # Check if a write is allowed with 512-byte alignment
align_size = 512
m = mmap.mmap(-1, align_size)
m.write(r"x" * align_size)
os.write(f, m)
os.close(f)
LOG.debug("Path '%(path)s' supports direct I/O",
{'path': dirpath})
except OSError as e:
if e.errno == errno.EINVAL:
LOG.debug("Path '%(path)s' does not support direct I/O: "
"'%(ex)s'", {'path': dirpath, 'ex': str(e)})
hasDirectIO = False
else:
with excutils.save_and_reraise_exception():
LOG.error(_LE("Error on '%(path)s' while checking "
"direct I/O: '%(ex)s'"),
{'path': dirpath, 'ex': str(e)})
except Exception as e:
with excutils.save_and_reraise_exception():
LOG.error(_LE("Error on '%(path)s' while checking direct I/O: "
"'%(ex)s'"), {'path': dirpath, 'ex': str(e)})
finally:
try:
os.unlink(testfile)
except Exception:
pass
return hasDirectIO
@staticmethod
def _create_local(target, local_size, unit='G',
fs_format=None, label=None):
"""Create a blank image of specified size."""
libvirt_utils.create_image('raw', target,
'%d%c' % (local_size, unit))
def _create_ephemeral(self, target, ephemeral_size,
fs_label, os_type, is_block_dev=False,
max_size=None, specified_fs=None):
if not is_block_dev:
self._create_local(target, ephemeral_size)
# Run as root only for block devices.
disk.mkfs(os_type, fs_label, target, run_as_root=is_block_dev,
specified_fs=specified_fs)
@staticmethod
def _create_swap(target, swap_mb, max_size=None):
"""Create a swap file of specified size."""
libvirt_utils.create_image('raw', target, '%dM' % swap_mb)
utils.mkfs('swap', target)
@staticmethod
def _get_console_log_path(instance):
return os.path.join(libvirt_utils.get_instance_path(instance),
'console.log')
@staticmethod
def _get_disk_config_path(instance, suffix=''):
return os.path.join(libvirt_utils.get_instance_path(instance),
'disk.config' + suffix)
def _chown_console_log_for_instance(self, instance):
console_log = self._get_console_log_path(instance)
if os.path.exists(console_log):
libvirt_utils.chown(console_log, os.getuid())
def _chown_disk_config_for_instance(self, instance):
disk_config = self._get_disk_config_path(instance)
if os.path.exists(disk_config):
libvirt_utils.chown(disk_config, os.getuid())
@staticmethod
def _is_booted_from_volume(instance, disk_mapping):
"""Determines whether the VM is booting from volume
Determines whether the disk mapping indicates that the VM
is booting from a volume.
"""
return ((not bool(instance.get('image_ref')))
or 'disk' not in disk_mapping)
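    # Example (hypothetical values): an instance with image_ref='' is
    # treated as booted from volume regardless of the mapping, and so is
    # an instance whose disk_mapping lacks a 'disk' entry, e.g.
    # {'disk.rescue': {...}} -- either condition above is sufficient.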
def _inject_data(self, instance, network_info, admin_pass, files, suffix):
"""Injects data in a disk image
Helper used for injecting data in a disk image file system.
Keyword arguments:
        instance -- a dict that refers to the instance specification
        network_info -- a dict that refers to the network specification
        admin_pass -- a string used to set an admin password
        files -- a list of files that need to be injected
suffix -- a string used as an image name suffix
"""
        # Handle which partition needs to be used.
target_partition = None
if not instance['kernel_id']:
target_partition = CONF.libvirt.inject_partition
if target_partition == 0:
target_partition = None
if CONF.libvirt.virt_type == 'lxc':
target_partition = None
# Handles the key injection.
if CONF.libvirt.inject_key and instance.get('key_data'):
key = str(instance['key_data'])
else:
key = None
# Handles the admin password injection.
if not CONF.libvirt.inject_password:
admin_pass = None
# Handles the network injection.
net = netutils.get_injected_network_template(
network_info, libvirt_virt_type=CONF.libvirt.virt_type)
# Handles the metadata injection
metadata = instance.get('metadata')
image_type = CONF.libvirt.images_type
if any((key, net, metadata, admin_pass, files)):
injection_image = self.image_backend.image(
instance,
'disk' + suffix,
image_type)
img_id = instance['image_ref']
if not injection_image.check_image_exists():
LOG.warn(_LW('Image %s not found on disk storage. '
                             'Continuing without injecting data'),
injection_image.path, instance=instance)
return
try:
disk.inject_data(injection_image.path,
key, net, metadata, admin_pass, files,
partition=target_partition,
use_cow=CONF.use_cow_images,
mandatory=('files',))
except Exception as e:
with excutils.save_and_reraise_exception():
LOG.error(_LE('Error injecting data into image '
'%(img_id)s (%(e)s)'),
{'img_id': img_id, 'e': e},
instance=instance)
def _create_image(self, context, instance,
disk_mapping, suffix='',
disk_images=None, network_info=None,
block_device_info=None, files=None,
admin_pass=None, inject_files=True):
if not suffix:
suffix = ''
booted_from_volume = self._is_booted_from_volume(
instance, disk_mapping)
def image(fname, image_type=CONF.libvirt.images_type):
return self.image_backend.image(instance,
fname + suffix, image_type)
def raw(fname):
return image(fname, image_type='raw')
# ensure directories exist and are writable
fileutils.ensure_tree(libvirt_utils.get_instance_path(instance))
LOG.info(_LI('Creating image'), instance=instance)
# NOTE(dprince): for rescue console.log may already exist... chown it.
self._chown_console_log_for_instance(instance)
        # NOTE(yaguang): For evacuate, disk.config may already exist in
        #                shared storage, so chown it.
self._chown_disk_config_for_instance(instance)
        # NOTE(vish): No need to add the suffix to console.log
libvirt_utils.write_to_file(
self._get_console_log_path(instance), '', 7)
if not disk_images:
disk_images = {'image_id': instance['image_ref'],
'kernel_id': instance['kernel_id'],
'ramdisk_id': instance['ramdisk_id']}
if disk_images['kernel_id']:
fname = imagecache.get_cache_fname(disk_images, 'kernel_id')
raw('kernel').cache(fetch_func=libvirt_utils.fetch_image,
context=context,
filename=fname,
image_id=disk_images['kernel_id'],
user_id=instance['user_id'],
project_id=instance['project_id'])
if disk_images['ramdisk_id']:
fname = imagecache.get_cache_fname(disk_images, 'ramdisk_id')
raw('ramdisk').cache(fetch_func=libvirt_utils.fetch_image,
context=context,
filename=fname,
image_id=disk_images['ramdisk_id'],
user_id=instance['user_id'],
project_id=instance['project_id'])
inst_type = flavors.extract_flavor(instance)
# NOTE(ndipanov): Even if disk_mapping was passed in, which
# currently happens only on rescue - we still don't want to
# create a base image.
if not booted_from_volume:
root_fname = imagecache.get_cache_fname(disk_images, 'image_id')
size = instance['root_gb'] * units.Gi
if size == 0 or suffix == '.rescue':
size = None
backend = image('disk')
if backend.SUPPORTS_CLONE:
def clone_fallback_to_fetch(*args, **kwargs):
try:
backend.clone(context, disk_images['image_id'])
except exception.ImageUnacceptable:
libvirt_utils.fetch_image(*args, **kwargs)
fetch_func = clone_fallback_to_fetch
else:
fetch_func = libvirt_utils.fetch_image
backend.cache(fetch_func=fetch_func,
context=context,
filename=root_fname,
size=size,
image_id=disk_images['image_id'],
user_id=instance['user_id'],
project_id=instance['project_id'])
# Lookup the filesystem type if required
os_type_with_default = disk.get_fs_type_for_os_type(
instance['os_type'])
ephemeral_gb = instance['ephemeral_gb']
if 'disk.local' in disk_mapping:
disk_image = image('disk.local')
fn = functools.partial(self._create_ephemeral,
fs_label='ephemeral0',
os_type=instance["os_type"],
is_block_dev=disk_image.is_block_dev)
fname = "ephemeral_%s_%s" % (ephemeral_gb, os_type_with_default)
size = ephemeral_gb * units.Gi
disk_image.cache(fetch_func=fn,
filename=fname,
size=size,
ephemeral_size=ephemeral_gb)
for idx, eph in enumerate(driver.block_device_info_get_ephemerals(
block_device_info)):
disk_image = image(blockinfo.get_eph_disk(idx))
specified_fs = eph.get('guest_format')
if specified_fs and not self.is_supported_fs_format(specified_fs):
msg = _("%s format is not supported") % specified_fs
raise exception.InvalidBDMFormat(details=msg)
fn = functools.partial(self._create_ephemeral,
fs_label='ephemeral%d' % idx,
os_type=instance["os_type"],
is_block_dev=disk_image.is_block_dev)
size = eph['size'] * units.Gi
fname = "ephemeral_%s_%s" % (eph['size'], os_type_with_default)
disk_image.cache(
fetch_func=fn,
filename=fname,
size=size,
ephemeral_size=eph['size'],
specified_fs=specified_fs)
if 'disk.swap' in disk_mapping:
mapping = disk_mapping['disk.swap']
swap_mb = 0
swap = driver.block_device_info_get_swap(block_device_info)
if driver.swap_is_usable(swap):
swap_mb = swap['swap_size']
elif (inst_type['swap'] > 0 and
not block_device.volume_in_mapping(
mapping['dev'], block_device_info)):
swap_mb = inst_type['swap']
if swap_mb > 0:
size = swap_mb * units.Mi
image('disk.swap').cache(fetch_func=self._create_swap,
filename="swap_%s" % swap_mb,
size=size,
swap_mb=swap_mb)
# Config drive
if configdrive.required_by(instance):
LOG.info(_LI('Using config drive'), instance=instance)
extra_md = {}
if admin_pass:
extra_md['admin_pass'] = admin_pass
inst_md = instance_metadata.InstanceMetadata(instance,
content=files, extra_md=extra_md, network_info=network_info)
with configdrive.ConfigDriveBuilder(instance_md=inst_md) as cdb:
configdrive_path = self._get_disk_config_path(instance, suffix)
LOG.info(_LI('Creating config drive at %(path)s'),
{'path': configdrive_path}, instance=instance)
try:
cdb.make_drive(configdrive_path)
except processutils.ProcessExecutionError as e:
with excutils.save_and_reraise_exception():
LOG.error(_LE('Creating config drive failed '
'with error: %s'),
e, instance=instance)
# File injection only if needed
elif inject_files and CONF.libvirt.inject_partition != -2:
if booted_from_volume:
LOG.warn(_LW('File injection into a boot from volume '
'instance is not supported'), instance=instance)
self._inject_data(
instance, network_info, admin_pass, files, suffix)
if CONF.libvirt.virt_type == 'uml':
libvirt_utils.chown(image('disk').path, 'root')
def _prepare_pci_devices_for_use(self, pci_devices):
        # kvm and qemu support managed mode
# In managed mode, the configured device will be automatically
# detached from the host OS drivers when the guest is started,
# and then re-attached when the guest shuts down.
if CONF.libvirt.virt_type != 'xen':
# we do manual detach only for xen
return
try:
for dev in pci_devices:
libvirt_dev_addr = dev['hypervisor_name']
libvirt_dev = \
self._conn.nodeDeviceLookupByName(libvirt_dev_addr)
# Note(yjiang5) Spelling for 'dettach' is correct, see
# http://libvirt.org/html/libvirt-libvirt.html.
libvirt_dev.dettach()
            # Note(yjiang5): A reset of one PCI device may impact other
            # devices on the same bus, thus we need two separate loops:
            # first detach all devices, then reset them.
for dev in pci_devices:
libvirt_dev_addr = dev['hypervisor_name']
libvirt_dev = \
self._conn.nodeDeviceLookupByName(libvirt_dev_addr)
libvirt_dev.reset()
except libvirt.libvirtError as exc:
            raise exception.PciDevicePrepareFailed(
                id=dev['id'],
                instance_uuid=dev['instance_uuid'],
                reason=str(exc))
def _detach_pci_devices(self, dom, pci_devs):
        # For libvirt versions < 1.1.1 device detach is subject to a race
        # condition, so forbid detaching if we don't have at least that
        # version.
if not self._has_min_version(MIN_LIBVIRT_DEVICE_CALLBACK_VERSION):
if pci_devs:
reason = (_("Detaching PCI devices with libvirt < %(ver)s"
" is not permitted") %
{'ver': MIN_LIBVIRT_DEVICE_CALLBACK_VERSION})
raise exception.PciDeviceDetachFailed(reason=reason,
dev=pci_devs)
try:
for dev in pci_devs:
dom.detachDeviceFlags(self._get_guest_pci_device(dev).to_xml(),
libvirt.VIR_DOMAIN_AFFECT_LIVE)
# after detachDeviceFlags returned, we should check the dom to
# ensure the detaching is finished
xml = dom.XMLDesc(0)
xml_doc = etree.fromstring(xml)
guest_config = vconfig.LibvirtConfigGuest()
guest_config.parse_dom(xml_doc)
for hdev in [d for d in guest_config.devices
if isinstance(d, vconfig.LibvirtConfigGuestHostdevPCI)]:
hdbsf = [hdev.domain, hdev.bus, hdev.slot, hdev.function]
dbsf = pci_utils.parse_address(dev['address'])
                if ([int(x, 16) for x in hdbsf] ==
                        [int(x, 16) for x in dbsf]):
                    raise exception.PciDeviceDetachFailed(
                        reason="timeout", dev=dev)
except libvirt.libvirtError as ex:
error_code = ex.get_error_code()
if error_code == libvirt.VIR_ERR_NO_DOMAIN:
LOG.warn(_LW("Instance disappeared while detaching "
"a PCI device from it."))
else:
raise
def _attach_pci_devices(self, dom, pci_devs):
try:
for dev in pci_devs:
dom.attachDevice(self._get_guest_pci_device(dev).to_xml())
except libvirt.libvirtError:
LOG.error(_LE('Attaching PCI devices %(dev)s to %(dom)s failed.'),
{'dev': pci_devs, 'dom': dom.ID()})
raise
def _set_host_enabled(self, enabled,
disable_reason=DISABLE_REASON_UNDEFINED):
"""Enables / Disables the compute service on this host.
        This doesn't override non-automatic disablement with an automatic
        setting, thereby permitting operators to keep otherwise healthy
        hosts out of rotation.
"""
status_name = {True: 'disabled',
False: 'enabled'}
disable_service = not enabled
ctx = nova_context.get_admin_context()
try:
service = objects.Service.get_by_compute_host(ctx, CONF.host)
if service.disabled != disable_service:
# Note(jang): this is a quick fix to stop operator-
# disabled compute hosts from re-enabling themselves
# automatically. We prefix any automatic reason code
# with a fixed string. We only re-enable a host
# automatically if we find that string in place.
# This should probably be replaced with a separate flag.
if not service.disabled or (
service.disabled_reason and
service.disabled_reason.startswith(DISABLE_PREFIX)):
service.disabled = disable_service
service.disabled_reason = (
DISABLE_PREFIX + disable_reason
if disable_service else DISABLE_REASON_UNDEFINED)
service.save()
LOG.debug('Updating compute service status to %s',
status_name[disable_service])
else:
LOG.debug('Not overriding manual compute service '
'status with: %s',
status_name[disable_service])
except exception.ComputeHostNotFound:
            LOG.warn(_LW('Cannot update service status on host: %s, '
                         'since it is not registered.'), CONF.host)
except Exception:
            LOG.warn(_LW('Cannot update service status on host: %s, '
                         'due to an unexpected exception.'), CONF.host,
exc_info=True)
def _get_host_capabilities(self):
"""Returns an instance of config.LibvirtConfigCaps representing
the capabilities of the host.
"""
if not self._caps:
xmlstr = self._conn.getCapabilities()
self._caps = vconfig.LibvirtConfigCaps()
self._caps.parse_str(xmlstr)
if hasattr(libvirt, 'VIR_CONNECT_BASELINE_CPU_EXPAND_FEATURES'):
try:
features = self._conn.baselineCPU(
[self._caps.host.cpu.to_xml()],
libvirt.VIR_CONNECT_BASELINE_CPU_EXPAND_FEATURES)
                # FIXME(wangpan): the return value of baselineCPU should be
                #                 None or an xml string, but a libvirt bug
                #                 present from 1.1.2 and fixed in 1.2.0 may
                #                 return -1, so this -1 check should be
                #                 removed later.
if features and features != -1:
cpu = vconfig.LibvirtConfigCPU()
cpu.parse_str(features)
self._caps.host.cpu.features = cpu.features
except libvirt.libvirtError as ex:
error_code = ex.get_error_code()
if error_code == libvirt.VIR_ERR_NO_SUPPORT:
                    LOG.warn(_LW("URI %(uri)s does not support full set"
                                 " of host capabilities: %(error)s"),
{'uri': self.uri(), 'error': ex})
else:
raise
return self._caps
def _get_host_uuid(self):
"""Returns a UUID representing the host."""
caps = self._get_host_capabilities()
return caps.host.uuid
def _get_guest_cpu_model_config(self):
mode = CONF.libvirt.cpu_mode
model = CONF.libvirt.cpu_model
if (CONF.libvirt.virt_type == "kvm" or
CONF.libvirt.virt_type == "qemu"):
if mode is None:
mode = "host-model"
if mode == "none":
return vconfig.LibvirtConfigGuestCPU()
else:
if mode is None or mode == "none":
return None
if ((CONF.libvirt.virt_type != "kvm" and
CONF.libvirt.virt_type != "qemu")):
msg = _("Config requested an explicit CPU model, but "
"the current libvirt hypervisor '%s' does not "
"support selecting CPU models") % CONF.libvirt.virt_type
raise exception.Invalid(msg)
if mode == "custom" and model is None:
msg = _("Config requested a custom CPU model, but no "
"model name was provided")
raise exception.Invalid(msg)
elif mode != "custom" and model is not None:
msg = _("A CPU model name should not be set when a "
"host CPU model is requested")
raise exception.Invalid(msg)
LOG.debug("CPU mode '%(mode)s' model '%(model)s' was chosen",
{'mode': mode, 'model': (model or "")})
cpu = vconfig.LibvirtConfigGuestCPU()
cpu.mode = mode
cpu.model = model
return cpu
def _get_guest_cpu_config(self, flavor, image):
cpu = self._get_guest_cpu_model_config()
if cpu is None:
return None
topology = hardware.VirtCPUTopology.get_best_config(flavor,
image)
cpu.sockets = topology.sockets
cpu.cores = topology.cores
cpu.threads = topology.threads
return cpu
def _get_guest_disk_config(self, instance, name, disk_mapping, inst_type,
image_type=None):
image = self.image_backend.image(instance,
name,
image_type)
disk_info = disk_mapping[name]
return image.libvirt_info(disk_info['bus'],
disk_info['dev'],
disk_info['type'],
self.disk_cachemode,
inst_type['extra_specs'],
self._get_hypervisor_version())
def _get_guest_storage_config(self, instance, image_meta,
disk_info,
rescue, block_device_info,
inst_type):
devices = []
disk_mapping = disk_info['mapping']
block_device_mapping = driver.block_device_info_get_mapping(
block_device_info)
if CONF.libvirt.virt_type == "lxc":
fs = vconfig.LibvirtConfigGuestFilesys()
fs.source_type = "mount"
fs.source_dir = os.path.join(
libvirt_utils.get_instance_path(instance), 'rootfs')
devices.append(fs)
else:
if rescue:
diskrescue = self._get_guest_disk_config(instance,
'disk.rescue',
disk_mapping,
inst_type)
devices.append(diskrescue)
diskos = self._get_guest_disk_config(instance,
'disk',
disk_mapping,
inst_type)
devices.append(diskos)
else:
if 'disk' in disk_mapping:
diskos = self._get_guest_disk_config(instance,
'disk',
disk_mapping,
inst_type)
devices.append(diskos)
if 'disk.local' in disk_mapping:
disklocal = self._get_guest_disk_config(instance,
'disk.local',
disk_mapping,
inst_type)
devices.append(disklocal)
instance.default_ephemeral_device = (
block_device.prepend_dev(disklocal.target_dev))
instance.save()
for idx, eph in enumerate(
driver.block_device_info_get_ephemerals(
block_device_info)):
diskeph = self._get_guest_disk_config(
instance,
blockinfo.get_eph_disk(idx),
disk_mapping, inst_type)
devices.append(diskeph)
if 'disk.swap' in disk_mapping:
diskswap = self._get_guest_disk_config(instance,
'disk.swap',
disk_mapping,
inst_type)
devices.append(diskswap)
instance.default_swap_device = (
block_device.prepend_dev(diskswap.target_dev))
instance.save()
for vol in block_device_mapping:
connection_info = vol['connection_info']
vol_dev = block_device.prepend_dev(vol['mount_device'])
info = disk_mapping[vol_dev]
cfg = self._connect_volume(connection_info, info)
devices.append(cfg)
vol['connection_info'] = connection_info
vol.save()
if 'disk.config' in disk_mapping:
diskconfig = self._get_guest_disk_config(instance,
'disk.config',
disk_mapping,
inst_type,
'raw')
devices.append(diskconfig)
for d in devices:
self._set_cache_mode(d)
if (image_meta and
image_meta.get('properties', {}).get('hw_scsi_model')):
hw_scsi_model = image_meta['properties']['hw_scsi_model']
scsi_controller = vconfig.LibvirtConfigGuestController()
scsi_controller.type = 'scsi'
scsi_controller.model = hw_scsi_model
devices.append(scsi_controller)
return devices
def _get_guest_config_sysinfo(self, instance):
sysinfo = vconfig.LibvirtConfigGuestSysinfo()
sysinfo.system_manufacturer = version.vendor_string()
sysinfo.system_product = version.product_string()
sysinfo.system_version = version.version_string_with_package()
sysinfo.system_serial = self._get_host_uuid()
sysinfo.system_uuid = instance['uuid']
return sysinfo
def _get_guest_pci_device(self, pci_device):
dbsf = pci_utils.parse_address(pci_device['address'])
dev = vconfig.LibvirtConfigGuestHostdevPCI()
dev.domain, dev.bus, dev.slot, dev.function = dbsf
        # only kvm and qemu support managed mode
if CONF.libvirt.virt_type in ('xen',):
dev.managed = 'no'
if CONF.libvirt.virt_type in ('kvm', 'qemu'):
dev.managed = 'yes'
return dev
def _get_guest_config_meta(self, context, instance, flavor):
"""Get metadata config for guest."""
meta = vconfig.LibvirtConfigGuestMetaNovaInstance()
meta.package = version.version_string_with_package()
meta.name = instance["display_name"]
meta.creationTime = time.time()
if instance["image_ref"] not in ("", None):
meta.roottype = "image"
meta.rootid = instance["image_ref"]
if context is not None:
ometa = vconfig.LibvirtConfigGuestMetaNovaOwner()
ometa.userid = context.user_id
ometa.username = context.user_name
ometa.projectid = context.project_id
ometa.projectname = context.project_name
meta.owner = ometa
fmeta = vconfig.LibvirtConfigGuestMetaNovaFlavor()
fmeta.name = flavor.name
fmeta.memory = flavor.memory_mb
fmeta.vcpus = flavor.vcpus
fmeta.ephemeral = flavor.ephemeral_gb
fmeta.disk = flavor.root_gb
fmeta.swap = flavor.swap
meta.flavor = fmeta
return meta
def _get_guest_config(self, instance, network_info, image_meta,
disk_info, rescue=None, block_device_info=None,
context=None):
"""Get config data for parameters.
:param rescue: optional dictionary that should contain the key
'ramdisk_id' if a ramdisk is needed for the rescue image and
'kernel_id' if a kernel is needed for the rescue image.
"""
flavor = objects.Flavor.get_by_id(
nova_context.get_admin_context(read_deleted='yes'),
instance['instance_type_id'])
inst_path = libvirt_utils.get_instance_path(instance)
disk_mapping = disk_info['mapping']
img_meta_prop = image_meta.get('properties', {}) if image_meta else {}
CONSOLE = "console=tty0 console=ttyS0"
guest = vconfig.LibvirtConfigGuest()
guest.virt_type = CONF.libvirt.virt_type
guest.name = instance['name']
guest.uuid = instance['uuid']
        # We are using the default unit for memory: KiB
guest.memory = flavor.memory_mb * units.Ki
guest.vcpus = flavor.vcpus
guest.cpuset = hardware.get_vcpu_pin_set()
guest.metadata.append(self._get_guest_config_meta(context,
instance,
flavor))
cputuning = ['shares', 'period', 'quota']
for name in cputuning:
key = "quota:cpu_" + name
if key in flavor.extra_specs:
if guest.cputune is None:
guest.cputune = vconfig.LibvirtConfigGuestCPUTune()
setattr(guest.cputune, name,
int(flavor.extra_specs[key]))
guest.cpu = self._get_guest_cpu_config(flavor, image_meta)
if 'root' in disk_mapping:
root_device_name = block_device.prepend_dev(
disk_mapping['root']['dev'])
else:
root_device_name = None
if root_device_name:
# NOTE(yamahata):
# for nova.api.ec2.cloud.CloudController.get_metadata()
instance.root_device_name = root_device_name
instance.save()
guest.os_type = vm_mode.get_from_instance(instance)
if guest.os_type is None:
if CONF.libvirt.virt_type == "lxc":
guest.os_type = vm_mode.EXE
elif CONF.libvirt.virt_type == "uml":
guest.os_type = vm_mode.UML
elif CONF.libvirt.virt_type == "xen":
guest.os_type = vm_mode.XEN
else:
guest.os_type = vm_mode.HVM
if CONF.libvirt.virt_type == "xen" and guest.os_type == vm_mode.HVM:
guest.os_loader = CONF.libvirt.xen_hvmloader_path
if CONF.libvirt.virt_type in ("kvm", "qemu"):
caps = self._get_host_capabilities()
if caps.host.cpu.arch in ("i686", "x86_64"):
guest.sysinfo = self._get_guest_config_sysinfo(instance)
guest.os_smbios = vconfig.LibvirtConfigGuestSMBIOS()
# The underlying machine type can be set as an image attribute,
# or otherwise based on some architecture specific defaults
if (image_meta is not None and image_meta.get('properties') and
image_meta['properties'].get('hw_machine_type')
is not None):
guest.os_mach_type = \
image_meta['properties']['hw_machine_type']
else:
# For ARM systems we will default to vexpress-a15 for armv7
# and virt for aarch64
if caps.host.cpu.arch == "armv7l":
guest.os_mach_type = "vexpress-a15"
if caps.host.cpu.arch == "aarch64":
guest.os_mach_type = "virt"
if CONF.libvirt.virt_type == "lxc":
guest.os_init_path = "/sbin/init"
guest.os_cmdline = CONSOLE
elif CONF.libvirt.virt_type == "uml":
guest.os_kernel = "/usr/bin/linux"
guest.os_root = root_device_name
else:
if rescue:
if rescue.get('kernel_id'):
guest.os_kernel = os.path.join(inst_path, "kernel.rescue")
if CONF.libvirt.virt_type == "xen":
guest.os_cmdline = "ro root=%s" % root_device_name
else:
guest.os_cmdline = ("root=%s %s" % (root_device_name,
CONSOLE))
if CONF.libvirt.virt_type == "qemu":
guest.os_cmdline += " no_timer_check"
if rescue.get('ramdisk_id'):
guest.os_initrd = os.path.join(inst_path, "ramdisk.rescue")
elif instance['kernel_id']:
guest.os_kernel = os.path.join(inst_path, "kernel")
if CONF.libvirt.virt_type == "xen":
guest.os_cmdline = "ro root=%s" % root_device_name
else:
guest.os_cmdline = ("root=%s %s" % (root_device_name,
CONSOLE))
if CONF.libvirt.virt_type == "qemu":
guest.os_cmdline += " no_timer_check"
if instance['ramdisk_id']:
guest.os_initrd = os.path.join(inst_path, "ramdisk")
                # We only support os_command_line with images that have an
                # explicit kernel set, and don't want to break nova if there's
                # an os_command_line property without a specified kernel_id
                # param
if image_meta:
img_props = image_meta.get('properties', {})
if img_props.get('os_command_line'):
guest.os_cmdline = img_props.get('os_command_line')
else:
guest.os_boot_dev = blockinfo.get_boot_order(disk_info)
if ((CONF.libvirt.virt_type != "lxc" and
CONF.libvirt.virt_type != "uml")):
guest.acpi = True
guest.apic = True
# NOTE(mikal): Microsoft Windows expects the clock to be in
# "localtime". If the clock is set to UTC, then you can use a
# registry key to let windows know, but Microsoft says this is
# buggy in http://support.microsoft.com/kb/2687252
clk = vconfig.LibvirtConfigGuestClock()
if instance['os_type'] == 'windows':
LOG.info(_LI('Configuring timezone for windows instance to '
'localtime'), instance=instance)
clk.offset = 'localtime'
else:
clk.offset = 'utc'
guest.set_clock(clk)
if CONF.libvirt.virt_type == "kvm":
# TODO(berrange) One day this should be per-guest
# OS type configurable
tmpit = vconfig.LibvirtConfigGuestTimer()
tmpit.name = "pit"
tmpit.tickpolicy = "delay"
tmrtc = vconfig.LibvirtConfigGuestTimer()
tmrtc.name = "rtc"
tmrtc.tickpolicy = "catchup"
clk.add_timer(tmpit)
clk.add_timer(tmrtc)
arch = libvirt_utils.get_arch(image_meta)
if arch in ("i686", "x86_64"):
# NOTE(rfolco): HPET is a hardware timer for x86 arch.
# qemu -no-hpet is not supported on non-x86 targets.
tmhpet = vconfig.LibvirtConfigGuestTimer()
tmhpet.name = "hpet"
tmhpet.present = False
clk.add_timer(tmhpet)
for config in self._get_guest_storage_config(instance,
image_meta,
disk_info,
rescue,
block_device_info,
flavor):
guest.add_device(config)
for vif in network_info:
config = self.vif_driver.get_config(
instance, vif, image_meta,
flavor, CONF.libvirt.virt_type)
guest.add_device(config)
if ((CONF.libvirt.virt_type == "qemu" or
CONF.libvirt.virt_type == "kvm")):
# The QEMU 'pty' driver throws away any data if no
# client app is connected. Thus we can't get away
# with a single type=pty console. Instead we have
# to configure two separate consoles.
consolelog = vconfig.LibvirtConfigGuestSerial()
consolelog.type = "file"
consolelog.source_path = self._get_console_log_path(instance)
guest.add_device(consolelog)
consolepty = vconfig.LibvirtConfigGuestSerial()
else:
consolepty = vconfig.LibvirtConfigGuestConsole()
consolepty.type = "pty"
guest.add_device(consolepty)
# We want a tablet if VNC is enabled,
# or SPICE is enabled and the SPICE agent is disabled
# NB: this implies that if both SPICE + VNC are enabled
# at the same time, we'll get the tablet whether the
# SPICE agent is used or not.
need_usb_tablet = False
if CONF.vnc_enabled:
need_usb_tablet = CONF.libvirt.use_usb_tablet
elif CONF.spice.enabled and not CONF.spice.agent_enabled:
need_usb_tablet = CONF.libvirt.use_usb_tablet
if need_usb_tablet and guest.os_type == vm_mode.HVM:
tablet = vconfig.LibvirtConfigGuestInput()
tablet.type = "tablet"
tablet.bus = "usb"
guest.add_device(tablet)
if CONF.spice.enabled and CONF.spice.agent_enabled and \
CONF.libvirt.virt_type not in ('lxc', 'uml', 'xen'):
channel = vconfig.LibvirtConfigGuestChannel()
channel.target_name = "com.redhat.spice.0"
guest.add_device(channel)
# NB some versions of libvirt support both SPICE and VNC
# at the same time. We're not trying to second guess which
# those versions are. We'll just let libvirt report the
# errors appropriately if the user enables both.
add_video_driver = False
if ((CONF.vnc_enabled and
CONF.libvirt.virt_type not in ('lxc', 'uml'))):
graphics = vconfig.LibvirtConfigGuestGraphics()
graphics.type = "vnc"
graphics.keymap = CONF.vnc_keymap
graphics.listen = CONF.vncserver_listen
guest.add_device(graphics)
add_video_driver = True
if CONF.spice.enabled and \
CONF.libvirt.virt_type not in ('lxc', 'uml', 'xen'):
graphics = vconfig.LibvirtConfigGuestGraphics()
graphics.type = "spice"
graphics.keymap = CONF.spice.keymap
graphics.listen = CONF.spice.server_listen
guest.add_device(graphics)
add_video_driver = True
if add_video_driver:
VALID_VIDEO_DEVICES = ("vga", "cirrus", "vmvga", "xen", "qxl")
video = vconfig.LibvirtConfigGuestVideo()
# NOTE(ldbragst): The following logic sets the video.type
# depending on supported defaults given the architecture,
# virtualization type, and features. The video.type attribute can
# be overridden by the user with image_meta['properties'], which
# is carried out in the next if statement below this one.
arch = libvirt_utils.get_arch(image_meta)
if guest.os_type == vm_mode.XEN:
video.type = 'xen'
elif arch in ('ppc', 'ppc64'):
                # NOTE(ldbragst): PowerKVM doesn't support 'cirrus' by default
                # so use 'vga' instead when running on Power hardware.
video.type = 'vga'
elif CONF.spice.enabled:
video.type = 'qxl'
if img_meta_prop.get('hw_video_model'):
video.type = img_meta_prop.get('hw_video_model')
            if video.type not in VALID_VIDEO_DEVICES:
raise exception.InvalidVideoMode(model=video.type)
# Set video memory, only if the flavor's limit is set
video_ram = int(img_meta_prop.get('hw_video_ram', 0))
max_vram = int(flavor.extra_specs
.get('hw_video:ram_max_mb', 0))
if video_ram > max_vram:
raise exception.RequestedVRamTooHigh(req_vram=video_ram,
max_vram=max_vram)
if max_vram and video_ram:
video.vram = video_ram
guest.add_device(video)
        # The qemu guest agent is only supported by the 'qemu' and 'kvm'
        # hypervisors
if CONF.libvirt.virt_type in ('qemu', 'kvm'):
qga_enabled = False
            # Enable qga only if the 'hw_qemu_guest_agent' property is 'yes'
hw_qga = img_meta_prop.get('hw_qemu_guest_agent', 'no')
if hw_qga.lower() == 'yes':
LOG.debug("Qemu guest agent is enabled through image "
"metadata", instance=instance)
qga_enabled = True
if qga_enabled:
qga = vconfig.LibvirtConfigGuestChannel()
qga.type = "unix"
qga.target_name = "org.qemu.guest_agent.0"
qga.source_path = ("/var/lib/libvirt/qemu/%s.%s.sock" %
("org.qemu.guest_agent.0", instance['name']))
guest.add_device(qga)
if (img_meta_prop.get('hw_rng_model') == 'virtio' and
flavor.extra_specs.get('hw_rng:allowed',
'').lower() == 'true'):
rng_device = vconfig.LibvirtConfigGuestRng()
rate_bytes = flavor.extra_specs.get('hw_rng:rate_bytes', 0)
period = flavor.extra_specs.get('hw_rng:rate_period', 0)
if rate_bytes:
rng_device.rate_bytes = int(rate_bytes)
rng_device.rate_period = int(period)
if (CONF.libvirt.rng_dev_path and
not os.path.exists(CONF.libvirt.rng_dev_path)):
raise exception.RngDeviceNotExist(
path=CONF.libvirt.rng_dev_path)
rng_device.backend = CONF.libvirt.rng_dev_path
guest.add_device(rng_device)
if CONF.libvirt.virt_type in ('xen', 'qemu', 'kvm'):
for pci_dev in pci_manager.get_instance_pci_devs(instance):
guest.add_device(self._get_guest_pci_device(pci_dev))
else:
if len(pci_manager.get_instance_pci_devs(instance)) > 0:
raise exception.PciDeviceUnsupportedHypervisor(
type=CONF.libvirt.virt_type)
watchdog_action = flavor.extra_specs.get('hw_watchdog_action',
'disabled')
if (image_meta is not None and
image_meta.get('properties', {}).get('hw_watchdog_action')):
watchdog_action = image_meta['properties']['hw_watchdog_action']
# NB(sross): currently only actually supported by KVM/QEmu
if watchdog_action != 'disabled':
if watchdog_actions.is_valid_watchdog_action(watchdog_action):
bark = vconfig.LibvirtConfigGuestWatchdog()
bark.action = watchdog_action
guest.add_device(bark)
else:
raise exception.InvalidWatchdogAction(action=watchdog_action)
return guest
def _get_guest_xml(self, context, instance, network_info, disk_info,
image_meta=None, rescue=None,
block_device_info=None, write_to_disk=False):
if image_meta is None:
image_ref = instance['image_ref']
image_meta = compute_utils.get_image_metadata(
context, self._image_api, image_ref, instance)
# NOTE(danms): Stringifying a NetworkInfo will take a lock. Do
# this ahead of time so that we don't acquire it while also
# holding the logging lock.
network_info_str = str(network_info)
msg = ('Start _get_guest_xml '
'network_info=%(network_info)s '
'disk_info=%(disk_info)s '
'image_meta=%(image_meta)s rescue=%(rescue)s '
'block_device_info=%(block_device_info)s' %
{'network_info': network_info_str, 'disk_info': disk_info,
'image_meta': image_meta, 'rescue': rescue,
'block_device_info': block_device_info})
# NOTE(mriedem): block_device_info can contain auth_password so we
# need to sanitize the password in the message.
LOG.debug(logging.mask_password(msg), instance=instance)
conf = self._get_guest_config(instance, network_info, image_meta,
disk_info, rescue, block_device_info,
context)
xml = conf.to_xml()
if write_to_disk:
instance_dir = libvirt_utils.get_instance_path(instance)
xml_path = os.path.join(instance_dir, 'libvirt.xml')
libvirt_utils.write_to_file(xml_path, xml)
LOG.debug('End _get_guest_xml xml=%(xml)s',
{'xml': xml}, instance=instance)
return xml
def _lookup_by_id(self, instance_id):
"""Retrieve libvirt domain object given an instance id.
All libvirt error handling should be handled in this method and
relevant nova exceptions should be raised in response.
"""
try:
return self._conn.lookupByID(instance_id)
except libvirt.libvirtError as ex:
error_code = ex.get_error_code()
if error_code == libvirt.VIR_ERR_NO_DOMAIN:
raise exception.InstanceNotFound(instance_id=instance_id)
msg = (_("Error from libvirt while looking up %(instance_id)s: "
"[Error Code %(error_code)s] %(ex)s")
% {'instance_id': instance_id,
'error_code': error_code,
'ex': ex})
raise exception.NovaException(msg)
def _lookup_by_name(self, instance_name):
"""Retrieve libvirt domain object given an instance name.
All libvirt error handling should be handled in this method and
relevant nova exceptions should be raised in response.
"""
try:
return self._conn.lookupByName(instance_name)
except libvirt.libvirtError as ex:
error_code = ex.get_error_code()
if error_code == libvirt.VIR_ERR_NO_DOMAIN:
raise exception.InstanceNotFound(instance_id=instance_name)
msg = (_('Error from libvirt while looking up %(instance_name)s: '
'[Error Code %(error_code)s] %(ex)s') %
{'instance_name': instance_name,
'error_code': error_code,
'ex': ex})
raise exception.NovaException(msg)
def get_info(self, instance):
"""Retrieve information from libvirt for a specific instance name.
If a libvirt error is encountered during lookup, we might raise a
NotFound exception or Error exception depending on how severe the
libvirt error is.
"""
virt_dom = self._lookup_by_name(instance['name'])
dom_info = virt_dom.info()
return {'state': LIBVIRT_POWER_STATE[dom_info[0]],
'max_mem': dom_info[1],
'mem': dom_info[2],
'num_cpu': dom_info[3],
'cpu_time': dom_info[4],
'id': virt_dom.ID()}
def _create_domain_setup_lxc(self, instance):
inst_path = libvirt_utils.get_instance_path(instance)
container_dir = os.path.join(inst_path, 'rootfs')
fileutils.ensure_tree(container_dir)
image = self.image_backend.image(instance, 'disk')
rootfs_dev = disk.setup_container(image.path,
container_dir=container_dir,
use_cow=CONF.use_cow_images)
try:
# Save rootfs device to disconnect it when deleting the instance
if rootfs_dev:
instance.system_metadata['rootfs_device_name'] = rootfs_dev
instance.save()
except Exception:
with excutils.save_and_reraise_exception():
self._create_domain_cleanup_lxc(instance)
def _create_domain_cleanup_lxc(self, instance):
inst_path = libvirt_utils.get_instance_path(instance)
container_dir = os.path.join(inst_path, 'rootfs')
try:
state = self.get_info(instance)['state']
except exception.InstanceNotFound:
# The domain may not be present if the instance failed to start
state = None
if state == power_state.RUNNING:
# NOTE(uni): Now the container is running with its own private
# mount namespace and so there is no need to keep the container
# rootfs mounted in the host namespace
disk.clean_lxc_namespace(container_dir=container_dir)
else:
disk.teardown_container(container_dir=container_dir)
def _create_domain(self, xml=None, domain=None,
instance=None, launch_flags=0, power_on=True):
"""Create a domain.
Either domain or xml must be passed in. If both are passed, then
the domain definition is overwritten from the xml.
"""
err = None
if instance and CONF.libvirt.virt_type == 'lxc':
self._create_domain_setup_lxc(instance)
try:
if xml:
err = _LE('Error defining a domain with XML: %s') % xml
domain = self._conn.defineXML(xml)
if power_on:
err = _LE('Error launching a defined domain with XML: %s') \
% domain.XMLDesc(0)
domain.createWithFlags(launch_flags)
if not utils.is_neutron():
err = _LE('Error enabling hairpin mode with XML: %s') \
% domain.XMLDesc(0)
self._enable_hairpin(domain.XMLDesc(0))
except Exception:
with excutils.save_and_reraise_exception():
if err:
LOG.error(err)
finally:
if instance and CONF.libvirt.virt_type == 'lxc':
self._create_domain_cleanup_lxc(instance)
return domain
def _neutron_failed_callback(self, event_name, instance):
LOG.error(_LE('Neutron Reported failure on event '
'%(event)s for instance %(uuid)s'),
{'event': event_name, 'uuid': instance.uuid})
if CONF.vif_plugging_is_fatal:
raise exception.VirtualInterfaceCreateException()
def _get_neutron_events(self, network_info):
# NOTE(danms): We need to collect any VIFs that are currently
# down that we expect a down->up event for. Anything that is
# already up will not undergo that transition, and for
# anything that might be stale (cache-wise) assume it's
# already up so we don't block on it.
return [('network-vif-plugged', vif['id'])
for vif in network_info if vif.get('active', True) is False]
def _create_domain_and_network(self, context, xml, instance, network_info,
block_device_info=None, power_on=True,
reboot=False, vifs_already_plugged=False):
"""Do required network setup and create domain."""
block_device_mapping = driver.block_device_info_get_mapping(
block_device_info)
for vol in block_device_mapping:
connection_info = vol['connection_info']
disk_info = blockinfo.get_info_from_bdm(
CONF.libvirt.virt_type, vol)
conf = self._connect_volume(connection_info, disk_info)
# cache device_path in connection_info -- required by encryptors
if 'data' in connection_info:
connection_info['data']['device_path'] = conf.source_path
vol['connection_info'] = connection_info
vol.save(context)
if (not reboot and 'data' in connection_info and
'volume_id' in connection_info['data']):
volume_id = connection_info['data']['volume_id']
encryption = encryptors.get_encryption_metadata(
context, self._volume_api, volume_id, connection_info)
if encryption:
encryptor = self._get_volume_encryptor(connection_info,
encryption)
encryptor.attach_volume(context, **encryption)
timeout = CONF.vif_plugging_timeout
if (self._conn_supports_start_paused and
utils.is_neutron() and not
vifs_already_plugged and power_on and timeout):
events = self._get_neutron_events(network_info)
else:
events = []
launch_flags = libvirt.VIR_DOMAIN_START_PAUSED if events else 0
domain = None
try:
with self.virtapi.wait_for_instance_event(
instance, events, deadline=timeout,
error_callback=self._neutron_failed_callback):
self.plug_vifs(instance, network_info)
self.firewall_driver.setup_basic_filtering(instance,
network_info)
self.firewall_driver.prepare_instance_filter(instance,
network_info)
domain = self._create_domain(
xml, instance=instance,
launch_flags=launch_flags,
power_on=power_on)
self.firewall_driver.apply_instance_filter(instance,
network_info)
except exception.VirtualInterfaceCreateException:
# Neutron reported failure and we didn't swallow it, so
# bail here
with excutils.save_and_reraise_exception():
if domain:
domain.destroy()
self.cleanup(context, instance, network_info=network_info,
block_device_info=block_device_info)
except eventlet.timeout.Timeout:
# We never heard from Neutron
LOG.warn(_LW('Timeout waiting for vif plugging callback for '
'instance %(uuid)s'), {'uuid': instance['uuid']})
if CONF.vif_plugging_is_fatal:
if domain:
domain.destroy()
self.cleanup(context, instance, network_info=network_info,
block_device_info=block_device_info)
raise exception.VirtualInterfaceCreateException()
# Resume only if domain has been paused
if launch_flags & libvirt.VIR_DOMAIN_START_PAUSED:
domain.resume()
return domain
def _get_all_block_devices(self):
"""Return all block devices in use on this node."""
devices = []
for dom in self._list_instance_domains():
try:
doc = etree.fromstring(dom.XMLDesc(0))
except libvirt.libvirtError as e:
LOG.warn(_LW("couldn't obtain the XML from domain:"
" %(uuid)s, exception: %(ex)s") %
{"uuid": dom.UUIDString(), "ex": e})
continue
except Exception:
continue
ret = doc.findall('./devices/disk')
for node in ret:
if node.get('type') != 'block':
continue
for child in node.getchildren():
if child.tag == 'source':
devices.append(child.get('dev'))
return devices
def _get_interfaces(self, xml):
"""Note that this function takes a domain xml.
Returns a list of all network interfaces for this instance.
"""
doc = None
try:
doc = etree.fromstring(xml)
except Exception:
return []
interfaces = []
ret = doc.findall('./devices/interface')
for node in ret:
devdst = None
for child in list(node):
if child.tag == 'target':
devdst = child.attrib['dev']
if devdst is None:
continue
interfaces.append(devdst)
return interfaces
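# The XML-scraping pattern shared by _get_all_block_devices and
# _get_interfaces can be sketched in isolation. The sample domain XML
# below is a hypothetical minimal description for illustration, not
# real libvirt output:

```python
import xml.etree.ElementTree as etree

# Hypothetical minimal libvirt domain XML, for illustration only.
SAMPLE_XML = """
<domain>
  <devices>
    <interface type='bridge'>
      <target dev='vnet0'/>
    </interface>
    <interface type='bridge'>
      <target dev='vnet1'/>
    </interface>
  </devices>
</domain>
"""


def get_interfaces(xml):
    """Return the target device name of every <interface> element."""
    try:
        doc = etree.fromstring(xml)
    except etree.ParseError:
        # Mirror the driver's behaviour: unparsable XML yields [].
        return []
    interfaces = []
    for node in doc.findall('./devices/interface'):
        target = node.find('target')
        if target is not None and 'dev' in target.attrib:
            interfaces.append(target.attrib['dev'])
    return interfaces


print(get_interfaces(SAMPLE_XML))  # ['vnet0', 'vnet1']
```

# The driver iterates child tags manually; findall('./devices/interface')
# plus find('target') expresses the same lookup more directly.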
def _get_vcpu_total(self):
"""Get available vcpu number of physical computer.
:returns: the number of cpu core instances can be used.
"""
if self._vcpu_total != 0:
return self._vcpu_total
try:
total_pcpus = self._conn.getInfo()[2]
except libvirt.libvirtError:
LOG.warn(_LW("Cannot get the number of cpu, because this "
"function is not implemented for this platform. "))
return 0
if CONF.vcpu_pin_set is None:
self._vcpu_total = total_pcpus
return self._vcpu_total
available_ids = hardware.get_vcpu_pin_set()
if available_ids[-1] >= total_pcpus:
raise exception.Invalid(_("Invalid vcpu_pin_set config, "
"out of hypervisor cpu range."))
self._vcpu_total = len(available_ids)
return self._vcpu_total
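# The vcpu_pin_set handling above reduces to a small pure function.
# A sketch, under the assumption that the pin set has already been
# parsed into a sorted list of physical CPU ids (in the real driver
# hardware.get_vcpu_pin_set does that parsing):

```python
def usable_vcpus(total_pcpus, pin_set=None):
    """Number of host CPUs that instances may use.

    pin_set is a sorted list of allowed physical CPU ids; None means
    all CPUs are usable (mirrors vcpu_pin_set being unset).
    """
    if pin_set is None:
        return total_pcpus
    if pin_set[-1] >= total_pcpus:
        # Stands in for the exception.Invalid raised by the driver.
        raise ValueError("vcpu_pin_set is out of the hypervisor CPU range")
    return len(pin_set)


print(usable_vcpus(8))             # 8
print(usable_vcpus(8, [0, 1, 4]))  # 3
```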
def _get_memory_mb_total(self):
"""Get the total memory size(MB) of physical computer.
:returns: the total amount of memory(MB).
"""
return self._conn.getInfo()[1]
@staticmethod
def _get_local_gb_info():
"""Get local storage info of the compute node in GB.
:returns: A dict containing:
:total: How big the overall usable filesystem is (in gigabytes)
:free: How much space is free (in gigabytes)
:used: How much space is used (in gigabytes)
"""
if CONF.libvirt.images_type == 'lvm':
info = lvm.get_volume_group_info(
CONF.libvirt.images_volume_group)
elif CONF.libvirt.images_type == 'rbd':
info = LibvirtDriver._get_rbd_driver().get_pool_info()
else:
info = libvirt_utils.get_fs_info(CONF.instances_path)
for (k, v) in info.iteritems():
info[k] = v / units.Gi
return info
def _get_vcpu_used(self):
"""Get vcpu usage number of physical computer.
:returns: The total number of vcpu(s) that are currently being used.
"""
total = 0
if CONF.libvirt.virt_type == 'lxc':
return total + 1
for dom in self._list_instance_domains():
try:
vcpus = dom.vcpus()
except libvirt.libvirtError as e:
LOG.warn(_LW("couldn't obtain the vpu count from domain id:"
" %(uuid)s, exception: %(ex)s") %
{"uuid": dom.UUIDString(), "ex": e})
else:
if vcpus is not None and len(vcpus) > 1:
total += len(vcpus[1])
# NOTE(gtt116): give other tasks a chance.
greenthread.sleep(0)
return total
def _get_memory_mb_used(self):
"""Get the used memory size(MB) of physical computer.
:returns: the total usage of memory(MB).
"""
if sys.platform.upper() not in ['LINUX2', 'LINUX3']:
return 0
with open('/proc/meminfo') as fp:
m = fp.read().split()
idx1 = m.index('MemFree:')
idx2 = m.index('Buffers:')
idx3 = m.index('Cached:')
if CONF.libvirt.virt_type == 'xen':
used = 0
for dom in self._list_instance_domains(only_guests=False):
try:
dom_mem = int(dom.info()[2])
except libvirt.libvirtError as e:
LOG.warn(_LW("couldn't obtain the memory from domain:"
" %(uuid)s, exception: %(ex)s") %
{"uuid": dom.UUIDString(), "ex": e})
continue
# skip dom0
if dom.ID() != 0:
used += dom_mem
else:
# the memory reported by dom0 is greater than what
# it is actually using
used += (dom_mem -
(int(m[idx1 + 1]) +
int(m[idx2 + 1]) +
int(m[idx3 + 1])))
# Convert it to MB
return used / units.Ki
else:
avail = (int(m[idx1 + 1]) + int(m[idx2 + 1]) + int(m[idx3 + 1]))
# Convert it to MB
return self._get_memory_mb_total() - avail / units.Ki
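# The /proc/meminfo arithmetic above can be checked against a canned
# sample. The values below are hypothetical, in KiB as on a real
# Linux host:

```python
# Hypothetical /proc/meminfo excerpt; values are in KiB.
SAMPLE_MEMINFO = """MemTotal:       16384000 kB
MemFree:         4096000 kB
Buffers:          512000 kB
Cached:          2048000 kB
"""


def available_mb(meminfo_text):
    """Sum free + buffers + cached (KiB) and convert to MiB.

    Uses the same index-after-label trick as the driver: split the
    whole file on whitespace, then read the token after each label.
    """
    m = meminfo_text.split()
    free = int(m[m.index('MemFree:') + 1])
    buffers = int(m[m.index('Buffers:') + 1])
    cached = int(m[m.index('Cached:') + 1])
    return (free + buffers + cached) // 1024


print(available_mb(SAMPLE_MEMINFO))  # 6500
```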
def _get_hypervisor_type(self):
"""Get hypervisor type.
:returns: hypervisor type (ex. qemu)
"""
return self._conn.getType()
def _get_hypervisor_version(self):
"""Get hypervisor version.
:returns: hypervisor version (ex. 12003)
"""
# NOTE(justinsb): getVersion moved between libvirt versions
# Trying to be compatible with older versions is a lost cause
# But ... we can at least give the user a nice message
method = getattr(self._conn, 'getVersion', None)
if method is None:
raise exception.NovaException(_("libvirt version is too old"
" (does not support getVersion)"))
# NOTE(justinsb): If we wanted to get the version, we could:
# method = getattr(libvirt, 'getVersion', None)
# NOTE(justinsb): This would then rely on a proper version check
return method()
def _get_hypervisor_hostname(self):
"""Returns the hostname of the hypervisor."""
hostname = self._conn.getHostname()
if not hasattr(self, '_hypervisor_hostname'):
self._hypervisor_hostname = hostname
elif hostname != self._hypervisor_hostname:
LOG.error(_LE('Hostname has changed from %(old)s '
'to %(new)s. A restart is required to take effect.'),
{'old': self._hypervisor_hostname,
'new': hostname})
return self._hypervisor_hostname
def _get_instance_capabilities(self):
"""Get hypervisor instance capabilities
Returns a list of tuples that describe instances the
hypervisor is capable of hosting. Each tuple consists
of the triplet (arch, hypervisor_type, vm_mode).
:returns: List of tuples describing instance capabilities
"""
caps = self._get_host_capabilities()
instance_caps = list()
for g in caps.guests:
for dt in g.domtype:
instance_cap = (g.arch, dt, g.ostype)
instance_caps.append(instance_cap)
return instance_caps
def _get_cpu_info(self):
"""Get cpuinfo information.
Obtains cpu feature from virConnect.getCapabilities,
and returns as a json string.
:return: see above description
"""
caps = self._get_host_capabilities()
cpu_info = dict()
cpu_info['arch'] = caps.host.cpu.arch
cpu_info['model'] = caps.host.cpu.model
cpu_info['vendor'] = caps.host.cpu.vendor
topology = dict()
topology['sockets'] = caps.host.cpu.sockets
topology['cores'] = caps.host.cpu.cores
topology['threads'] = caps.host.cpu.threads
cpu_info['topology'] = topology
features = list()
for f in caps.host.cpu.features:
features.append(f.name)
cpu_info['features'] = features
# TODO(berrange): why do we bother converting the
# libvirt capabilities XML into a special JSON format ?
# The data format is different across all the drivers
# so we could just return the raw capabilities XML
# which 'compare_cpu' could use directly
#
# That said, arch_filter.py now seems to rely on
# the libvirt drivers format which suggests this
# data format needs to be standardized across drivers
return jsonutils.dumps(cpu_info)
def _get_pcidev_info(self, devname):
"""Returns a dict of PCI device."""
def _get_device_type(cfgdev):
"""Get a PCI device's device type.
An assignable PCI device can be a normal PCI device,
an SR-IOV Physical Function (PF), or an SR-IOV Virtual
Function (VF). Only normal PCI devices and SR-IOV VFs
are assignable, while SR-IOV PFs are always owned by
the hypervisor.
Note that a PCI device with SR-IOV capability that is
not enabled is reported as a normal PCI device.
"""
for fun_cap in cfgdev.pci_capability.fun_capability:
if len(fun_cap.device_addrs) != 0:
if fun_cap.type == 'virt_functions':
return {'dev_type': 'type-PF'}
if fun_cap.type == 'phys_function':
phys_address = "%s:%s:%s.%s" % (
fun_cap.device_addrs[0][0].replace("0x", ''),
fun_cap.device_addrs[0][1].replace("0x", ''),
fun_cap.device_addrs[0][2].replace("0x", ''),
fun_cap.device_addrs[0][3].replace("0x", ''))
return {'dev_type': 'type-VF',
'phys_function': phys_address}
return {'dev_type': 'type-PCI'}
virtdev = self._conn.nodeDeviceLookupByName(devname)
xmlstr = virtdev.XMLDesc(0)
cfgdev = vconfig.LibvirtConfigNodeDevice()
cfgdev.parse_str(xmlstr)
address = "%04x:%02x:%02x.%1x" % (
cfgdev.pci_capability.domain,
cfgdev.pci_capability.bus,
cfgdev.pci_capability.slot,
cfgdev.pci_capability.function)
device = {
"dev_id": cfgdev.name,
"address": address,
"product_id": cfgdev.pci_capability.product_id[2:6],
"vendor_id": cfgdev.pci_capability.vendor_id[2:6],
}
# requirement by DataBase Model
device['label'] = 'label_%(vendor_id)s_%(product_id)s' % device
device.update(_get_device_type(cfgdev))
return device
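# The address built above is the standard domain:bus:slot.function
# form with zero-padded hex fields; a minimal sketch:

```python
def pci_address(domain, bus, slot, function):
    """Format a PCI address as domain:bus:slot.function (hex fields)."""
    return "%04x:%02x:%02x.%1x" % (domain, bus, slot, function)


print(pci_address(0, 2, 0, 1))      # 0000:02:00.1
print(pci_address(0, 0x1a, 3, 0))   # 0000:1a:03.0
```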
def _pci_device_assignable(self, device):
if device['dev_type'] == 'type-PF':
return False
return self.dev_filter.device_assignable(device)
def _get_pci_passthrough_devices(self):
"""Get host PCI devices information.
Obtains pci devices information from libvirt, and returns
as a JSON string.
Each device information is a dictionary, with mandatory keys
of 'address', 'vendor_id', 'product_id', 'dev_type', 'dev_id',
'label' and other optional device specific information.
Refer to objects/pci_device.py for more details on these keys.
:returns: a JSON string containing a list of the assignable PCI
devices' information
"""
# Bail early if we know we can't support `listDevices` to avoid
# repeated warnings within a periodic task
if not getattr(self, '_list_devices_supported', True):
return jsonutils.dumps([])
try:
dev_names = self._conn.listDevices('pci', 0) or []
except libvirt.libvirtError as ex:
error_code = ex.get_error_code()
if error_code == libvirt.VIR_ERR_NO_SUPPORT:
self._list_devices_supported = False
LOG.warn(_LW("URI %(uri)s does not support "
"listDevices: " "%(error)s"),
{'uri': self.uri(), 'error': ex})
return jsonutils.dumps([])
else:
raise
pci_info = []
for name in dev_names:
pci_dev = self._get_pcidev_info(name)
if self._pci_device_assignable(pci_dev):
pci_info.append(pci_dev)
return jsonutils.dumps(pci_info)
def get_all_volume_usage(self, context, compute_host_bdms):
"""Return usage info for volumes attached to vms on
a given host.
"""
vol_usage = []
for instance_bdms in compute_host_bdms:
instance = instance_bdms['instance']
for bdm in instance_bdms['instance_bdms']:
vol_stats = []
mountpoint = bdm['device_name']
if mountpoint.startswith('/dev/'):
mountpoint = mountpoint[5:]
volume_id = bdm['volume_id']
LOG.debug("Trying to get stats for the volume %s",
volume_id)
vol_stats = self.block_stats(instance['name'], mountpoint)
if vol_stats:
stats = dict(volume=volume_id,
instance=instance,
rd_req=vol_stats[0],
rd_bytes=vol_stats[1],
wr_req=vol_stats[2],
wr_bytes=vol_stats[3],
flush_operations=vol_stats[4])
LOG.debug(
"Got volume usage stats for the volume=%(volume)s,"
" rd_req=%(rd_req)d, rd_bytes=%(rd_bytes)d, "
"wr_req=%(wr_req)d, wr_bytes=%(wr_bytes)d",
stats, instance=instance)
vol_usage.append(stats)
return vol_usage
def block_stats(self, instance_name, disk_id):
"""Note that this function takes an instance name."""
try:
domain = self._lookup_by_name(instance_name)
return domain.blockStats(disk_id)
except libvirt.libvirtError as e:
errcode = e.get_error_code()
LOG.info(_LI('Getting block stats failed, device might have '
'been detached. Instance=%(instance_name)s '
'Disk=%(disk)s Code=%(errcode)s Error=%(e)s'),
{'instance_name': instance_name, 'disk': disk_id,
'errcode': errcode, 'e': e})
except exception.InstanceNotFound:
LOG.info(_LI('Could not find domain in libvirt for instance %s. '
'Cannot get block stats for device'), instance_name)
def interface_stats(self, instance_name, iface_id):
"""Note that this function takes an instance name."""
domain = self._lookup_by_name(instance_name)
return domain.interfaceStats(iface_id)
def get_console_pool_info(self, console_type):
# TODO(mdragon): console proxy should be implemented for libvirt,
# in case someone wants to use it with kvm or
# such. For now return fake data.
return {'address': '127.0.0.1',
'username': 'fakeuser',
'password': 'fakepassword'}
def refresh_security_group_rules(self, security_group_id):
self.firewall_driver.refresh_security_group_rules(security_group_id)
def refresh_security_group_members(self, security_group_id):
self.firewall_driver.refresh_security_group_members(security_group_id)
def refresh_instance_security_rules(self, instance):
self.firewall_driver.refresh_instance_security_rules(instance)
def refresh_provider_fw_rules(self):
self.firewall_driver.refresh_provider_fw_rules()
def get_available_resource(self, nodename):
"""Retrieve resource information.
This method is called when nova-compute launches, and
as part of a periodic task that records the results in the DB.
:param nodename: will be put in PCI device
:returns: dictionary containing resource info
"""
# Temporary: convert supported_instances into a string, while keeping
# the RPC version as JSON. Can be changed when RPC broadcast is removed
stats = self.get_host_stats(refresh=True)
stats['supported_instances'] = jsonutils.dumps(
stats['supported_instances'])
return stats
def check_instance_shared_storage_local(self, context, instance):
dirpath = libvirt_utils.get_instance_path(instance)
if not os.path.exists(dirpath):
return None
fd, tmp_file = tempfile.mkstemp(dir=dirpath)
LOG.debug("Creating tmpfile %s to verify with other "
"compute node that the instance is on "
"the same shared storage.",
tmp_file, instance=instance)
os.close(fd)
return {"filename": tmp_file}
def check_instance_shared_storage_remote(self, context, data):
return os.path.exists(data['filename'])
def check_instance_shared_storage_cleanup(self, context, data):
fileutils.delete_if_exists(data["filename"])
def check_can_live_migrate_destination(self, context, instance,
src_compute_info, dst_compute_info,
block_migration=False,
disk_over_commit=False):
"""Check if it is possible to execute live migration.
This runs checks on the destination host, and then calls
back to the source host to check the results.
:param context: security context
:param instance: nova.db.sqlalchemy.models.Instance
:param block_migration: if true, prepare for block migration
:param disk_over_commit: if true, allow disk over commit
:returns: a dict containing:
:filename: name of the tmpfile under CONF.instances_path
:block_migration: whether this is block migration
:disk_over_commit: disk-over-commit factor on dest host
:disk_available_mb: available disk space on dest host
"""
disk_available_mb = None
if block_migration:
disk_available_gb = dst_compute_info['disk_available_least']
disk_available_mb = \
(disk_available_gb * units.Ki) - CONF.reserved_host_disk_mb
# Compare CPU
source_cpu_info = src_compute_info['cpu_info']
self._compare_cpu(source_cpu_info)
# Create file on storage, to be checked on source host
filename = self._create_shared_storage_test_file()
return {"filename": filename,
"image_type": CONF.libvirt.images_type,
"block_migration": block_migration,
"disk_over_commit": disk_over_commit,
"disk_available_mb": disk_available_mb}
def check_can_live_migrate_destination_cleanup(self, context,
dest_check_data):
"""Do required cleanup on dest host after check_can_live_migrate calls
:param context: security context
"""
filename = dest_check_data["filename"]
self._cleanup_shared_storage_test_file(filename)
def check_can_live_migrate_source(self, context, instance,
dest_check_data):
"""Check if it is possible to execute live migration.
This checks if the live migration can succeed, based on the
results from check_can_live_migrate_destination.
:param context: security context
:param instance: nova.db.sqlalchemy.models.Instance
:param dest_check_data: result of check_can_live_migrate_destination
:returns: a dict containing migration info
"""
# Checking shared storage connectivity
# if block migration, the instance path should not be on shared storage.
source = CONF.host
dest_check_data.update({'is_shared_block_storage':
self._is_shared_block_storage(instance, dest_check_data)})
dest_check_data.update({'is_shared_instance_path':
self._is_shared_instance_path(dest_check_data)})
if dest_check_data['block_migration']:
if (dest_check_data['is_shared_block_storage'] or
dest_check_data['is_shared_instance_path']):
reason = _("Block migration can not be used "
"with shared storage.")
raise exception.InvalidLocalStorage(reason=reason, path=source)
self._assert_dest_node_has_enough_disk(context, instance,
dest_check_data['disk_available_mb'],
dest_check_data['disk_over_commit'])
elif not (dest_check_data['is_shared_block_storage'] or
dest_check_data['is_shared_instance_path']):
reason = _("Live migration can not be used "
"without shared storage.")
raise exception.InvalidSharedStorage(reason=reason, path=source)
# NOTE(mikal): include the instance directory name here because it
# doesn't yet exist on the destination but we want to force that
# same name to be used
instance_path = libvirt_utils.get_instance_path(instance,
relative=True)
dest_check_data['instance_relative_path'] = instance_path
return dest_check_data
def _is_shared_block_storage(self, instance, dest_check_data):
"""Check if all block storage of an instance can be shared
between source and destination of a live migration.
Returns true if the instance is volume backed and has no local disks,
or if the image backend is the same on source and destination and the
backend shares block storage between compute nodes.
"""
if (CONF.libvirt.images_type == dest_check_data.get('image_type') and
self.image_backend.backend().is_shared_block_storage()):
return True
if (dest_check_data.get('is_volume_backed') and
not bool(jsonutils.loads(
self.get_instance_disk_info(instance['name'])))):
# pylint: disable=E1120
return True
return False
def _is_shared_instance_path(self, dest_check_data):
"""Check if instance path is shared between source and
destination of a live migration.
"""
return self._check_shared_storage_test_file(
dest_check_data["filename"])
def _assert_dest_node_has_enough_disk(self, context, instance,
available_mb, disk_over_commit):
"""Checks if destination has enough disk for block migration."""
# Libvirt supports the qcow2 disk format, which is usually compressed
# on compute nodes.
# The real (compressed) disk image may grow up to the "virtual disk
# size", which is specified as the maximum disk size.
# (See qemu-img info path-to-disk)
# The scheduler considers the destination host to have enough disk
# space if real disk size < available disk size when disk_over_commit
# is True, otherwise if virtual disk size < available disk size.
available = 0
if available_mb:
available = available_mb * units.Mi
ret = self.get_instance_disk_info(instance['name'])
disk_infos = jsonutils.loads(ret)
necessary = 0
if disk_over_commit:
for info in disk_infos:
necessary += int(info['disk_size'])
else:
for info in disk_infos:
necessary += int(info['virt_disk_size'])
# Check that available disk > necessary disk
if (available - necessary) < 0:
reason = (_('Unable to migrate %(instance_uuid)s: '
'Disk of instance is too large (available'
' on destination host:%(available)s '
'< need:%(necessary)s)') %
{'instance_uuid': instance['uuid'],
'available': available,
'necessary': necessary})
raise exception.MigrationPreCheckError(reason=reason)
def _compare_cpu(self, cpu_info):
"""Checks the host cpu is compatible to a cpu given by xml.
"xml" must be a part of libvirt.openAuth(...).getCapabilities().
return values follows by virCPUCompareResult.
if 0 > return value, do live migration.
'http://libvirt.org/html/libvirt-libvirt.html#virCPUCompareResult'
:param cpu_info: json string of cpu feature from _get_cpu_info()
:returns:
None. if given cpu info is not compatible to this server,
raise exception.
"""
# NOTE(berendt): virConnectCompareCPU not working for Xen
if CONF.libvirt.virt_type == 'xen':
return 1
info = jsonutils.loads(cpu_info)
LOG.info(_LI('Instance launched has CPU info: %s'), cpu_info)
cpu = vconfig.LibvirtConfigCPU()
cpu.arch = info['arch']
cpu.model = info['model']
cpu.vendor = info['vendor']
cpu.sockets = info['topology']['sockets']
cpu.cores = info['topology']['cores']
cpu.threads = info['topology']['threads']
for f in info['features']:
cpu.add_feature(vconfig.LibvirtConfigCPUFeature(f))
u = "http://libvirt.org/html/libvirt-libvirt.html#virCPUCompareResult"
m = _("CPU doesn't have compatibility.\n\n%(ret)s\n\nRefer to %(u)s")
# if an unknown character exists in the xml, libvirt complains
try:
ret = self._conn.compareCPU(cpu.to_xml(), 0)
except libvirt.libvirtError as e:
with excutils.save_and_reraise_exception():
ret = unicode(e)
LOG.error(m, {'ret': ret, 'u': u})
if ret <= 0:
LOG.error(m, {'ret': ret, 'u': u})
raise exception.InvalidCPUInfo(reason=m % {'ret': ret, 'u': u})
def _create_shared_storage_test_file(self):
"""Makes tmpfile under CONF.instances_path."""
dirpath = CONF.instances_path
fd, tmp_file = tempfile.mkstemp(dir=dirpath)
LOG.debug("Creating tmpfile %s to notify to other "
"compute nodes that they should mount "
"the same storage.", tmp_file)
os.close(fd)
return os.path.basename(tmp_file)
def _check_shared_storage_test_file(self, filename):
"""Confirms existence of the tmpfile under CONF.instances_path.
Returns False if the tmpfile cannot be confirmed.
"""
tmp_file = os.path.join(CONF.instances_path, filename)
if not os.path.exists(tmp_file):
return False
else:
return True
def _cleanup_shared_storage_test_file(self, filename):
"""Removes existence of the tmpfile under CONF.instances_path."""
tmp_file = os.path.join(CONF.instances_path, filename)
os.remove(tmp_file)
def ensure_filtering_rules_for_instance(self, instance, network_info):
"""Ensure that an instance's filtering rules are enabled.
When migrating an instance, we need the filtering rules to
be configured on the destination host before starting the
migration.
Also, when restarting the compute service, we need to ensure
that filtering rules exist for all running services.
"""
self.firewall_driver.setup_basic_filtering(instance, network_info)
self.firewall_driver.prepare_instance_filter(instance,
network_info)
# nwfilters may be defined in a separate thread in the case
# of libvirt non-blocking mode, so we wait for completion
timeout_count = range(CONF.live_migration_retry_count)
while timeout_count:
if self.firewall_driver.instance_filter_exists(instance,
network_info):
break
timeout_count.pop()
if len(timeout_count) == 0:
msg = _('The firewall filter for %s does not exist')
raise exception.NovaException(msg % instance.name)
greenthread.sleep(1)
def filter_defer_apply_on(self):
self.firewall_driver.filter_defer_apply_on()
def filter_defer_apply_off(self):
self.firewall_driver.filter_defer_apply_off()
def live_migration(self, context, instance, dest,
post_method, recover_method, block_migration=False,
migrate_data=None):
"""Spawning live_migration operation for distributing high-load.
:param context: security context
:param instance:
nova.db.sqlalchemy.models.Instance object
instance object that is migrated.
:param dest: destination host
:param post_method:
post operation method.
expected nova.compute.manager._post_live_migration.
:param recover_method:
recovery method when any exception occurs.
expected nova.compute.manager._rollback_live_migration.
:param block_migration: if true, do block migration.
:param migrate_data: implementation specific params
"""
greenthread.spawn(self._live_migration, context, instance, dest,
post_method, recover_method, block_migration,
migrate_data)
def _correct_listen_addr(self, old_xml_str, listen_addrs):
# NB(sross): can't just use LibvirtConfigGuest#parse_str
# here b/c it doesn't capture the entire XML
# description
xml_doc = etree.fromstring(old_xml_str)
# change over listen addresses
for dev in xml_doc.findall('./devices/graphics'):
gr_type = dev.get('type')
listen_tag = dev.find('listen')
if gr_type in ('vnc', 'spice'):
if listen_tag is not None:
listen_tag.set('address', listen_addrs[gr_type])
if dev.get('listen') is not None:
dev.set('listen', listen_addrs[gr_type])
return etree.tostring(xml_doc)
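# _correct_listen_addr can be exercised without libvirt. The domain
# XML below is a hypothetical minimal graphics section, and the sketch
# follows the same attribute rewriting as the method above:

```python
import xml.etree.ElementTree as etree

# Hypothetical graphics section of a domain XML, for illustration only.
OLD_XML = ("<domain><devices>"
           "<graphics type='vnc' listen='127.0.0.1'>"
           "<listen type='address' address='127.0.0.1'/>"
           "</graphics></devices></domain>")


def correct_listen_addr(xml_str, listen_addrs):
    """Rewrite VNC/SPICE listen addresses in a domain XML string."""
    doc = etree.fromstring(xml_str)
    for dev in doc.findall('./devices/graphics'):
        gr_type = dev.get('type')
        if gr_type in ('vnc', 'spice'):
            # Both the <listen> child element and the legacy 'listen'
            # attribute on <graphics> carry the address.
            listen_tag = dev.find('listen')
            if listen_tag is not None:
                listen_tag.set('address', listen_addrs[gr_type])
            if dev.get('listen') is not None:
                dev.set('listen', listen_addrs[gr_type])
    return etree.tostring(doc)


new_xml = correct_listen_addr(OLD_XML, {'vnc': '10.0.0.5'})
print(b'10.0.0.5' in new_xml)  # True
```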
def _check_graphics_addresses_can_live_migrate(self, listen_addrs):
LOCAL_ADDRS = ('0.0.0.0', '127.0.0.1', '::', '::1')
local_vnc = CONF.vncserver_listen in LOCAL_ADDRS
local_spice = CONF.spice.server_listen in LOCAL_ADDRS
if ((CONF.vnc_enabled and not local_vnc) or
(CONF.spice.enabled and not local_spice)):
raise exception.MigrationError(
_('Your libvirt version does not support the'
' VIR_DOMAIN_XML_MIGRATABLE flag or your'
' destination node does not support'
' retrieving listen addresses. In order'
' for live migration to work properly, you'
' must configure the graphics (VNC and/or'
' SPICE) listen addresses to be either'
' the catch-all address (0.0.0.0 or ::) or'
' the local address (127.0.0.1 or ::1).'))
if listen_addrs is not None:
dest_local_vnc = listen_addrs['vnc'] in LOCAL_ADDRS
dest_local_spice = listen_addrs['spice'] in LOCAL_ADDRS
if ((CONF.vnc_enabled and not dest_local_vnc) or
(CONF.spice.enabled and not dest_local_spice)):
LOG.warn(_('Your libvirt version does not support the'
' VIR_DOMAIN_XML_MIGRATABLE flag, and the '
' graphics (VNC and/or SPICE) listen'
' addresses on the destination node do not'
' match the addresses on the source node.'
' Since the source node has listen'
' addresses set to either the catch-all'
' address (0.0.0.0 or ::) or the local'
' address (127.0.0.1 or ::1), the live'
' migration will succeed, but the VM will'
' continue to listen on the current'
' addresses.'))
def _live_migration(self, context, instance, dest, post_method,
recover_method, block_migration=False,
migrate_data=None):
"""Do live migration.
:param context: security context
:param instance:
nova.db.sqlalchemy.models.Instance object
instance object that is migrated.
:param dest: destination host
:param post_method:
post operation method.
expected nova.compute.manager._post_live_migration.
:param recover_method:
recovery method when any exception occurs.
expected nova.compute.manager._rollback_live_migration.
:param block_migration: if true, do block migration.
:param migrate_data: implementation specific params
"""
# Do live migration.
try:
if block_migration:
flaglist = CONF.libvirt.block_migration_flag.split(',')
else:
flaglist = CONF.libvirt.live_migration_flag.split(',')
flagvals = [getattr(libvirt, x.strip()) for x in flaglist]
logical_sum = reduce(lambda x, y: x | y, flagvals)
dom = self._lookup_by_name(instance["name"])
pre_live_migrate_data = (migrate_data or {}).get(
'pre_live_migration_result', {})
listen_addrs = pre_live_migrate_data.get('graphics_listen_addrs')
migratable_flag = getattr(libvirt, 'VIR_DOMAIN_XML_MIGRATABLE',
None)
if migratable_flag is None or listen_addrs is None:
self._check_graphics_addresses_can_live_migrate(listen_addrs)
dom.migrateToURI(CONF.libvirt.live_migration_uri % dest,
logical_sum,
None,
CONF.libvirt.live_migration_bandwidth)
else:
old_xml_str = dom.XMLDesc(migratable_flag)
new_xml_str = self._correct_listen_addr(old_xml_str,
listen_addrs)
dom.migrateToURI2(CONF.libvirt.live_migration_uri % dest,
None,
new_xml_str,
logical_sum,
None,
CONF.libvirt.live_migration_bandwidth)
except Exception as e:
with excutils.save_and_reraise_exception():
LOG.error(_LE("Live Migration failure: %s"), e,
instance=instance)
recover_method(context, instance, dest, block_migration)
# Waiting for completion of live_migration.
timer = loopingcall.FixedIntervalLoopingCall(f=None)
def wait_for_live_migration():
"""waiting for live migration completion."""
try:
self.get_info(instance)['state']
except exception.InstanceNotFound:
timer.stop()
post_method(context, instance, dest, block_migration,
migrate_data)
timer.f = wait_for_live_migration
timer.start(interval=0.5).wait()
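# The migration flag mask above is built by looking up each
# comma-separated flag name on the libvirt module and OR-ing the
# values together. A standalone sketch with hypothetical flag values
# standing in for the libvirt constants (under Python 3, reduce lives
# in functools; this module uses the Python 2 builtin):

```python
from functools import reduce

# Hypothetical flag values standing in for libvirt module constants.
FLAGS = {
    'VIR_MIGRATE_UNDEFINE_SOURCE': 16,
    'VIR_MIGRATE_PEER2PEER': 2,
    'VIR_MIGRATE_LIVE': 1,
}


def flag_mask(flag_csv, namespace):
    """OR together the named flags, as the driver does for flaglist."""
    names = [name.strip() for name in flag_csv.split(',')]
    values = [namespace[name] for name in names]
    return reduce(lambda x, y: x | y, values)


mask = flag_mask('VIR_MIGRATE_UNDEFINE_SOURCE, VIR_MIGRATE_PEER2PEER, '
                 'VIR_MIGRATE_LIVE', FLAGS)
print(mask)  # 19
```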
def _fetch_instance_kernel_ramdisk(self, context, instance):
"""Download kernel and ramdisk for instance in instance directory."""
instance_dir = libvirt_utils.get_instance_path(instance)
if instance['kernel_id']:
libvirt_utils.fetch_image(context,
os.path.join(instance_dir, 'kernel'),
instance['kernel_id'],
instance['user_id'],
instance['project_id'])
if instance['ramdisk_id']:
libvirt_utils.fetch_image(context,
os.path.join(instance_dir,
'ramdisk'),
instance['ramdisk_id'],
instance['user_id'],
instance['project_id'])
def rollback_live_migration_at_destination(self, context, instance,
network_info,
block_device_info,
destroy_disks=True,
migrate_data=None):
"""Clean up destination node after a failed live migration."""
self.destroy(context, instance, network_info, block_device_info,
destroy_disks, migrate_data)
def pre_live_migration(self, context, instance, block_device_info,
network_info, disk_info, migrate_data=None):
"""Preparation live migration."""
# Steps for volume backed instance live migration w/o shared storage.
is_shared_block_storage = True
is_shared_instance_path = True
is_block_migration = True
instance_relative_path = None
if migrate_data:
is_shared_block_storage = migrate_data.get(
'is_shared_block_storage', True)
is_shared_instance_path = migrate_data.get(
'is_shared_instance_path', True)
is_block_migration = migrate_data.get('block_migration', True)
instance_relative_path = migrate_data.get('instance_relative_path')
if not (is_shared_instance_path and is_shared_block_storage):
# NOTE(mikal): live migration of instances using config drive is
# not supported because of a bug in libvirt (read only devices
# are not copied by libvirt). See bug/1246201
if configdrive.required_by(instance):
raise exception.NoLiveMigrationForConfigDriveInLibVirt()
if not is_shared_instance_path:
# NOTE(mikal): this doesn't use libvirt_utils.get_instance_path
# because we are ensuring that the same instance directory name
# is used as was at the source
if instance_relative_path:
instance_dir = os.path.join(CONF.instances_path,
instance_relative_path)
else:
instance_dir = libvirt_utils.get_instance_path(instance)
if os.path.exists(instance_dir):
raise exception.DestinationDiskExists(path=instance_dir)
os.mkdir(instance_dir)
if not is_shared_block_storage:
# Ensure images and backing files are present.
self._create_images_and_backing(context, instance,
instance_dir, disk_info)
if not (is_block_migration or is_shared_instance_path):
# NOTE(angdraug): when block storage is shared between source and
# destination and instance path isn't (e.g. volume backed or rbd
# backed instance), instance path on destination has to be prepared
# Touch the console.log file, required by libvirt.
console_file = self._get_console_log_path(instance)
libvirt_utils.file_open(console_file, 'a').close()
            # If the image has a kernel and ramdisk, just download them in
            # the normal way.
self._fetch_instance_kernel_ramdisk(context, instance)
# Establishing connection to volume server.
block_device_mapping = driver.block_device_info_get_mapping(
block_device_info)
for vol in block_device_mapping:
connection_info = vol['connection_info']
disk_info = blockinfo.get_info_from_bdm(
CONF.libvirt.virt_type, vol)
self._connect_volume(connection_info, disk_info)
# We call plug_vifs before the compute manager calls
# ensure_filtering_rules_for_instance, to ensure bridge is set up
        # Retrying is necessary because requests arrive continuously;
        # concurrent requests to iptables make it complain.
max_retry = CONF.live_migration_retry_count
for cnt in range(max_retry):
try:
self.plug_vifs(instance, network_info)
break
except processutils.ProcessExecutionError:
if cnt == max_retry - 1:
raise
else:
LOG.warn(_LW('plug_vifs() failed %(cnt)d. Retry up to '
'%(max_retry)d.'),
{'cnt': cnt,
'max_retry': max_retry},
instance=instance)
greenthread.sleep(1)
res_data = {'graphics_listen_addrs': {}}
res_data['graphics_listen_addrs']['vnc'] = CONF.vncserver_listen
res_data['graphics_listen_addrs']['spice'] = CONF.spice.server_listen
return res_data
def _create_images_and_backing(self, context, instance, instance_dir,
disk_info_json):
""":param context: security context
:param instance:
nova.db.sqlalchemy.models.Instance object
instance object that is migrated.
:param instance_dir:
instance path to use, calculated externally to handle block
migrating an instance with an old style instance path
:param disk_info_json:
            JSON string as returned by get_instance_disk_info
"""
if not disk_info_json:
disk_info = []
else:
disk_info = jsonutils.loads(disk_info_json)
for info in disk_info:
base = os.path.basename(info['path'])
# Get image type and create empty disk image, and
# create backing file in case of qcow2.
instance_disk = os.path.join(instance_dir, base)
if not info['backing_file'] and not os.path.exists(instance_disk):
libvirt_utils.create_image(info['type'], instance_disk,
info['virt_disk_size'])
elif info['backing_file']:
# Creating backing file follows same way as spawning instances.
cache_name = os.path.basename(info['backing_file'])
image = self.image_backend.image(instance,
instance_disk,
CONF.libvirt.images_type)
if cache_name.startswith('ephemeral'):
image.cache(fetch_func=self._create_ephemeral,
fs_label=cache_name,
os_type=instance["os_type"],
filename=cache_name,
size=info['virt_disk_size'],
ephemeral_size=instance['ephemeral_gb'])
elif cache_name.startswith('swap'):
inst_type = flavors.extract_flavor(instance)
swap_mb = inst_type['swap']
image.cache(fetch_func=self._create_swap,
filename="swap_%s" % swap_mb,
size=swap_mb * units.Mi,
swap_mb=swap_mb)
else:
image.cache(fetch_func=libvirt_utils.fetch_image,
context=context,
filename=cache_name,
image_id=instance['image_ref'],
user_id=instance['user_id'],
project_id=instance['project_id'],
size=info['virt_disk_size'])
        # If the image has a kernel and ramdisk, just download them in
        # the normal way.
self._fetch_instance_kernel_ramdisk(context, instance)
def post_live_migration(self, context, instance, block_device_info,
migrate_data=None):
# Disconnect from volume server
block_device_mapping = driver.block_device_info_get_mapping(
block_device_info)
for vol in block_device_mapping:
connection_info = vol['connection_info']
disk_dev = vol['mount_device'].rpartition("/")[2]
self._disconnect_volume(connection_info, disk_dev)
def post_live_migration_at_source(self, context, instance, network_info):
"""Unplug VIFs from networks at source.
:param context: security context
:param instance: instance object reference
:param network_info: instance network information
"""
self.unplug_vifs(instance, network_info)
def post_live_migration_at_destination(self, context,
instance,
network_info,
block_migration=False,
block_device_info=None):
"""Post operation of live migration at destination host.
:param context: security context
:param instance:
nova.db.sqlalchemy.models.Instance object
instance object that is migrated.
:param network_info: instance network information
:param block_migration: if true, post operation of block_migration.
"""
        # Define the migrated instance; otherwise suspend/destroy does not work.
dom_list = self._conn.listDefinedDomains()
if instance["name"] not in dom_list:
# In case of block migration, destination does not have
# libvirt.xml
disk_info = blockinfo.get_disk_info(
CONF.libvirt.virt_type, instance, block_device_info)
xml = self._get_guest_xml(context, instance,
network_info, disk_info,
block_device_info=block_device_info,
write_to_disk=True)
self._conn.defineXML(xml)
def _get_instance_disk_info(self, instance_name, xml,
block_device_info=None):
block_device_mapping = driver.block_device_info_get_mapping(
block_device_info)
volume_devices = set()
for vol in block_device_mapping:
disk_dev = vol['mount_device'].rpartition("/")[2]
volume_devices.add(disk_dev)
disk_info = []
doc = etree.fromstring(xml)
disk_nodes = doc.findall('.//devices/disk')
path_nodes = doc.findall('.//devices/disk/source')
driver_nodes = doc.findall('.//devices/disk/driver')
target_nodes = doc.findall('.//devices/disk/target')
for cnt, path_node in enumerate(path_nodes):
disk_type = disk_nodes[cnt].get('type')
path = path_node.get('file')
target = target_nodes[cnt].attrib['dev']
if not path:
LOG.debug('skipping disk for %s as it does not have a path',
instance_name)
continue
if disk_type != 'file':
                LOG.debug('skipping %s since it looks like a volume', path)
continue
if target in volume_devices:
LOG.debug('skipping disk %(path)s (%(target)s) as it is a '
'volume', {'path': path, 'target': target})
continue
# get the real disk size or
# raise a localized error if image is unavailable
dk_size = int(os.path.getsize(path))
disk_type = driver_nodes[cnt].get('type')
if disk_type == "qcow2":
backing_file = libvirt_utils.get_disk_backing_file(path)
virt_size = disk.get_disk_size(path)
over_commit_size = int(virt_size) - dk_size
else:
backing_file = ""
virt_size = dk_size
over_commit_size = 0
disk_info.append({'type': disk_type,
'path': path,
'virt_disk_size': virt_size,
'backing_file': backing_file,
'disk_size': dk_size,
'over_committed_disk_size': over_commit_size})
return jsonutils.dumps(disk_info)
def get_instance_disk_info(self, instance_name,
block_device_info=None):
try:
dom = self._lookup_by_name(instance_name)
xml = dom.XMLDesc(0)
except libvirt.libvirtError as ex:
error_code = ex.get_error_code()
msg = (_('Error from libvirt while getting description of '
'%(instance_name)s: [Error Code %(error_code)s] '
'%(ex)s') %
{'instance_name': instance_name,
'error_code': error_code,
'ex': ex})
LOG.warn(msg)
raise exception.InstanceNotFound(instance_id=instance_name)
return self._get_instance_disk_info(instance_name, xml,
block_device_info)
def _get_disk_over_committed_size_total(self):
"""Return total over committed disk size for all instances."""
        # Disk size that all instances use: virtual_size - disk_size
disk_over_committed_size = 0
for dom in self._list_instance_domains():
try:
xml = dom.XMLDesc(0)
disk_infos = jsonutils.loads(
self._get_instance_disk_info(dom.name(), xml))
for info in disk_infos:
disk_over_committed_size += int(
info['over_committed_disk_size'])
except libvirt.libvirtError as ex:
error_code = ex.get_error_code()
LOG.warn(_LW(
'Error from libvirt while getting description of '
'%(instance_name)s: [Error Code %(error_code)s] %(ex)s'
) % {'instance_name': dom.name(),
'error_code': error_code,
'ex': ex})
except OSError as e:
if e.errno == errno.ENOENT:
LOG.warn(_LW('Periodic task is updating the host stat, '
'it is trying to get disk %(i_name)s, '
'but disk file was removed by concurrent '
'operations such as resize.'),
{'i_name': dom.name()})
                elif e.errno == errno.EACCES:
LOG.warn(_LW('Periodic task is updating the host stat, '
'it is trying to get disk %(i_name)s, '
'but access is denied. It is most likely '
'due to a VM that exists on the compute '
'node but is not managed by Nova.'),
{'i_name': dom.name()})
else:
raise
# NOTE(gtt116): give other tasks a chance.
greenthread.sleep(0)
return disk_over_committed_size
def unfilter_instance(self, instance, network_info):
"""See comments of same method in firewall_driver."""
self.firewall_driver.unfilter_instance(instance,
network_info=network_info)
def get_host_stats(self, refresh=False):
"""Return the current state of the host.
        If 'refresh' is True, update the stats first.
"""
return self.host_state.get_host_stats(refresh=refresh)
def get_host_cpu_stats(self):
"""Return the current CPU state of the host."""
# Extract node's CPU statistics.
stats = self._conn.getCPUStats(libvirt.VIR_NODE_CPU_STATS_ALL_CPUS, 0)
# getInfo() returns various information about the host node
# No. 3 is the expected CPU frequency.
stats["frequency"] = self._conn.getInfo()[3]
return stats
def get_host_uptime(self, host):
"""Returns the result of calling "uptime"."""
# NOTE(dprince): host seems to be ignored for this call and in
# other compute drivers as well. Perhaps we should remove it?
out, err = utils.execute('env', 'LANG=C', 'uptime')
return out
def manage_image_cache(self, context, all_instances):
"""Manage the local cache of images."""
self.image_cache_manager.update(context, all_instances)
def _cleanup_remote_migration(self, dest, inst_base, inst_base_resize,
shared_storage=False):
"""Used only for cleanup in case migrate_disk_and_power_off fails."""
try:
if os.path.exists(inst_base_resize):
utils.execute('rm', '-rf', inst_base)
utils.execute('mv', inst_base_resize, inst_base)
if not shared_storage:
utils.execute('ssh', dest, 'rm', '-rf', inst_base)
except Exception:
pass
def _is_storage_shared_with(self, dest, inst_base):
# NOTE (rmk): There are two methods of determining whether we are
# on the same filesystem: the source and dest IP are the
# same, or we create a file on the dest system via SSH
# and check whether the source system can also see it.
shared_storage = (dest == self.get_host_ip_addr())
if not shared_storage:
tmp_file = uuid.uuid4().hex + '.tmp'
tmp_path = os.path.join(inst_base, tmp_file)
try:
utils.execute('ssh', dest, 'touch', tmp_path)
if os.path.exists(tmp_path):
shared_storage = True
os.unlink(tmp_path)
else:
utils.execute('ssh', dest, 'rm', tmp_path)
except Exception:
pass
return shared_storage
def migrate_disk_and_power_off(self, context, instance, dest,
flavor, network_info,
block_device_info=None):
LOG.debug("Starting migrate_disk_and_power_off",
instance=instance)
# Checks if the migration needs a disk resize down.
for kind in ('root_gb', 'ephemeral_gb'):
if flavor[kind] < instance[kind]:
reason = _("Unable to resize disk down.")
raise exception.InstanceFaultRollback(
exception.ResizeError(reason=reason))
disk_info_text = self.get_instance_disk_info(instance['name'],
block_device_info=block_device_info)
disk_info = jsonutils.loads(disk_info_text)
# NOTE(dgenin): Migration is not implemented for LVM backed instances.
if (CONF.libvirt.images_type == 'lvm' and
not self._is_booted_from_volume(instance, disk_info_text)):
reason = "Migration is not supported for LVM backed instances"
raise exception.MigrationPreCheckError(reason)
# copy disks to destination
# rename instance dir to +_resize at first for using
# shared storage for instance dir (eg. NFS).
inst_base = libvirt_utils.get_instance_path(instance)
inst_base_resize = inst_base + "_resize"
shared_storage = self._is_storage_shared_with(dest, inst_base)
# try to create the directory on the remote compute node
# if this fails we pass the exception up the stack so we can catch
# failures here earlier
if not shared_storage:
utils.execute('ssh', dest, 'mkdir', '-p', inst_base)
self.power_off(instance)
block_device_mapping = driver.block_device_info_get_mapping(
block_device_info)
for vol in block_device_mapping:
connection_info = vol['connection_info']
disk_dev = vol['mount_device'].rpartition("/")[2]
self._disconnect_volume(connection_info, disk_dev)
try:
utils.execute('mv', inst_base, inst_base_resize)
# if we are migrating the instance with shared storage then
# create the directory. If it is a remote node the directory
# has already been created
if shared_storage:
dest = None
utils.execute('mkdir', '-p', inst_base)
active_flavor = flavors.extract_flavor(instance)
for info in disk_info:
# assume inst_base == dirname(info['path'])
img_path = info['path']
fname = os.path.basename(img_path)
from_path = os.path.join(inst_base_resize, fname)
if (fname == 'disk.swap' and
active_flavor.get('swap', 0) != flavor.get('swap', 0)):
# To properly resize the swap partition, it must be
# re-created with the proper size. This is acceptable
# because when an OS is shut down, the contents of the
                    # swap space are just garbage; the OS does not care what
                    # is in it.
# We will not copy over the swap disk here, and rely on
# finish_migration/_create_image to re-create it for us.
continue
if info['type'] == 'qcow2' and info['backing_file']:
tmp_path = from_path + "_rbase"
# merge backing file
utils.execute('qemu-img', 'convert', '-f', 'qcow2',
'-O', 'qcow2', from_path, tmp_path)
if shared_storage:
utils.execute('mv', tmp_path, img_path)
else:
libvirt_utils.copy_image(tmp_path, img_path, host=dest)
utils.execute('rm', '-f', tmp_path)
else: # raw or qcow2 with no backing file
libvirt_utils.copy_image(from_path, img_path, host=dest)
except Exception:
with excutils.save_and_reraise_exception():
self._cleanup_remote_migration(dest, inst_base,
inst_base_resize,
shared_storage)
return disk_info_text
def _wait_for_running(self, instance):
state = self.get_info(instance)['state']
if state == power_state.RUNNING:
LOG.info(_LI("Instance running successfully."), instance=instance)
raise loopingcall.LoopingCallDone()
@staticmethod
def _disk_size_from_instance(instance, info):
"""Determines the disk size from instance properties
Returns the disk size by using the disk name to determine whether it
is a root or an ephemeral disk, then by checking properties of the
instance returns the size converted to bytes.
Returns 0 if the disk name not match (disk, disk.local).
"""
fname = os.path.basename(info['path'])
if fname == 'disk':
size = instance['root_gb']
elif fname == 'disk.local':
size = instance['ephemeral_gb']
else:
size = 0
return size * units.Gi
@staticmethod
def _disk_raw_to_qcow2(path):
"""Converts a raw disk to qcow2."""
path_qcow = path + '_qcow'
utils.execute('qemu-img', 'convert', '-f', 'raw',
'-O', 'qcow2', path, path_qcow)
utils.execute('mv', path_qcow, path)
@staticmethod
def _disk_qcow2_to_raw(path):
"""Converts a qcow2 disk to raw."""
path_raw = path + '_raw'
utils.execute('qemu-img', 'convert', '-f', 'qcow2',
'-O', 'raw', path, path_raw)
utils.execute('mv', path_raw, path)
def _disk_resize(self, info, size):
"""Attempts to resize a disk to size
Attempts to resize a disk by checking the capabilities and
preparing the format, then calling disk.api.extend.
Note: Currently only support disk extend.
"""
# If we have a non partitioned image that we can extend
# then ensure we're in 'raw' format so we can extend file system.
fmt = info['type']
pth = info['path']
if (size and fmt == 'qcow2' and
disk.can_resize_image(pth, size) and
disk.is_image_partitionless(pth, use_cow=True)):
self._disk_qcow2_to_raw(pth)
fmt = 'raw'
if size:
use_cow = fmt == 'qcow2'
disk.extend(pth, size, use_cow=use_cow)
if fmt == 'raw' and CONF.use_cow_images:
# back to qcow2 (no backing_file though) so that snapshot
# will be available
self._disk_raw_to_qcow2(pth)
def finish_migration(self, context, migration, instance, disk_info,
network_info, image_meta, resize_instance,
block_device_info=None, power_on=True):
LOG.debug("Starting finish_migration", instance=instance)
        # Resize disks; only "disk" and "disk.local" are necessary.
disk_info = jsonutils.loads(disk_info)
for info in disk_info:
size = self._disk_size_from_instance(instance, info)
self._disk_resize(info, size)
disk_info = blockinfo.get_disk_info(CONF.libvirt.virt_type,
instance,
block_device_info,
image_meta)
        # assume _create_image does nothing if a target file exists.
self._create_image(context, instance,
disk_mapping=disk_info['mapping'],
network_info=network_info,
block_device_info=None, inject_files=False)
xml = self._get_guest_xml(context, instance, network_info, disk_info,
block_device_info=block_device_info,
write_to_disk=True)
self._create_domain_and_network(context, xml, instance, network_info,
block_device_info, power_on)
if power_on:
timer = loopingcall.FixedIntervalLoopingCall(
self._wait_for_running,
instance)
timer.start(interval=0.5).wait()
def _cleanup_failed_migration(self, inst_base):
"""Make sure that a failed migrate doesn't prevent us from rolling
back in a revert.
"""
try:
shutil.rmtree(inst_base)
except OSError as e:
if e.errno != errno.ENOENT:
raise
def finish_revert_migration(self, context, instance, network_info,
block_device_info=None, power_on=True):
LOG.debug("Starting finish_revert_migration",
instance=instance)
inst_base = libvirt_utils.get_instance_path(instance)
inst_base_resize = inst_base + "_resize"
# NOTE(danms): if we're recovering from a failed migration,
# make sure we don't have a left-over same-host base directory
# that would conflict. Also, don't fail on the rename if the
# failure happened early.
if os.path.exists(inst_base_resize):
self._cleanup_failed_migration(inst_base)
utils.execute('mv', inst_base_resize, inst_base)
disk_info = blockinfo.get_disk_info(CONF.libvirt.virt_type,
instance,
block_device_info)
xml = self._get_guest_xml(context, instance, network_info, disk_info,
block_device_info=block_device_info)
self._create_domain_and_network(context, xml, instance, network_info,
block_device_info, power_on)
if power_on:
timer = loopingcall.FixedIntervalLoopingCall(
self._wait_for_running,
instance)
timer.start(interval=0.5).wait()
def confirm_migration(self, migration, instance, network_info):
"""Confirms a resize, destroying the source VM."""
self._cleanup_resize(instance, network_info)
def get_diagnostics(self, instance):
def get_io_devices(xml_doc):
"""get the list of io devices from the xml document."""
result = {"volumes": [], "ifaces": []}
try:
doc = etree.fromstring(xml_doc)
except Exception:
return result
blocks = [('./devices/disk', 'volumes'),
('./devices/interface', 'ifaces')]
for block, key in blocks:
section = doc.findall(block)
for node in section:
for child in node.getchildren():
if child.tag == 'target' and child.get('dev'):
result[key].append(child.get('dev'))
return result
domain = self._lookup_by_name(instance['name'])
output = {}
# get cpu time, might launch an exception if the method
# is not supported by the underlying hypervisor being
# used by libvirt
try:
cputime = domain.vcpus()[0]
for i in range(len(cputime)):
output["cpu" + str(i) + "_time"] = cputime[i][2]
except libvirt.libvirtError:
pass
# get io status
xml = domain.XMLDesc(0)
dom_io = get_io_devices(xml)
for guest_disk in dom_io["volumes"]:
try:
# blockStats might launch an exception if the method
# is not supported by the underlying hypervisor being
# used by libvirt
stats = domain.blockStats(guest_disk)
output[guest_disk + "_read_req"] = stats[0]
output[guest_disk + "_read"] = stats[1]
output[guest_disk + "_write_req"] = stats[2]
output[guest_disk + "_write"] = stats[3]
output[guest_disk + "_errors"] = stats[4]
except libvirt.libvirtError:
pass
for interface in dom_io["ifaces"]:
try:
# interfaceStats might launch an exception if the method
# is not supported by the underlying hypervisor being
# used by libvirt
stats = domain.interfaceStats(interface)
output[interface + "_rx"] = stats[0]
output[interface + "_rx_packets"] = stats[1]
output[interface + "_rx_errors"] = stats[2]
output[interface + "_rx_drop"] = stats[3]
output[interface + "_tx"] = stats[4]
output[interface + "_tx_packets"] = stats[5]
output[interface + "_tx_errors"] = stats[6]
output[interface + "_tx_drop"] = stats[7]
except libvirt.libvirtError:
pass
output["memory"] = domain.maxMemory()
# memoryStats might launch an exception if the method
# is not supported by the underlying hypervisor being
# used by libvirt
try:
mem = domain.memoryStats()
for key in mem.keys():
output["memory-" + key] = mem[key]
except (libvirt.libvirtError, AttributeError):
pass
return output
def instance_on_disk(self, instance):
# ensure directories exist and are writable
instance_path = libvirt_utils.get_instance_path(instance)
LOG.debug('Checking instance files accessibility %s', instance_path)
return os.access(instance_path, os.W_OK)
def inject_network_info(self, instance, nw_info):
self.firewall_driver.setup_basic_filtering(instance, nw_info)
def _delete_instance_files(self, instance):
# NOTE(mikal): a shim to handle this file not using instance objects
# everywhere. Remove this when that conversion happens.
context = nova_context.get_admin_context(read_deleted='yes')
inst_obj = objects.Instance.get_by_uuid(context, instance['uuid'])
# NOTE(mikal): this code should be pushed up a layer when this shim is
# removed.
attempts = int(inst_obj.system_metadata.get('clean_attempts', '0'))
success = self.delete_instance_files(inst_obj)
inst_obj.system_metadata['clean_attempts'] = str(attempts + 1)
if success:
inst_obj.cleaned = True
inst_obj.save(context)
def delete_instance_files(self, instance):
target = libvirt_utils.get_instance_path(instance)
# A resize may be in progress
target_resize = target + '_resize'
        # Other threads may attempt to rename the path, so rename the path
        # to target + '_del' (because rename is atomic) and iterate twice
        # in the unlikely event that a concurrent rename occurs between
        # the two rename attempts in this method. In general this method
        # should be fairly thread-safe without these additional checks, since
        # other operations involving renames are not permitted when the task
        # state is not None, and the task state should be set to something
        # other than None by the time this method is invoked.
target_del = target + '_del'
for i in six.moves.range(2):
try:
utils.execute('mv', target, target_del)
break
except Exception:
pass
try:
utils.execute('mv', target_resize, target_del)
break
except Exception:
pass
# Either the target or target_resize path may still exist if all
# rename attempts failed.
remaining_path = None
for p in (target, target_resize):
if os.path.exists(p):
remaining_path = p
break
# A previous delete attempt may have been interrupted, so target_del
# may exist even if all rename attempts during the present method
# invocation failed due to the absence of both target and
# target_resize.
if not remaining_path and os.path.exists(target_del):
LOG.info(_LI('Deleting instance files %s'), target_del,
instance=instance)
remaining_path = target_del
try:
shutil.rmtree(target_del)
except OSError as e:
LOG.error(_LE('Failed to cleanup directory %(target)s: '
'%(e)s'), {'target': target_del, 'e': e},
instance=instance)
# It is possible that the delete failed, if so don't mark the instance
# as cleaned.
if remaining_path and os.path.exists(remaining_path):
LOG.info(_LI('Deletion of %s failed'), remaining_path,
instance=instance)
return False
LOG.info(_LI('Deletion of %s complete'), target_del, instance=instance)
return True
@property
def need_legacy_block_device_info(self):
return False
def default_root_device_name(self, instance, image_meta, root_bdm):
disk_bus = blockinfo.get_disk_bus_for_device_type(
CONF.libvirt.virt_type, image_meta, "disk")
cdrom_bus = blockinfo.get_disk_bus_for_device_type(
CONF.libvirt.virt_type, image_meta, "cdrom")
root_info = blockinfo.get_root_info(
CONF.libvirt.virt_type, image_meta, root_bdm, disk_bus,
cdrom_bus)
return block_device.prepend_dev(root_info['dev'])
def default_device_names_for_instance(self, instance, root_device_name,
*block_device_lists):
ephemerals, swap, block_device_mapping = block_device_lists[:3]
blockinfo.default_device_names(CONF.libvirt.virt_type,
nova_context.get_admin_context(),
instance, root_device_name,
ephemerals, swap,
block_device_mapping)
def is_supported_fs_format(self, fs_type):
return fs_type in [disk.FS_FORMAT_EXT2, disk.FS_FORMAT_EXT3,
disk.FS_FORMAT_EXT4, disk.FS_FORMAT_XFS]
class HostState(object):
"""Manages information about the compute node through libvirt."""
def __init__(self, driver):
super(HostState, self).__init__()
self._stats = {}
self.driver = driver
self.update_status()
def get_host_stats(self, refresh=False):
"""Return the current state of the host.
        If 'refresh' is True, update the stats first.
"""
if refresh or not self._stats:
self.update_status()
return self._stats
def update_status(self):
"""Retrieve status info from libvirt."""
def _get_disk_available_least():
"""Return total real disk available least size.
The size of available disk, when block_migration command given
disk_over_commit param is FALSE.
The size that deducted real instance disk size from the total size
of the virtual disk of all instances.
"""
disk_free_gb = disk_info_dict['free']
disk_over_committed = (self.driver.
_get_disk_over_committed_size_total())
# Disk available least size
available_least = disk_free_gb * units.Gi - disk_over_committed
return (available_least / units.Gi)
LOG.debug("Updating host stats")
disk_info_dict = self.driver._get_local_gb_info()
data = {}
# NOTE(dprince): calling capabilities before getVersion works around
# an initialization issue with some versions of Libvirt (1.0.5.5).
# See: https://bugzilla.redhat.com/show_bug.cgi?id=1000116
# See: https://bugs.launchpad.net/nova/+bug/1215593
data["supported_instances"] = \
self.driver._get_instance_capabilities()
data["vcpus"] = self.driver._get_vcpu_total()
data["memory_mb"] = self.driver._get_memory_mb_total()
data["local_gb"] = disk_info_dict['total']
data["vcpus_used"] = self.driver._get_vcpu_used()
data["memory_mb_used"] = self.driver._get_memory_mb_used()
data["local_gb_used"] = disk_info_dict['used']
data["hypervisor_type"] = self.driver._get_hypervisor_type()
data["hypervisor_version"] = self.driver._get_hypervisor_version()
data["hypervisor_hostname"] = self.driver._get_hypervisor_hostname()
data["cpu_info"] = self.driver._get_cpu_info()
data['disk_available_least'] = _get_disk_available_least()
data['pci_passthrough_devices'] = \
self.driver._get_pci_passthrough_devices()
self._stats = data
return data
| nova/virt/libvirt/driver.py | 244,846 | Manages information about the compute node through libvirt.
Checks if destination has enough disk for block migration.
Confirms existence of the tmpfile under CONF.instances_path.
Cannot confirm tmpfile return False.
Make sure that a failed migrate doesn't prevent us from rolling
back in a revert.
Delete all LVM disks for given instance object.
Used only for cleanup in case migrate_disk_and_power_off fails.
Removes existence of the tmpfile under CONF.instances_path.
Checks the host cpu is compatible to a cpu given by xml.
"xml" must be a part of libvirt.openAuth(...).getCapabilities().
return values follows by virCPUCompareResult.
if 0 > return value, do live migration.
'http://libvirt.org/html/libvirt-libvirt.html#virCPUCompareResult'
:param cpu_info: json string of cpu feature from _get_cpu_info()
:returns:
None. if given cpu info is not compatible to this server,
raise exception.
Create a domain.
Either domain or xml must be passed in. If both are passed, then
the domain definition is overwritten from the xml.
Do required network setup and create domain.
:param context: security context
:param instance:
nova.db.sqlalchemy.models.Instance object
instance object that is migrated.
:param instance_dir:
instance path to use, calculated externally to handle block
migrating an instance with an old style instance path
:param disk_info_json:
json strings specified in get_instance_disk_info
Create a blank image of specified size.
Makes tmpfile under CONF.instances_path.
Create a swap file of specified size.
Converts a qcow2 disk to raw.
Converts a raw disk to qcow2.
Attempts to resize a disk to size
Attempts to resize a disk by checking the capabilities and
preparing the format, then calling disk.api.extend.
Note: Currently only support disk extend.
Determines the disk size from instance properties
Returns the disk size by using the disk name to determine whether it
is a root or an ephemeral disk, then by checking properties of the
instance returns the size converted to bytes.
Returns 0 if the disk name not match (disk, disk.local).
Wait for & dispatch events from native thread
Blocks until native thread indicates some events
are ready. Then dispatches all queued events.
Dispatches async events coming in from libvirtd.
This is a green thread which waits for events to
arrive from the libvirt event loop thread. This
then dispatches the events to the compute manager.
Warn about untested driver configurations.
This will log a warning message about untested driver or host arch
configurations to indicate to administrators that the quality is
unknown. Currently, only qemu or kvm on intel 32- or 64-bit systems
is tested upstream.
Receives lifecycle events from libvirt.
NB: this method is executing in a native thread, not
an eventlet coroutine. It can only invoke other libvirt
APIs, or use self.queue_event(). Any use of logging APIs
in particular is forbidden.
Download kernel and ramdisk for instance in instance directory.
Return all block devices in use on this node.
Get cpuinfo information.
Obtains cpu feature from virConnect.getCapabilities,
and returns as a json string.
:return: see above description
Get a PCI device's device type.
An assignable PCI device can be a normal PCI device,
a SR-IOV Physical Function (PF), or a SR-IOV Virtual
Function (VF). Only normal PCI devices or SR-IOV VFs
are assignable, while SR-IOV PFs are always owned by
hypervisor.
Please notice that a PCI device with SR-IOV
capability but not enabled is reported as normal PCI device.
Return total real disk available least size.
The size of available disk, when block_migration command given
disk_over_commit param is FALSE.
The size that deducted real instance disk size from the total size
of the virtual disk of all instances.
Return total over committed disk size for all instances.
Returns the xml for the disk mounted at device.
Get config data for parameters.
:param rescue: optional dictionary that should contain the key
'ramdisk_id' if a ramdisk is needed for the rescue image and
'kernel_id' if a kernel is needed for the rescue image.
Get metadata config for guest.
Returns an instance of config.LibvirtConfigCaps representing
the capabilities of the host.
Returns a UUID representing the host.
Returns the hostname of the hypervisor.
Get hypervisor type.
:returns: hypervisor type (ex. qemu)
Get hypervisor version.
:returns: hypervisor version (ex. 12003)
Get hypervisor instance capabilities
Returns a list of tuples that describe instances the
hypervisor is capable of hosting. Each tuple consists
of the triplet (arch, hypervisor_type, vm_mode).
:returns: List of tuples describing instance capabilities
Note that this function takes a domain xml.
Returns a list of all network interfaces for this instance.
Get local storage info of the compute node in GB.
:returns: A dict containing:
:total: How big the overall usable filesystem is (in gigabytes)
:free: How much space is free (in gigabytes)
:used: How much space is used (in gigabytes)
Get the total memory size(MB) of physical computer.
:returns: the total amount of memory(MB).
Get the used memory size(MB) of physical computer.
:returns: the total usage of memory(MB).
Get host PCI devices information.
Obtains pci devices information from libvirt, and returns
as a JSON string.
Each device information is a dictionary, with mandatory keys
of 'address', 'vendor_id', 'product_id', 'dev_type', 'dev_id',
'label' and other optional device specific information.
Refer to the objects/pci_device.py for more idea of these keys.
:returns: a JSON string containing a list of the assignable PCI
devices information
Returns a dict of PCI device.
Get available vcpu number of physical computer.
:returns: the number of cpu core instances can be used.
Get vcpu usage number of physical computer.
:returns: The total number of vcpu(s) that are currently being used.
Reboot a virtual machine, given an instance reference.
Performs a Libvirt reset (if supported) on the domain.
If Libvirt reset is unavailable this method actually destroys and
re-creates the domain to ensure the reboot happens, as the guest
OS cannot ignore this action.
If xml is set, it uses the passed in xml in place of the xml from the
existing domain.
Initializes the libvirt events subsystem.
This requires running a native thread to provide the
libvirt event loop integration. This forwards events
to a green thread which does the actual dispatching.
Create a self-pipe for the native thread to synchronize on.
This code is taken from the eventlet tpool module, under terms
of the Apache License v2.0.
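The self-pipe mechanics can be shown in a few lines. This is a minimal sketch of the pattern only: in the driver, the writer runs in the native libvirt event thread and the reader in a green thread, whereas here both ends are exercised in one thread purely to demonstrate the handshake.

```python
import os

# One pipe: the native thread writes a byte to wake the green thread
# that blocks reading the other end.
read_fd, write_fd = os.pipe()

def notify():
    # Called from the native thread after queueing an event.
    os.write(write_fd, b"!")

def wait_for_events():
    # Called from the green thread; blocks until notified.
    os.read(read_fd, 1)

notify()
wait_for_events()  # returns immediately because a byte is pending
print("dispatched")
```

A pipe (rather than an in-process condition variable) is used because it is one of the few primitives that safely crosses the native-thread/green-thread boundary.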
Injects data in a disk image
Helper used for injecting data in a disk image file system.
Keyword arguments:
instance -- a dict describing the instance specification
network_info -- a dict describing the network specification
admin_pass -- a string used to set an admin password
files -- a list of files to be injected
suffix -- a string used as an image name suffix
Determines whether the VM is booting from volume
Determines whether the disk mapping indicates that the VM
is booting from a volume.
Check if all block storage of an instance can be shared
between source and destination of a live migration.
Returns true if the instance is volume backed and has no local disks,
or if the image backend is the same on source and destination and the
backend shares block storage between compute nodes.
Check if instance path is shared between source and
destination of a live migration.
Get a list of libvirt.Domain objects for nova instances
:param only_running: True to only return running instances
:param only_guests: True to filter out any host domain (eg Dom-0)
Query libvirt to a get a list of all libvirt.Domain objects
that correspond to nova instances. If the only_running parameter
is true this list will only include active domains, otherwise
inactive domains will be included too. If the only_guests parameter
is true the list will have any "host" domain (aka Xen Domain-0)
filtered out.
:returns: list of libvirt.Domain objects
Do live migration.
:param context: security context
:param instance:
nova.db.sqlalchemy.models.Instance object
instance object that is migrated.
:param dest: destination host
:param post_method:
post operation method.
expected nova.compute.manager._post_live_migration.
:param recover_method:
recovery method when any exception occurs.
expected nova.compute.manager._rollback_live_migration.
:param block_migration: if true, do block migration.
:param migrate_data: implementation specific params
Snapshot an instance without downtime.
Retrieve libvirt domain object given an instance id.
All libvirt error handling should be handled in this method and
relevant nova exceptions should be raised in response.
Retrieve libvirt domain object given an instance name.
All libvirt error handling should be handled in this method and
relevant nova exceptions should be raised in response.
Returns all LVM disks for given instance object.
Receives async events coming in from libvirtd.
This is a native thread which runs the default
libvirt event loop implementation. This processes
any incoming async events from libvirtd and queues
them for later dispatch. This thread is only
permitted to use libvirt python APIs, and the
driver.queue_event method. In particular any use
of logging is forbidden, since it will confuse
eventlet's greenthread integration
Puts an event on the queue for dispatch.
This method is called by the native event thread to
put events on the queue for later dispatch by the
green thread. Any use of logging APIs is forbidden.
Set cache mode on LibvirtConfigGuestDisk object.
Enables / Disables the compute service on this host.
This doesn't override non-automatic disablement with an automatic
setting, thereby permitting operators to keep otherwise
healthy hosts out of rotation.
Attempt to shutdown and restart the instance gracefully.
We use shutdown and create here so we can return if the guest
responded and actually rebooted. Note that this method only
succeeds if the guest responds to ACPI. Therefore we return
success or failure so we can fall back to a hard reboot if
necessary.
:returns: True if the reboot succeeded
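The fallback described above reduces to a small control-flow pattern. The sketch below uses injectable callables standing in for the driver's internal soft/hard reboot methods (all names here are hypothetical):

```python
def reboot(instance, soft_reboot, hard_reboot, timeout=120):
    """Try a graceful (ACPI) reboot first; fall back to a hard reboot.

    `soft_reboot` returns True only if the guest responded and
    actually rebooted -- a stand-in for a driver-internal method.
    """
    if soft_reboot(instance, timeout):
        return "soft"
    # Guest ignored ACPI (e.g. no agent installed): force it.
    hard_reboot(instance)
    return "hard"

# A guest that never answers ACPI ends up hard-rebooted:
result = reboot("vm-1", lambda i, t: False, lambda i: None)
print(result)  # -> hard
```

Returning success/failure from the soft path, rather than raising, is what lets the caller chain the hard reboot cleanly.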
Swap existing disk with a new block device.
Unplug VIFs from networks.
Perform volume snapshot.
:param domain: VM that volume is attached to
:param volume_id: volume UUID to snapshot
:param snapshot_id: UUID of snapshot being created
:param new_file: relative path to new qcow2 file present on share
Note:
if file being merged into == active image:
do a blockRebase (pull) operation
else:
do a blockCommit operation
Files must be adjacent in snap chain.
:param instance: instance object reference
:param volume_id: volume UUID
:param snapshot_id: snapshot UUID (unused currently)
:param delete_info: {
'type': 'qcow2',
'file_to_merge': 'a.img',
'merge_target_file': 'b.img' or None (if merging file_to_merge into
active image)
}
Libvirt blockjob handling required for this method is broken
in versions of libvirt that do not contain:
http://libvirt.org/git/?p=libvirt.git;h=0f9e67bfad (1.1.1)
(Patch is pending in 1.0.5-maint branch as well, but we cannot detect
libvirt 1.0.5.5 vs. 1.0.5.6 here.)
Send a snapshot status update to Cinder.
This method captures and logs exceptions that occur
since callers cannot do anything useful with these exceptions.
Operations on the Cinder side waiting for this will time out if
a failure occurs sending the update.
:param context: security context
:param snapshot_id: id of snapshot being updated
:param status: new status value
Wait for libvirt block job to complete.
Libvirt may return either cur==end or an empty dict when
the job is complete, depending on whether the job has been
cleaned up by libvirt yet, or not.
:returns: True if still in progress
False if completed
Called at an interval until the VM is running.
Called at an interval until the VM is gone.
Called at an interval until the VM is running again.
Note that this function takes an instance name.
Check if it is possible to execute live migration.
This runs checks on the destination host, and then calls
back to the source host to check the results.
:param context: security context
:param instance: nova.db.sqlalchemy.models.Instance
:param block_migration: if true, prepare for block migration
:param disk_over_commit: if true, allow disk over commit
:returns: a dict containing:
:filename: name of the tmpfile under CONF.instances_path
:block_migration: whether this is block migration
:disk_over_commit: disk-over-commit factor on dest host
:disk_available_mb: available disk space on dest host
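The returned dict can be sketched directly, including the tmpfile probe used for shared-storage detection (the source host later checks whether it can see the file). All names below are illustrative stand-ins, not the driver's exact code:

```python
import os
import tempfile

def check_can_live_migrate_destination(instances_path,
                                       block_migration=False,
                                       disk_over_commit=False,
                                       disk_available_mb=0):
    """Create a probe file under the instances path and return the
    migration check data described above."""
    fd, name = tempfile.mkstemp(dir=instances_path)
    os.close(fd)
    return {
        "filename": os.path.basename(name),
        "block_migration": block_migration,
        "disk_over_commit": disk_over_commit,
        "disk_available_mb": disk_available_mb,
    }

data = check_can_live_migrate_destination(tempfile.gettempdir(),
                                          disk_available_mb=2048)
print(sorted(data))
```

The probe file is why a matching cleanup call on the destination (described below for `check_can_live_migrate_destination_cleanup`) is required even when the check fails.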
Do required cleanup on dest host after check_can_live_migrate calls
:param context: security context
Check if it is possible to execute live migration.
This checks if the live migration can succeed, based on the
results from check_can_live_migrate_destination.
:param context: security context
:param instance: nova.db.sqlalchemy.models.Instance
:param dest_check_data: result of check_can_live_migrate_destination
:returns: a dict containing migration info
Confirms a resize, destroying the source VM.
Ensure that an instance's filtering rules are enabled.
When migrating an instance, we need the filtering rules to
be configured on the destination host before starting the
migration.
Also, when restarting the compute service, we need to ensure
that filtering rules exist for all running services.
Return usage info for volumes attached to VMs on
a given host.
Retrieve resource information.
This method is called when nova-compute launches, and
as part of a periodic task that records the results in the DB.
:param nodename: will be put in PCI device
:returns: dictionary containing resource info
Return the current CPU state of the host.
Return the current state of the host.
If 'refresh' is True, update the stats first.
Return the current state of the host.
If 'refresh' is True, update the stats first.
Returns the result of calling "uptime".
Retrieve information from libvirt for a specific instance name.
If a libvirt error is encountered during lookup, we might raise a
NotFound exception or Error exception depending on how severe the
libvirt error is.
Get the list of IO devices from the XML document.
Efficient override of base instance_exists method.
Note that this function takes an instance name.
Spawning live_migration operation for distributing high-load.
:param context: security context
:param instance:
nova.db.sqlalchemy.models.Instance object
instance object that is migrated.
:param dest: destination host
:param post_method:
post operation method.
expected nova.compute.manager._post_live_migration.
:param recover_method:
recovery method when any exception occurs.
expected nova.compute.manager._rollback_live_migration.
:param block_migration: if true, do block migration.
:param migrate_data: implementation specific params
Manage the local cache of images.
eventlet.tpool.Proxy doesn't work with old-style classes in __str__()
or __repr__() calls. See bug #962840 for details.
We perform a monkey patch to replace those two instance methods.
Pause VM instance.
Plug VIFs into networks.
Post operation of live migration at destination host.
:param context: security context
:param instance:
nova.db.sqlalchemy.models.Instance object
instance object that is migrated.
:param network_info: instance network information
:param block_migration: if true, post operation of block_migration.
Unplug VIFs from networks at source.
:param context: security context
:param instance: instance object reference
:param network_info: instance network information
Power off the specified instance.
Power on the specified instance.
Preparation live migration.
Reboot a virtual machine, given an instance reference.
Loads a VM using rescue images.
A rescue is normally performed when something goes wrong with the
primary images and data needs to be corrected/recovered. Rescuing
should not edit or override the original image, only allow for
data recovery.
Resume the specified instance.
Resume guest state when a host is booted.
Clean up destination node after a failed live migration.
Create snapshot from a running VM instance.
This command only works with qemu 0.14+
Suspend the specified instance.
See comments of same method in firewall_driver.
Unpause paused VM instance.
Reboot the VM which is being rescued back into primary images.
Retrieve status info from libvirt.
Create snapshots of a Cinder volume via libvirt.
:param instance: VM instance object reference
:param volume_id: id of volume being snapshotted
:param create_info: dict of information used to create snapshots
- snapshot_id : ID of snapshot
- type : qcow2 / <other>
- new_file : qcow2 file created by Cinder which
becomes the VM's active image after
the snapshot is complete
Wait for live migration completion.
A connection to a hypervisor through libvirt.
Supports KVM, LXC, QEMU, UML, and XEN.
Copyright 2010 United States Government as represented by the Administrator of the National Aeronautics and Space Administration. All Rights Reserved. Copyright (c) 2010 Citrix Systems, Inc. Copyright (c) 2011 Piston Cloud Computing, Inc Copyright (c) 2012 University Of Minho (c) Copyright 2013 Hewlett-Packard Development Company, L.P. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. The libvirt driver will prefix any disable reason codes with this string. Disable reason for the service which was enabled or disabled without reason NOTE(maoy): The DOMAIN_BLOCKED state is only valid in Xen. It means that the VM is running and the vCPU is idle. So, we map it to RUNNING NOTE(maoy): The libvirt API doc says that DOMAIN_SHUTDOWN means the domain is being shut down. So technically the domain is still running. SHUTOFF is the real powered off state. But we will map both to SHUTDOWN anyway. http://libvirt.org/html/libvirt-libvirt.html When the above version matches/exceeds this version delete it & corresponding code using it Live snapshot requirements TODO(sdague): this should be 1.0.0, but hacked to set 1.3.0 until https://bugs.launchpad.net/nova/+bug/1334398 can be diagnosed & resolved block size tuning requirements BlockJobInfo management requirement Relative block commit (feature is detected, this version is only used for messaging) Just ignore instead of default outputting to stderr. We prefer 'none' for consistent performance, host crash safety & migration correctness by avoiding host page cache. 
Some filesystems (eg GlusterFS via FUSE) don't support O_DIRECT though. For those we fallback to 'writethrough' which gives host crash safety, and is safe for migration provided the filesystem is cache coherent (cluster filesystems typically are, but things like NFS are not). Queue the event... ...then wakeup the green thread to dispatch it Wait to be notified that there are some events pending will be raised when pipe is closed Process as many events as possible without blocking get_new_connection may already have disabled the host, in which case _wrapped_conn is None. Disable compute service to avoid new instances of being scheduled on this host. This is Windows compatibility -- use a socket instead of a pipe because pipes don't really exist on Windows. NOTE(dkliban): Error handler needs to be registered before libvirt connection is used for the first time. Otherwise, the handler does not get registered. Stop libguestfs using KVM unless we're also configured to use this. This solves problem where people need to stop Nova use of KVM because nested-virt is broken call with _wrapped_conn_lock held Enabling the compute service, in case it was disabled since the connection was successful. NOTE: The registerCloseCallback of python-libvirt 1.0.1+ is defined with 3 arguments, and the above registerClose- Callback succeeds. However, the one of python-libvirt 1.0.0 is defined with 4 arguments and TypeError happens here. Then python-libvirt 0.9 does not define a method register- CloseCallback. multiple concurrent connections are protected by _wrapped_conn_lock tpool.proxy_call creates a native thread. Due to limitations with eventlet locking we cannot use the logging API inside the called function. 
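The cache-mode fallback described above is a one-line decision once O_DIRECT support is known. A sketch (the function name and the boolean-probe interface are assumptions; the real driver derives support from the backing filesystem):

```python
def pick_cache_mode(supports_o_direct):
    """Choose the guest disk cache mode: 'none' (O_DIRECT, bypassing
    the host page cache) when the backing filesystem allows it, for
    consistent performance, host crash safety and migration
    correctness; otherwise fall back to 'writethrough', which keeps
    crash safety and is migration-safe on cache-coherent filesystems.
    """
    return "none" if supports_o_direct else "writethrough"

print(pick_cache_mode(True))   # -> none
print(pick_cache_mode(False))  # -> writethrough
```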
The modern (>= 0.9.13) fast way - 1 single API call for all domains The legacy (< 0.9.13) slow way - O(n) API call for n domains Redundant numOfDomains check is for libvirt bz 836647 Old libvirt, or a libvirt driver which doesn't implement the new API If the instance is already terminated, we're still happy Otherwise, destroy it
NOTE(GuanQiang): teardown container to avoid resource leak If the instance is already shut off, we get this: Code=55 Error=Requested operation is not valid: domain is not running
NOTE(vish): If the instance disappears during the destroy we ignore it so the cleanup can still be attempted, because we would prefer destroy to never fail.
NOTE(wangpan): If the instance was booted again after destroy, this may be an endless loop, so check the id of the domain here; if it changed and the instance is still running, we should destroy it again. see https://bugs.launchpad.net/nova/+bug/1111213 for more details
NOTE(vish): Older versions of libvirt don't support undefine flags, so attempt to do the right thing.
FIXME(wangpan): if the instance is booted again here, such as the soft reboot operation booting it here, it will become "running deleted"; should we check and destroy it at the end of this method?
NOTE(vish): we disconnect from volumes regardless The volume must be detached from the VM before disconnecting it from its encryptor. Otherwise, the encryptor may report that the volume is still in use. Don't block on volume errors if we're trying to delete the instance, as we may be partially created or deleted
NOTE(haomai): destroy volumes if needed
TODO(sdague): remove in Juno We don't want to leak old disks, but at the same time, we don't want to do an unsafe thing. So we will only handle the old filter if it's the system default still.
TODO(sdague): remove in Juno
NOTE(wangpan): we get the pre-grizzly instance path first, so the backup dir of a pre-grizzly instance can be deleted correctly with grizzly or later nova.
Deletion can fail over NFS, so retry the deletion as required. Set the maximum attempts to 5; most tests can remove the directory on the second attempt.
Note(cfb): If the volume has a custom block size, check that we are using QEMU/KVM and libvirt >= 0.10.2. The presence of a block size is considered mandatory by cinder, so we fail if we can't honor the request.
NOTE(vish): We can always affect config because our domains are persistent, but we should only affect live if the domain is running. cache device_path in connection_info -- required by encryptors Save a copy of the domain's persistent XML file Abort is an idempotent operation, so make sure any block jobs which may have failed are ended.
NOTE (rmk): blockRebase cannot be executed on persistent domains, so we need to temporarily undefine it. If any part of this block fails, the domain is re-defined regardless. Start the copy with the VIR_DOMAIN_REBASE_REUSE_EXT flag to allow writing to an existing external volume file
NOTE(alex_xu): domain.blockJobAbort isn't a sync call. This is a bug in libvirt, so we need to wait until the pivot is finished. libvirt bug 1119173
NOTE(vish): We can always affect config because our domains are persistent, but we should only affect live if the domain is running. The volume must be detached from the VM before disconnecting it from its encryptor. Otherwise, the encryptor may report that the volume is still in use.
NOTE(zhaoqin): If the instance does not exist, _lookup_by_name() will throw an InstanceNotFound exception. We need to disconnect the volume under this circumstance.
NOTE(vish): This is called to cleanup volumes after live migration, so we should still disconnect even if the instance doesn't exist here anymore.
NOTE(vish): glance forces ami disk format to be ami
NOTE(bfilippov): save lvm and rbd as raw
NOTE(rmk): Live snapshots require QEMU 1.3 and Libvirt 1.0.0. These restrictions can be relaxed as other configurations can be validated.
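The NFS deletion retry mentioned above is a short loop. The sketch below makes the removal function injectable so the retry logic can be exercised without a real NFS mount; the function name and signature are illustrative, not the driver's exact code:

```python
import shutil

def delete_instance_files(path, rmtree=shutil.rmtree, attempts=5):
    """Retry directory removal, since over NFS the first attempt can
    fail (e.g. on silly-renamed .nfsXXXX files still held open)."""
    for attempt in range(1, attempts + 1):
        try:
            rmtree(path)
            return True
        except OSError:
            if attempt == attempts:
                return False
    return False

calls = []
def flaky_rmtree(path):
    # Simulated NFS behaviour: first call fails, later calls succeed.
    calls.append(path)
    if len(calls) < 2:
        raise OSError("device or resource busy")

print(delete_instance_files("/var/lib/nova/instances/x_del", flaky_rmtree))  # -> True
```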
Abort is an idempotent operation, so make sure any block jobs which may have failed are ended. This operation also confirms the running instance, as opposed to the system as a whole, has a new enough version of the hypervisor (bug 1193146). NOTE(rmk): We cannot perform live snapshots when a managedSave file is present, so we will use the cold/legacy method for instances which are shutdown. NOTE(dkang): managedSave does not work for LXC NOTE(xqueralt): libvirt needs o+x in the temp directory NOTE(dkang): because previous managedSave is not called for LXC, _create_domain must not be called. Upload that image to the image service Save a copy of the domain's persistent XML file Abort is an idempotent operation, so make sure any block jobs which may have failed are ended. NOTE (rmk): We are using shallow rebases as a workaround to a bug in QEMU 1.3. In order to do this, we need to create a destination image with the original backing file and matching size of the instance root disk. NOTE (rmk): blockRebase cannot be executed on persistent domains, so we need to temporarily undefine it. If any part of this block fails, the domain is re-defined regardless. NOTE (rmk): Establish a temporary mirror of our root disk and issue an abort once we have a complete copy. Convert the delta (CoW) image with a backing file to a flat image with no backing file. to be snapshotted by libvirt network disks (netfs, gluster, etc.) local disks not snapshotted disk is a Cinder volume with the correct volume_id Determine path for new_file based on current path Find dev name libgfapi delete determine reference within backing store for desired image pull via blockRebase() Merge the most recent snapshot into the active image often None commit with blockCommit() NOTE(vish): This will attempt to do a graceful shutdown/restart. NOTE(vish): This check allows us to reboot an instance that is already shutdown. 
NOTE(vish): This actually could take slightly longer than the FLAG defines depending on how long the get_info call takes to return. NOTE(ivoks): By checking domain IDs, we make sure we are not recreating domain that's already running. Get the system metadata from the instance Convert the system metadata to image metadata NOTE(vish): This could generate the wrong device_format if we are using the raw backend and the images don't exist yet. The create_images_and_backing below doesn't properly regenerate raw backend images, however, so when it does we need to (re)generate the xml after the images are in place. NOTE (rmk): Re-populate any missing backing files. Initialize all the necessary networking, block devices and start the instance. We use _hard_reboot here to ensure that all backing files, network, and block device connections, etc. are established and available before we attempt to start the instance. Check if the instance is running already and avoid doing anything if it is. Instance is not up and could be in an unknown state. Be as absolute as possible about getting it back into a known and running state. NOTE(ilyaalekseyev): Implementation like in multinics for xenapi(tr3buchet) NOTE(comstud): We want to try 'file' types first, then try 'pty' types. We can't use Python 2.7 syntax of: tree.find("./devices/console[@type='file']/source") because we need to support 2.6. If the guest has a console logging to a file prefer to use that Try 'pty' types NOTE(rmk): We had VNC consoles enabled but the instance in question is not actually listening for connections. TODO(sleepsonthefloor): use etree instead of minidom NOTE(rmk): We had Spice consoles enabled but the instance in question is not actually listening for connections. Check is the write allowed with 512 byte alignment Run as root only for block devices. Handles the partition need to be used. Handles the key injection. Handles the admin password injection. Handles the network injection. 
Handles the metadata injection ensure directories exist and are writable
NOTE(dprince): for rescue, console.log may already exist... chown it.
NOTE(yaguang): For evacuate, disk.config already exists in shared storage, chown it.
NOTE(vish): No need to add the suffix to console.log
NOTE(ndipanov): Even if disk_mapping was passed in, which currently happens only on rescue - we still don't want to create a base image. Lookup the filesystem type if required Config drive File injection only if needed kvm, qemu support managed mode In managed mode, the configured device will be automatically detached from the host OS drivers when the guest is started, and then re-attached when the guest shuts down. we do manual detach only for xen
Note(yjiang5): Spelling for 'dettach' is correct, see http://libvirt.org/html/libvirt-libvirt.html.
Note(yjiang5): A reset of one PCI device may impact other devices on the same bus, thus we need two separate loops to detach and then reset. for libvirt versions < 1.1.1 this is a race condition, so forbid detach if we don't have this version. after detachDeviceFlags returns, we should check the dom to ensure the detaching is finished
Note(jang): this is a quick fix to stop operator-disabled compute hosts from re-enabling themselves automatically. We prefix any automatic reason code with a fixed string. We only re-enable a host automatically if we find that string in place. This should probably be replaced with a separate flag.
FIXME(wangpan): the return value of baselineCPU should be None or an xml string, but libvirt has a bug in it from 1.1.2 which is fixed in 1.2.0; this -1 checking should be removed later.
only kvm supports managed mode We are using the default unit for memory: KiB
NOTE(yamahata): for nova.api.ec2.cloud.CloudController.get_metadata() The underlying machine type can be set as an image attribute, or otherwise based on some architecture specific defaults For ARM systems we will default to vexpress-a15 for armv7 and virt for aarch64 we only support os_command_line with images with an explicit kernel set and don't want to break nova if there's an os_command_line property without a specified kernel_id param
NOTE(mikal): Microsoft Windows expects the clock to be in "localtime". If the clock is set to UTC, then you can use a registry key to let windows know, but Microsoft says this is buggy in http://support.microsoft.com/kb/2687252
TODO(berrange): One day this should be per-guest OS type configurable
NOTE(rfolco): HPET is a hardware timer for x86 arch. qemu -no-hpet is not supported on non-x86 targets. The QEMU 'pty' driver throws away any data if no client app is connected. Thus we can't get away with a single type=pty console. Instead we have to configure two separate consoles. We want a tablet if VNC is enabled, or if SPICE is enabled and the SPICE agent is disabled
NB: this implies that if both SPICE + VNC are enabled at the same time, we'll get the tablet whether the SPICE agent is used or not.
NB: some versions of libvirt support both SPICE and VNC at the same time. We're not trying to second guess what those versions are. We'll just let libvirt report the errors appropriately if the user enables both.
NOTE(ldbragst): The following logic sets the video.type depending on supported defaults given the architecture, virtualization type, and features. The video.type attribute can be overridden by the user with image_meta['properties'], which is carried out in the next if statement below this one.
NOTE(ldbragst): PowerKVM doesn't support 'cirrus' by default, so use 'vga' instead when running on Power hardware.
Set video memory, only if the flavor's limit is set The Qemu guest agent only supports the 'qemu' and 'kvm' hypervisors Enable qga only if 'hw_qemu_guest_agent' is equal to yes
NB(sross): currently only actually supported by KVM/QEmu
NOTE(danms): Stringifying a NetworkInfo will take a lock. Do this ahead of time so that we don't acquire it while also holding the logging lock.
NOTE(mriedem): block_device_info can contain auth_password, so we need to sanitize the password in the message. Save the rootfs device to disconnect it when deleting the instance The domain may not be present if the instance failed to start
NOTE(uni): Now the container is running with its own private mount namespace and so there is no need to keep the container rootfs mounted in the host namespace
NOTE(danms): We need to collect any VIFs that are currently down that we expect a down->up event for. Anything that is already up will not undergo that transition, and for anything that might be stale (cache-wise) assume it's already up so we don't block on it. cache device_path in connection_info -- required by encryptors Neutron reported failure and we didn't swallow it, so bail here We never heard from Neutron Resume only if the domain has been paused
NOTE(gtt116): give other tasks a chance. skip dom0 the memory reported by dom0 may be greater than what is actually being used Convert it to MB Convert it to MB
NOTE(justinsb): getVersion moved between libvirt versions Trying to be compatible with older versions is a lost cause But ... we can at least give the user a nice message
NOTE(justinsb): If we wanted to get the version, we could: method = getattr(libvirt, 'getVersion', None)
NOTE(justinsb): This would then rely on a proper version check
TODO(berrange): why do we bother converting the libvirt capabilities XML into a special JSON format?
The data format is different across all the drivers so we could just return the raw capabilities XML which 'compare_cpu' could use directly That said, arch_filter.py now seems to rely on the libvirt drivers' format, which suggests this data format needs to be standardized across drivers requirement by DataBase Model Bail early if we know we can't support `listDevices` to avoid repeated warnings within a periodic task
TODO(mdragon): console proxy should be implemented for libvirt, in case someone wants to use it with kvm or such. For now return fake data. Temporary: convert supported_instances into a string, while keeping the RPC version as JSON. Can be changed when RPC broadcast is removed Compare CPU Create file on storage, to be checked on source host Checking shared storage connectivity if block migration, instances_paths should not be on shared storage.
NOTE(mikal): include the instance directory name here because it doesn't yet exist on the destination but we want to force that same name to be used pylint: disable E1120 Libvirt supports the qcow2 disk format, which is usually compressed on compute nodes. The real (compressed) disk image may be enlarged to the "virtual disk size", which is specified as the maximum disk size. (See qemu-img -f path-to-disk) The scheduler recognizes that the destination host still has enough disk space if real disk size < available disk size when disk_over_commit is True, and otherwise if virtual disk size < available disk size. Check that available disk > necessary disk
NOTE(berendt): virConnectCompareCPU not working for Xen if an unknown character exists in the xml, then libvirt complains nwfilters may be defined in a separate thread in the case of libvirt non-blocking mode, so we wait for completion
NB(sross): can't just use LibvirtConfigGuest.parse_str here b/c it doesn't capture the entire XML description change over listen addresses Do live migration. Waiting for completion of live_migration. Steps for volume backed instance live migration w/o shared storage.
NOTE(mikal): live migration of instances using config drive is not supported because of a bug in libvirt (read only devices are not copied by libvirt). See bug/1246201
NOTE(mikal): this doesn't use libvirt_utils.get_instance_path because we are ensuring that the same instance directory name is used as was at the source Ensure images and backing files are present.
NOTE(angdraug): when block storage is shared between source and destination and the instance path isn't (e.g. volume backed or rbd backed instance), the instance path on the destination has to be prepared Touch the console.log file, required by libvirt. if the image has a kernel and ramdisk, just download them following the normal way. Establishing connection to volume server. We call plug_vifs before the compute manager calls ensure_filtering_rules_for_instance, to ensure the bridge is set up A retry operation is necessary because requests come in continuously; when concurrent requests hit iptables, it complains. Get the image type and create an empty disk image, and create the backing file in the case of qcow2. Creating the backing file follows the same path as spawning instances. if the image has a kernel and ramdisk, just download them following the normal way. Disconnect from volume server Define the migrated instance, otherwise suspend/destroy does not work. In the case of block migration, the destination does not have libvirt.xml get the real disk size or raise a localized error if the image is unavailable Disk size that all instances use: virtual_size - disk_size
NOTE(gtt116): give other tasks a chance. Extract the node's CPU statistics. getInfo() returns various information about the host node No. 3 is the expected CPU frequency.
NOTE(dprince): host seems to be ignored for this call and in other compute drivers as well. Perhaps we should remove it?
NOTE (rmk): There are two methods of determining whether we are on the same filesystem: the source and dest IP are the same, or we create a file on the dest system via SSH and check whether the source system can also see it.
Checks if the migration needs a disk resize down. NOTE(dgenin): Migration is not implemented for LVM backed instances. copy disks to destination rename instance dir to +_resize at first for using shared storage for instance dir (eg. NFS). try to create the directory on the remote compute node if this fails we pass the exception up the stack so we can catch failures here earlier if we are migrating the instance with shared storage then create the directory. If it is a remote node the directory has already been created assume inst_base == dirname(info['path']) To properly resize the swap partition, it must be re-created with the proper size. This is acceptable because when an OS is shut down, the contents of the swap space are just garbage, the OS doesn't bother about what is in it. We will not copy over the swap disk here, and rely on finish_migration/_create_image to re-create it for us. merge backing file raw or qcow2 with no backing file If we have a non partitioned image that we can extend then ensure we're in 'raw' format so we can extend file system. back to qcow2 (no backing_file though) so that snapshot will be available resize disks. only "disk" and "disk.local" are necessary. assume _create_image do nothing if a target file exists. NOTE(danms): if we're recovering from a failed migration, make sure we don't have a left-over same-host base directory that would conflict. Also, don't fail on the rename if the failure happened early. 
get cpu time, might launch an exception if the method is not supported by the underlying hypervisor being used by libvirt get io status blockStats might launch an exception if the method is not supported by the underlying hypervisor being used by libvirt interfaceStats might launch an exception if the method is not supported by the underlying hypervisor being used by libvirt memoryStats might launch an exception if the method is not supported by the underlying hypervisor being used by libvirt ensure directories exist and are writable NOTE(mikal): a shim to handle this file not using instance objects everywhere. Remove this when that conversion happens. NOTE(mikal): this code should be pushed up a layer when this shim is removed. A resize may be in progress Other threads may attempt to rename the path, so renaming the path to target + '_del' (because it is atomic) and iterating through twice in the unlikely event that a concurrent rename occurs between the two rename attempts in this method. In general this method should be fairly thread-safe without these additional checks, since other operations involving renames are not permitted when the task state is not None and the task state should be set to something other than None by the time this method is invoked. Either the target or target_resize path may still exist if all rename attempts failed. A previous delete attempt may have been interrupted, so target_del may exist even if all rename attempts during the present method invocation failed due to the absence of both target and target_resize. It is possible that the delete failed, if so don't mark the instance as cleaned. Disk available least size NOTE(dprince): calling capabilities before getVersion works around an initialization issue with some versions of Libvirt (1.0.5.5). See: https://bugzilla.redhat.com/show_bug.cgi?id=1000116 See: https://bugs.launchpad.net/nova/+bug/1215593 | 42,322 | en | 0.862784 |
# Copyright 2018 Braxton Mckee
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import object_database
class ServiceRuntimeConfig:
def __init__(self, serviceTemporaryStorageRoot, authToken, ownIpAddress):
self.serviceTemporaryStorageRoot = serviceTemporaryStorageRoot
self.authToken = authToken
self.ownIpAddress = ownIpAddress
class ServiceBase:
coresUsed = 1
gbRamUsed = 1
def __init__(self, db, serviceObject, runtimeConfig):
self.db = db
self.serviceObject = serviceObject
self.runtimeConfig = runtimeConfig
if self.serviceObject is not None:
self.serializationContext = self.serviceObject.getSerializationContext()
else:
self.serializationContext = None
@staticmethod
def configureFromCommandline(db, serviceObject, args):
"""Subclasses should take the remaining args from the commandline and configure using them"""
pass
def initialize(self):
pass
def doWork(self, shouldStop):
# subclasses actually do work in here.
shouldStop.wait()
@staticmethod
def serviceDisplay(serviceObject, instance=None, objType=None, queryArgs=None):
return object_database.web.cells.Card("No details provided for service '%s'" % serviceObject.name)
@staticmethod
def serviceHeaderToggles(serviceObject, instance=None):
return []
| object_database/service_manager/ServiceBase.py | 1,936 | Subclasses should take the remaining args from the commandline and configure using them
Copyright 2018 Braxton Mckee Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. subclasses actually do work in here. | 697 | en | 0.871016 |
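The `ServiceBase.doWork` contract above (block on `shouldStop` until the service manager signals shutdown) can be exercised without `object_database` itself. In this sketch, `HeartbeatService` and its `beats` counter are hypothetical names, and `shouldStop` is assumed to behave like a `threading.Event`:

```python
import threading
import time

class HeartbeatService:
    """Hypothetical stand-in for a ServiceBase subclass (no object_database needed here)."""
    def __init__(self):
        self.beats = 0

    def doWork(self, shouldStop):
        # Mirror the ServiceBase.doWork pattern: loop until the manager signals shutdown.
        while not shouldStop.is_set():
            self.beats += 1
            shouldStop.wait(timeout=0.01)  # sleep, but wake immediately on stop

svc = HeartbeatService()
stop = threading.Event()
worker = threading.Thread(target=svc.doWork, args=(stop,))
worker.start()
time.sleep(0.05)        # let a few heartbeats happen
stop.set()              # the manager's shutdown signal
worker.join(timeout=1)
```

Passing the stop event into `doWork` (rather than having the service poll a global) is what lets the manager interrupt a sleeping service promptly.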
# Generated by Django 2.0.5 on 2018-12-04 13:00
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('branch', '0007_auto_20181204_2034'),
]
operations = [
migrations.AddField(
model_name='authentication',
name='sty',
field=models.CharField(default=0, max_length=3),
preserve_default=False,
),
]
| Demo/branch/migrations/0008_authentication_sty.py | 434 | Generated by Django 2.0.5 on 2018-12-04 13:00 | 45 | en | 0.569114 |
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
#
# http://www.apache.org/licenses/LICENSE-2.0
#
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
from aliyunsdkcore.request import RpcRequest
from aliyunsdkdyplsapi.endpoint import endpoint_data
class BindAxnRequest(RpcRequest):
def __init__(self):
RpcRequest.__init__(self, 'Dyplsapi', '2017-05-25', 'BindAxn')
self.set_method('POST')
if hasattr(self, "endpoint_map"):
setattr(self, "endpoint_map", endpoint_data.getEndpointMap())
if hasattr(self, "endpoint_regional"):
setattr(self, "endpoint_regional", endpoint_data.getEndpointRegional())
def get_ResourceOwnerId(self): # Long
return self.get_query_params().get('ResourceOwnerId')
def set_ResourceOwnerId(self, ResourceOwnerId): # Long
self.add_query_param('ResourceOwnerId', ResourceOwnerId)
def get_CallDisplayType(self): # Integer
return self.get_query_params().get('CallDisplayType')
def set_CallDisplayType(self, CallDisplayType): # Integer
self.add_query_param('CallDisplayType', CallDisplayType)
def get_CallTimeout(self): # Integer
return self.get_query_params().get('CallTimeout')
def set_CallTimeout(self, CallTimeout): # Integer
self.add_query_param('CallTimeout', CallTimeout)
def get_PhoneNoX(self): # String
return self.get_query_params().get('PhoneNoX')
def set_PhoneNoX(self, PhoneNoX): # String
self.add_query_param('PhoneNoX', PhoneNoX)
def get_RingConfig(self): # String
return self.get_query_params().get('RingConfig')
def set_RingConfig(self, RingConfig): # String
self.add_query_param('RingConfig', RingConfig)
def get_ASRStatus(self): # Boolean
return self.get_query_params().get('ASRStatus')
def set_ASRStatus(self, ASRStatus): # Boolean
self.add_query_param('ASRStatus', ASRStatus)
def get_PhoneNoB(self): # String
return self.get_query_params().get('PhoneNoB')
def set_PhoneNoB(self, PhoneNoB): # String
self.add_query_param('PhoneNoB', PhoneNoB)
def get_PhoneNoA(self): # String
return self.get_query_params().get('PhoneNoA')
def set_PhoneNoA(self, PhoneNoA): # String
self.add_query_param('PhoneNoA', PhoneNoA)
def get_ExpectCity(self): # String
return self.get_query_params().get('ExpectCity')
def set_ExpectCity(self, ExpectCity): # String
self.add_query_param('ExpectCity', ExpectCity)
def get_ResourceOwnerAccount(self): # String
return self.get_query_params().get('ResourceOwnerAccount')
def set_ResourceOwnerAccount(self, ResourceOwnerAccount): # String
self.add_query_param('ResourceOwnerAccount', ResourceOwnerAccount)
def get_OwnerId(self): # Long
return self.get_query_params().get('OwnerId')
def set_OwnerId(self, OwnerId): # Long
self.add_query_param('OwnerId', OwnerId)
def get_OutOrderId(self): # String
return self.get_query_params().get('OutOrderId')
def set_OutOrderId(self, OutOrderId): # String
self.add_query_param('OutOrderId', OutOrderId)
def get_PoolKey(self): # String
return self.get_query_params().get('PoolKey')
def set_PoolKey(self, PoolKey): # String
self.add_query_param('PoolKey', PoolKey)
def get_Expiration(self): # String
return self.get_query_params().get('Expiration')
def set_Expiration(self, Expiration): # String
self.add_query_param('Expiration', Expiration)
def get_IsRecordingEnabled(self): # Boolean
return self.get_query_params().get('IsRecordingEnabled')
def set_IsRecordingEnabled(self, IsRecordingEnabled): # Boolean
self.add_query_param('IsRecordingEnabled', IsRecordingEnabled)
def get_OutId(self): # String
return self.get_query_params().get('OutId')
def set_OutId(self, OutId): # String
self.add_query_param('OutId', OutId)
def get_NoType(self): # String
return self.get_query_params().get('NoType')
def set_NoType(self, NoType): # String
self.add_query_param('NoType', NoType)
def get_ASRModelId(self): # String
return self.get_query_params().get('ASRModelId')
def set_ASRModelId(self, ASRModelId): # String
self.add_query_param('ASRModelId', ASRModelId)
def get_CallRestrict(self): # String
return self.get_query_params().get('CallRestrict')
def set_CallRestrict(self, CallRestrict): # String
self.add_query_param('CallRestrict', CallRestrict)
| aliyun-python-sdk-dyplsapi/aliyunsdkdyplsapi/request/v20170525/BindAxnRequest.py | 4,943 | Licensed to the Apache Software Foundation (ASF) under one or more contributor license agreements. See the NOTICE file distributed with this work for additional information regarding copyright ownership. The ASF licenses this file to you under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. Long Long Integer Integer Integer Integer String String String String Boolean Boolean String String String String String String String String Long Long String String String String String String Boolean Boolean String String String String String String String String | 1,020 | en | 0.658176 |
from sqlalchemy import Boolean, Column, ForeignKey, Integer, String, DateTime, LargeBinary, Float, UniqueConstraint
from sqlalchemy.orm import relationship, backref
from datetime import datetime
from conductor.app.db.base_class import Base
class DiscoveryResult(Base):
__tablename__ = "discovery_results"
__table_args__ = (
# this can be db.PrimaryKeyConstraint if you want it to be a primary key
UniqueConstraint('train_id', 'station_id'),
)
id = Column(Integer, primary_key=True, index=True)
train_id = Column(Integer, ForeignKey("trains.id"))
station_id = Column(Integer, ForeignKey("stations.id"))
results = Column(String)
    created_at = Column(DateTime, default=datetime.now)  # pass the callable, not datetime.now(), so each row gets its own timestamp
| conductor/app/models/discovery.py | 732 | this can be db.PrimaryKeyConstraint if you want it to be a primary key | 70 | en | 0.746741 |
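One subtlety in SQLAlchemy-style column defaults is the difference between `default=datetime.now` (a callable, evaluated per row) and `default=datetime.now()` (evaluated once at class-definition time, freezing the timestamp). A stdlib-only sketch with a toy `Column` stand-in (not SQLAlchemy's real class) illustrates the difference:

```python
import time
from datetime import datetime

class ToyColumn:
    """Toy stand-in mimicking how SQLAlchemy resolves a column default per row."""
    def __init__(self, default=None):
        self.default = default

    def resolve_default(self):
        # SQLAlchemy invokes the default if it is callable; otherwise uses it verbatim.
        return self.default() if callable(self.default) else self.default

frozen = ToyColumn(default=datetime.now())  # evaluated once, right here
live = ToyColumn(default=datetime.now)      # evaluated at every resolve_default()

t0 = frozen.resolve_default()
time.sleep(0.01)
t1 = frozen.resolve_default()   # identical: the timestamp was captured at definition time

a = live.resolve_default()
time.sleep(0.01)
b = live.resolve_default()      # later: the callable ran again
```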
## @package attention
# Module caffe2.python.attention
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from __future__ import unicode_literals
class AttentionType:
Regular, Recurrent = range(2)
def s(scope, name):
# We have to manually scope due to our internal/external blob
# relationships.
return "{}/{}".format(str(scope), str(name))
# c_i = \sum_j w_{ij}\textbf{s}_j
def _calc_weighted_context(
model,
encoder_outputs_transposed,
encoder_output_dim,
attention_weights_3d,
scope,
):
# [batch_size, encoder_output_dim, 1]
attention_weighted_encoder_context = model.net.BatchMatMul(
[encoder_outputs_transposed, attention_weights_3d],
s(scope, 'attention_weighted_encoder_context'),
)
# TODO: somehow I cannot use Squeeze in-place op here
# [batch_size, encoder_output_dim]
attention_weighted_encoder_context, _ = model.net.Reshape(
attention_weighted_encoder_context,
[
attention_weighted_encoder_context,
s(scope, 'attention_weighted_encoder_context_old_shape'),
],
shape=[1, -1, encoder_output_dim],
)
return attention_weighted_encoder_context
# Calculate a softmax over the passed in attention energy logits
def _calc_attention_weights(
model,
attention_logits_transposed,
scope,
):
# TODO: we could try to force some attention weights to be zeros,
# based on encoder_lengths.
# [batch_size, encoder_length]
attention_weights = model.Softmax(
attention_logits_transposed,
s(scope, 'attention_weights'),
engine='CUDNN',
)
# TODO: make this operation in-place
# [batch_size, encoder_length, 1]
attention_weights_3d = model.net.ExpandDims(
attention_weights,
s(scope, 'attention_weights_3d'),
dims=[2],
)
return attention_weights_3d
# e_{ij} = \textbf{v}^T tanh \alpha(\textbf{h}_{i-1}, \textbf{s}_j)
def _calc_attention_logits_from_sum_match(
model,
decoder_hidden_encoder_outputs_sum,
encoder_output_dim,
scope,
):
# [encoder_length, batch_size, encoder_output_dim]
decoder_hidden_encoder_outputs_sum = model.net.Tanh(
decoder_hidden_encoder_outputs_sum,
decoder_hidden_encoder_outputs_sum,
)
attention_v = model.param_init_net.XavierFill(
[],
s(scope, 'attention_v'),
shape=[1, encoder_output_dim],
)
model.add_param(attention_v)
attention_zeros = model.param_init_net.ConstantFill(
[],
s(scope, 'attention_zeros'),
value=0.0,
shape=[1],
)
# [encoder_length, batch_size, 1]
attention_logits = model.net.FC(
[decoder_hidden_encoder_outputs_sum, attention_v, attention_zeros],
[s(scope, 'attention_logits')],
axis=2,
)
# [encoder_length, batch_size]
attention_logits = model.net.Squeeze(
[attention_logits],
[attention_logits],
dims=[2],
)
# [batch_size, encoder_length]
attention_logits_transposed = model.Transpose(
attention_logits,
s(scope, 'attention_logits_transposed'),
axes=[1, 0],
)
return attention_logits_transposed
# \textbf{W}^\alpha used in the context of \alpha_{sum}(a,b)
def _apply_fc_weight_for_sum_match(
model,
input,
dim_in,
dim_out,
scope,
name,
):
output = model.FC(
input,
s(scope, name),
dim_in=dim_in,
dim_out=dim_out,
axis=2,
)
output = model.net.Squeeze(
output,
output,
dims=[0],
)
return output
# Implement RecAtt due to section 4.1 in http://arxiv.org/abs/1601.03317
def apply_recurrent_attention(
model,
encoder_output_dim,
encoder_outputs_transposed,
weighted_encoder_outputs,
decoder_hidden_state_t,
decoder_hidden_state_dim,
attention_weighted_encoder_context_t_prev,
scope,
):
weighted_prev_attention_context = _apply_fc_weight_for_sum_match(
model=model,
input=attention_weighted_encoder_context_t_prev,
dim_in=encoder_output_dim,
dim_out=encoder_output_dim,
scope=scope,
name='weighted_prev_attention_context',
)
weighted_decoder_hidden_state = _apply_fc_weight_for_sum_match(
model=model,
input=decoder_hidden_state_t,
dim_in=decoder_hidden_state_dim,
dim_out=encoder_output_dim,
scope=scope,
name='weighted_decoder_hidden_state',
)
# [encoder_length, batch_size, encoder_output_dim]
decoder_hidden_encoder_outputs_sum_tmp = model.net.Add(
[
weighted_encoder_outputs,
weighted_decoder_hidden_state,
],
s(scope, 'decoder_hidden_encoder_outputs_sum_tmp'),
broadcast=1,
use_grad_hack=1,
)
# [encoder_length, batch_size, encoder_output_dim]
decoder_hidden_encoder_outputs_sum = model.net.Add(
[
decoder_hidden_encoder_outputs_sum_tmp,
weighted_prev_attention_context,
],
s(scope, 'decoder_hidden_encoder_outputs_sum'),
broadcast=1,
use_grad_hack=1,
)
attention_logits_transposed = _calc_attention_logits_from_sum_match(
model=model,
decoder_hidden_encoder_outputs_sum=decoder_hidden_encoder_outputs_sum,
encoder_output_dim=encoder_output_dim,
scope=scope,
)
# [batch_size, encoder_length, 1]
attention_weights_3d = _calc_attention_weights(
model=model,
attention_logits_transposed=attention_logits_transposed,
scope=scope,
)
# [batch_size, encoder_output_dim, 1]
attention_weighted_encoder_context = _calc_weighted_context(
model=model,
encoder_outputs_transposed=encoder_outputs_transposed,
encoder_output_dim=encoder_output_dim,
attention_weights_3d=attention_weights_3d,
scope=scope,
)
return attention_weighted_encoder_context, attention_weights_3d, [
decoder_hidden_encoder_outputs_sum_tmp,
decoder_hidden_encoder_outputs_sum,
]
def apply_regular_attention(
model,
encoder_output_dim,
encoder_outputs_transposed,
weighted_encoder_outputs,
decoder_hidden_state_t,
decoder_hidden_state_dim,
scope,
):
weighted_decoder_hidden_state = _apply_fc_weight_for_sum_match(
model=model,
input=decoder_hidden_state_t,
dim_in=decoder_hidden_state_dim,
dim_out=encoder_output_dim,
scope=scope,
name='weighted_decoder_hidden_state',
)
# [encoder_length, batch_size, encoder_output_dim]
decoder_hidden_encoder_outputs_sum = model.net.Add(
[weighted_encoder_outputs, weighted_decoder_hidden_state],
s(scope, 'decoder_hidden_encoder_outputs_sum'),
broadcast=1,
use_grad_hack=1,
)
attention_logits_transposed = _calc_attention_logits_from_sum_match(
model=model,
decoder_hidden_encoder_outputs_sum=decoder_hidden_encoder_outputs_sum,
encoder_output_dim=encoder_output_dim,
scope=scope,
)
# [batch_size, encoder_length, 1]
attention_weights_3d = _calc_attention_weights(
model=model,
attention_logits_transposed=attention_logits_transposed,
scope=scope,
)
# [batch_size, encoder_output_dim, 1]
attention_weighted_encoder_context = _calc_weighted_context(
model=model,
encoder_outputs_transposed=encoder_outputs_transposed,
encoder_output_dim=encoder_output_dim,
attention_weights_3d=attention_weights_3d,
scope=scope,
)
return attention_weighted_encoder_context, attention_weights_3d, [
decoder_hidden_encoder_outputs_sum,
]
| caffe2/python/attention.py | 7,839 | @package attention Module caffe2.python.attention We have to manually scope due to our internal/external blob relationships. c_i = \sum_j w_{ij}\textbf{s}_j [batch_size, encoder_output_dim, 1] TODO: somehow I cannot use Squeeze in-place op here [batch_size, encoder_output_dim] Calculate a softmax over the passed in attention energy logits TODO: we could try to force some attention weights to be zeros, based on encoder_lengths. [batch_size, encoder_length] TODO: make this operation in-place [batch_size, encoder_length, 1] e_{ij} = \textbf{v}^T tanh \alpha(\textbf{h}_{i-1}, \textbf{s}_j) [encoder_length, batch_size, encoder_output_dim] [encoder_length, batch_size, 1] [encoder_length, batch_size] [batch_size, encoder_length] \textbf{W}^\alpha used in the context of \alpha_{sum}(a,b) Implement RecAtt due to section 4.1 in http://arxiv.org/abs/1601.03317 [encoder_length, batch_size, encoder_output_dim] [encoder_length, batch_size, encoder_output_dim] [batch_size, encoder_length, 1] [batch_size, encoder_output_dim, 1] [encoder_length, batch_size, encoder_output_dim] [batch_size, encoder_length, 1] [batch_size, encoder_output_dim, 1] | 1,144 | en | 0.628811 |
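The Caffe2 graph above implements additive (Bahdanau-style) attention: energies e_j = v · tanh(W s_j + U h), a softmax over them, then a weighted context Σ_j w_j s_j. A dependency-free sketch of the same math, with small hand-picked matrices and plain lists in place of blobs:

```python
import math

def matvec(M, x):
    return [sum(m * xj for m, xj in zip(row, x)) for row in M]

def softmax(xs):
    m = max(xs)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def additive_attention(encoder_states, decoder_hidden, W, U, v):
    # e_j = v . tanh(W s_j + U h)  -- the energy computed by _calc_attention_logits_from_sum_match
    energies = []
    for s_j in encoder_states:
        pre = [a + b for a, b in zip(matvec(W, s_j), matvec(U, decoder_hidden))]
        energies.append(sum(vk * math.tanh(p) for vk, p in zip(v, pre)))
    weights = softmax(energies)  # _calc_attention_weights
    dim = len(encoder_states[0])
    context = [sum(w * s[k] for w, s in zip(weights, encoder_states))  # _calc_weighted_context
               for k in range(dim)]
    return weights, context

W = [[1.0, 0.0], [0.0, 1.0]]
U = [[0.5, 0.0], [0.0, 0.5]]
v = [1.0, 1.0]
enc = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
h = [0.2, 0.3]
weights, context = additive_attention(enc, h, W, U, v)
```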
# -*- coding: utf-8 -*-
from sqlalchemy import Column, String, Float
from sqlalchemy.ext.declarative import declarative_base
from zvt.contract import Mixin
from zvt.contract.register import register_schema
MoneyFlowBase = declarative_base()
# Sector (block) money flow
class BlockMoneyFlow(MoneyFlowBase, Mixin):
__tablename__ = 'block_money_flow'
code = Column(String(length=32))
name = Column(String(length=32))
    # closing price
close = Column(Float)
change_pct = Column(Float)
turnover_rate = Column(Float)
    # net inflow
net_inflows = Column(Float)
    # net inflow rate
net_inflow_rate = Column(Float)
    # main force = extra-large orders + large orders
net_main_inflows = Column(Float)
net_main_inflow_rate = Column(Float)
    # extra-large orders
net_huge_inflows = Column(Float)
net_huge_inflow_rate = Column(Float)
    # large orders
net_big_inflows = Column(Float)
net_big_inflow_rate = Column(Float)
    # medium orders
net_medium_inflows = Column(Float)
net_medium_inflow_rate = Column(Float)
    # small orders
net_small_inflows = Column(Float)
net_small_inflow_rate = Column(Float)
class StockMoneyFlow(MoneyFlowBase, Mixin):
__tablename__ = 'stock_money_flow'
code = Column(String(length=32))
name = Column(String(length=32))
    # closing price
close = Column(Float)
change_pct = Column(Float)
turnover_rate = Column(Float)
    # net inflow
net_inflows = Column(Float)
    # net inflow rate
net_inflow_rate = Column(Float)
    # main force = extra-large orders + large orders
net_main_inflows = Column(Float)
net_main_inflow_rate = Column(Float)
    # extra-large orders
net_huge_inflows = Column(Float)
net_huge_inflow_rate = Column(Float)
    # large orders
net_big_inflows = Column(Float)
net_big_inflow_rate = Column(Float)
    # medium orders
net_medium_inflows = Column(Float)
net_medium_inflow_rate = Column(Float)
    # small orders
net_small_inflows = Column(Float)
net_small_inflow_rate = Column(Float)
register_schema(providers=['sina'], db_name='money_flow', schema_base=MoneyFlowBase)
# the __all__ is generated
__all__ = ['BlockMoneyFlow', 'StockMoneyFlow'] | zvt/domain/misc/money_flow.py | 2,132 | -*- coding: utf-8 -*- 板块资金流向 收盘价 净流入 净流入率 主力=超大单+大单 超大单 大单 中单 小单 收盘价 净流入 净流入率 主力=超大单+大单 超大单 大单 中单 小单 the __all__ is generated | 125 | zh | 0.909419 |
# -*- coding: utf-8 -*-
"""Identity Services Engine getAllowedProtocolById data model.
Copyright (c) 2021 Cisco and/or its affiliates.
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
"""
from __future__ import (
absolute_import,
division,
print_function,
unicode_literals,
)
import fastjsonschema
import json
from ciscoisesdk.exceptions import MalformedRequest
from builtins import *
class JSONSchemaValidatorE3DdfDdd45E299F14Ed194926F8De(object):
"""getAllowedProtocolById request schema definition."""
def __init__(self):
super(JSONSchemaValidatorE3DdfDdd45E299F14Ed194926F8De, self).__init__()
self._validator = fastjsonschema.compile(json.loads(
'''{
"$schema": "http://json-schema.org/draft-04/schema#",
"properties": {
"AllowedProtocols": {
"properties": {
"allowChap": {
"type": "boolean"
},
"allowEapFast": {
"type": "boolean"
},
"allowEapMd5": {
"type": "boolean"
},
"allowEapTls": {
"type": "boolean"
},
"allowEapTtls": {
"type": "boolean"
},
"allowLeap": {
"type": "boolean"
},
"allowMsChapV1": {
"type": "boolean"
},
"allowMsChapV2": {
"type": "boolean"
},
"allowPapAscii": {
"type": "boolean"
},
"allowPeap": {
"type": "boolean"
},
"allowPreferredEapProtocol": {
"type": "boolean"
},
"allowTeap": {
"type": "boolean"
},
"allowWeakCiphersForEap": {
"type": "boolean"
},
"description":
{
"type": "string"
},
"eapFast": {
"properties": {
"allowEapFastEapGtc": {
"type": "boolean"
},
"allowEapFastEapGtcPwdChange": {
"type": "boolean"
},
"allowEapFastEapGtcPwdChangeRetries": {
"type": "integer"
},
"allowEapFastEapMsChapV2": {
"type": "boolean"
},
"allowEapFastEapMsChapV2PwdChange": {
"type": "boolean"
},
"allowEapFastEapMsChapV2PwdChangeRetries": {
"type": "integer"
},
"allowEapFastEapTls": {
"type": "boolean"
},
"allowEapFastEapTlsAuthOfExpiredCerts": {
"type": "boolean"
},
"eapFastEnableEAPChaining": {
"type": "boolean"
},
"eapFastUsePacs": {
"type": "boolean"
},
"eapFastUsePacsAllowAnonymProvisioning": {
"type": "boolean"
},
"eapFastUsePacsAllowAuthenProvisioning": {
"type": "boolean"
},
"eapFastUsePacsAllowMachineAuthentication": {
"type": "boolean"
},
"eapFastUsePacsStatelessSessionResume": {
"type": "boolean"
},
"eapFastUsePacsTunnelPacTtl": {
"type": "integer"
},
"eapFastUsePacsTunnelPacTtlUnits": {
"type": "string"
},
"eapFastUsePacsUseProactivePacUpdatePrecentage": {
"type": "integer"
}
},
"type": "object"
},
"eapTls": {
"properties": {
"allowEapTlsAuthOfExpiredCerts": {
"type": "boolean"
},
"eapTlsEnableStatelessSessionResume": {
"type": "boolean"
}
},
"type": "object"
},
"eapTlsLBit": {
"type": "boolean"
},
"eapTtls": {
"properties": {
"eapTtlsChap": {
"type": "boolean"
},
"eapTtlsEapMd5": {
"type": "boolean"
},
"eapTtlsEapMsChapV2": {
"type": "boolean"
},
"eapTtlsEapMsChapV2PwdChange": {
"type": "boolean"
},
"eapTtlsEapMsChapV2PwdChangeRetries": {
"type": "integer"
},
"eapTtlsMsChapV1": {
"type": "boolean"
},
"eapTtlsMsChapV2": {
"type": "boolean"
},
"eapTtlsPapAscii": {
"type": "boolean"
}
},
"type": "object"
},
"name": {
"type": "string"
},
"peap": {
"properties": {
"allowPeapEapGtc": {
"type": "boolean"
},
"allowPeapEapMsChapV2": {
"type": "boolean"
},
"allowPeapEapMsChapV2PwdChange": {
"type": "boolean"
},
"allowPeapEapMsChapV2PwdChangeRetries": {
"type": "integer"
},
"allowPeapEapTls": {
"type": "boolean"
},
"allowPeapEapTlsAuthOfExpiredCerts": {
"type": "boolean"
},
"allowPeapV0": {
"type": "boolean"
},
"requireCryptobinding": {
"type": "boolean"
}
},
"type": "object"
},
"preferredEapProtocol": {
"type": "string"
},
"processHostLookup": {
"type": "boolean"
},
"requireMessageAuth": {
"type": "boolean"
},
"teap": {
"properties": {
"acceptClientCertDuringTunnelEst": {
"type": "boolean"
},
"allowTeapEapMsChapV2": {
"type": "boolean"
},
"allowTeapEapMsChapV2PwdChange": {
"type": "boolean"
},
"allowTeapEapMsChapV2PwdChangeRetries": {
"type": "integer"
},
"allowTeapEapTls": {
"type": "boolean"
},
"allowTeapEapTlsAuthOfExpiredCerts": {
"type": "boolean"
},
"enableEapChaining": {
"type": "boolean"
},
"requestBasicPwdAuth": {
"type": "boolean"
}
},
"type": "object"
}
},
"type": "object"
}
},
"type": "object"
}'''.replace("\n" + ' ' * 16, '')
))
def validate(self, request):
try:
self._validator(request)
except fastjsonschema.exceptions.JsonSchemaException as e:
raise MalformedRequest(
'{} is invalid. Reason: {}'.format(request, e.message)
)
| tests/models/validators/v3_0_0/jsd_e3ddfddd45e299f14ed194926f8de.py | 8,982 | getAllowedProtocolById request schema definition.
Identity Services Engine getAllowedProtocolById data model.
Copyright (c) 2021 Cisco and/or its affiliates.
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
-*- coding: utf-8 -*- | 1,206 | en | 0.860339 |
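The validator above compiles a JSON Schema once and reuses the compiled checker per request. The recursive idea behind such a check can be sketched with the stdlib alone; this covers only a tiny subset of JSON Schema (`type` plus nested `properties`), far less than `fastjsonschema` handles:

```python
def check(schema, obj, path="$"):
    """Minimal JSON-Schema-style type check: object/properties, boolean, integer, string."""
    t = schema.get("type")
    if t == "object":
        if not isinstance(obj, dict):
            raise ValueError(f"{path}: expected object")
        for key, sub in schema.get("properties", {}).items():
            if key in obj:
                check(sub, obj[key], f"{path}.{key}")
    elif t == "boolean" and not isinstance(obj, bool):
        raise ValueError(f"{path}: expected boolean")
    elif t == "integer" and (isinstance(obj, bool) or not isinstance(obj, int)):
        raise ValueError(f"{path}: expected integer")
    elif t == "string" and not isinstance(obj, str):
        raise ValueError(f"{path}: expected string")

schema = {
    "type": "object",
    "properties": {
        "AllowedProtocols": {
            "type": "object",
            "properties": {
                "allowChap": {"type": "boolean"},
                "name": {"type": "string"},
            },
        }
    },
}

check(schema, {"AllowedProtocols": {"allowChap": True, "name": "Default"}})  # passes silently
```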
#! /usr/bin/python3
# -*- coding: utf-8 -*-
# @Time : 2019/3/10 7:44 PM
# @Author : xiaoliji
# @Email : yutian9527@gmail.com
"""
找出和为s的数字。
>>> nums = [1, 2, 4, 7, 11, 15]
>>> FindNumbersWithSum(nums, 15)
(4, 11)
"""
def FindNumbersWithSum(array: list, tsum: int) -> tuple:
l, r = 0, len(array)-1
while l < r:
if array[l] + array[r] < tsum:
l += 1
elif array[l]+array[r] > tsum:
r -= 1
else:
return array[l], array[r]
    return ()  # no pair found; an empty tuple matches the annotated return type
>>> nums = [1, 2, 4, 7, 11, 15]
>>> FindNumbersWithSum(nums, 15)
(4, 11)
! /usr/bin/python3 -*- coding: utf-8 -*- @Time : 2019/3/10 7:44 PM @Author : xiaoliji @Email : yutian9527@gmail.com | 208 | en | 0.344489 |
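The two-pointer scan above relies on the array being sorted: when the current ends sum to less than the target, only advancing the left pointer can raise the sum, and vice versa, so each element is inspected at most once (O(n)). The same technique, written generically:

```python
def pair_with_sum(sorted_nums, target):
    """Return a pair summing to target, or None. Assumes sorted input."""
    l, r = 0, len(sorted_nums) - 1
    while l < r:
        s = sorted_nums[l] + sorted_nums[r]
        if s < target:
            l += 1   # sum too small: only a larger left element can help
        elif s > target:
            r -= 1   # sum too large: only a smaller right element can help
        else:
            return sorted_nums[l], sorted_nums[r]
    return None

nums = [1, 2, 4, 7, 11, 15]
```

Because the pointers only move inward, the pair found has the smallest possible left element among all valid pairs.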
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
#
# This file is part of REANA.
# Copyright (C) 2017, 2018, 2019, 2020, 2021, 2022 CERN.
#
# REANA is free software; you can redistribute it and/or modify it
# under the terms of the MIT License; see LICENSE file for more details.
"""REANA Job Controller models."""
from marshmallow import Schema, fields, ValidationError, pre_load
from reana_commons.job_utils import deserialise_job_command
from reana_job_controller.config import (
REANA_KUBERNETES_JOBS_TIMEOUT_LIMIT,
REANA_KUBERNETES_JOBS_MAX_USER_TIMEOUT_LIMIT,
)
class Job(Schema):
"""Job model."""
cmd = fields.Str(required=True)
docker_img = fields.Str(required=True)
job_id = fields.Str(required=True)
max_restart_count = fields.Int(required=True)
restart_count = fields.Int(required=True)
status = fields.Str(required=True)
cvmfs_mounts = fields.String(missing="")
class JobRequest(Schema):
"""Job request model."""
job_name = fields.Str(required=True)
workflow_workspace = fields.Str(required=True)
workflow_uuid = fields.Str(required=True)
cmd = fields.Function(missing="", deserialize=deserialise_job_command)
prettified_cmd = fields.Str(missing="")
docker_img = fields.Str(required=True)
cvmfs_mounts = fields.String(missing="")
env_vars = fields.Dict(missing={})
shared_file_system = fields.Bool(missing=True)
compute_backend = fields.Str(required=False)
kerberos = fields.Bool(required=False)
voms_proxy = fields.Bool(required=False)
kubernetes_uid = fields.Int(required=False)
kubernetes_memory_limit = fields.Str(required=False)
kubernetes_job_timeout = fields.Int(required=False)
unpacked_img = fields.Bool(required=False)
htcondor_max_runtime = fields.Str(required=False)
htcondor_accounting_group = fields.Str(required=False)
slurm_partition = fields.Str(required=False)
slurm_time = fields.Str(required=False)
@pre_load
def set_kubernetes_job_timeout(self, in_data, **kwargs):
"""Set kubernetes_job_timeout to a default value if not provided and validate the value.
        The method receives the whole data dictionary but operates *only* on
        kubernetes_job_timeout; the updated dictionary is returned.
"""
if "kubernetes_job_timeout" not in in_data:
try:
in_data["kubernetes_job_timeout"] = int(
REANA_KUBERNETES_JOBS_TIMEOUT_LIMIT
)
except (ValueError, TypeError):
raise ValidationError(
"Default value of kubernetes_job_timeout is not an integer. "
f"Provided value is '{REANA_KUBERNETES_JOBS_TIMEOUT_LIMIT}'. "
"Please contact the administrator."
)
job_timeout = in_data["kubernetes_job_timeout"]
try:
job_timeout = int(job_timeout)
except (ValueError, TypeError):
raise ValidationError(
f"kubernetes_job_timeout must be an integer. Provided value is '{job_timeout}'."
)
if job_timeout <= 0:
raise ValidationError(
                "kubernetes_job_timeout must be greater than 0. "
                f"Provided value is {job_timeout}."
)
try:
max_value = int(REANA_KUBERNETES_JOBS_MAX_USER_TIMEOUT_LIMIT)
except (ValueError, TypeError):
raise ValidationError(
"Max value for kubernetes_job_timeout is not an integer. "
f"Provided value is '{REANA_KUBERNETES_JOBS_MAX_USER_TIMEOUT_LIMIT}'. "
"Please contact the administrator."
)
if job_timeout > max_value:
raise ValidationError(
f"kubernetes_job_timeout exceeds maximum allowed value of {max_value} seconds. "
f"Provided value is {job_timeout} seconds."
)
in_data["kubernetes_job_timeout"] = job_timeout
return in_data
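The `pre_load` hook above reduces to three steps: default when the key is absent, coerce to `int`, and range-check against a ceiling. A framework-free sketch of the same logic (the names `validate_job_timeout`, `DEFAULT_TIMEOUT`, and `MAX_TIMEOUT` and their values are illustrative, not REANA's; plain `ValueError` stands in for marshmallow's `ValidationError`):

```python
# Minimal sketch of the defaulting/validation done by set_kubernetes_job_timeout.
# DEFAULT_TIMEOUT and MAX_TIMEOUT stand in for the REANA config values.
DEFAULT_TIMEOUT = 1800   # assumed default, in seconds
MAX_TIMEOUT = 3600       # assumed per-user ceiling, in seconds


def validate_job_timeout(in_data, default=DEFAULT_TIMEOUT, max_value=MAX_TIMEOUT):
    """Fill in and validate kubernetes_job_timeout, returning the dict."""
    timeout = in_data.get("kubernetes_job_timeout", default)
    try:
        timeout = int(timeout)
    except (ValueError, TypeError):
        raise ValueError(
            f"kubernetes_job_timeout must be an integer, got {timeout!r}")
    if timeout <= 0:
        raise ValueError(
            f"kubernetes_job_timeout must be greater than 0, got {timeout}")
    if timeout > max_value:
        raise ValueError(
            f"kubernetes_job_timeout exceeds the maximum of {max_value} seconds")
    in_data["kubernetes_job_timeout"] = timeout
    return in_data
```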
# source: reana_job_controller/schemas.py
#!/usr/bin/env python
import os
import sys
if __name__ == "__main__":
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "vote_project.settings")
try:
from django.core.management import execute_from_command_line
except ImportError:
# The above import may fail for some other reason. Ensure that the
# issue is really that Django is missing to avoid masking other
# exceptions on Python 2.
try:
import django
except ImportError:
raise ImportError(
"Couldn't import Django. Are you sure it's installed and "
"available on your PYTHONPATH environment variable? Did you "
"forget to activate a virtual environment?"
)
raise
execute_from_command_line(sys.argv)
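`os.environ.setdefault` above fills in `DJANGO_SETTINGS_MODULE` only when it is not already set, so a value exported from the shell always wins. A small sketch of that semantics using an illustrative variable name (`DEMO_SETTINGS_MODULE` is not part of the project):

```python
import os

# setdefault writes the value only if the key is absent, so a
# pre-existing environment variable always takes precedence.
os.environ.pop("DEMO_SETTINGS_MODULE", None)
os.environ.setdefault("DEMO_SETTINGS_MODULE", "vote_project.settings")
first = os.environ["DEMO_SETTINGS_MODULE"]      # default was applied

os.environ["DEMO_SETTINGS_MODULE"] = "custom.settings"
os.environ.setdefault("DEMO_SETTINGS_MODULE", "vote_project.settings")
second = os.environ["DEMO_SETTINGS_MODULE"]     # existing value kept
```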
# source: vote_project/manage.py
"""A setuptools based setup module.
See:
https://packaging.python.org/en/latest/distributing.html
https://github.com/pypa/sampleproject
"""
# Always prefer setuptools over distutils
from setuptools import setup, find_packages
# To use a consistent encoding
from codecs import open
from os import path
here = path.abspath(path.dirname(__file__))
# Get the long description from the README file
with open(path.join(here, 'README.md'), encoding='utf-8') as f:
long_description = f.read()
# Arguments marked as "Required" below must be included for upload to PyPI.
# Fields marked as "Optional" may be commented out.
# https://stackoverflow.com/questions/458550/standard-way-to-embed-version-into-python-package/16084844#16084844
exec(open('pretrainedmodels/version.py').read())
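The `exec(...)` line above pulls `__version__` out of `pretrainedmodels/version.py` without importing the package (which might drag in heavy dependencies at install time). A self-contained sketch of the same single-sourcing pattern, using a temporary stand-in for `version.py`:

```python
import os
import tempfile

# Write a stand-in version.py and exec it to recover __version__,
# mirroring the single-sourcing trick used in setup.py above.
with tempfile.TemporaryDirectory() as tmp:
    version_file = os.path.join(tmp, "version.py")
    with open(version_file, "w", encoding="utf-8") as f:
        f.write("__version__ = '0.1.0'\n")

    namespace = {}
    with open(version_file, encoding="utf-8") as f:
        exec(f.read(), namespace)   # defines namespace['__version__']
    version = namespace["__version__"]
```

Passing an explicit `namespace` dict keeps the exec'd names out of the caller's globals; the setup.py above instead relies on `__version__` landing in module scope.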
setup(
# This is the name of your project. The first time you publish this
# package, this name will be registered for you. It will determine how
# users can install this project, e.g.:
#
# $ pip install sampleproject
#
# And where it will live on PyPI: https://pypi.org/project/sampleproject/
#
# There are some restrictions on what makes a valid project name
# specification here:
# https://packaging.python.org/specifications/core-metadata/#name
name='pretrainedmodels', # Required
# Versions should comply with PEP 440:
# https://www.python.org/dev/peps/pep-0440/
#
# For a discussion on single-sourcing the version across setup.py and the
# project code, see
# https://packaging.python.org/en/latest/single_source_version.html
version=__version__, # Required
# This is a one-line description or tagline of what your project does. This
# corresponds to the "Summary" metadata field:
# https://packaging.python.org/specifications/core-metadata/#summary
description='Pretrained models for Pytorch', # Required
# This is an optional longer description of your project that represents
# the body of text which users will see when they visit PyPI.
#
# Often, this is the same as your README, so you can just read it in from
# that file directly (as we have already done above)
#
# This field corresponds to the "Description" metadata field:
# https://packaging.python.org/specifications/core-metadata/#description-optional
long_description=long_description, # Optional
# This should be a valid link to your project's main homepage.
#
# This field corresponds to the "Home-Page" metadata field:
# https://packaging.python.org/specifications/core-metadata/#home-page-optional
url='https://github.com/cadene/pretrained-models.pytorch', # Optional
# This should be your name or the name of the organization which owns the
# project.
author='Remi Cadene', # Optional
# This should be a valid email address corresponding to the author listed
# above.
author_email='remi.cadene@icloud.com', # Optional
# Classifiers help users find your project by categorizing it.
#
# For a list of valid classifiers, see
# https://pypi.python.org/pypi?%3Aaction=list_classifiers
classifiers=[ # Optional
# How mature is this project? Common values are
# 3 - Alpha
# 4 - Beta
# 5 - Production/Stable
'Development Status :: 3 - Alpha',
# Indicate who your project is intended for
'Intended Audience :: Developers',
'Topic :: Software Development :: Build Tools',
# Pick your license as you wish
'License :: OSI Approved :: MIT License',
# Specify the Python versions you support here. In particular, ensure
# that you indicate whether you support Python 2, Python 3 or both.
'Programming Language :: Python :: 3.6',
],
# This field adds keywords for your project which will appear on the
# project page. What does your project relate to?
#
# Note that this is a string of words separated by whitespace, not a list.
keywords='pytorch pretrained models deep learning', # Optional
# You can just specify package directories manually here if your project is
# simple. Or you can use find_packages().
#
# Alternatively, if you just want to distribute a single Python file, use
# the `py_modules` argument instead as follows, which will expect a file
# called `my_module.py` to exist:
#
# py_modules=["my_module"],
#
packages=find_packages(exclude=['data', 'examples']), # Required
# This field lists other packages that your project depends on to run.
# Any package you put here will be installed by pip when your project is
# installed, so they must be valid existing projects.
#
# For an analysis of "install_requires" vs pip's requirements files see:
# https://packaging.python.org/en/latest/requirements.html
install_requires=['torch', 'torchvision', 'munch', 'tqdm'], # Optional
# List additional groups of dependencies here (e.g. development
# dependencies). Users will be able to install these using the "extras"
# syntax, for example:
#
# $ pip install sampleproject[dev]
#
# Similar to `install_requires` above, these must be valid existing
# projects.
# extras_require={ # Optional
# 'dev': ['check-manifest'],
# 'test': ['coverage'],
# },
# If there are data files included in your packages that need to be
# installed, specify them here.
#
# If using Python 2.6 or earlier, then these have to be included in
# MANIFEST.in as well.
# package_data={ # Optional
# 'sample': ['package_data.dat'],
# },
# Although 'package_data' is the preferred approach, in some case you may
# need to place data files outside of your packages. See:
# http://docs.python.org/3.4/distutils/setupscript.html#installing-additional-files
#
# In this case, 'data_file' will be installed into '<sys.prefix>/my_data'
#data_files=[('my_data', ['data/data_file'])], # Optional
# To provide executable scripts, use entry points in preference to the
# "scripts" keyword. Entry points provide cross-platform support and allow
# `pip` to create the appropriate form of executable for the target
# platform.
#
# For example, the following would provide a command called `sample` which
# executes the function `main` from this package when invoked:
# entry_points={ # Optional
# 'console_scripts': [
# 'sample=sample:main',
# ],
# },
)
# source: setup.py
# encoding: utf-8
#
# Copyright 2009-2020 Greg Neagle.
#
# Licensed under the Apache License, Version 2.0 (the 'License');
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an 'AS IS' BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
updatecheck.catalogs
Created by Greg Neagle on 2017-01-01.
Functions for working with Munki catalogs
"""
from __future__ import absolute_import, print_function
import os
from . import download
from .. import display
from .. import info
from .. import pkgutils
from .. import prefs
from .. import utils
from .. import FoundationPlist
from ..wrappers import is_a_string
def make_catalog_db(catalogitems):
"""Takes an array of catalog items and builds some indexes so we can
get our common data faster. Returns a dict we can use like a database"""
name_table = {}
pkgid_table = {}
itemindex = -1
for item in catalogitems:
itemindex = itemindex + 1
name = item.get('name', 'NO NAME')
vers = item.get('version', 'NO VERSION')
if name == 'NO NAME' or vers == 'NO VERSION':
display.display_warning('Bad pkginfo: %s', item)
# normalize the version number
vers = pkgutils.trim_version_string(vers)
# build indexes for items by name and version
        if name not in name_table:
            name_table[name] = {}
        if vers not in name_table[name]:
            name_table[name][vers] = []
name_table[name][vers].append(itemindex)
# build table of receipts
for receipt in item.get('receipts', []):
if 'packageid' in receipt and 'version' in receipt:
pkg_id = receipt['packageid']
version = receipt['version']
                if pkg_id not in pkgid_table:
                    pkgid_table[pkg_id] = {}
                if version not in pkgid_table[pkg_id]:
                    pkgid_table[pkg_id][version] = []
pkgid_table[pkg_id][version].append(itemindex)
# build table of update items with a list comprehension --
# filter all items from the catalogitems that have a non-empty
# 'update_for' list
updaters = [item for item in catalogitems if item.get('update_for')]
# now fix possible admin errors where 'update_for' is a string instead
# of a list of strings
for update in updaters:
if is_a_string(update['update_for']):
# convert to list of strings
update['update_for'] = [update['update_for']]
# build table of autoremove items with a list comprehension --
# filter all items from the catalogitems that have a non-empty
# 'autoremove' list
# autoremove items are automatically removed if they are not in the
# managed_install list (either directly or indirectly via included
# manifests)
autoremoveitems = [item.get('name') for item in catalogitems
if item.get('autoremove')]
# convert to set and back to list to get list of unique names
autoremoveitems = list(set(autoremoveitems))
pkgdb = {}
pkgdb['named'] = name_table
pkgdb['receipts'] = pkgid_table
pkgdb['updaters'] = updaters
pkgdb['autoremoveitems'] = autoremoveitems
pkgdb['items'] = catalogitems
return pkgdb
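The core speed-up in `make_catalog_db` is the name → version → item-index table. A trimmed-down sketch of just that index (`setdefault` replaces the explicit membership checks, and the bad-pkginfo warning is omitted); the toy catalog items are illustrative only:

```python
def build_name_index(catalogitems):
    """Map item name -> version -> list of indexes into catalogitems,
    mirroring the 'named' table built by make_catalog_db."""
    name_table = {}
    for index, item in enumerate(catalogitems):
        name = item.get('name', 'NO NAME')
        vers = item.get('version', 'NO VERSION')
        name_table.setdefault(name, {}).setdefault(vers, []).append(index)
    return name_table

# Toy catalog items (illustrative only).
items = [
    {'name': 'TextWrangler', 'version': '2.3'},
    {'name': 'TextWrangler', 'version': '2.4'},
    {'name': 'Firefox', 'version': '78.0'},
]
name_index = build_name_index(items)
```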
def add_package_ids(catalogitems, itemname_to_pkgid, pkgid_to_itemname):
"""Adds packageids from each catalogitem to two dictionaries.
One maps itemnames to receipt pkgids, the other maps receipt pkgids
to itemnames"""
for item in catalogitems:
name = item.get('name')
if not name:
continue
if item.get('receipts'):
if not name in itemname_to_pkgid:
itemname_to_pkgid[name] = {}
for receipt in item['receipts']:
if 'packageid' in receipt and 'version' in receipt:
pkgid = receipt['packageid']
vers = receipt['version']
if not pkgid in itemname_to_pkgid[name]:
itemname_to_pkgid[name][pkgid] = []
if not vers in itemname_to_pkgid[name][pkgid]:
itemname_to_pkgid[name][pkgid].append(vers)
if not pkgid in pkgid_to_itemname:
pkgid_to_itemname[pkgid] = {}
if not name in pkgid_to_itemname[pkgid]:
pkgid_to_itemname[pkgid][name] = []
if not vers in pkgid_to_itemname[pkgid][name]:
pkgid_to_itemname[pkgid][name].append(vers)
def split_name_and_version(some_string):
"""Splits a string into the name and version number.
Name and version must be separated with a hyphen ('-')
or double hyphen ('--').
'TextWrangler-2.3b1' becomes ('TextWrangler', '2.3b1')
'AdobePhotoshopCS3--11.2.1' becomes ('AdobePhotoshopCS3', '11.2.1')
'MicrosoftOffice2008-12.2.1' becomes ('MicrosoftOffice2008', '12.2.1')
"""
for delim in ('--', '-'):
if some_string.count(delim) > 0:
chunks = some_string.split(delim)
vers = chunks.pop()
name = delim.join(chunks)
            if vers and vers[0] in '0123456789':
return (name, vers)
return (some_string, '')
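The splitting rule above can be exercised on its own. This standalone copy adds one guard (`vers and`) so a trailing delimiter such as `'Name-'` falls through instead of raising `IndexError`; otherwise the logic is unchanged:

```python
def split_name_and_version(some_string):
    """Split 'Name-1.2' / 'Name--1.2' into ('Name', '1.2'); anything
    without a numeric version chunk comes back with an empty version."""
    for delim in ('--', '-'):
        if some_string.count(delim) > 0:
            chunks = some_string.split(delim)
            vers = chunks.pop()
            name = delim.join(chunks)
            if vers and vers[0] in '0123456789':
                return (name, vers)
    return (some_string, '')
```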
def get_all_items_with_name(name, cataloglist):
"""Searches the catalogs in a list for all items matching a given name.
Returns:
list of pkginfo items; sorted with newest version first. No precedence
is given to catalog order.
"""
def item_version(item):
"""Returns a MunkiLooseVersion for pkginfo item"""
return pkgutils.MunkiLooseVersion(item['version'])
itemlist = []
# we'll throw away any included version info
name = split_name_and_version(name)[0]
display.display_debug1('Looking for all items matching: %s...', name)
for catalogname in cataloglist:
        if catalogname not in _CATALOG:
# in case catalogname refers to a non-existent catalog...
continue
# is name in the catalog name table?
if name in _CATALOG[catalogname]['named']:
versionsmatchingname = _CATALOG[catalogname]['named'][name]
for vers in versionsmatchingname:
if vers == 'latest':
continue
indexlist = _CATALOG[catalogname]['named'][name][vers]
for index in indexlist:
thisitem = _CATALOG[catalogname]['items'][index]
if not thisitem in itemlist:
display.display_debug1(
'Adding item %s, version %s from catalog %s...',
name, thisitem['version'], catalogname)
itemlist.append(thisitem)
if itemlist:
# sort so latest version is first
itemlist.sort(key=item_version, reverse=True)
return itemlist
def get_auto_removal_items(installinfo, cataloglist):
"""Gets a list of items marked for automatic removal from the catalogs
in cataloglist. Filters those against items in the processed_installs
list, which should contain everything that is supposed to be installed.
Then filters against the removals list, which contains all the removals
that have already been processed.
"""
autoremovalnames = []
for catalogname in cataloglist or []:
        if catalogname in _CATALOG:
autoremovalnames += _CATALOG[catalogname]['autoremoveitems']
processed_installs_names = [split_name_and_version(item)[0]
for item in installinfo['processed_installs']]
autoremovalnames = [item for item in autoremovalnames
if item not in processed_installs_names
and item not in installinfo['processed_uninstalls']]
return autoremovalnames
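`get_auto_removal_items` is two filters over set-like data: strip version suffixes from the processed installs, then keep only candidates that are neither installed nor already queued for removal. A framework-free sketch of that filtering (`base_name` is a trimmed-down stand-in for `split_name_and_version` and only handles a single `-<digit…>` suffix):

```python
def filter_auto_removals(candidates, processed_installs, processed_uninstalls):
    """Drop autoremove candidates that are installed (directly or via a
    versioned entry like 'Name-1.2') or already queued for removal."""
    def base_name(entry):
        # simplified: strip one trailing '-<version>' chunk if present
        name, _, vers = entry.rpartition('-')
        return name if name and vers[:1].isdigit() else entry

    installed = {base_name(item) for item in processed_installs}
    return [item for item in candidates
            if item not in installed and item not in processed_uninstalls]
```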
def look_for_updates(itemname, cataloglist):
"""Looks for updates for a given manifest item that is either
installed or scheduled to be installed or removed. This handles not only
specific application updates, but also updates that aren't simply
later versions of the manifest item.
For example, AdobeCameraRaw is an update for Adobe Photoshop, but
doesn't update the version of Adobe Photoshop.
Returns a list of manifestitem names that are updates for
manifestitem.
"""
display.display_debug1('Looking for updates for: %s', itemname)
# get a list of catalog items that are updates for other items
update_list = []
for catalogname in cataloglist:
if catalogname not in _CATALOG:
# in case the list refers to a non-existent catalog
continue
updaters = _CATALOG[catalogname]['updaters']
# list comprehension coming up...
update_items = [catalogitem['name']
for catalogitem in updaters
if itemname in catalogitem.get('update_for', [])]
if update_items:
update_list.extend(update_items)
# make sure the list has only unique items:
update_list = list(set(update_list))
if update_list:
# updates were found, so let's display them
num_updates = len(update_list)
# format the update list for better on-screen viewing
update_list_display = ", ".join(str(x) for x in update_list)
display.display_debug1(
'Found %s update(s): %s', num_updates, update_list_display)
return update_list
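The search in `look_for_updates` is a membership test over each updater's `update_for` list followed by deduplication. A standalone sketch on toy updater items (sorted here for a stable result; the real code keeps arbitrary set order):

```python
def collect_updates(updaters, itemname):
    """Names of updater items that list itemname in update_for, deduped."""
    found = [u['name'] for u in updaters
             if itemname in u.get('update_for', [])]
    return sorted(set(found))

# Toy updater pkginfo items (illustrative only).
updaters = [
    {'name': 'AdobeCameraRaw', 'update_for': ['AdobePhotoshopCS3']},
    {'name': 'PhotoshopPatch', 'update_for': ['AdobePhotoshopCS3']},
    {'name': 'OfficeUpdate', 'update_for': ['MicrosoftOffice2008']},
]
```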
def look_for_updates_for_version(itemname, itemversion, cataloglist):
"""Looks for updates for a specific version of an item. Since these
can appear in manifests and pkginfo as item-version or item--version
we have to search twice."""
name_and_version = '%s-%s' % (itemname, itemversion)
alt_name_and_version = '%s--%s' % (itemname, itemversion)
update_list = look_for_updates(name_and_version, cataloglist)
update_list.extend(look_for_updates(alt_name_and_version, cataloglist))
# make sure the list has only unique items:
update_list = list(set(update_list))
return update_list
def best_version_match(vers_num, item_dict):
'''Attempts to find the best match in item_dict for vers_num'''
vers_tuple = vers_num.split('.')
precision = 1
while precision <= len(vers_tuple):
test_vers = '.'.join(vers_tuple[0:precision])
match_names = []
for item in item_dict.keys():
for item_version in item_dict[item]:
if (item_version.startswith(test_vers) and
item not in match_names):
match_names.append(item)
if len(match_names) == 1:
return match_names[0]
precision = precision + 1
return None
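The precision-widening search above can be exercised standalone. This condensed copy (a comprehension instead of nested loops, same matching rule) shows how a unique match emerges only once enough version components are considered:

```python
def best_version_match(vers_num, item_dict):
    """Return the single item whose recorded versions share the shortest
    disambiguating prefix with vers_num, or None if no unique match."""
    vers_tuple = vers_num.split('.')
    for precision in range(1, len(vers_tuple) + 1):
        test_vers = '.'.join(vers_tuple[:precision])
        match_names = [name for name, versions in item_dict.items()
                       if any(v.startswith(test_vers) for v in versions)]
        if len(match_names) == 1:
            return match_names[0]
    return None
```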
@utils.Memoize
def analyze_installed_pkgs():
"""Analyze catalog data and installed packages in an attempt to determine
what is installed."""
pkgdata = {}
itemname_to_pkgid = {}
pkgid_to_itemname = {}
for catalogname in _CATALOG:
catalogitems = _CATALOG[catalogname]['items']
add_package_ids(catalogitems, itemname_to_pkgid, pkgid_to_itemname)
# itemname_to_pkgid now contains all receipts (pkgids) we know about
# from items in all available catalogs
installedpkgs = pkgutils.getInstalledPackages()
installed = []
partiallyinstalled = []
installedpkgsmatchedtoname = {}
for name in itemname_to_pkgid:
# name is a Munki install item name
foundpkgcount = 0
for pkgid in itemname_to_pkgid[name]:
if pkgid in installedpkgs:
foundpkgcount += 1
if not name in installedpkgsmatchedtoname:
installedpkgsmatchedtoname[name] = []
# record this pkgid for Munki install item name
installedpkgsmatchedtoname[name].append(pkgid)
if foundpkgcount > 0:
if foundpkgcount == len(itemname_to_pkgid[name]):
# we found all receipts by pkgid on disk
installed.append(name)
else:
# we found only some receipts for the item
# on disk
partiallyinstalled.append(name)
# we pay special attention to the items that seem partially installed.
# we need to see if there are any packages that are unique to this item
# if there aren't, then this item probably isn't installed, and we're
# just finding receipts that are shared with other items.
for name in partiallyinstalled:
# get a list of pkgs for this item that are installed
pkgsforthisname = installedpkgsmatchedtoname[name]
# now build a list of all the pkgs referred to by all the other
# items that are either partially or entirely installed
allotherpkgs = []
for othername in installed:
allotherpkgs.extend(installedpkgsmatchedtoname[othername])
for othername in partiallyinstalled:
if othername != name:
allotherpkgs.extend(installedpkgsmatchedtoname[othername])
# use Python sets to find pkgs that are unique to this name
uniquepkgs = list(set(pkgsforthisname) - set(allotherpkgs))
if uniquepkgs:
installed.append(name)
# now filter partiallyinstalled to remove those items we moved to installed
partiallyinstalled = [item for item in partiallyinstalled
if item not in installed]
# build our reference table. For each item we think is installed,
# record the receipts on disk matched to the item
references = {}
for name in installed:
for pkgid in installedpkgsmatchedtoname[name]:
if not pkgid in references:
references[pkgid] = []
references[pkgid].append(name)
# look through all our installedpkgs, looking for ones that have not been
# attached to any Munki names yet
orphans = [pkgid for pkgid in installedpkgs if pkgid not in references]
# attempt to match orphans to Munki item names
matched_orphans = []
for pkgid in orphans:
if pkgid in pkgid_to_itemname:
installed_pkgid_version = installedpkgs[pkgid]
possible_match_items = pkgid_to_itemname[pkgid]
best_match = best_version_match(
installed_pkgid_version, possible_match_items)
if best_match:
matched_orphans.append(best_match)
# process matched_orphans
for name in matched_orphans:
if name not in installed:
installed.append(name)
if name in partiallyinstalled:
partiallyinstalled.remove(name)
for pkgid in installedpkgsmatchedtoname[name]:
if not pkgid in references:
references[pkgid] = []
if not name in references[pkgid]:
references[pkgid].append(name)
pkgdata['receipts_for_name'] = installedpkgsmatchedtoname
pkgdata['installed_names'] = installed
pkgdata['pkg_references'] = references
# left here for future debugging/testing use....
#pkgdata['itemname_to_pkgid'] = itemname_to_pkgid
#pkgdata['pkgid_to_itemname'] = pkgid_to_itemname
#pkgdata['partiallyinstalled_names'] = partiallyinstalled
#pkgdata['orphans'] = orphans
#pkgdata['matched_orphans'] = matched_orphans
#ManagedInstallDir = prefs.pref('ManagedInstallDir')
#pkgdatapath = os.path.join(ManagedInstallDir, 'PackageData.plist')
#try:
# FoundationPlist.writePlist(pkgdata, pkgdatapath)
#except FoundationPlist.NSPropertyListWriteException:
# pass
#catalogdbpath = os.path.join(ManagedInstallDir, 'CatalogDB.plist')
#try:
# FoundationPlist.writePlist(CATALOG, catalogdbpath)
#except FoundationPlist.NSPropertyListWriteException:
# pass
return pkgdata
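`analyze_installed_pkgs` is wrapped in `@utils.Memoize` so the expensive receipt analysis runs once per session. Munki's decorator is not shown in this file; the sketch below is an assumed minimal equivalent with that caching behavior, not its actual implementation:

```python
import functools


def memoize(func):
    """Cache results per argument tuple so repeat calls skip the work."""
    cache = {}

    @functools.wraps(func)
    def wrapper(*args):
        if args not in cache:
            cache[args] = func(*args)
        return cache[args]
    return wrapper


calls = []


@memoize
def expensive_scan():
    calls.append(1)       # record that the real work ran
    return {"installed_names": ["Firefox"]}
```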
def get_item_detail(name, cataloglist, vers='',
skip_min_os_check=False, suppress_warnings=False):
"""Searches the catalogs in list for an item matching the given name that
can be installed on the current hardware/OS (optionally skipping the
minimum OS check so we can return an item that requires a higher OS)
If no version is supplied, but the version is appended to the name
('TextWrangler--2.3.0.0.0') that version is used.
If no version is given at all, the latest version is assumed.
Returns a pkginfo item, or None.
"""
rejected_items = []
machine = info.getMachineFacts()
# condition check functions
def munki_version_ok(item):
'''Returns a boolean to indicate if the current Munki version is high
enough to install this item. If not, also adds the failure reason to
the rejected_items list.'''
if item.get('minimum_munki_version'):
min_munki_vers = item['minimum_munki_version']
display.display_debug1(
'Considering item %s, version %s '
'with minimum Munki version required %s',
item['name'], item['version'], min_munki_vers)
display.display_debug1(
'Our Munki version is %s', machine['munki_version'])
if (pkgutils.MunkiLooseVersion(machine['munki_version'])
< pkgutils.MunkiLooseVersion(min_munki_vers)):
reason = (
'Rejected item %s, version %s with minimum Munki version '
'required %s. Our Munki version is %s.'
% (item['name'], item['version'],
item['minimum_munki_version'], machine['munki_version']))
rejected_items.append(reason)
return False
return True
def os_version_ok(item, skip_min_os_check=False):
'''Returns a boolean to indicate if the item is ok to install under
the current OS. If not, also adds the failure reason to the
rejected_items list. If skip_min_os_check is True, skips the minimum os
version check.'''
# Is the current OS version >= minimum_os_version for the item?
if item.get('minimum_os_version') and not skip_min_os_check:
min_os_vers = item['minimum_os_version']
display.display_debug1(
'Considering item %s, version %s '
'with minimum os version required %s',
item['name'], item['version'], min_os_vers)
display.display_debug1(
'Our OS version is %s', machine['os_vers'])
if (pkgutils.MunkiLooseVersion(machine['os_vers']) <
pkgutils.MunkiLooseVersion(min_os_vers)):
# skip this one, go to the next
reason = (
'Rejected item %s, version %s with minimum os version '
'required %s. Our OS version is %s.'
% (item['name'], item['version'],
item['minimum_os_version'], machine['os_vers']))
rejected_items.append(reason)
return False
# current OS version <= maximum_os_version?
if item.get('maximum_os_version'):
max_os_vers = item['maximum_os_version']
display.display_debug1(
'Considering item %s, version %s '
'with maximum os version supported %s',
item['name'], item['version'], max_os_vers)
display.display_debug1(
'Our OS version is %s', machine['os_vers'])
if (pkgutils.MunkiLooseVersion(machine['os_vers']) >
pkgutils.MunkiLooseVersion(max_os_vers)):
# skip this one, go to the next
reason = (
'Rejected item %s, version %s with maximum os version '
'required %s. Our OS version is %s.'
% (item['name'], item['version'],
item['maximum_os_version'], machine['os_vers']))
rejected_items.append(reason)
return False
return True
def cpu_arch_ok(item):
'''Returns a boolean to indicate if the item is ok to install under
the current CPU architecture. If not, also adds the failure reason to
the rejected_items list.'''
if item.get('supported_architectures'):
display.display_debug1(
'Considering item %s, version %s '
'with supported architectures: %s',
item['name'], item['version'], item['supported_architectures'])
display.display_debug1(
'Our architecture is %s', machine['arch'])
if machine['arch'] in item['supported_architectures']:
return True
if ('x86_64' in item['supported_architectures'] and
machine['arch'] == 'i386' and
machine['x86_64_capable'] is True):
return True
# we didn't find a supported architecture that
# matches this machine
reason = (
'Rejected item %s, version %s with supported architectures: '
'%s. Our architecture is %s.'
% (item['name'], item['version'],
item['supported_architectures'], machine['arch']))
rejected_items.append(reason)
return False
return True
def installable_condition_ok(item):
'''Returns a boolean to indicate if an installable_condition predicate
in the current item passes. If not, also adds the failure reason to
the rejected_items list.'''
if item.get('installable_condition'):
if not info.predicate_evaluates_as_true(
item['installable_condition']):
rejected_items.append(
'Rejected item %s, version %s with installable_condition: '
'%s.' % (item['name'], item['version'],
item['installable_condition']))
return False
return True
if vers == 'apple_update_metadata':
vers = 'latest'
else:
(name, includedversion) = split_name_and_version(name)
if includedversion and vers == '':
vers = includedversion
if vers:
vers = pkgutils.trim_version_string(vers)
else:
vers = 'latest'
if skip_min_os_check:
display.display_debug1(
'Looking for detail for: %s, version %s, '
'ignoring minimum_os_version...', name, vers)
else:
display.display_debug1(
'Looking for detail for: %s, version %s...', name, vers)
for catalogname in cataloglist:
# is name in the catalog?
if catalogname in _CATALOG and name in _CATALOG[catalogname]['named']:
itemsmatchingname = _CATALOG[catalogname]['named'][name]
indexlist = []
if vers == 'latest':
# order all our items, highest version first
versionlist = list(itemsmatchingname.keys())
versionlist.sort(key=pkgutils.MunkiLooseVersion, reverse=True)
for versionkey in versionlist:
indexlist.extend(itemsmatchingname[versionkey])
elif vers in list(itemsmatchingname.keys()):
# get the specific requested version
indexlist = itemsmatchingname[vers]
if indexlist:
display.display_debug1(
'Considering %s items with name %s from catalog %s' %
(len(indexlist), name, catalogname))
for index in indexlist:
# iterate through list of items with matching name, highest
# version first, looking for first one that passes all the
# conditional tests (if any)
item = _CATALOG[catalogname]['items'][index]
if (munki_version_ok(item) and
os_version_ok(item,
skip_min_os_check=skip_min_os_check) and
cpu_arch_ok(item) and
installable_condition_ok(item)):
display.display_debug1(
'Found %s, version %s in catalog %s',
item['name'], item['version'], catalogname)
return item
# if we got this far, we didn't find it.
display.display_debug1('Not found')
for reason in rejected_items:
if suppress_warnings:
display.display_debug1(reason)
else:
display.display_warning(reason)
return None
# global to hold our catalog DBs
_CATALOG = {}
def get_catalogs(cataloglist):
"""Retrieves the catalogs from the server and populates our catalogs
dictionary.
"""
#global _CATALOG
for catalogname in cataloglist:
        if catalogname not in _CATALOG:
catalogpath = download.download_catalog(catalogname)
if catalogpath:
try:
catalogdata = FoundationPlist.readPlist(catalogpath)
except FoundationPlist.NSPropertyListSerializationException:
display.display_error(
'Retrieved catalog %s is invalid.', catalogname)
try:
os.unlink(catalogpath)
except (OSError, IOError):
pass
else:
_CATALOG[catalogname] = make_catalog_db(catalogdata)
def clean_up():
"""Removes any catalog files that are no longer in use by this client"""
catalog_dir = os.path.join(prefs.pref('ManagedInstallDir'),
'catalogs')
for item in os.listdir(catalog_dir):
if item not in _CATALOG:
os.unlink(os.path.join(catalog_dir, item))
def catalogs():
'''Returns our internal _CATALOG dict'''
return _CATALOG
if __name__ == '__main__':
print('This is a library of support tools for the Munki Suite.')
| code/client/munkilib/updatecheck/catalogs.py | 26,415 | Adds packageids from each catalogitem to two dictionaries.
One maps itemnames to receipt pkgids, the other maps receipt pkgids
to itemnames
Analyze catalog data and installed packages in an attempt to determine
what is installed.
Attempts to find the best match in item_dict for vers_num
Returns our internal _CATALOG dict
Removes any catalog files that are no longer in use by this client
Returns a boolean to indicate if the item is ok to install under
the current CPU architecture. If not, also adds the failure reason to
the rejected_items list.
Searches the catalogs in a list for all items matching a given name.
Returns:
list of pkginfo items; sorted with newest version first. No precedence
is given to catalog order.
Gets a list of items marked for automatic removal from the catalogs
in cataloglist. Filters those against items in the processed_installs
list, which should contain everything that is supposed to be installed.
Then filters against the removals list, which contains all the removals
that have already been processed.
Retrieves the catalogs from the server and populates our catalogs
dictionary.
Searches the catalogs in list for an item matching the given name that
can be installed on the current hardware/OS (optionally skipping the
minimum OS check so we can return an item that requires a higher OS)
If no version is supplied, but the version is appended to the name
('TextWrangler--2.3.0.0.0') that version is used.
If no version is given at all, the latest version is assumed.
Returns a pkginfo item, or None.
Returns a boolean to indicate if an installable_condition predicate
in the current item passes. If not, also adds the failure reason to
the rejected_items list.
Returns a MunkiLooseVersion for pkginfo item
Looks for updates for a given manifest item that is either
installed or scheduled to be installed or removed. This handles not only
specific application updates, but also updates that aren't simply
later versions of the manifest item.
For example, AdobeCameraRaw is an update for Adobe Photoshop, but
doesn't update the version of Adobe Photoshop.
Returns a list of manifestitem names that are updates for
manifestitem.
Looks for updates for a specific version of an item. Since these
can appear in manifests and pkginfo as item-version or item--version
we have to search twice.
Takes an array of catalog items and builds some indexes so we can
get our common data faster. Returns a dict we can use like a database
Returns a boolean to indicate if the current Munki version is high
enough to install this item. If not, also adds the failure reason to
the rejected_items list.
Returns a boolean to indicate if the item is ok to install under
the current OS. If not, also adds the failure reason to the
rejected_items list. If skip_min_os_check is True, skips the minimum os
version check.
Splits a string into the name and version number.
Name and version must be separated with a hyphen ('-')
or double hyphen ('--').
'TextWrangler-2.3b1' becomes ('TextWrangler', '2.3b1')
'AdobePhotoshopCS3--11.2.1' becomes ('AdobePhotoshopCS3', '11.2.1')
'MicrosoftOffice2008-12.2.1' becomes ('MicrosoftOffice2008', '12.2.1')
updatecheck.catalogs
Created by Greg Neagle on 2017-01-01.
Functions for working with Munki catalogs
encoding: utf-8 Copyright 2009-2020 Greg Neagle. Licensed under the Apache License, Version 2.0 (the 'License'); you may not use this file except in compliance with the License. You may obtain a copy of the License at https://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an 'AS IS' BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. normalize the version number build indexes for items by name and version build table of receipts build table of update items with a list comprehension -- filter all items from the catalogitems that have a non-empty 'update_for' list now fix possible admin errors where 'update_for' is a string instead of a list of strings convert to list of strings build table of autoremove items with a list comprehension -- filter all items from the catalogitems that have a non-empty 'autoremove' list autoremove items are automatically removed if they are not in the managed_install list (either directly or indirectly via included manifests) convert to set and back to list to get list of unique names we'll throw away any included version info in case catalogname refers to a non-existent catalog... is name in the catalog name table? sort so latest version is first get a list of catalog items that are updates for other items in case the list refers to a non-existent catalog list comprehension coming up... 
make sure the list has only unique items: updates were found, so let's display them format the update list for better on-screen viewing make sure the list has only unique items: itemname_to_pkgid now contains all receipts (pkgids) we know about from items in all available catalogs name is a Munki install item name record this pkgid for Munki install item name we found all receipts by pkgid on disk we found only some receipts for the item on disk we pay special attention to the items that seem partially installed. we need to see if there are any packages that are unique to this item if there aren't, then this item probably isn't installed, and we're just finding receipts that are shared with other items. get a list of pkgs for this item that are installed now build a list of all the pkgs referred to by all the other items that are either partially or entirely installed use Python sets to find pkgs that are unique to this name now filter partiallyinstalled to remove those items we moved to installed build our reference table. 
For each item we think is installed, record the receipts on disk matched to the item look through all our installedpkgs, looking for ones that have not been attached to any Munki names yet attempt to match orphans to Munki item names process matched_orphans left here for future debugging/testing use....pkgdata['itemname_to_pkgid'] = itemname_to_pkgidpkgdata['pkgid_to_itemname'] = pkgid_to_itemnamepkgdata['partiallyinstalled_names'] = partiallyinstalledpkgdata['orphans'] = orphanspkgdata['matched_orphans'] = matched_orphansManagedInstallDir = prefs.pref('ManagedInstallDir')pkgdatapath = os.path.join(ManagedInstallDir, 'PackageData.plist')try: FoundationPlist.writePlist(pkgdata, pkgdatapath)except FoundationPlist.NSPropertyListWriteException: passcatalogdbpath = os.path.join(ManagedInstallDir, 'CatalogDB.plist')try: FoundationPlist.writePlist(CATALOG, catalogdbpath)except FoundationPlist.NSPropertyListWriteException: pass condition check functions Is the current OS version >= minimum_os_version for the item? skip this one, go to the next current OS version <= maximum_os_version? skip this one, go to the next we didn't find a supported architecture that matches this machine is name in the catalog? order all our items, highest version first get the specific requested version iterate through list of items with matching name, highest version first, looking for first one that passes all the conditional tests (if any) if we got this far, we didn't find it. global to hold our catalog DBsglobal _CATALOG | 7,416 | en | 0.831414 |
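The `split_name_and_version` behavior described above can be sketched as a standalone function. This is a minimal sketch inferred from the documented examples, not the actual munki implementation, which may handle more edge cases:

```python
def split_name_and_version(item_name):
    """Split 'Name-1.0' or 'Name--1.0' into (name, version).

    Double hyphen takes precedence over single hyphen; a suffix is
    only treated as a version if it starts with a digit.
    """
    for delimiter in ("--", "-"):
        if delimiter in item_name:
            name, version = item_name.rsplit(delimiter, 1)
            if version and version[0].isdigit():
                return (name, version)
    # No version suffix found.
    return (item_name, "")
```

This reproduces the documented cases, e.g. `split_name_and_version('AdobePhotoshopCS3--11.2.1')` gives `('AdobePhotoshopCS3', '11.2.1')`.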
#--- Exercise 1 - Functions - 1
#--- Write a function that prints a header
#--- The header must be written using character multiplication
#--- expected result: -------------- Cadastro Serasa --------------------------
#--- The header must contain a company name, which will be a variable
#--- Call the function on the last line of your program
nome_empresa = input("Insira o nome da empresa: ")
def imprimir_cabecalho():
print("-" *10 , "Cadastro", nome_empresa, "-" *10 )
imprimir_cabecalho() | 01-Exercicios/Aula007/Ex1.py | 545 | --- Exercício 1 - Funções - 1--- Escreva uma função que imprima um cabeçalho--- O cabeçalho deve ser escrito usando a multiplicação de carácter --- resultado esperado: -------------- Cadastro Serasa ----------------------------- O cabeçalho deve conter o nome de uma empresa, que será uma variável--- Realize a chamada da função na ultima linha do seu programa | 361 | pt | 0.992754 |
# coding=utf-8
from typing import Iterable
from .registry import registry
@registry.register("A000073")
def tribonacci() -> Iterable[int]:
"""Tribonacci numbers."""
yield 0
yield 0
yield 1
p3: int = 0 # tribonacci(0)
p2: int = 0 # tribonacci(1)
p1: int = 1 # tribonacci(2)
while True:
curr: int = p1 + p2 + p3
yield curr
p1, p2, p3 = curr, p1, p2
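A quick usage sketch of the generator above, re-declared standalone here (assuming the `registry.register` decorator returns the function unchanged):

```python
from itertools import islice
from typing import Iterator


def tribonacci() -> Iterator[int]:
    # Same recurrence as above: t(n) = t(n-1) + t(n-2) + t(n-3).
    yield 0
    yield 0
    yield 1
    p3, p2, p1 = 0, 0, 1
    while True:
        curr = p1 + p2 + p3
        yield curr
        p1, p2, p3 = curr, p1, p2


print(list(islice(tribonacci(), 8)))  # [0, 0, 1, 1, 2, 4, 7, 13]
```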
| oeis/tribonacci.py | 409 | Tribonacci numbers.
coding=utf-8 tribonacci(0) tribonacci(1) tribonacci(2) | 76 | en | 0.538386 |
from dataclasses import dataclass
from enum import IntEnum
from typing import Optional, Dict
from blspy import G1Element
from wheat.protocols.pool_protocol import POOL_PROTOCOL_VERSION
from wheat.types.blockchain_format.coin import Coin
from wheat.types.blockchain_format.program import Program
from wheat.types.blockchain_format.sized_bytes import bytes32
from wheat.util.byte_types import hexstr_to_bytes
from wheat.util.ints import uint32, uint8
from wheat.util.streamable import streamable, Streamable
class PoolSingletonState(IntEnum):
"""
From the user's point of view, a pool group can be in these states:
`SELF_POOLING`: The singleton exists on the blockchain, and we are farming
block rewards to a wallet address controlled by the user
`LEAVING_POOL`: The singleton exists, and we have entered the "escaping" state, which
means we are waiting for a number of blocks = `relative_lock_height` to pass, so we can leave.
`FARMING_TO_POOL`: The singleton exists, and it is assigned to a pool.
`CLAIMING_SELF_POOLED_REWARDS`: We have submitted a transaction to sweep our
self-pooled funds.
"""
SELF_POOLING = 1
LEAVING_POOL = 2
FARMING_TO_POOL = 3
SELF_POOLING = PoolSingletonState.SELF_POOLING
LEAVING_POOL = PoolSingletonState.LEAVING_POOL
FARMING_TO_POOL = PoolSingletonState.FARMING_TO_POOL
@dataclass(frozen=True)
@streamable
class PoolState(Streamable):
"""
`PoolState` is a type that is serialized to the blockchain to track the state of the user's pool singleton
`target_puzzle_hash` is either the pool address, or the self-pooling address that pool rewards will be paid to.
`target_puzzle_hash` is NOT the p2_singleton puzzle that block rewards are sent to.
The `p2_singleton` address is the initial address, and the `target_puzzle_hash` is the final destination.
`relative_lock_height` is zero when in SELF_POOLING state
"""
version: uint8
state: uint8 # PoolSingletonState
# `target_puzzle_hash`: A puzzle_hash we pay to
# When self-farming, this is a main wallet address
# When farming-to-pool, the pool sends this to the farmer during pool protocol setup
target_puzzle_hash: bytes32 # TODO: rename target_puzzle_hash -> pay_to_address
# owner_pubkey is set by the wallet, once
owner_pubkey: G1Element
pool_url: Optional[str]
relative_lock_height: uint32
def initial_pool_state_from_dict(state_dict: Dict, owner_pubkey: G1Element, owner_puzzle_hash: bytes32) -> PoolState:
state_str = state_dict["state"]
singleton_state: PoolSingletonState = PoolSingletonState[state_str]
if singleton_state == SELF_POOLING:
target_puzzle_hash = owner_puzzle_hash
pool_url: str = ""
relative_lock_height = uint32(0)
elif singleton_state == FARMING_TO_POOL:
target_puzzle_hash = bytes32(hexstr_to_bytes(state_dict["target_puzzle_hash"]))
pool_url = state_dict["pool_url"]
relative_lock_height = uint32(state_dict["relative_lock_height"])
else:
raise ValueError("Initial state must be SELF_POOLING or FARMING_TO_POOL")
# TODO: change create_pool_state to return error messages, as well
assert relative_lock_height is not None
return create_pool_state(singleton_state, target_puzzle_hash, owner_pubkey, pool_url, relative_lock_height)
def create_pool_state(
state: PoolSingletonState,
target_puzzle_hash: bytes32,
owner_pubkey: G1Element,
pool_url: Optional[str],
relative_lock_height: uint32,
) -> PoolState:
if state not in set(s.value for s in PoolSingletonState):
        raise AssertionError(f"state {state} is not a valid PoolSingletonState")
ps = PoolState(
POOL_PROTOCOL_VERSION, uint8(state), target_puzzle_hash, owner_pubkey, pool_url, relative_lock_height
)
# TODO Move verify here
return ps
@dataclass(frozen=True)
@streamable
class PoolWalletInfo(Streamable):
"""
Internal Pool Wallet state, not destined for the blockchain. This can be completely derived with
the Singleton's CoinSolutions list, or with the information from the WalletPoolStore.
"""
current: PoolState
target: Optional[PoolState]
launcher_coin: Coin
launcher_id: bytes32
p2_singleton_puzzle_hash: bytes32
current_inner: Program # Inner puzzle in current singleton, not revealed yet
tip_singleton_coin_id: bytes32
singleton_block_height: uint32 # Block height that current PoolState is from
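The membership guard in `create_pool_state` can be illustrated standalone. This is a minimal sketch with a stand-in enum; the real module's `PoolState`, `uint8`, and bls types are not used here:

```python
from enum import IntEnum


class PoolSingletonState(IntEnum):
    SELF_POOLING = 1
    LEAVING_POOL = 2
    FARMING_TO_POOL = 3


def is_valid_state(state: int) -> bool:
    # Mirrors the guard above: compare against the raw enum values so a
    # plain int is accepted without raising TypeError on older Pythons,
    # where `4 in PoolSingletonState` is an error rather than False.
    return state in set(s.value for s in PoolSingletonState)


print(is_valid_state(3), is_valid_state(4))  # True False
```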
| wheat/pools/pool_wallet_info.py | 4,509 | From the user's point of view, a pool group can be in these states:
`SELF_POOLING`: The singleton exists on the blockchain, and we are farming
block rewards to a wallet address controlled by the user
`LEAVING_POOL`: The singleton exists, and we have entered the "escaping" state, which
means we are waiting for a number of blocks = `relative_lock_height` to pass, so we can leave.
`FARMING_TO_POOL`: The singleton exists, and it is assigned to a pool.
`CLAIMING_SELF_POOLED_REWARDS`: We have submitted a transaction to sweep our
self-pooled funds.
`PoolState` is a type that is serialized to the blockchain to track the state of the user's pool singleton
`target_puzzle_hash` is either the pool address, or the self-pooling address that pool rewards will be paid to.
`target_puzzle_hash` is NOT the p2_singleton puzzle that block rewards are sent to.
The `p2_singleton` address is the initial address, and the `target_puzzle_hash` is the final destination.
`relative_lock_height` is zero when in SELF_POOLING state
Internal Pool Wallet state, not destined for the blockchain. This can be completely derived with
the Singleton's CoinSolutions list, or with the information from the WalletPoolStore.
PoolSingletonState `target_puzzle_hash`: A puzzle_hash we pay to When self-farming, this is a main wallet address When farming-to-pool, the pool sends this to the farmer during pool protocol setup TODO: rename target_puzzle_hash -> pay_to_address owner_pubkey is set by the wallet, once TODO: change create_pool_state to return error messages, as well TODO Move verify here Inner puzzle in current singleton, not revealed yet Block height that current PoolState is from | 1,684 | en | 0.907008 |
# SPDX-FileCopyrightText: 2017 Scott Shawcroft, written for Adafruit Industries
# SPDX-FileCopyrightText: Copyright (c) 2022 Jeff Epler for Adafruit Industries
#
# SPDX-License-Identifier: Unlicense
# On an Adafruit Feather M4 or Adafruit Feather RP2040 with Floppy Featherwing,
# do some track-to-track seeking and flux reading.
import board
import adafruit_floppy
D24 = getattr(board, "D24", None) or getattr(board, "A4")
D25 = getattr(board, "D25", None) or getattr(board, "A5")
floppy = adafruit_floppy.MFMFloppy(
densitypin=board.A0,
indexpin=board.A1,
selectpin=board.A2,
motorpin=board.A3,
directionpin=D24,
steppin=D25,
track0pin=board.D11,
protectpin=board.D10,
rddatapin=board.D9,
sidepin=board.D6,
readypin=board.D5,
)
floppy.selected = True
floppy.spin = True
print("Seek track 8")
floppy.track = 8
print("Seek track 0")
floppy.track = 0
print("Read partial track raw flux data")
buf = bytearray(30000)
n_read = floppy.flux_readinto(buf)
print("read", n_read)
buckets = [0] * 256
for b in buf:
buckets[b] += 1
oi = -1
for i, bi in enumerate(buckets):
if bi > 0:
if i != oi + 1:
print("---")
oi = i
print(f"{i:3} {bi:5}")
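The histogram loop above groups consecutive non-empty bins, printing `---` between runs. The same logic can be sketched with synthetic data and no floppy hardware (function name and return shape are illustrative only):

```python
def histogram_runs(data):
    """Count byte values and return runs of consecutive non-empty bins
    as (value, count) pairs, with None marking a gap between runs."""
    buckets = [0] * 256
    for b in data:
        buckets[b] += 1
    out = []
    oi = -1
    for i, bi in enumerate(buckets):
        if bi > 0:
            if i != oi + 1:
                out.append(None)  # corresponds to the "---" separator
            oi = i
            out.append((i, bi))
    return out


print(histogram_runs(bytes([5, 5, 6, 9])))
# [None, (5, 2), (6, 1), None, (9, 1)]
```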
| examples/floppy_simpletest.py | 1,214 | SPDX-FileCopyrightText: 2017 Scott Shawcroft, written for Adafruit Industries SPDX-FileCopyrightText: Copyright (c) 2022 Jeff Epler for Adafruit Industries SPDX-License-Identifier: Unlicense On an Adafruit Feather M4 or Adafruit Feather RP2040 with Floppy Featherwing, do some track-to-track seeking and flux reading. | 317 | en | 0.604935 |
"""Code and data structures for managing source directives."""
import bisect
import collections
import re
import sys
import tokenize
from pytype import utils
from six import moves
_DIRECTIVE_RE = re.compile(r"#\s*(pytype|type)\s*:\s?([^#]*)")
_CLOSING_BRACKETS_RE = re.compile(r"^(\s*[]})]\s*)+(#.*)?$")
_WHITESPACE_RE = re.compile(r"^\s*(#.*)?$")
_CLASS_OR_FUNC_RE = re.compile(r"^(def|class)\s")
_DOCSTRING_RE = re.compile(r"^\s*(\"\"\"|''')")
_ALL_ERRORS = "*" # Wildcard for disabling all errors.
class _DirectiveError(Exception):
pass
class SkipFile(Exception):
"""Exception thrown if we encounter "pytype: skip-file" in the source code."""
class _LineSet(object):
"""A set of line numbers.
The data structure is optimized to represent the union of a sparse set
of integers and ranges of non-negative integers. This supports the two styles
of directives: those after a statement apply only to that line and those on
their own line apply until countered by the opposing directive.
"""
def __init__(self):
# Map of line->bool for specific lines, takes precedence over _transitions.
self._lines = {}
# A sorted list of the lines at which the range state changes
# polarity. It is assumed to initially be false (not in a range).
# Even positions represent the start of a range, odd positions represent
# the end of a range. Thus [2, 5, 10, 12] would include lines 2, 3, 4, 10,
# and 11. If the length is odd, then an end of maxint is implied, thus
# [2, 5, 10] would disable lines 2, 3, 4, 10, 11, 12, ...
self._transitions = []
def set_line(self, line, membership):
"""Set whether a given line is a member of the set."""
self._lines[line] = membership
def start_range(self, line, membership):
"""Start a range of lines that are either included/excluded from the set.
Args:
line: A line number.
membership: If True, lines >= line are included in the set (starting
a range), otherwise they are excluded (ending a range).
Raises:
ValueError: if line is less than that of a previous call to start_range().
"""
last = self._transitions[-1] if self._transitions else -1
# Assert that lines are monotonically increasing. This simplifies the
# logic of adding new lines and ensures that _ranges is sorted.
if line < last:
raise ValueError("Line number less than previous start_range() call.")
# Determine previous membership state (True if the last range has an
# indefinite end).
previous = (len(self._transitions) % 2) == 1
if membership == previous:
# TODO(dbaum): Consider issuing a warning here.
return # Redundant with previous state, do nothing.
elif line == last:
# We have either enable/disable or disable/enable on the same line,
# cancel them out by popping the previous transition.
self._transitions.pop()
else:
# Normal case - add a transition at this line.
self._transitions.append(line)
def __contains__(self, line):
"""Return if a line is a member of the set."""
# First check for an entry in _lines.
specific = self._lines.get(line)
if specific is not None:
return specific
# Find the position in _ranges for line. The polarity of this position
# determines whether we are inside a range (odd) or outside (even).
pos = bisect.bisect(self._transitions, line)
return (pos % 2) == 1
def get_disable_after(self, lineno):
"""Get an unclosed disable, if any, that starts after lineno."""
if len(self._transitions) % 2 == 1 and self._transitions[-1] >= lineno:
return self._transitions[-1]
return None
class Director(object):
"""Holds all of the directive information for a source file."""
def __init__(self, src, errorlog, filename, disable):
"""Create a Director for a source file.
Args:
src: The source text as a string.
errorlog: An ErrorLog object. Directive errors will be logged to the
errorlog.
filename: The name of the source file.
disable: List of error messages to always ignore.
"""
self._filename = filename
self._errorlog = errorlog
self._type_comments = {} # Map from line number to (code, comment).
self._docstrings = set() # Start lines of docstrings.
# Lines that have "type: ignore". These will disable all errors, and in
# the future may have other impact (such as not attempting an import).
self._ignore = _LineSet()
# Map from error name to lines for which that error is disabled. Note
# that _ALL_ERRORS is essentially a wildcard name (it matches all names).
self._disables = collections.defaultdict(_LineSet)
# Apply global disable, from the command line arguments:
for error_name in disable:
self._disables[error_name].start_range(0, True)
# Parse the source code for directives.
self._parse_source(src)
@property
def type_comments(self):
return self._type_comments
@property
def docstrings(self):
return sorted(self._docstrings)
@property
def ignore(self):
return self._ignore
def _adjust_type_comments(self, closing_bracket_lines, whitespace_lines):
"""Adjust any type comments affected by closing bracket lines.
Lines that contain nothing but closing brackets don't appear in the
bytecode, so for, e.g.,
v = [
"hello",
"world",
] # line 4
line 4 is where any type comment for 'v' should be put, but the
STORE_NAME opcode for 'v' is at line 3. If we find a type comment put
(wrongly) on line 3, we'll report an error, and if we find a type comment
on line 4, we'll move it to line 3.
Args:
closing_bracket_lines: A set of lines containing only closing brackets,
to be used for adjusting affected type comments.
whitespace_lines: A set of lines containing only whitespace. Its union
with closing_bracket_lines is a set of consecutive lines.
"""
target = min(closing_bracket_lines | whitespace_lines) - 1
if target in self._type_comments:
self._errorlog.ignored_type_comment(
self._filename, target, self._type_comments[target][1])
del self._type_comments[target]
end = max(closing_bracket_lines)
if end in self._type_comments:
self._type_comments[target] = self._type_comments[end]
del self._type_comments[end]
def _parse_source(self, src):
"""Parse a source file, extracting directives from comments."""
f = moves.StringIO(src)
defs_start = None
closing_bracket_lines = set()
whitespace_lines = set()
for tok, _, start, _, line in tokenize.generate_tokens(f.readline):
lineno, col = start
if defs_start is None and _CLASS_OR_FUNC_RE.match(line):
defs_start = lineno
if _CLOSING_BRACKETS_RE.match(line):
closing_bracket_lines.add(lineno)
elif _WHITESPACE_RE.match(line):
whitespace_lines.add(lineno)
elif _DOCSTRING_RE.match(line):
self._docstrings.add(lineno)
else:
if closing_bracket_lines:
self._adjust_type_comments(closing_bracket_lines, whitespace_lines)
closing_bracket_lines.clear()
whitespace_lines.clear()
if tok == tokenize.COMMENT:
matches = list(_DIRECTIVE_RE.finditer(line[col:]))
is_nested = bool(matches) and matches[0].start(0) > 0
for m in matches:
code = line[:col].strip()
tool, data = m.groups()
open_ended = not code
data = data.strip()
if tool == "type":
self._process_type(lineno, code, data, is_nested)
elif tool == "pytype":
try:
self._process_pytype(lineno, data, open_ended)
except _DirectiveError as e:
self._errorlog.invalid_directive(
self._filename, lineno, utils.message(e))
else:
pass # ignore comments for other tools
if closing_bracket_lines:
self._adjust_type_comments(closing_bracket_lines, whitespace_lines)
if defs_start is not None:
disables = list(self._disables.items())
# Add "# type: ignore" to the list of disables that we check.
disables.append(("Type checking", self._ignore))
for name, lineset in disables:
lineno = lineset.get_disable_after(defs_start)
if lineno is not None:
self._errorlog.late_directive(self._filename, lineno, name)
def _process_type(self, lineno, code, data, is_nested):
"""Process a type: comment."""
# Discard type comments embedded in larger whole-line comments.
if not code and is_nested:
return
if lineno in self._type_comments:
# If we have multiple type comments on the same line, take the last one,
# but add an error to the log.
self._errorlog.invalid_directive(
self._filename, lineno,
"Multiple type comments on the same line.")
if data == "ignore":
if not code:
self._ignore.start_range(lineno, True)
else:
self._ignore.set_line(lineno, True)
else:
self._type_comments[lineno] = (code, data)
def _process_pytype(self, lineno, data, open_ended):
"""Process a pytype: comment."""
if not data:
raise _DirectiveError("Invalid directive syntax.")
for option in data.split():
# Parse the command.
if option == "skip-file":
raise SkipFile()
try:
command, values = option.split("=", 1)
values = values.split(",")
except ValueError:
raise _DirectiveError("Invalid directive syntax.")
# Additional commands may be added in the future. For now, only
# "disable" and "enable" are supported.
if command == "disable":
disable = True
elif command == "enable":
disable = False
else:
raise _DirectiveError("Unknown pytype directive: '%s'" % command)
if not values:
raise _DirectiveError(
"Disable/enable must specify one or more error names.")
for error_name in values:
if (error_name == _ALL_ERRORS or
self._errorlog.is_valid_error_name(error_name)):
lines = self._disables[error_name]
if open_ended:
lines.start_range(lineno, disable)
else:
lines.set_line(lineno, disable)
else:
self._errorlog.invalid_directive(
self._filename, lineno, "Invalid error name: '%s'" % error_name)
def should_report_error(self, error):
"""Return whether the error should be logged.
This method is suitable for use as an error filter.
Args:
error: An error._Error object.
Returns:
True iff the error should be included in the log.
"""
# Always report errors that aren't for this file or do not have a line
# number.
if error.filename != self._filename or error.lineno is None:
return True
# Treat lineno=0 as below the file, so we can filter it.
lineno = error.lineno or sys.maxsize
# Report the error if it isn't subject to any ignore or disable.
return (lineno not in self._ignore and
lineno not in self._disables[_ALL_ERRORS] and
lineno not in self._disables[error.name])
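The comment-parsing regex at the top of this module can be exercised standalone. The pattern is the same as `_DIRECTIVE_RE` above; the helper wrapper is illustrative, not part of the module:

```python
import re

_DIRECTIVE_RE = re.compile(r"#\s*(pytype|type)\s*:\s?([^#]*)")


def find_directives(line):
    # Return (tool, data) pairs found in a source line, with data
    # stripped, mirroring how Director._parse_source consumes matches.
    return [(m.group(1), m.group(2).strip())
            for m in _DIRECTIVE_RE.finditer(line)]


print(find_directives("x = foo()  # type: ignore"))
# [('type', 'ignore')]
print(find_directives("import bar  # pytype: disable=import-error"))
# [('pytype', 'disable=import-error')]
```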
| pytype/directors.py | 11,290 | Holds all of the directive information for a source file.
Exception thrown if we encounter "pytype: skip-file" in the source code.
A set of line numbers.
The data structure is optimized to represent the union of a sparse set
of integers and ranges of non-negative integers. This supports the two styles
of directives: those after a statement apply only to that line and those on
their own line apply until countered by the opposing directive.
Return if a line is a member of the set.
Create a Director for a source file.
Args:
src: The source text as a string.
errorlog: An ErrorLog object. Directive errors will be logged to the
errorlog.
filename: The name of the source file.
disable: List of error messages to always ignore.
Adjust any type comments affected by closing bracket lines.
Lines that contain nothing but closing brackets don't appear in the
bytecode, so for, e.g.,
v = [
"hello",
"world",
] # line 4
line 4 is where any type comment for 'v' should be put, but the
STORE_NAME opcode for 'v' is at line 3. If we find a type comment put
(wrongly) on line 3, we'll report an error, and if we find a type comment
on line 4, we'll move it to line 3.
Args:
closing_bracket_lines: A set of lines containing only closing brackets,
to be used for adjusting affected type comments.
whitespace_lines: A set of lines containing only whitespace. Its union
with closing_bracket_lines is a set of consecutive lines.
Parse a source file, extracting directives from comments.
Process a pytype: comment.
Process a type: comment.
Get an unclosed disable, if any, that starts after lineno.
Set whether a given line is a member of the set.
Return whether the error should be logged.
This method is suitable for use as an error filter.
Args:
error: An error._Error object.
Returns:
True iff the error should be included in the log.
Start a range of lines that are either included/excluded from the set.
Args:
line: A line number.
membership: If True, lines >= line are included in the set (starting
a range), otherwise they are excluded (ending a range).
Raises:
ValueError: if line is less than that of a previous call to start_range().
Code and data structures for managing source directives.
Wildcard for disabling all errors. Map of line->bool for specific lines, takes precedence over _transitions. A sorted list of the lines at which the range state changes polarity. It is assumed to initially be false (not in a range). Even positions represent the start of a range, odd positions represent the end of a range. Thus [2, 5, 10, 12] would include lines 2, 3, 4, 10, and 11. If the length is odd, then an end of maxint is implied, thus [2, 5, 10] would disable lines 2, 3, 4, 10, 11, 12, ... Assert that lines are monotonically increasing. This simplifies the logic of adding new lines and ensures that _ranges is sorted. Determine previous membership state (True if the last range has an indefinite end). TODO(dbaum): Consider issuing a warning here. Redundant with previous state, do nothing. We have either enable/disable or disable/enable on the same line, cancel them out by popping the previous transition. Normal case - add a transition at this line. First check for an entry in _lines. Find the position in _ranges for line. The polarity of this position determines whether we are inside a range (odd) or outside (even). Map from line number to (code, comment). Start lines of docstrings. Lines that have "type: ignore". These will disable all errors, and in the future may have other impact (such as not attempting an import). Map from error name to lines for which that error is disabled. Note that _ALL_ERRORS is essentially a wildcard name (it matches all names). Apply global disable, from the command line arguments: Parse the source code for directives. ignore comments for other tools Add " type: ignore" to the list of disables that we check. Discard type comments embedded in larger whole-line comments. If we have multiple type comments on the same line, take the last one, but add an error to the log. Parse the command. Additional commands may be added in the future. For now, only "disable" and "enable" are supported. 
Always report errors that aren't for this file or do not have a line number. Treat lineno=0 as below the file, so we can filter it. Report the error if it isn't subject to any ignore or disable. | 4,408 | en | 0.885741 |
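The transition-list encoding described in the row above (even indices open a disabled range, odd indices close it, and an odd-length list leaves the last range open-ended) can be sketched with `bisect`; the function name here is illustrative, not from the original module:

```python
import bisect

def in_disabled_range(transitions, line):
    """Return True if `line` falls inside one of the half-open ranges
    encoded by the sorted `transitions` list.

    Even positions open a range and odd positions close it, so a line is
    inside a range exactly when bisect_right returns an odd index.
    """
    return bisect.bisect_right(transitions, line) % 2 == 1

# [2, 5, 10, 12] covers lines 2, 3, 4 and 10, 11:
assert [l for l in range(15) if in_disabled_range([2, 5, 10, 12], l)] == [2, 3, 4, 10, 11]
# An odd-length list implies an end of "infinity" for the final range:
assert in_disabled_range([2, 5, 10], 1000)
```

The bisect position doubles as the membership test, which is why the original code only needs the sorted transition list rather than explicit (start, end) pairs.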
import os
import time
import torch
import torch.nn as nn
import utils
from torch.autograd import Variable
import numpy as np
import matplotlib.pyplot as plt
def instance_bce_with_logits(logits, labels):
assert logits.dim() == 2
loss = nn.functional.binary_cross_entropy_with_logits(logits, labels)
loss *= labels.size(1)
return loss
def compute_score_with_logits(logits, labels):
logits = torch.max(logits, 1)[1].data # argmax
one_hots = torch.zeros(*labels.size()).cuda()
one_hots.scatter_(1, logits.view(-1, 1), 1)
scores = (one_hots * labels)
return scores
def train(model, train_loader, eval_loader, num_epochs, output, opt, wd):
utils.create_dir(output)
# Paper uses AdaDelta
if opt == 'Adadelta':
optim = torch.optim.Adadelta(model.parameters(), rho=0.95, eps=1e-6, weight_decay=wd)
elif opt == 'RMSprop':
optim = torch.optim.RMSprop(model.parameters(), lr=0.01, alpha=0.99, eps=1e-08, weight_decay=wd, momentum=0, centered=False)
elif opt == 'Adam':
optim = torch.optim.Adam(model.parameters(), lr=0.001, betas=(0.9, 0.999), eps=1e-08, weight_decay=wd)
else:
optim = torch.optim.Adamax(model.parameters(), weight_decay=wd)
logger = utils.Logger(os.path.join(output, 'log.txt'))
best_eval_score = 0
for epoch in range(num_epochs):
total_loss = 0
train_score = 0
t = time.time()
correct = 0
for i, (v, b, q, a) in enumerate(train_loader):
v = Variable(v).cuda()
b = Variable(b).cuda() # boxes not used
q = Variable(q).cuda()
a = Variable(a).cuda() # true labels
pred = model(v, b, q, a)
loss = instance_bce_with_logits(pred, a)
loss.backward()
nn.utils.clip_grad_norm(model.parameters(), 0.25)
optim.step()
optim.zero_grad()
batch_score = compute_score_with_logits(pred, a.data).sum()
total_loss += loss.data[0] * v.size(0)
train_score += batch_score
total_loss /= len(train_loader.dataset)
train_score = 100 * train_score / len(train_loader.dataset)
model.train(False)
eval_score, bound, V_loss = evaluate(model, eval_loader)
model.train(True)
logger.write('epoch %d, time: %.2f' % (epoch, time.time()-t))
logger.write('\ttrain_loss: %.3f, score: %.3f' % (total_loss, train_score))
logger.write('\teval loss: %.3f, score: %.3f (%.3f)' % (V_loss, 100 * eval_score, 100 * bound))
if eval_score > best_eval_score:
model_path = os.path.join(output, 'model.pth')
torch.save(model.state_dict(), model_path)
best_eval_score = eval_score
def evaluate(model, dataloader):
score = 0
V_loss = 0
upper_bound = 0
num_data = 0
for v, b, q, a in iter(dataloader):
v = Variable(v, volatile=True).cuda()
b = Variable(b, volatile=True).cuda()
q = Variable(q, volatile=True).cuda()
a = Variable(a, volatile=True).cuda()
pred = model(v, b, q, None)
loss = instance_bce_with_logits(pred, a)
V_loss += loss.data[0] * v.size(0)
batch_score = compute_score_with_logits(pred, a.data).sum()
score += batch_score
upper_bound += (a.max(1)[0]).sum()
num_data += pred.size(0)
score = score / len(dataloader.dataset)
V_loss /= len(dataloader.dataset)
upper_bound = upper_bound / len(dataloader.dataset)
return score, upper_bound, V_loss
| train.py | 3,564 | argmax Paper uses AdaDelta boxes not used true labels | 53 | en | 0.141878 |
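The argmax-based soft scoring in `compute_score_with_logits` above can be reduced to a pure-Python sketch (a simplified stand-in for the tensor version, not the original implementation): each example earns the soft label weight of its predicted class.

```python
def score_with_logits(logits, labels):
    """Pure-Python sketch of the argmax-based VQA-style soft scoring:
    take the argmax of each row of logits and sum the matching soft
    label weights, mirroring the one-hot multiply in train.py."""
    total = 0.0
    for row_logits, row_labels in zip(logits, labels):
        pred = max(range(len(row_logits)), key=row_logits.__getitem__)
        total += row_labels[pred]
    return total

logits = [[0.1, 2.0, 0.3], [1.5, 0.2, 0.1]]
labels = [[0.0, 0.6, 0.4], [0.0, 1.0, 0.0]]  # soft targets per answer
# First example predicts class 1 (soft score 0.6); second predicts class 0 (score 0.0).
assert abs(score_with_logits(logits, labels) - 0.6) < 1e-9
```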
from collections import defaultdict
from hashlib import md5
from typing import DefaultDict, Dict, List, Set, Union
import attr
from pyrdf2vec.graphs import KG, Vertex
from pyrdf2vec.typings import Entities, EntityWalks, SWalk
from pyrdf2vec.walkers import RandomWalker
@attr.s
class WLWalker(RandomWalker):
"""Weisfeiler-Lehman walking strategy which relabels the nodes of the
extracted random walks, providing additional information about the entity
representations only when a maximum number of walks is not specified.
Attributes:
_inv_label_map: Stores the mapping of the inverse labels.
Defaults to defaultdict.
_is_support_remote: True if the walking strategy can be used with a
remote Knowledge Graph, False Otherwise.
Defaults to False.
        _label_map: Stores the mapping of the labels.
            Defaults to defaultdict.
kg: The global KG used later on for the worker process.
Defaults to None.
max_depth: The maximum depth of one walk.
max_walks: The maximum number of walks per entity.
Defaults to None.
md5_bytes: The number of bytes to keep after hashing objects in
MD5. Hasher allows to reduce the memory occupied by a long text. If
md5_bytes is None, no hash is applied.
Defaults to 8.
random_state: The random state to use to keep random determinism with
the walking strategy.
Defaults to None.
sampler: The sampling strategy.
Defaults to UniformSampler.
wl_iterations: The Weisfeiler Lehman's iteration.
Defaults to 4.
"""
wl_iterations = attr.ib(
kw_only=True,
default=4,
type=int,
validator=attr.validators.instance_of(int),
)
_is_support_remote = attr.ib(
init=False, repr=False, type=bool, default=False
)
_inv_label_map = attr.ib(
init=False,
repr=False,
type=DefaultDict["Vertex", Dict[Union[str, int], Union[str, int]]],
factory=lambda: defaultdict(dict),
)
_label_map = attr.ib(
init=False,
repr=False,
type=DefaultDict["Vertex", Dict[int, str]],
factory=lambda: defaultdict(dict),
)
def _create_label(self, kg: KG, vertex: Vertex, n: int) -> str:
"""Creates a label according to a vertex and its neighbors.
kg: The Knowledge Graph.
The graph from which the neighborhoods are extracted for the
provided entities.
vertex: The vertex to get its neighbors to create the suffix.
n: The index of the neighbor
Returns:
the label created for the vertex.
"""
if len(self._label_map) == 0:
self._weisfeiler_lehman(kg)
suffix = "-".join(
sorted(
set(
[
self._label_map[neighbor][n - 1]
for neighbor in kg.get_neighbors(
vertex, is_reverse=True
)
]
)
)
)
return f"{self._label_map[vertex][n - 1]}-{suffix}"
def _weisfeiler_lehman(self, kg: KG) -> None:
"""Performs Weisfeiler-Lehman relabeling of the vertices.
Args:
kg: The Knowledge Graph.
The graph from which the neighborhoods are extracted for the
provided entities.
"""
for vertex in kg._vertices:
self._label_map[vertex][0] = vertex.name
self._inv_label_map[vertex][0] = vertex.name
for n in range(1, self.wl_iterations + 1):
for vertex in kg._vertices:
if self.md5_bytes:
self._label_map[vertex][n] = str(
md5(
self._create_label(kg, vertex, n).encode()
).digest()[: self.md5_bytes]
)
else:
self._label_map[vertex][n] = str(
self._create_label(kg, vertex, n)
)
for vertex in kg._vertices:
for k, v in self._label_map[vertex].items():
self._inv_label_map[vertex][v] = k
def extract(
self, kg: KG, entities: Entities, verbose: int = 0
) -> List[List[SWalk]]:
"""Fits the provided sampling strategy and then calls the
private _extract method that is implemented for each of the
walking strategies.
Args:
kg: The Knowledge Graph.
entities: The entities to be extracted from the Knowledge Graph.
verbose: The verbosity level.
0: does not display anything;
1: display of the progress of extraction and training of walks;
2: debugging.
Defaults to 0.
Returns:
            The 2D matrix with its number of rows equal to the number of
            provided entities; number of columns equal to the embedding size.
"""
self._weisfeiler_lehman(kg)
return super().extract(kg, entities, verbose)
def _map_wl(self, entity: Vertex, pos: int, n: int) -> str:
"""Maps certain vertices to MD5 hashes to save memory. For entities of
interest (provided by the user to the extract function) and predicates,
the string representation is kept.
Args:
entity: The entity to be mapped.
pos: The position of the entity in the walk.
n: The iteration number of the WL algorithm.
Returns:
A hash (string) or original string representation.
"""
if entity.name in self._entities or pos % 2 == 1:
return entity.name
else:
return self._label_map[entity][n]
def _extract(self, kg: KG, entity: Vertex) -> EntityWalks:
"""Extracts random walks for an entity based on a Knowledge Graph.
Args:
kg: The Knowledge Graph.
entity: The root node to extract walks.
Returns:
A dictionary having the entity as key and a list of tuples as value
corresponding to the extracted walks.
"""
canonical_walks: Set[SWalk] = set()
for n in range(self.wl_iterations + 1):
for walk in self.extract_walks(kg, entity):
canonical_walk: List[str] = [
self._map_wl(vertex, i, n) for i, vertex in enumerate(walk)
]
canonical_walks.add(tuple(canonical_walk))
return {entity.name: list(canonical_walks)}
| pyrdf2vec/walkers/weisfeiler_lehman.py | 6,735 | Weisfeiler-Lehman walking strategy which relabels the nodes of the
extracted random walks, providing additional information about the entity
representations only when a maximum number of walks is not specified.
Attributes:
_inv_label_map: Stores the mapping of the inverse labels.
Defaults to defaultdict.
_is_support_remote: True if the walking strategy can be used with a
remote Knowledge Graph, False Otherwise.
Defaults to False.
_label_map: Stores the mapping of the inverse labels.
Defaults to defaultdict.
kg: The global KG used later on for the worker process.
Defaults to None.
max_depth: The maximum depth of one walk.
max_walks: The maximum number of walks per entity.
Defaults to None.
md5_bytes: The number of bytes to keep after hashing objects in
MD5. Hasher allows to reduce the memory occupied by a long text. If
md5_bytes is None, no hash is applied.
Defaults to 8.
random_state: The random state to use to keep random determinism with
the walking strategy.
Defaults to None.
sampler: The sampling strategy.
Defaults to UniformSampler.
wl_iterations: The Weisfeiler Lehman's iteration.
Defaults to 4.
Creates a label according to a vertex and its neighbors.
kg: The Knowledge Graph.
The graph from which the neighborhoods are extracted for the
provided entities.
vertex: The vertex to get its neighbors to create the suffix.
n: The index of the neighbor
Returns:
the label created for the vertex.
Extracts random walks for an entity based on a Knowledge Graph.
Args:
kg: The Knowledge Graph.
entity: The root node to extract walks.
Returns:
A dictionary having the entity as key and a list of tuples as value
corresponding to the extracted walks.
Maps certain vertices to MD5 hashes to save memory. For entities of
interest (provided by the user to the extract function) and predicates,
the string representation is kept.
Args:
entity: The entity to be mapped.
pos: The position of the entity in the walk.
n: The iteration number of the WL algorithm.
Returns:
A hash (string) or original string representation.
Performs Weisfeiler-Lehman relabeling of the vertices.
Args:
kg: The Knowledge Graph.
The graph from which the neighborhoods are extracted for the
provided entities.
Fits the provided sampling strategy and then calls the
private _extract method that is implemented for each of the
walking strategies.
Args:
kg: The Knowledge Graph.
entities: The entities to be extracted from the Knowledge Graph.
verbose: The verbosity level.
0: does not display anything;
1: display of the progress of extraction and training of walks;
2: debugging.
Defaults to 0.
Returns:
The 2D matrix with its number of rows equal to the number of
provided entities; number of columns equal to the embedding size. | 2,986 | en | 0.854357 |
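The relabeling loop in `_weisfeiler_lehman` above can be reduced to a toy sketch over a plain adjacency dict. The names and the undirected-neighbor simplification are illustrative; the real walker works on a `KG` and uses reverse neighbors.

```python
from hashlib import md5

def wl_relabel(neighbors, labels, iterations=2, digest_bytes=8):
    """Toy Weisfeiler-Lehman relabeling: each round, a node's new label
    hashes its old label joined with the sorted labels of its neighbors,
    mirroring _create_label's "label-suffix" construction."""
    labels = dict(labels)
    for _ in range(iterations):
        new = {}
        for node, nbrs in neighbors.items():
            suffix = "-".join(sorted(labels[n] for n in nbrs))
            combined = f"{labels[node]}-{suffix}"
            new[node] = md5(combined.encode()).hexdigest()[: 2 * digest_bytes]
        labels = new
    return labels

# A 3-node path graph a - b - c: the two end nodes are structurally
# equivalent, so WL gives them identical labels, distinct from the middle.
neighbors = {"a": ["b"], "b": ["a", "c"], "c": ["b"]}
labels = wl_relabel(neighbors, {"a": "x", "b": "y", "c": "x"})
assert labels["a"] == labels["c"] != labels["b"]
```

Hashing the combined label (as `md5_bytes` does in the walker) keeps labels short while still separating structurally different neighborhoods.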
#!/usr/bin/python3
# -*- coding: utf-8 -*-
import copy
import json
import requests
import pytz
import time
from inky.inky_uc8159 import Inky, DESATURATED_PALETTE
from datetime import datetime
from PIL import Image, ImageFont, ImageDraw
import io
import apikey
import os
import signal
import RPi.GPIO as GPIO
path = os.path.dirname(os.path.realpath(__file__))
ICON_SIZE = 100
TILE_WIDTH = 150
TILE_HEIGHT = 200
FONT_SIZE = 25
SPACE = 2
ROTATE = 0 # 180 = flip display
USE_INKY = True
SHOW_CLOCK = False
SLEEP_TIME = 3600
colors = ['Black', 'White', 'Green', 'Blue', 'Red', 'Yellow', 'Orange']
percipitation_colour = colors[0]
temprature_colour = colors[4]
day_colour = colors[3]
#BUTTONS = [5, 6, 16, 24]
LABELS = ['A','B','C','D']
GPIO.setmode(GPIO.BCM)
#GPIO.setup(Buttons, GPIO.IN, pull_up_down=GPIO.PUD_UP)
#def handle_button(pin):
# label = LABELS[BUTTONS.index(pin)]
time_colour = colors[4]
general_map = {
200: "thunderstorm.PNG8",
201: "thunderstorm.PNG8",
202: "thunderstorm.PNG8",
210: "lightning.PNG8",
211: "lightning.PNG8",
212: "lightning.PNG8",
221: "lightning.PNG8",
230: "thunderstorm.PNG8",
231: "thunderstorm.PNG8",
232: "thunderstorm.PNG8",
300: "sprinkle.PNG8",
301: "sprinkle.PNG8",
302: "rain.PNG8",
310: "rain-mix.PNG8",
311: "rain.PNG8",
312: "rain.PNG8",
313: "showers.PNG8",
314: "rain.PNG8",
321: "sprinkle.PNG8",
500: "sprinkle.PNG8",
501: "rain.PNG8",
502: "rain.PNG8",
503: "rain.PNG8",
504: "rain.PNG8",
511: "rain-mix.PNG8",
520: "showers.PNG8",
521: "showers.PNG8",
522: "showers.PNG8",
531: "storm-showers.PNG8",
600: "snow.PNG8",
601: "snow.PNG8",
602: "sleet.PNG8",
611: "rain-mix.PNG8",
612: "rain-mix.PNG8",
615: "rain-mix.PNG8",
616: "rain-mix.PNG8",
620: "rain-mix.PNG8",
621: "snow.PNG8",
622: "snow.PNG8",
701: "showers.PNG8",
711: "smoke.PNG8",
721: "day-haze.PNG8",
731: "dust.PNG8",
741: "fog.PNG8",
761: "dust.PNG8",
762: "dust.PNG8",
771: "cloudy-gusts.PNG8",
781: "tornado.PNG8",
800: "day-sunny.PNG8",
801: "cloudy-gusts.PNG8",
802: "cloudy-gusts.PNG8",
803: "cloudy-gusts.PNG8",
804: "cloudy.PNG8",
900: "tornado.PNG8",
901: "storm-showers.PNG8",
902: "hurricane.PNG8",
903: "snowflake-cold.PNG8",
904: "hot.PNG8",
905: "windy.PNG8",
906: "hail.PNG8",
957: "strong-wind.PNG8"}
day_map = {
200: "day-thunderstorm.PNG8",
201: "day-thunderstorm.PNG8",
202: "day-thunderstorm.PNG8",
210: "day-lightning.PNG8",
211: "day-lightning.PNG8",
212: "day-lightning.PNG8",
221: "day-lightning.PNG8",
230: "day-thunderstorm.PNG8",
231: "day-thunderstorm.PNG8",
232: "day-thunderstorm.PNG8",
300: "day-sprinkle.PNG8",
301: "day-sprinkle.PNG8",
302: "day-rain.PNG8",
310: "day-rain.PNG8",
311: "day-rain.PNG8",
312: "day-rain.PNG8",
313: "day-rain.PNG8",
314: "day-rain.PNG8",
321: "day-sprinkle.PNG8",
500: "day-sprinkle.PNG8",
501: "day-rain.PNG8",
502: "day-rain.PNG8",
503: "day-rain.PNG8",
504: "day-rain.PNG8",
511: "day-rain-mix.PNG8",
520: "day-showers.PNG8",
521: "day-showers.PNG8",
522: "day-showers.PNG8",
531: "day-storm-showers.PNG8",
600: "day-snow.PNG8",
601: "day-sleet.PNG8",
602: "day-snow.PNG8",
611: "day-rain-mix.PNG8",
612: "day-rain-mix.PNG8",
615: "day-rain-mix.PNG8",
616: "day-rain-mix.PNG8",
620: "day-rain-mix.PNG8",
621: "day-snow.PNG8",
622: "day-snow.PNG8",
701: "day-showers.PNG8",
711: "smoke.PNG8",
721: "day-haze.PNG8",
731: "dust.PNG8",
741: "day-fog.PNG8",
761: "dust.PNG8",
762: "dust.PNG8",
781: "tornado.PNG8",
800: "day-sunny.PNG8",
801: "day-cloudy-gusts.PNG8",
802: "day-cloudy-gusts.PNG8",
803: "day-cloudy-gusts.PNG8",
804: "day-sunny-overcast.PNG8",
900: "tornado.PNG8",
902: "hurricane.PNG8",
903: "snowflake-cold.PNG8",
904: "hot.PNG8",
906: "day-hail.PNG8",
957: "strong-wind.PNG8"}
night_map = {
200: "night-alt-thunderstorm.PNG8",
201: "night-alt-thunderstorm.PNG8",
202: "night-alt-thunderstorm.PNG8",
210: "night-alt-lightning.PNG8",
211: "night-alt-lightning.PNG8",
212: "night-alt-lightning.PNG8",
221: "night-alt-lightning.PNG8",
230: "night-alt-thunderstorm.PNG8",
231: "night-alt-thunderstorm.PNG8",
232: "night-alt-thunderstorm.PNG8",
300: "night-alt-sprinkle.PNG8",
301: "night-alt-sprinkle.PNG8",
302: "night-alt-rain.PNG8",
310: "night-alt-rain.PNG8",
311: "night-alt-rain.PNG8",
312: "night-alt-rain.PNG8",
313: "night-alt-rain.PNG8",
314: "night-alt-rain.PNG8",
321: "night-alt-sprinkle.PNG8",
500: "night-alt-sprinkle.PNG8",
501: "night-alt-rain.PNG8",
502: "night-alt-rain.PNG8",
503: "night-alt-rain.PNG8",
504: "night-alt-rain.PNG8",
511: "night-alt-rain-mix.PNG8",
520: "night-alt-showers.PNG8",
521: "night-alt-showers.PNG8",
522: "night-alt-showers.PNG8",
531: "night-alt-storm-showers.PNG8",
600: "night-alt-snow.PNG8",
601: "night-alt-sleet.PNG8",
602: "night-alt-snow.PNG8",
611: "night-alt-rain-mix.PNG8",
612: "night-alt-rain-mix.PNG8",
615: "night-alt-rain-mix.PNG8",
616: "night-alt-rain-mix.PNG8",
620: "night-alt-rain-mix.PNG8",
621: "night-alt-snow.PNG8",
622: "night-alt-snow.PNG8",
701: "night-alt-showers.PNG8",
711: "smoke.PNG8",
721: "day-haze.PNG8",
731: "dust.PNG8",
741: "night-fog.PNG8",
761: "dust.PNG8",
762: "dust.PNG8",
781: "tornado.PNG8",
800: "night-clear.PNG8",
801: "night-alt-cloudy-gusts.PNG8",
802: "night-alt-cloudy-gusts.PNG8",
803: "night-alt-cloudy-gusts.PNG8",
804: "night-alt-cloudy.PNG8",
900: "tornado.PNG8",
902: "hurricane.PNG8",
903: "snowflake-cold.PNG8",
904: "hot.PNG8",
906: "night-alt-hail.PNG8",
957: "strong-wind.PNG8"}
class Day:
def __init__(self, min, max, pop, id, sunrise, sunset, pressure, dt):
self.min = int(min + 0.5)
self.max = int(max + 0.5)
self.pop = pop
self.id = id
self.sunrise = sunrise
self.sunset = sunset
self.pressure = pressure
self.dt = dt
def get_icon(name):
return Image.open(name).convert("RGBA")
def day_lists_not_identical(days, other_days):
if (len(days) != len(other_days)):
return True
for i in range(len(days)):
if (days[i].min != other_days[i].min):
return True
if (days[i].max != other_days[i].max):
return True
if (days[i].pop != other_days[i].pop):
return True
if (days[i].id != other_days[i].id):
return True
    return False
api_key = apikey.api_key
if (api_key == "<your API key>"):
print("You forgot to enter your API key")
exit()
lat = apikey.lat
lon = apikey.lon
url = "https://api.openweathermap.org/data/2.5/onecall?lat=%s&lon=%s&exclude=hourly&appid=%s&units=metric" % (
lat, lon, api_key)
palette_colors = [(c[0] / 255.0, c[1] / 255.0, c[2] / 255.0) for c in DESATURATED_PALETTE[2:6] + [(0, 0, 0)]]
tile_positions = []
for i in range(2):
for j in range(4):
tile_positions.append((j * TILE_WIDTH, i * TILE_HEIGHT))
inky_display = Inky()
saturation = 0
y_top = int(inky_display.height)
y_bottom = y_top + int(inky_display.height * (4.0 / 10.0))
font = ImageFont.truetype(path+
"/fonts/BungeeColor-Regular_colr_Windows.ttf", FONT_SIZE)
old_days = []
while(True):
try:
response = requests.get(url)
data = json.loads(response.text)
    except (requests.exceptions.RequestException, ValueError):
        # Skip this update cycle if the request or JSON parse fails,
        # then retry after the normal sleep interval.
        time.sleep(SLEEP_TIME)
        continue
days = []
daily = data["daily"]
for day in daily:
min = day["temp"]["min"]
max = day["temp"]["max"]
pop = day["pop"]
id = day["weather"][0]["id"]
sunrise = int(day["sunrise"])
sunset = int(day["sunset"])
dt = int(day["dt"])
pressure = int(day["pressure"])
days.append(Day(min, max, pop, id, sunrise, sunset, pressure, dt))
#pressure = int(day["pressure"])
#print(day["pressure"])
if (day_lists_not_identical(days, old_days)):
old_days = copy.deepcopy(days)
img = Image.new("RGBA", inky_display.resolution, colors[1])
draw = ImageDraw.Draw(img)
for i in range(8):
name = path+"/icons/wi-"
if (i == 0):
t = int(time.time())
if (t < days[i].sunset):
name += day_map[days[i].id]
else:
name += night_map[days[i].id]
else:
name += general_map[days[i].id]
icon = get_icon(name)
x = tile_positions[i][0] + (TILE_WIDTH - ICON_SIZE) // 2
y = tile_positions[i][1]
img.paste(icon, (x, y))
text = str(int(100 * days[i].pop)) + "%"
w, h = font.getsize(text)
x = tile_positions[i][0] + (TILE_WIDTH - w) // 2
y = tile_positions[i][1] + ICON_SIZE + SPACE
draw.text((x, y), text, percipitation_colour, font)
text = str(days[i].min) + "°|" + str(days[i].max) + "°"
w, h = font.getsize(text)
x = tile_positions[i][0] + (TILE_WIDTH - w) // 2
y += FONT_SIZE
draw.text((x, y), text, temprature_colour, font)
press = str(days[i].pressure)
text = str(press)+"hPa"
w, h = font.getsize(text)
x = tile_positions[i][0] + (TILE_WIDTH - w) // 2
y += FONT_SIZE
draw.text((x, y), text, day_colour, font)
ts = time.gmtime(days[i].dt)
day_name = time.strftime("%a", ts)
text = day_name
w, h = font.getsize(text)
x = tile_positions[i][0] + (TILE_WIDTH - w) // 2
y += FONT_SIZE
draw.text((x, y), text, day_colour, font)
if (SHOW_CLOCK == True):
now = datetime.now()
current_time = now.strftime("%H:%M")
draw.text((245, 410), current_time, time_colour, font)
if (USE_INKY):
inky_display.set_border(colors[4])
inky_display.set_image(img.rotate(ROTATE), saturation=0)
inky_display.show()
else:
img.show()
time.sleep(SLEEP_TIME)
print("loop")
| weather.py | 10,557 | !/usr/bin/python3 -*- coding: utf-8 -*- 180 = flip displayBUTTONS = [5, 6, 16, 24]GPIO.setup(Buttons, GPIO.IN, pull_up_down=GPIO.PUD_UP)def handle_button(pin): label = LABELS[BUTTONS.index(pin)]pressure = int(day["pressure"])print(day["pressure"]) | 249 | en | 0.394866 |
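The icon-selection rule inside the drawing loop above can be sketched on its own: only today's tile (index 0) switches between the day and night icon sets based on sunset, while later days always use the general set. The tiny maps below are stand-ins for the full tables.

```python
def pick_icon(day_index, condition_id, now, sunset,
              day_map, night_map, general_map):
    """Sketch of weather.py's icon choice: the first tile distinguishes
    day from night via the sunset timestamp; other tiles do not."""
    if day_index == 0:
        table = day_map if now < sunset else night_map
    else:
        table = general_map
    return table[condition_id]

day = {800: "day-sunny.PNG8"}
night = {800: "night-clear.PNG8"}
general = {800: "day-sunny.PNG8"}
assert pick_icon(0, 800, now=100, sunset=200,
                 day_map=day, night_map=night, general_map=general) == "day-sunny.PNG8"
assert pick_icon(0, 800, now=300, sunset=200,
                 day_map=day, night_map=night, general_map=general) == "night-clear.PNG8"
assert pick_icon(3, 800, now=300, sunset=200,
                 day_map=day, night_map=night, general_map=general) == "day-sunny.PNG8"
```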
"""
This module contains the cli functions.
Split them out into separate files if required.
"""
import sys
import os
import subprocess
import pickle
from cheapskate_bal import balance as bal
__collector__ = {'exe': "collect_3008", 'samp_rate': 2000}
def csbal_process():
"""
This method is run when the `csbal` script is called.
can be used to check a single file (check balance state after adjusting)
args are file stem, freq (Hz [rpm/60] float), samp_rate (data collector)
"""
args = sys.argv[1:]
stem = args[0]
freq = float(args[1])
samp_rate = float(args[2])
df = bal.read_data_files(stem, freq, samp_rate)
bal.graph_data(df)
bal.process_data(df, freq, samp_rate, True)
def grab_data(tests, stem):
for t in tests:
msg, tag = t
print("\n\n==================================")
print(msg)
print("start DUT now")
input("Press Enter to start data capture...")
cp = subprocess.run(["taskset", "-c", "3", "nice", "-20", __collector__['exe'], stem+tag],
capture_output=True, universal_newlines=True)
summary = cp.stdout.splitlines()[-5:]
        print(*summary, sep='\n')
def batch_process(tests, stem, freq):
results = []
for t in tests:
tag = t[1]
sr = __collector__['samp_rate']
df = bal.read_data_files(stem+tag, freq, sr)
results.append(bal.process_data(df, freq, sr))
return results
def csbal_single():
"""
This method performs the whole process for a single plane balance
Four data files are captured, and the results are emitted
args are file stem, freq(Hz), shift angle of test mass (deg), test mass """
args = sys.argv[1:]
if len(args) < 4:
print("args are stem, freq, shift_ang, test_mass")
stem = args[0]
freq = float(args[1])
shift_ang = float(args[2])
tmass = float(args[3])
offset_1_ang = 360
offset_2_ang = 360 # these should not both be 0, as there is a div by their sum
if len(args) > 5:
offset_1_ang = float(args[4])
offset_2_ang = float(args[5])
# make sure the stem looks like a directory
if stem[-1] != os.path.sep:
stem = stem + os.path.sep
tests = [('T0: initial unbalanced state', 't0'),
('T1: test mass at 0 deg ref', 't1'),
('T2: test mass at positive angle', 't2'),
('T3: test mass at negative angle', 't3'), ]
grab_data(tests, stem)
print("Processing captured data...")
results = batch_process(tests, stem, freq)
print("Balace Results:")
bal.single_balance(results, tmass, shift_ang, offset_1_ang, offset_2_ang)
def csbal_dual_init():
"""
    This method performs the whole process for a dual plane balance
Three files are captured and the results are emitted
args are file stem, freq(Hz), shift angle of test mass (deg), test mass """
args = sys.argv[1:]
if len(args) < 4:
print("args are stem, freq, shift_ang, test_mass")
stem = args[0]
freq = float(args[1])
shift_ang = float(args[2])
tmass = float(args[3])
# make sure the stem looks like a directory
if stem[-1] != os.path.sep:
stem = stem + os.path.sep
tests = [('T0: initial unbalanced state', 't0'),
('TA: test mass on bearing 1 at shift angle', 'ta'),
('TB: test mass on bearing 2 at shift angle', 'tb')]
grab_data(tests, stem)
print("Processing captured data...")
results = batch_process(tests, stem, freq)
print("Dual Plane Balance Results")
influence, correction = bal.dual_compute_influence(results, tmass, shift_ang)
# write the influence params to a file
inf_file = stem+"influence"
with open(inf_file, 'wb') as filehandle:
pickle.dump(influence, filehandle)
def csbal_dual_iter():
"""
This method performs an iteration of dual plane balance, once the
influence params are known. One file is captured and the results
are emitted
args are file stem, tag, freq
"""
args = sys.argv[1:]
if len(args) < 3:
print("args are: filestem, tag, freq")
stem = args[0]
tag = args[1]
freq = float(args[2])
# make sure the stem looks like a directory
if stem[-1] != os.path.sep:
stem = stem + os.path.sep
# get the influence from file
influence = []
inf_file = stem+"influence"
with open(inf_file, 'rb') as filehandle:
influence = pickle.load(filehandle)
tests = [('T(curr): initial unbalanced state', 't'+tag)]
grab_data(tests, stem)
print("Processing captured data...")
results = batch_process(tests, stem, freq)
print("Dual Plane Balance Results")
correction = bal.dual_compute_weights(results, influence)
| cheapskate_bal/cheapskate_bal/cli.py | 4,806 | This method performs the whole process for a dual plane balance
Three files are captured and the results are emitted
args are file stem, freq(Hz), shift angle of test mass (deg), test mass
This method performs an iteration of dual plane balance, once the
influence params are known. One file is captured and the results
are emitted
args are file stem, tag, freq
This method is run when the `csbal` script is called.
can be used to check a single file (check balance state after adjusting)
args are file stem, freq (Hz [rpm/60] float), samp_rate (data collector)
This method performs the whole process for a single plane balance
Four data files are captured, and the results are emitted
args are file stem, freq(Hz), shift angle of test mass (deg), test mass
This module contains the cli functions.
Split them out into separate files if required.
these should not both be 0, as there is a div by their sum make sure the stem looks like a directory make sure the stem looks like a directory write the influence params to a file make sure the stem looks like a directory get the influence from file | 1,100 | en | 0.917547 |
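The influence-coefficient persistence shared by `csbal_dual_init` (writer) and `csbal_dual_iter` (reader) above is a plain pickle round-trip keyed off the stem directory. The coefficient values below are made up for illustration:

```python
import os
import pickle
import tempfile

# Hypothetical influence coefficients; the real ones come from
# bal.dual_compute_influence during the init step.
influence = [complex(0.1, -0.2), complex(0.05, 0.3)]

with tempfile.TemporaryDirectory() as stem:
    inf_file = os.path.join(stem, "influence")
    with open(inf_file, "wb") as filehandle:
        pickle.dump(influence, filehandle)   # written once by csbal_dual_init
    with open(inf_file, "rb") as filehandle:
        restored = pickle.load(filehandle)   # read back on each csbal_dual_iter

assert restored == influence
```

Because the file lives under the stem directory, each balancing job keeps its own influence parameters without any extra bookkeeping.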
from __future__ import absolute_import
import os
# test_settings.py works differently from
# dev_settings.py/prod_settings.py; it actually is directly referenced
# by the test suite as DJANGO_SETTINGS_MODULE and imports settings.py
# directly and then hacks up the values that are different for the
# test suite. As will be explained, this is kinda messy and probably
# we'd be better off switching it to work more like dev_settings.py,
# but for now, this is what we have.
#
# An important downside of the test_settings.py approach is that if we
# want to change any settings that settings.py then computes
# additional settings from (e.g. EXTERNAL_HOST), we need to do a hack
# like the below line(s) before we import from settings, for
# transmitting the value of EXTERNAL_HOST to dev_settings.py so that
# it can be set there, at the right place in the settings.py flow.
# Ick.
if os.getenv("EXTERNAL_HOST") is None:
os.environ["EXTERNAL_HOST"] = "testserver"
from .settings import *
# Used to clone DBs in backend tests.
BACKEND_DATABASE_TEMPLATE = 'zulip_test_template'
DATABASES["default"] = {
"NAME": "zulip_test",
"USER": "zulip_test",
"PASSWORD": LOCAL_DATABASE_PASSWORD,
"HOST": "localhost",
"SCHEMA": "zulip",
"ENGINE": "django.db.backends.postgresql_psycopg2",
"TEST_NAME": "django_zulip_tests",
"OPTIONS": {"connection_factory": TimeTrackingConnection},
}
if USING_PGROONGA:
# We need to have "pgroonga" schema before "pg_catalog" schema in
# the PostgreSQL search path, because "pgroonga" schema overrides
# the "@@" operator from "pg_catalog" schema, and "pg_catalog"
# schema is searched first if not specified in the search path.
# See also: http://www.postgresql.org/docs/current/static/runtime-config-client.html
pg_options = '-c search_path=%(SCHEMA)s,zulip,public,pgroonga,pg_catalog' % \
DATABASES['default']
DATABASES['default']['OPTIONS']['options'] = pg_options
if "TORNADO_SERVER" in os.environ:
# This covers the Casper test suite case
TORNADO_SERVER = os.environ["TORNADO_SERVER"]
else:
# This covers the backend test suite case
TORNADO_SERVER = None
CAMO_URI = 'https://external-content.zulipcdn.net/'
CAMO_KEY = 'dummy'
if "CASPER_TESTS" in os.environ:
CASPER_TESTS = True
# Decrease the get_updates timeout to 1 second.
# This allows CasperJS to proceed quickly to the next test step.
POLL_TIMEOUT = 1000
# Don't use the real message log for tests
EVENT_LOG_DIR = '/tmp/zulip-test-event-log'
# Print our emails rather than sending them
EMAIL_BACKEND = 'django.core.mail.backends.locmem.EmailBackend'
# The test suite uses EmailAuthBackend
AUTHENTICATION_BACKENDS += ('zproject.backends.EmailAuthBackend',)
# Configure Google Oauth2
GOOGLE_OAUTH2_CLIENT_ID = "test_client_id"
# Makes testing LDAP backend require less mocking
AUTH_LDAP_ALWAYS_UPDATE_USER = False
TEST_SUITE = True
RATE_LIMITING = False
# Don't use rabbitmq from the test suite -- the user_profile_ids for
# any generated queue elements won't match those being used by the
# real app.
USING_RABBITMQ = False
# Disable the tutorial because it confuses the client tests.
TUTORIAL_ENABLED = False
# Disable use of memcached for caching
CACHES['database'] = {
'BACKEND': 'django.core.cache.backends.dummy.DummyCache',
'LOCATION': 'zulip-database-test-cache',
'TIMEOUT': 3600,
'CONN_MAX_AGE': 600,
'OPTIONS': {
'MAX_ENTRIES': 100000
}
}
# Use production config from Webpack in tests
if CASPER_TESTS:
WEBPACK_FILE = 'webpack-stats-production.json'
else:
WEBPACK_FILE = os.path.join('var', 'webpack-stats-test.json')
WEBPACK_LOADER['DEFAULT']['STATS_FILE'] = os.path.join(DEPLOY_ROOT, WEBPACK_FILE)
if CASPER_TESTS:
# Don't auto-restart Tornado server during casper tests
AUTORELOAD = False
REALMS_HAVE_SUBDOMAINS = True
else:
# Use local memory cache for backend tests.
CACHES['default'] = {
'BACKEND': 'django.core.cache.backends.locmem.LocMemCache'
}
LOGGING['loggers']['zulip.requests']['level'] = 'CRITICAL'
LOGGING['loggers']['zulip.management']['level'] = 'CRITICAL'
LOGGING['loggers']['django.request'] = {'level': 'ERROR'}
LOGGING['loggers']['fakeldap'] = {'level': 'ERROR'}
# Enable file:/// hyperlink support by default in tests
ENABLE_FILE_LINKS = True
LOCAL_UPLOADS_DIR = 'var/test_uploads'
S3_KEY = 'test-key'
S3_SECRET_KEY = 'test-secret-key'
S3_AUTH_UPLOADS_BUCKET = 'test-authed-bucket'
# Test Custom TOS template rendering
TERMS_OF_SERVICE = 'corporate/terms.md'
INLINE_URL_EMBED_PREVIEW = False
HOME_NOT_LOGGED_IN = '/login'
LOGIN_URL = '/accounts/login'
# By default will not send emails when login occurs.
# Explicitly set this to True within tests that must have this on.
SEND_LOGIN_EMAILS = False
GOOGLE_OAUTH2_CLIENT_ID = "id"
GOOGLE_OAUTH2_CLIENT_SECRET = "secret"
SOCIAL_AUTH_GITHUB_KEY = "key"
SOCIAL_AUTH_GITHUB_SECRET = "secret"
| zproject/test_settings.py | 4,961 | test_settings.py works differently from dev_settings.py/prod_settings.py; it actually is directly referenced by the test suite as DJANGO_SETTINGS_MODULE and imports settings.py directly and then hacks up the values that are different for the test suite. As will be explained, this is kinda messy and probably we'd be better off switching it to work more like dev_settings.py, but for now, this is what we have. An important downside of the test_settings.py approach is that if we want to change any settings that settings.py then computes additional settings from (e.g. EXTERNAL_HOST), we need to do a hack like the below line(s) before we import from settings, for transmitting the value of EXTERNAL_HOST to dev_settings.py so that it can be set there, at the right place in the settings.py flow. Ick. Used to clone DBs in backend tests. We need to have "pgroonga" schema before "pg_catalog" schema in the PostgreSQL search path, because "pgroonga" schema overrides the "@@" operator from "pg_catalog" schema, and "pg_catalog" schema is searched first if not specified in the search path. See also: http://www.postgresql.org/docs/current/static/runtime-config-client.html This covers the Casper test suite case This covers the backend test suite case Decrease the get_updates timeout to 1 second. This allows CasperJS to proceed quickly to the next test step. Don't use the real message log for tests Print our emails rather than sending them The test suite uses EmailAuthBackend Configure Google Oauth2 Makes testing LDAP backend require less mocking Don't use rabbitmq from the test suite -- the user_profile_ids for any generated queue elements won't match those being used by the real app. Disable the tutorial because it confuses the client tests. Disable use of memcached for caching Use production config from Webpack in tests Don't auto-restart Tornado server during casper tests Use local memory cache for backend tests. 
Enable file:/// hyperlink support by default in tests Test Custom TOS template rendering By default will not send emails when login occurs. Explicity set this to True within tests that must have this on. | 2,135 | en | 0.844574 |
import json
import os
import signal
import sys
from zipfile import BadZipfile
from zlib import error as zlib_error
from defusedxml.common import DefusedXmlException
import validator
from validator import decorator
from validator.chromemanifest import ChromeManifest
from validator.opensearch import detect_opensearch
from validator.rdf import RDFException, RDFParser
from validator.typedetection import detect_type
from validator.xpi import XPIManager
from constants import (PACKAGE_ANY, PACKAGE_EXTENSION, PACKAGE_SEARCHPROV,
PACKAGE_THEME)
types = {0: 'Unknown',
1: 'Extension/Multi-Extension',
2: 'Full Theme',
3: 'Dictionary',
4: 'Language Pack',
5: 'Search Provider'}
assumed_extensions = {'jar': PACKAGE_THEME,
'xml': PACKAGE_SEARCHPROV}
def prepare_package(err, path, expectation=0, for_appversions=None,
timeout=-1):
"""Prepares a file-based package for validation.
timeout is the number of seconds before validation is aborted.
If timeout is -1 then no timeout checking code will run.
"""
package = None
try:
# Test that the package actually exists. I consider this Tier 0
# since we may not even be dealing with a real file.
if not os.path.isfile(path):
err.error(('main', 'prepare_package', 'not_found'),
'The package could not be found')
return
# Pop the package extension.
package_extension = os.path.splitext(path)[1]
package_extension = package_extension.lower()
def timeout_handler(signum, frame):
raise validator.ValidationTimeout(timeout)
if timeout != -1:
signal.signal(signal.SIGALRM, timeout_handler)
signal.setitimer(signal.ITIMER_REAL, timeout)
if package_extension == '.xml':
test_search(err, path, expectation)
elif package_extension not in ('.xpi', '.jar'):
err.error(('main', 'prepare_package', 'unrecognized'),
'The package is not of a recognized type.')
else:
package = open(path, 'rb')
test_package(err, package, path, expectation, for_appversions)
except validator.ValidationTimeout:
err.system_error(
msg_id='validation_timeout',
message='Validation has timed out',
signing_severity='high',
description=('Validation was unable to complete in the allotted '
'time. This is most likely due to the size or '
'complexity of your add-on.',
'This timeout has been logged, but please consider '
'filing an issue report here: http://mzl.la/1DG0sFd'),
exc_info=sys.exc_info())
except Exception:
err.system_error(exc_info=sys.exc_info())
finally:
# Remove timers and signal handlers regardless of whether
# we've completed successfully or the timer has fired.
if timeout != -1:
signal.setitimer(signal.ITIMER_REAL, 0)
signal.signal(signal.SIGALRM, signal.SIG_DFL)
if package:
package.close()
decorator.cleanup()
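The SIGALRM-based timeout pattern used in `prepare_package` above (install a handler, arm an interval timer, always disarm in `finally`) can be sketched in isolation. This is a minimal, Unix-only illustration; the helper name and the time budgets are illustrative, not part of the validator:

```python
import signal
import time

def _timeout_handler(signum, frame):
    # Raising from the handler aborts whatever the main thread was doing.
    raise TimeoutError('operation timed out')

def run_with_timeout(func, seconds):
    """Run func(), aborting with TimeoutError after `seconds` (Unix only)."""
    signal.signal(signal.SIGALRM, _timeout_handler)
    signal.setitimer(signal.ITIMER_REAL, seconds)
    try:
        return func()
    finally:
        # Always clear the timer and restore the default handler,
        # mirroring the `finally` block in prepare_package().
        signal.setitimer(signal.ITIMER_REAL, 0)
        signal.signal(signal.SIGALRM, signal.SIG_DFL)
```

For example, `run_with_timeout(lambda: time.sleep(1), 0.05)` raises `TimeoutError`, while a fast callable returns normally and leaves no timer armed.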
def test_search(err, package, expectation=0):
'Tests the package to see if it is a search provider.'
expected_search_provider = expectation in (PACKAGE_ANY,
PACKAGE_SEARCHPROV)
# If we're not expecting a search provider, warn the user and stop
# testing it like a search provider.
if not expected_search_provider:
return err.warning(('main',
'test_search',
'extension'),
'Unexpected file extension.')
# Is this a search provider?
detect_opensearch(err, package, listed=err.get_resource('listed'))
if expected_search_provider and not err.failed():
err.detected_type = PACKAGE_SEARCHPROV
def test_package(err, file_, name, expectation=PACKAGE_ANY,
for_appversions=None):
'Begins tests for the package.'
# Load up a new instance of an XPI.
try:
package = XPIManager(file_, mode='r', name=name)
has_package_json = 'package.json' in package
has_manifest_json = 'manifest.json' in package
has_install_rdf = 'install.rdf' in package
# install.rdf? | package.json? | manifest.json? | error | use-file
# Yes | No | No | No | install.rdf
# Yes | Yes | No | No | install.rdf
# Yes | No | Yes | No | install.rdf
# No | No | Yes | No | manifest.json
# No | No | No | Yes | install.rdf
# No | Yes | No | No | package.json
# No | No | Yes | Yes | install.rdf
if has_package_json:
_load_package_json(err, package, expectation)
if has_manifest_json:
_load_manifest_json(err, package, expectation)
if has_install_rdf:
_load_install_rdf(err, package, expectation)
except IOError:
# Die on this one because the file won't open.
err.error(('main', 'test_package', 'unopenable'),
'The XPI could not be opened.')
return
except (BadZipfile, zlib_error):
# Die if the zip file is corrupt.
err.error(('submain', '_load_install_rdf', 'badzipfile'),
error='Corrupt ZIP file',
description='We were unable to decompress the zip file.')
return
if package.extension in assumed_extensions:
assumed_type = assumed_extensions[package.extension]
# Is the user expecting a different package type?
if expectation not in (PACKAGE_ANY, assumed_type):
err.error(('main', 'test_package', 'unexpected_type'),
'Unexpected package type (found theme)')
test_inner_package(err, package, for_appversions)
def _load_install_rdf(err, package, expectation):
try:
install_rdf = RDFParser(err, package.read('install.rdf'))
except (RDFException, DefusedXmlException) as ex:
if isinstance(ex, DefusedXmlException):
url = 'https://pypi.python.org/pypi/defusedxml/0.3#attack-vectors'
reason = 'Malicious XML was detected, see {0}.'.format(url)
line = 0
else:
reason = ('Try validating your RDF with the W3 validator: '
'http://www.w3.org/RDF/Validator/.')
line = ex.line()
err.error(
err_id=('main', 'test_package', 'parse_error'),
error='Could not parse `install.rdf`.',
description=('The RDF parser was unable to parse the '
'install.rdf file included with this add-on.',
reason),
filename='install.rdf',
line=line)
return
else:
if install_rdf.rdf is None:
err.error(
err_id=('main', 'test_package', 'cannot_parse_installrdf'),
error='Cannot read `install.rdf`',
description='The install.rdf file could not be parsed.',
filename='install.rdf')
return
else:
err.save_resource('has_install_rdf', True, pushable=True)
err.save_resource('install_rdf', install_rdf, pushable=True)
# Load up the results of the type detection
results = detect_type(err, install_rdf, package)
if results is None:
err.error(
err_id=('main', 'test_package', 'undeterminable_type'),
error='Unable to determine add-on type',
description='The type detection algorithm could not determine '
'the type of the add-on.')
return
else:
err.detected_type = results
# Compare the results of the low-level type detection to
# that of the expectation and the assumption.
if expectation not in (PACKAGE_ANY, results):
err.warning(
err_id=('main', 'test_package', 'extension_type_mismatch'),
warning='Extension Type Mismatch',
description=("We detected that the add-on's type does not match "
'the expected type.',
'Type "%s" expected, found "%s"' %
(types[expectation], types[results])))
def _load_package_json(err, package, expectation):
raw_package_json = package.read('package.json')
try:
package_json = json.loads(raw_package_json)
except ValueError:
err.error(
err_id=('main', 'test_package', 'parse_error'),
error='Could not parse `package.json`.',
description='The JSON parser was unable to parse the '
'package.json file included with this add-on.',
filename='package.json')
else:
err.save_resource('has_package_json', True, pushable=True)
err.save_resource('package_json', package_json, pushable=True)
err.detected_type = PACKAGE_EXTENSION
def _load_manifest_json(err, package, expectation):
raw_manifest_json = package.read('manifest.json')
try:
manifest_json = json.loads(raw_manifest_json)
except ValueError:
err.error(
err_id=('main', 'test_package', 'parse_error'),
error='Could not parse `manifest.json`.',
description='The JSON parser was unable to parse the '
'manifest.json file included with this add-on.',
filename='manifest.json')
else:
err.save_resource('has_manifest_json', True, pushable=True)
err.save_resource('manifest_json', manifest_json, pushable=True)
err.detected_type = PACKAGE_EXTENSION
def populate_chrome_manifest(err, xpi_package):
"Loads the chrome.manifest if it's present"
if 'chrome.manifest' in xpi_package:
chrome_data = xpi_package.read('chrome.manifest')
chrome = ChromeManifest(chrome_data, 'chrome.manifest')
chrome_recursion_buster = set()
# Handle the case of manifests linked from the manifest.
def get_linked_manifest(path, from_path, from_chrome, from_triple):
if path in chrome_recursion_buster:
err.warning(
err_id=('submain', 'populate_chrome_manifest',
'recursion'),
warning='Linked manifest recursion detected.',
description='A chrome registration file links back to '
'itself. This can cause a multitude of '
'issues.',
filename=path)
return
# Make sure the manifest is properly linked
if path not in xpi_package:
err.notice(
err_id=('submain', 'populate_chrome_manifest', 'linkerr'),
notice='Linked manifest could not be found.',
description=('A linked manifest file could not be found '
'in the package.',
'Path: %s' % path),
filename=from_path,
line=from_triple['line'],
context=from_chrome.context)
return
chrome_recursion_buster.add(path)
manifest = ChromeManifest(xpi_package.read(path), path)
for triple in manifest.triples:
yield triple
if triple['subject'] == 'manifest':
subpath = triple['predicate']
# If the path is relative, make it relative to the current
# file.
if not subpath.startswith('/'):
subpath = '%s/%s' % (
'/'.join(path.split('/')[:-1]), subpath)
subpath = subpath.lstrip('/')
for subtriple in get_linked_manifest(
subpath, path, manifest, triple):
yield subtriple
chrome_recursion_buster.discard(path)
chrome_recursion_buster.add('chrome.manifest')
# Search for linked manifests in the base manifest.
for extra_manifest in chrome.get_triples(subject='manifest'):
# When one is found, add its triples to our own.
for triple in get_linked_manifest(extra_manifest['predicate'],
'chrome.manifest', chrome,
extra_manifest):
chrome.triples.append(triple)
chrome_recursion_buster.discard('chrome.manifest')
# Create a reference so we can get the chrome manifest later, but make
# it pushable so we don't run chrome manifests in JAR files.
err.save_resource('chrome.manifest', chrome, pushable=True)
# Create a non-pushable reference for tests that need to access the
# chrome manifest from within JAR files.
err.save_resource('chrome.manifest_nopush', chrome, pushable=False)
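The cycle guard in `get_linked_manifest` above (add the path to a set before descending, discard it afterwards) can be shown on a toy link graph. A minimal sketch, where the `links` mapping and function name are hypothetical stand-ins for the manifest files:

```python
def walk_links(links, start):
    """Yield every node reachable from `start`, stopping on cycles.

    `links` maps a node name to the list of nodes it links to.  The
    `busy` set plays the role of chrome_recursion_buster: a node that
    links back to an ancestor currently being visited is skipped
    instead of recursing forever.
    """
    busy = set()

    def visit(node):
        if node in busy:
            return              # cycle detected: stop descending
        busy.add(node)
        yield node
        for child in links.get(node, []):
            yield from visit(child)
        busy.discard(node)

    yield from visit(start)
```

With `links = {'a': ['b'], 'b': ['a', 'c'], 'c': []}` the `b -> a` back-link is silently skipped and the walk yields `a`, `b`, `c` once each.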
def test_inner_package(err, xpi_package, for_appversions=None):
"Tests a package's inner content."
populate_chrome_manifest(err, xpi_package)
# Iterate through each tier.
for tier in sorted(decorator.get_tiers()):
# Let the error bundler know what tier we're on.
err.set_tier(tier)
# Iterate through each test of our detected type.
for test in decorator.get_tests(tier, err.detected_type):
# Test whether the test is app/version specific.
if test['versions'] is not None:
# If the test's version requirements don't apply to the add-on,
# then skip the test.
if not err.supports_version(test['versions']):
continue
# If the user's version requirements don't apply to the test or
# to the add-on, then skip the test.
if (for_appversions and
not (err._compare_version(requirements=for_appversions,
support=test['versions']) and
err.supports_version(for_appversions))):
continue
# Save the version requirements to the error bundler.
err.version_requirements = test['versions']
test_func = test['test']
if test['simple']:
test_func(err)
else:
# Pass in:
# - Error Bundler
# - A copy of the package itself
test_func(err, xpi_package)
# Return any errors at the end of the tier if undetermined.
if err.failed(fail_on_warnings=False) and not err.determined:
err.unfinished = True
err.discard_unused_messages(ending_tier=tier)
return err
# Return the results.
return err
| validator/submain.py | 15,382 | Loads the chrome.manifest if it's present
Prepares a file-based package for validation.
timeout is the number of seconds before validation is aborted.
If timeout is -1 then no timeout checking code will run.
Tests a package's inner content.
Begins tests for the package.
Tests the package to see if it is a search provider.
Test that the package actually exists. I consider this Tier 0 since we may not even be dealing with a real file. Pop the package extension. Remove timers and signal handlers regardless of whether we've completed successfully or the timer has fired. If we're not expecting a search provider, warn the user and stop testing it like a search provider. Is this a search provider? Load up a new instance of an XPI. install.rdf? | package.json? | manifest.json? | error | use-file Yes | No | No | No | install.rdf Yes | Yes | No | No | install.rdf Yes | No | Yes | No | install.rdf No | No | Yes | No | manifest.json No | No | No | Yes | install.rdf No | Yes | No | No | package.json No | No | Yes | Yes | install.rdf Die on this one because the file won't open. Die if the zip file is corrupt. Is the user expecting a different package type? Load up the results of the type detection Compare the results of the low-level type detection to that of the expectation and the assumption. Handle the case of manifests linked from the manifest. Make sure the manifest is properly linked If the path is relative, make it relative to the current file. Search for linked manifests in the base manifest. When one is found, add its triples to our own. Create a reference so we can get the chrome manifest later, but make it pushable so we don't run chrome manifests in JAR files. Create a non-pushable reference for tests that need to access the chrome manifest from within JAR files. Iterate through each tier. Let the error bundler know what tier we're on. Iterate through each test of our detected type. Test whether the test is app/version specific. If the test's version requirements don't apply to the add-on, then skip the test. If the user's version requirements don't apply to the test or to the add-on, then skip the test. Save the version requirements to the error bundler. Pass in: - Error Bundler - A copy of the package itself Return any errors at the end of the tier if undetermined. 
Return the results. | 2,570 | en | 0.84549 |
from flask import Flask, request, redirect, render_template, url_for, flash, jsonify
import gridfs, random, uuid, os
from flask_uploads import UploadSet, configure_uploads, IMAGES
from flask_sqlalchemy import SQLAlchemy
from sqlalchemy import desc
from datetime import datetime
app = Flask(__name__,
static_url_path = '',
static_folder = 'static',
template_folder = 'templates')
app.config['SECRET_KEY'] = 'big secrets'
photos = UploadSet('photos', IMAGES)
app.config['UPLOAD_FOLDER'] = 'images_store'
app.config['SQLALCHEMY_TRACK_MODIFICATIONS'] = True
# Database setup
app.config['SQLALCHEMY_DATABASE_URI'] = 'sqlite:///database.db'
db = SQLAlchemy(app)
# SQL form items
class PostItem(db.Model):
id = db.Column(db.Integer, primary_key=True)
name = db.Column(db.String(80), unique=False, nullable=False)
storeItem = db.Column(db.String(80), unique=False, nullable=False)
avalability = db.Column(db.String(80), unique=False, nullable=False)
location = db.Column(db.String(80), unique=False, nullable=False)
#time = db.Column(db.DateTime, nullable=False, default=datetime.utcnow)
    def __repr__(self):
        return '<PostItem %r %r %r %r %r>' % (
            self.id, self.name, self.storeItem, self.avalability, self.location)
db.create_all()
def get_posts():
query = [i.__dict__ for i in PostItem.query.all()]
for item in query:
del item['_sa_instance_state']
return query
# Render webpages
@app.route("/")
def render_index():
return render_template("index.html", posts = get_posts())
@app.route('/about')
def render_about():
return render_template('about.html')
@app.route("/upload/", methods=['GET', 'POST'])
def render_upload():
# Get form data
if request.method == 'POST':
# Check if the form is empty
item = ""
if '--------' == request.form.get('storeItem'):
redirected = redirect(url_for('render_upload'))
flash('Please select store item.')
return redirected
elif 'Other' == request.form.get('storeItem'):
item = request.form.get('Other')
else:
item = request.form.get('storeItem')
if None is request.form.get('radio'):
redirected = redirect(url_for('render_upload'))
flash('Please select an availability option.')
return redirected
if '' == request.form.get('Name'):
redirected = redirect(url_for('render_upload'))
flash('Please enter a name.')
return redirected
if '' == request.form.get('location'):
redirected = redirect(url_for('render_upload'))
flash('Please enter a location.')
return redirected
if '' == request.form.get('store'):
redirected = redirect(url_for('render_upload'))
flash('Please enter a store.')
return redirected
if 'photo' not in request.files:
redirected = redirect(url_for('render_upload'))
flash('Please upload a photo.')
return redirected
file = request.files['photo']
if '' == file.filename:
redirected = redirect(url_for('render_upload'))
flash('No photo selected')
return redirected
locationStr = request.form.get('location') + '-' + request.form.get('store')
# Save to database
post = PostItem(name = request.form.get('Name'), storeItem = item, avalability = request.form.get('radio'), location = locationStr)
db.session.add(post)
db.session.commit()
# Save the photo in the upload folder
photo = request.files['photo']
path = os.path.join(app.config['UPLOAD_FOLDER'], str(post.id))
photo.save(path)
# Print test
print(str(post.id) + post.storeItem + post.avalability)
return redirect(url_for('render_index'))
return render_template('upload.html')
if __name__ == '__main__':
app.run('0.0.0.0', 3000)
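The run of near-identical empty-field checks in `render_upload` can be expressed data-driven: one table of required fields, one loop. A minimal sketch; the field names and messages mirror the form above, but the helper itself is illustrative, not part of the app:

```python
# Map each required form field to the flash message shown when it is empty.
REQUIRED_FIELDS = {
    'Name': 'Please enter a name.',
    'location': 'Please enter a location.',
    'store': 'Please enter a store.',
}

def first_form_error(form):
    """Return the first validation message for `form`, or None if valid."""
    for field, message in REQUIRED_FIELDS.items():
        if not form.get(field):
            return message
    return None
```

In the view this collapses the repeated `if '' == request.form.get(...)` blocks into a single `error = first_form_error(request.form)` followed by one flash-and-redirect branch.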
| app.py | 3,986 | Database setup SQL form itemstime = db.Column(db.DateTime, nullable=False, default=datetime.utcnow) Render webpages Get form data Check if the form is empty Save to database Save the photo in the upload folder Print test | 220 | en | 0.527687 |
#!/usr/bin/env python
# -*- coding: utf-8 -*-
"""Script to parse BSM event auditing files."""
import argparse
import logging
import sys
from dtformats import bsm
from dtformats import output_writers
def Main():
"""The main program function.
Returns:
bool: True if successful or False if not.
"""
argument_parser = argparse.ArgumentParser(description=(
'Extracts information from BSM event auditing files.'))
argument_parser.add_argument(
'-d', '--debug', dest='debug', action='store_true', default=False,
help='enable debug output.')
argument_parser.add_argument(
'source', nargs='?', action='store', metavar='PATH',
default=None, help='path of the BSM event auditing file.')
options = argument_parser.parse_args()
if not options.source:
print('Source file missing.')
print('')
argument_parser.print_help()
print('')
return False
logging.basicConfig(
level=logging.INFO, format='[%(levelname)s] %(message)s')
output_writer = output_writers.StdoutWriter()
try:
output_writer.Open()
except IOError as exception:
print('Unable to open output writer with error: {0!s}'.format(exception))
print('')
return False
log_file = bsm.BSMEventAuditingFile(
debug=options.debug, output_writer=output_writer)
log_file.Open(options.source)
print('BSM event auditing information:')
print('')
log_file.Close()
output_writer.Close()
return True
if __name__ == '__main__':
if not Main():
sys.exit(1)
else:
sys.exit(0)
| scripts/bsm.py | 1,550 | The main program function.
Returns:
bool: True if successful or False if not.
Script to parse BSM event auditing files.
!/usr/bin/env python -*- coding: utf-8 -*- | 166 | en | 0.535807 |
#!/usr/bin/env python
# -*- coding: utf8 -*-
# *****************************************************************
# ** PTS -- Python Toolkit for working with SKIRT **
# ** © Astronomical Observatory, Ghent University **
# *****************************************************************
## \package pts.magic.tools.masks Contains functions for dealing with two-dimensional masks.
# -----------------------------------------------------------------
# Ensure Python 3 functionality
from __future__ import absolute_import, division, print_function
# Import standard modules
import numpy as np
# Import the relevant PTS classes and modules
from . import regions
# -----------------------------------------------------------------
def annuli_around(region, inner_factor, outer_factor, header, x_size, y_size):
"""
This function ...
:param region:
:param inner_factor:
:param outer_factor:
:param header:
:param x_size:
:param y_size:
:return:
"""
# Create new regions for the background estimation around the stars
inner_region = regions.expand(region, inner_factor)
outer_region = regions.expand(region, outer_factor)
# Create inner and outer masks
inner_mask = regions.create_mask(inner_region, header, x_size, y_size)
outer_mask = regions.create_mask(outer_region, header, x_size, y_size)
# Create the mask
mask = inner_mask | np.logical_not(outer_mask)
# Return the mask
return mask
# -----------------------------------------------------------------
def masked_outside(region, header, x_size, y_size, expand_factor=1.0):
"""
This function ...
:param region:
:param header:
:param x_size:
:param y_size:
:param expand_factor:
:return:
"""
# Create a new region ...
region = regions.expand(region, factor=expand_factor)
# Create a mask from the region
mask = np.logical_not(regions.create_mask(region, header, x_size, y_size))
# Return the mask
return mask
# -----------------------------------------------------------------
def create_disk_mask(x_size, y_size, x_center, y_center, radius):
"""
This function ...
:param x_size:
:param y_size:
:param x_center:
:param y_center:
:param radius:
:return:
"""
# Calculate which pixels should be masked
y,x = np.ogrid[-y_center:y_size-y_center, -x_center:x_size-x_center]
mask = x*x + y*y <= radius*radius
# Return the mask
return mask
# -----------------------------------------------------------------
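The `np.ogrid` trick in `create_disk_mask` (two open grids broadcast into a full boolean array) can be checked on a tiny grid. A self-contained sketch; the 5x5 size and radius are arbitrary test values:

```python
import numpy as np

def disk_mask(x_size, y_size, x_center, y_center, radius):
    # Open grids of shape (y_size, 1) and (1, x_size) broadcast to a
    # full (y_size, x_size) boolean mask without materialising a meshgrid.
    y, x = np.ogrid[-y_center:y_size - y_center, -x_center:x_size - x_center]
    return x * x + y * y <= radius * radius

mask = disk_mask(5, 5, 2, 2, 1)
# The radius-1 disk centred on (2, 2) covers the centre pixel and its
# four axis-aligned neighbours: five True entries in total.
```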
#def union(*args): # i wanted to do it this way, but didn't succeed ...
def union(mask_a, mask_b):
"""
    This function returns the union of two masks ...
    :param mask_a:
    :param mask_b:
:return:
"""
return mask_a + mask_b
# -----------------------------------------------------------------
#def intersection(*args): i wanted to do it this way, but didn't succeed ...
def intersection(mask_a, mask_b):
"""
This function ...
:param args:
:return:
"""
return mask_a * mask_b
# -----------------------------------------------------------------
def overlap(mask_a, mask_b):
"""
This function ...
:param mask_a:
:param mask_b:
:return:
"""
return np.any(intersection(mask_a, mask_b))
# -----------------------------------------------------------------
def split_overlap(base_mask, test_mask, return_segments=False):
"""
This function takes all blobs in the base_mask and checks whether they overlap with the test_mask.
The function returns two new masks, one mask with all the blobs that overlapped, and another with the blobs
that did not overlap.
:param base_mask:
:param test_mask:
:return:
"""
overlapping = np.zeros_like(base_mask, dtype=bool)
not_overlapping = np.copy(base_mask)
from photutils import detect_sources
segments = detect_sources(base_mask.astype('float'), 0.5, 1).data
overlap = intersection(segments, test_mask)
# Check which indices are present in the overlap map
possible = np.array(range(1, np.max(overlap) + 1))
present = np.in1d(possible, overlap)
indices = possible[present]
overlapping_segments = np.zeros_like(base_mask, dtype=int)
not_overlapping_segments = np.copy(segments)
# Remove the galaxies from the segmentation map
for index in indices:
blob = segments == index
overlapping[blob] = True
not_overlapping[blob] = False
overlapping_segments[blob] = index
not_overlapping_segments[blob] = 0
if return_segments: return overlapping, not_overlapping, overlapping_segments, not_overlapping_segments
else: return overlapping, not_overlapping
# -----------------------------------------------------------------
| CAAPR/CAAPR_AstroMagic/PTS/pts/magic/tools/masks.py | 4,809 | This function ...
:param region:
:param inner_factor:
:param outer_factor:
:param header:
:param x_size:
:param y_size:
:return:
This function ...
:param x_size:
:param y_size:
:param x_center:
:param y_center:
:param radius:
:return:
This function ...
:param args:
:return:
This function ...
:param region:
:param header:
:param x_size:
:param y_size:
:param expand_factor:
:return:
This function ...
:param mask_a:
:param mask_b:
:return:
This function takes all blobs in the base_mask and checks whether they overlap with the test_mask.
The function returns two new masks, one mask with all the blobs that overlapped, and another with the blobs
that did not overlap.
:param base_mask:
:param test_mask:
:return:
This function ...
:param args:
:return:
!/usr/bin/env python -*- coding: utf8 -*- ***************************************************************** ** PTS -- Python Toolkit for working with SKIRT ** ** © Astronomical Observatory, Ghent University ** ***************************************************************** \package pts.magic.tools.masks Contains functions for dealing with two-dimensional masks. ----------------------------------------------------------------- Ensure Python 3 functionality Import standard modules Import the relevant PTS classes and modules ----------------------------------------------------------------- Create new regions for the background estimation around the stars Create inner and outer masks Create the mask Return the mask ----------------------------------------------------------------- Create a new region ... Create a mask from the region Return the mask ----------------------------------------------------------------- Calculate which pixels should be masked Return the mask -----------------------------------------------------------------def union(*args): i wanted to do it this way, but didn't succeed ... -----------------------------------------------------------------def intersection(*args): i wanted to do it this way, but didn't succeed ... ----------------------------------------------------------------- ----------------------------------------------------------------- Check which indices are present in the overlap map Remove the galaxies from the segmentation map ----------------------------------------------------------------- | 2,337 | en | 0.534278 |
import sys, math
import numpy as np
import Box2D
from Box2D.b2 import (edgeShape, circleShape, fixtureDef, polygonShape, revoluteJointDef, contactListener)
import gym
from gym import spaces
from gym.utils import colorize, seeding
# This is simple 4-joints walker robot environment.
#
# There are two versions:
#
# - Normal, with slightly uneven terrain.
#
# - Hardcore with ladders, stumps, pitfalls.
#
# Reward is given for moving forward, total 300+ points up to the far end. If the robot falls,
# it gets -100. Applying motor torque costs a small amount of points, more optimal agent
# will get better score.
#
# Heuristic is provided for testing, it's also useful to get demonstrations to
# learn from. To run heuristic:
#
# python gym/envs/box2d/bipedal_walker.py
#
# State consists of hull angle speed, angular velocity, horizontal speed, vertical speed,
# position of joints and joints angular speed, legs contact with ground, and 10 lidar
# rangefinder measurements to help to deal with the hardcore version. There's no coordinates
# in the state vector. Lidar is less useful in normal version, but it works.
#
# To solve the game you need to get 300 points in 1600 time steps.
#
# To solve hardcore version you need 300 points in 2000 time steps.
#
# Created by Oleg Klimov. Licensed on the same terms as the rest of OpenAI Gym.
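The reset/step interaction described above is the standard Gym loop. A self-contained sketch against a stub environment (so it runs without Box2D installed); `StubEnv`, its horizon, and the constant reward are illustrative stand-ins, with only the 24-dim state and 4-dim action taken from the description above:

```python
import numpy as np

class StubEnv:
    """Tiny stand-in exposing the same reset()/step() surface as a Gym env."""
    def __init__(self, horizon=5):
        self.horizon = horizon
        self.t = 0

    def reset(self):
        self.t = 0
        return np.zeros(24)          # 24-dim observation, as described above

    def step(self, action):
        self.t += 1
        done = self.t >= self.horizon
        return np.zeros(24), 1.0, done, {}   # placeholder reward of 1 per step

def rollout(env, policy):
    """Run one episode, returning the total reward."""
    obs, total, done = env.reset(), 0.0, False
    while not done:
        obs, reward, done, _ = env.step(policy(obs))
        total += reward
    return total
```

The same `rollout` loop drives the real walker once `env = BipedalWalker()` is substituted and `policy` maps the 24-dim state to a 4-dim torque action.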
FPS = 50
SCALE = 30.0 # affects how fast-paced the game is, forces should be adjusted as well
MOTORS_TORQUE = 80
SPEED_HIP = 4
SPEED_KNEE = 6
LIDAR_RANGE = 160/SCALE
INITIAL_RANDOM = 5
HULL_POLY =[
(-30,+9), (+6,+9), (+34,+1),
(+34,-8), (-30,-8)
]
LEG_DOWN = -8/SCALE
LEG_W, LEG_H = 8/SCALE, 34/SCALE
VIEWPORT_W = 600
VIEWPORT_H = 400
TERRAIN_STEP = 14/SCALE
TERRAIN_LENGTH = 200 # in steps
TERRAIN_HEIGHT = VIEWPORT_H/SCALE/4
TERRAIN_GRASS = 10 # how long are grass spots, in steps
TERRAIN_STARTPAD = 20 # in steps
FRICTION = 2.5
HULL_FD = fixtureDef(
shape=polygonShape(vertices=[ (x/SCALE,y/SCALE) for x,y in HULL_POLY ]),
density=5.0,
friction=0.1,
categoryBits=0x0020,
maskBits=0x001, # collide only with ground
restitution=0.0) # 0.99 bouncy
LEG_FD = fixtureDef(
shape=polygonShape(box=(LEG_W/2, LEG_H/2)),
density=1.0,
restitution=0.0,
categoryBits=0x0020,
maskBits=0x001)
LOWER_FD = fixtureDef(
shape=polygonShape(box=(0.8*LEG_W/2, LEG_H/2)),
density=1.0,
restitution=0.0,
categoryBits=0x0020,
maskBits=0x001)
class ContactDetector(contactListener):
def __init__(self, env):
contactListener.__init__(self)
self.env = env
def BeginContact(self, contact):
if self.env.hull==contact.fixtureA.body or self.env.hull==contact.fixtureB.body:
self.env.game_over = True
for leg in [self.env.legs[1], self.env.legs[3]]:
if leg in [contact.fixtureA.body, contact.fixtureB.body]:
leg.ground_contact = True
def EndContact(self, contact):
for leg in [self.env.legs[1], self.env.legs[3]]:
if leg in [contact.fixtureA.body, contact.fixtureB.body]:
leg.ground_contact = False
class BipedalWalker(gym.Env):
metadata = {
'render.modes': ['human', 'rgb_array'],
'video.frames_per_second' : FPS
}
hardcore = False
def __init__(self):
self.seed()
self.viewer = None
self.world = Box2D.b2World()
self.terrain = None
self.hull = None
self.prev_shaping = None
self.fd_polygon = fixtureDef(
shape = polygonShape(vertices=
[(0, 0),
(1, 0),
(1, -1),
(0, -1)]),
friction = FRICTION)
self.fd_edge = fixtureDef(
shape = edgeShape(vertices=
[(0, 0),
(1, 1)]),
friction = FRICTION,
categoryBits=0x0001,
)
self.reset()
high = np.array([np.inf]*24)
self.action_space = spaces.Box(np.array([-1,-1,-1,-1]), np.array([+1,+1,+1,+1]))
self.observation_space = spaces.Box(-high, high)
def seed(self, seed=None):
self.np_random, seed = seeding.np_random(seed)
return [seed]
def _destroy(self):
if not self.terrain: return
self.world.contactListener = None
for t in self.terrain:
self.world.DestroyBody(t)
self.terrain = []
self.world.DestroyBody(self.hull)
self.hull = None
for leg in self.legs:
self.world.DestroyBody(leg)
self.legs = []
self.joints = []
def _generate_terrain(self, hardcore):
GRASS, STUMP, STAIRS, PIT, _STATES_ = range(5)
state = GRASS
velocity = 0.0
y = TERRAIN_HEIGHT
counter = TERRAIN_STARTPAD
oneshot = False
self.terrain = []
self.terrain_x = []
self.terrain_y = []
for i in range(TERRAIN_LENGTH):
x = i*TERRAIN_STEP
self.terrain_x.append(x)
if state==GRASS and not oneshot:
velocity = 0.8*velocity + 0.01*np.sign(TERRAIN_HEIGHT - y)
if i > TERRAIN_STARTPAD: velocity += self.np_random.uniform(-1, 1)/SCALE #1
y += velocity
elif state==PIT and oneshot:
counter = self.np_random.randint(3, 5)
poly = [
(x, y),
(x+TERRAIN_STEP, y),
(x+TERRAIN_STEP, y-4*TERRAIN_STEP),
(x, y-4*TERRAIN_STEP),
]
self.fd_polygon.shape.vertices=poly
t = self.world.CreateStaticBody(
fixtures = self.fd_polygon)
t.color1, t.color2 = (1,1,1), (0.6,0.6,0.6)
self.terrain.append(t)
self.fd_polygon.shape.vertices=[(p[0]+TERRAIN_STEP*counter,p[1]) for p in poly]
t = self.world.CreateStaticBody(
fixtures = self.fd_polygon)
t.color1, t.color2 = (1,1,1), (0.6,0.6,0.6)
self.terrain.append(t)
counter += 2
original_y = y
elif state==PIT and not oneshot:
y = original_y
if counter > 1:
y -= 4*TERRAIN_STEP
elif state==STUMP and oneshot:
counter = self.np_random.randint(1, 3)
poly = [
(x, y),
(x+counter*TERRAIN_STEP, y),
(x+counter*TERRAIN_STEP, y+counter*TERRAIN_STEP),
(x, y+counter*TERRAIN_STEP),
]
self.fd_polygon.shape.vertices=poly
t = self.world.CreateStaticBody(
fixtures = self.fd_polygon)
t.color1, t.color2 = (1,1,1), (0.6,0.6,0.6)
self.terrain.append(t)
elif state==STAIRS and oneshot:
stair_height = +1 if self.np_random.rand() > 0.5 else -1
stair_width = self.np_random.randint(4, 5)
stair_steps = self.np_random.randint(3, 5)
original_y = y
for s in range(stair_steps):
poly = [
(x+( s*stair_width)*TERRAIN_STEP, y+( s*stair_height)*TERRAIN_STEP),
(x+((1+s)*stair_width)*TERRAIN_STEP, y+( s*stair_height)*TERRAIN_STEP),
(x+((1+s)*stair_width)*TERRAIN_STEP, y+(-1+s*stair_height)*TERRAIN_STEP),
(x+( s*stair_width)*TERRAIN_STEP, y+(-1+s*stair_height)*TERRAIN_STEP),
]
self.fd_polygon.shape.vertices=poly
t = self.world.CreateStaticBody(
fixtures = self.fd_polygon)
t.color1, t.color2 = (1,1,1), (0.6,0.6,0.6)
self.terrain.append(t)
counter = stair_steps*stair_width
elif state==STAIRS and not oneshot:
s = stair_steps*stair_width - counter - stair_height
n = s/stair_width
y = original_y + (n*stair_height)*TERRAIN_STEP
oneshot = False
self.terrain_y.append(y)
counter -= 1
if counter==0:
counter = self.np_random.randint(TERRAIN_GRASS/2, TERRAIN_GRASS)
if state==GRASS and hardcore:
state = self.np_random.randint(1, _STATES_)
oneshot = True
else:
state = GRASS
oneshot = True
self.terrain_poly = []
for i in range(TERRAIN_LENGTH-1):
poly = [
(self.terrain_x[i], self.terrain_y[i]),
(self.terrain_x[i+1], self.terrain_y[i+1])
]
self.fd_edge.shape.vertices=poly
t = self.world.CreateStaticBody(
fixtures = self.fd_edge)
color = (0.3, 1.0 if i%2==0 else 0.8, 0.3)
t.color1 = color
t.color2 = color
self.terrain.append(t)
color = (0.4, 0.6, 0.3)
poly += [ (poly[1][0], 0), (poly[0][0], 0) ]
self.terrain_poly.append( (poly, color) )
self.terrain.reverse()
def _generate_clouds(self):
# Sorry for the clouds, couldn't resist
self.cloud_poly = []
for i in range(TERRAIN_LENGTH//20):
x = self.np_random.uniform(0, TERRAIN_LENGTH)*TERRAIN_STEP
y = VIEWPORT_H/SCALE*3/4
poly = [
(x+15*TERRAIN_STEP*math.sin(3.14*2*a/5)+self.np_random.uniform(0,5*TERRAIN_STEP),
y+ 5*TERRAIN_STEP*math.cos(3.14*2*a/5)+self.np_random.uniform(0,5*TERRAIN_STEP) )
for a in range(5) ]
x1 = min( [p[0] for p in poly] )
x2 = max( [p[0] for p in poly] )
self.cloud_poly.append( (poly,x1,x2) )
def reset(self):
self._destroy()
self.world.contactListener_bug_workaround = ContactDetector(self)
self.world.contactListener = self.world.contactListener_bug_workaround
self.game_over = False
self.prev_shaping = None
self.scroll = 0.0
self.lidar_render = 0
W = VIEWPORT_W/SCALE
H = VIEWPORT_H/SCALE
self._generate_terrain(self.hardcore)
self._generate_clouds()
init_x = TERRAIN_STEP*TERRAIN_STARTPAD/2
init_y = TERRAIN_HEIGHT+2*LEG_H
self.hull = self.world.CreateDynamicBody(
position = (init_x, init_y),
fixtures = HULL_FD
)
self.hull.color1 = (0.5,0.4,0.9)
self.hull.color2 = (0.3,0.3,0.5)
self.hull.ApplyForceToCenter((self.np_random.uniform(-INITIAL_RANDOM, INITIAL_RANDOM), 0), True)
self.legs = []
self.joints = []
for i in [-1,+1]:
leg = self.world.CreateDynamicBody(
position = (init_x, init_y - LEG_H/2 - LEG_DOWN),
angle = (i*0.05),
fixtures = LEG_FD
)
leg.color1 = (0.6-i/10., 0.3-i/10., 0.5-i/10.)
leg.color2 = (0.4-i/10., 0.2-i/10., 0.3-i/10.)
rjd = revoluteJointDef(
bodyA=self.hull,
bodyB=leg,
localAnchorA=(0, LEG_DOWN),
localAnchorB=(0, LEG_H/2),
enableMotor=True,
enableLimit=True,
maxMotorTorque=MOTORS_TORQUE,
motorSpeed = i,
lowerAngle = -0.8,
upperAngle = 1.1,
)
self.legs.append(leg)
self.joints.append(self.world.CreateJoint(rjd))
lower = self.world.CreateDynamicBody(
position = (init_x, init_y - LEG_H*3/2 - LEG_DOWN),
angle = (i*0.05),
fixtures = LOWER_FD
)
lower.color1 = (0.6-i/10., 0.3-i/10., 0.5-i/10.)
lower.color2 = (0.4-i/10., 0.2-i/10., 0.3-i/10.)
rjd = revoluteJointDef(
bodyA=leg,
bodyB=lower,
localAnchorA=(0, -LEG_H/2),
localAnchorB=(0, LEG_H/2),
enableMotor=True,
enableLimit=True,
maxMotorTorque=MOTORS_TORQUE,
motorSpeed = 1,
lowerAngle = -1.6,
upperAngle = -0.1,
)
lower.ground_contact = False
self.legs.append(lower)
self.joints.append(self.world.CreateJoint(rjd))
self.drawlist = self.terrain + self.legs + [self.hull]
class LidarCallback(Box2D.b2.rayCastCallback):
def ReportFixture(self, fixture, point, normal, fraction):
if (fixture.filterData.categoryBits & 1) == 0:
return 1
self.p2 = point
self.fraction = fraction
return 0
self.lidar = [LidarCallback() for _ in range(10)]
return self.step(np.array([0,0,0,0]))[0]
def step(self, action):
#self.hull.ApplyForceToCenter((0, 20), True) -- Uncomment this to receive a bit of stability help
control_speed = False # Should be easier as well
if control_speed:
self.joints[0].motorSpeed = float(SPEED_HIP * np.clip(action[0], -1, 1))
self.joints[1].motorSpeed = float(SPEED_KNEE * np.clip(action[1], -1, 1))
self.joints[2].motorSpeed = float(SPEED_HIP * np.clip(action[2], -1, 1))
self.joints[3].motorSpeed = float(SPEED_KNEE * np.clip(action[3], -1, 1))
else:
self.joints[0].motorSpeed = float(SPEED_HIP * np.sign(action[0]))
self.joints[0].maxMotorTorque = float(MOTORS_TORQUE * np.clip(np.abs(action[0]), 0, 1))
self.joints[1].motorSpeed = float(SPEED_KNEE * np.sign(action[1]))
self.joints[1].maxMotorTorque = float(MOTORS_TORQUE * np.clip(np.abs(action[1]), 0, 1))
self.joints[2].motorSpeed = float(SPEED_HIP * np.sign(action[2]))
self.joints[2].maxMotorTorque = float(MOTORS_TORQUE * np.clip(np.abs(action[2]), 0, 1))
self.joints[3].motorSpeed = float(SPEED_KNEE * np.sign(action[3]))
self.joints[3].maxMotorTorque = float(MOTORS_TORQUE * np.clip(np.abs(action[3]), 0, 1))
self.world.Step(1.0/FPS, 6*30, 2*30)
pos = self.hull.position
vel = self.hull.linearVelocity
for i in range(10):
self.lidar[i].fraction = 1.0
self.lidar[i].p1 = pos
self.lidar[i].p2 = (
pos[0] + math.sin(1.5*i/10.0)*LIDAR_RANGE,
pos[1] - math.cos(1.5*i/10.0)*LIDAR_RANGE)
self.world.RayCast(self.lidar[i], self.lidar[i].p1, self.lidar[i].p2)
state = [
            self.hull.angle,        # Normal angles up to 0.5 here, but more is possible.
2.0*self.hull.angularVelocity/FPS,
0.3*vel.x*(VIEWPORT_W/SCALE)/FPS, # Normalized to get -1..1 range
0.3*vel.y*(VIEWPORT_H/SCALE)/FPS,
            self.joints[0].angle,   # This will give 1.1 on high up, but it's still OK (and there should be spikes on hitting the ground, that's normal too)
self.joints[0].speed / SPEED_HIP,
self.joints[1].angle + 1.0,
self.joints[1].speed / SPEED_KNEE,
1.0 if self.legs[1].ground_contact else 0.0,
self.joints[2].angle,
self.joints[2].speed / SPEED_HIP,
self.joints[3].angle + 1.0,
self.joints[3].speed / SPEED_KNEE,
1.0 if self.legs[3].ground_contact else 0.0
]
state += [l.fraction for l in self.lidar]
assert len(state)==24
self.scroll = pos.x - VIEWPORT_W/SCALE/5
shaping = 130*pos[0]/SCALE # moving forward is a way to receive reward (normalized to get 300 on completion)
shaping -= 5.0*abs(state[0]) # keep head straight, other than that and falling, any behavior is unpunished
reward = 0
if self.prev_shaping is not None:
reward = shaping - self.prev_shaping
self.prev_shaping = shaping
for a in action:
reward -= 0.00035 * MOTORS_TORQUE * np.clip(np.abs(a), 0, 1)
# normalized to about -50.0 using heuristic, more optimal agent should spend less
done = False
if self.game_over or pos[0] < 0:
reward = -100
done = True
if pos[0] > (TERRAIN_LENGTH-TERRAIN_GRASS)*TERRAIN_STEP:
done = True
return np.array(state), reward, done, {}
def render(self, mode='human'):
from gym.envs.classic_control import rendering
if self.viewer is None:
self.viewer = rendering.Viewer(VIEWPORT_W, VIEWPORT_H)
self.viewer.set_bounds(self.scroll, VIEWPORT_W/SCALE + self.scroll, 0, VIEWPORT_H/SCALE)
self.viewer.draw_polygon( [
(self.scroll, 0),
(self.scroll+VIEWPORT_W/SCALE, 0),
(self.scroll+VIEWPORT_W/SCALE, VIEWPORT_H/SCALE),
(self.scroll, VIEWPORT_H/SCALE),
], color=(0.9, 0.9, 1.0) )
for poly,x1,x2 in self.cloud_poly:
if x2 < self.scroll/2: continue
if x1 > self.scroll/2 + VIEWPORT_W/SCALE: continue
self.viewer.draw_polygon( [(p[0]+self.scroll/2, p[1]) for p in poly], color=(1,1,1))
for poly, color in self.terrain_poly:
if poly[1][0] < self.scroll: continue
if poly[0][0] > self.scroll + VIEWPORT_W/SCALE: continue
self.viewer.draw_polygon(poly, color=color)
self.lidar_render = (self.lidar_render+1) % 100
i = self.lidar_render
if i < 2*len(self.lidar):
l = self.lidar[i] if i < len(self.lidar) else self.lidar[len(self.lidar)-i-1]
self.viewer.draw_polyline( [l.p1, l.p2], color=(1,0,0), linewidth=1 )
for obj in self.drawlist:
for f in obj.fixtures:
trans = f.body.transform
if type(f.shape) is circleShape:
t = rendering.Transform(translation=trans*f.shape.pos)
self.viewer.draw_circle(f.shape.radius, 30, color=obj.color1).add_attr(t)
self.viewer.draw_circle(f.shape.radius, 30, color=obj.color2, filled=False, linewidth=2).add_attr(t)
else:
path = [trans*v for v in f.shape.vertices]
self.viewer.draw_polygon(path, color=obj.color1)
path.append(path[0])
self.viewer.draw_polyline(path, color=obj.color2, linewidth=2)
flagy1 = TERRAIN_HEIGHT
flagy2 = flagy1 + 50/SCALE
x = TERRAIN_STEP*3
self.viewer.draw_polyline( [(x, flagy1), (x, flagy2)], color=(0,0,0), linewidth=2 )
f = [(x, flagy2), (x, flagy2-10/SCALE), (x+25/SCALE, flagy2-5/SCALE)]
self.viewer.draw_polygon(f, color=(0.9,0.2,0) )
self.viewer.draw_polyline(f + [f[0]], color=(0,0,0), linewidth=2 )
return self.viewer.render(return_rgb_array = mode=='rgb_array')
def close(self):
if self.viewer is not None:
self.viewer.close()
self.viewer = None
class BipedalWalkerHardcore(BipedalWalker):
hardcore = True
if __name__=="__main__":
    # Heuristic: suboptimal, has no notion of balance.
env = BipedalWalker()
env.reset()
steps = 0
total_reward = 0
a = np.array([0.0, 0.0, 0.0, 0.0])
STAY_ON_ONE_LEG, PUT_OTHER_DOWN, PUSH_OFF = 1,2,3
SPEED = 0.29 # Will fall forward on higher speed
state = STAY_ON_ONE_LEG
moving_leg = 0
supporting_leg = 1 - moving_leg
SUPPORT_KNEE_ANGLE = +0.1
supporting_knee_angle = SUPPORT_KNEE_ANGLE
while True:
s, r, done, info = env.step(a)
total_reward += r
if steps % 20 == 0 or done:
print("\naction " + str(["{:+0.2f}".format(x) for x in a]))
print("step {} total_reward {:+0.2f}".format(steps, total_reward))
print("hull " + str(["{:+0.2f}".format(x) for x in s[0:4] ]))
print("leg0 " + str(["{:+0.2f}".format(x) for x in s[4:9] ]))
print("leg1 " + str(["{:+0.2f}".format(x) for x in s[9:14]]))
steps += 1
contact0 = s[8]
contact1 = s[13]
moving_s_base = 4 + 5*moving_leg
supporting_s_base = 4 + 5*supporting_leg
hip_targ = [None,None] # -0.8 .. +1.1
knee_targ = [None,None] # -0.6 .. +0.9
hip_todo = [0.0, 0.0]
knee_todo = [0.0, 0.0]
if state==STAY_ON_ONE_LEG:
hip_targ[moving_leg] = 1.1
knee_targ[moving_leg] = -0.6
supporting_knee_angle += 0.03
if s[2] > SPEED: supporting_knee_angle += 0.03
supporting_knee_angle = min( supporting_knee_angle, SUPPORT_KNEE_ANGLE )
knee_targ[supporting_leg] = supporting_knee_angle
if s[supporting_s_base+0] < 0.10: # supporting leg is behind
state = PUT_OTHER_DOWN
if state==PUT_OTHER_DOWN:
hip_targ[moving_leg] = +0.1
knee_targ[moving_leg] = SUPPORT_KNEE_ANGLE
knee_targ[supporting_leg] = supporting_knee_angle
if s[moving_s_base+4]:
state = PUSH_OFF
supporting_knee_angle = min( s[moving_s_base+2], SUPPORT_KNEE_ANGLE )
if state==PUSH_OFF:
knee_targ[moving_leg] = supporting_knee_angle
knee_targ[supporting_leg] = +1.0
if s[supporting_s_base+2] > 0.88 or s[2] > 1.2*SPEED:
state = STAY_ON_ONE_LEG
moving_leg = 1 - moving_leg
supporting_leg = 1 - moving_leg
if hip_targ[0]: hip_todo[0] = 0.9*(hip_targ[0] - s[4]) - 0.25*s[5]
if hip_targ[1]: hip_todo[1] = 0.9*(hip_targ[1] - s[9]) - 0.25*s[10]
if knee_targ[0]: knee_todo[0] = 4.0*(knee_targ[0] - s[6]) - 0.25*s[7]
if knee_targ[1]: knee_todo[1] = 4.0*(knee_targ[1] - s[11]) - 0.25*s[12]
        hip_todo[0] -= 0.9*(0-s[0]) - 1.5*s[1] # PID to keep head straight
hip_todo[1] -= 0.9*(0-s[0]) - 1.5*s[1]
knee_todo[0] -= 15.0*s[3] # vertical speed, to damp oscillations
knee_todo[1] -= 15.0*s[3]
a[0] = hip_todo[0]
a[1] = knee_todo[0]
a[2] = hip_todo[1]
a[3] = knee_todo[1]
a = np.clip(0.5*a, -1.0, 1.0)
env.render()
if done: break
| mbbl_envs/env/gym_env/box2d/walker.py | 23,069 | This is simple 4-joints walker robot environment. There are two versions: - Normal, with slightly uneven terrain. - Hardcore with ladders, stumps, pitfalls. Reward is given for moving forward, total 300+ points up to the far end. If the robot falls, it gets -100. Applying motor torque costs a small amount of points, more optimal agent will get better score. Heuristic is provided for testing, it's also useful to get demonstrations to learn from. To run heuristic: python gym/envs/box2d/bipedal_walker.py State consists of hull angle speed, angular velocity, horizontal speed, vertical speed, position of joints and joints angular speed, legs contact with ground, and 10 lidar rangefinder measurements to help to deal with the hardcore version. There's no coordinates in the state vector. Lidar is less useful in normal version, but it works. To solve the game you need to get 300 points in 1600 time steps. To solve hardcore version you need 300 points in 2000 time steps. Created by Oleg Klimov. Licensed on the same terms as the rest of OpenAI Gym. | 2,009 | en | 0.917774 |
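The walker heuristic above drives each joint with a simple proportional-derivative rule, e.g. `0.9*(hip_targ - angle) - 0.25*speed` for the hips, and then clips the scaled action to [-1, 1] before `env.step()`. A minimal standalone sketch of that update (the function name `pd_torque` and the default gains are illustrative, lifted from the hip terms; the knee terms use the same shape with a larger proportional gain of 4.0):

```python
# Sketch of the PD update used by the walker heuristic:
# torque = kp*(target - angle) - kd*angular_speed, clipped to [-1, 1].
# kp/kd defaults mirror the hip gains (0.9 and 0.25) in the loop above.

def pd_torque(target, angle, speed, kp=0.9, kd=0.25):
    """One proportional-derivative step toward a joint-angle target."""
    raw = kp * (target - angle) - kd * speed
    # clamp like the heuristic's np.clip(...) on the action vector
    return max(-1.0, min(1.0, raw))
```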
# vim: tabstop=4 shiftwidth=4 softtabstop=4
# Copyright 2010 United States Government as represented by the
# Administrator of the National Aeronautics and Space Administration.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""
Handles all requests relating to volumes.
"""
import functools
from eventlet import greenthread
from cinder import exception
from cinder import flags
from cinder.openstack.common import cfg
from cinder.image import glance
from cinder.openstack.common import log as logging
from cinder.openstack.common import rpc
import cinder.policy
from cinder.openstack.common import timeutils
from cinder import quota
from cinder.db import base
volume_host_opt = cfg.BoolOpt('snapshot_same_host',
default=True,
help='Create volume from snapshot at the host where snapshot resides')
FLAGS = flags.FLAGS
FLAGS.register_opt(volume_host_opt)
flags.DECLARE('storage_availability_zone', 'cinder.volume.manager')
LOG = logging.getLogger(__name__)
GB = 1048576 * 1024
def wrap_check_policy(func):
"""Check policy corresponding to the wrapped methods prior to execution
This decorator requires the first 3 args of the wrapped function
to be (self, context, volume)
"""
@functools.wraps(func)
def wrapped(self, context, target_obj, *args, **kwargs):
check_policy(context, func.__name__, target_obj)
return func(self, context, target_obj, *args, **kwargs)
return wrapped
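A self-contained sketch of how this decorator pattern behaves; `enforce`, `CALLS`, and the `Demo` class below are stand-ins for illustration only (the real check is `cinder.policy.enforce`). `functools.wraps` keeps the wrapped method's name and docstring intact, so introspection and stacked decorators still see `delete` rather than `wrapped`:

```python
import functools

CALLS = []  # records the policy actions that were checked

def enforce(context, action, target):
    # stand-in for cinder.policy.enforce; a real backend may raise on denial
    CALLS.append(action)

def wrap_check(func):
    @functools.wraps(func)  # preserve func.__name__ and __doc__
    def wrapped(self, context, target_obj, *args, **kwargs):
        # derive the policy action from the method name, as above
        enforce(context, 'volume:%s' % func.__name__, target_obj)
        return func(self, context, target_obj, *args, **kwargs)
    return wrapped

class Demo(object):
    @wrap_check
    def delete(self, context, volume):
        return 'deleted %s' % volume['id']
```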
def check_policy(context, action, target_obj=None):
target = {
'project_id': context.project_id,
'user_id': context.user_id,
}
target.update(target_obj or {})
_action = 'volume:%s' % action
cinder.policy.enforce(context, _action, target)
class API(base.Base):
"""API for interacting with the volume manager."""
def __init__(self, db_driver=None, image_service=None):
self.image_service = (image_service or
glance.get_default_image_service())
super(API, self).__init__(db_driver)
def create(self, context, size, name, description, snapshot=None,
image_id=None, volume_type=None, metadata=None,
availability_zone=None):
check_policy(context, 'create')
if snapshot is not None:
if snapshot['status'] != "available":
msg = _("status must be available")
raise exception.InvalidSnapshot(reason=msg)
if not size:
size = snapshot['volume_size']
snapshot_id = snapshot['id']
else:
snapshot_id = None
def as_int(s):
try:
return int(s)
except ValueError:
return s
# tolerate size as stringified int
size = as_int(size)
if not isinstance(size, int) or size <= 0:
msg = (_("Volume size '%s' must be an integer and greater than 0")
% size)
raise exception.InvalidInput(reason=msg)
if quota.allowed_volumes(context, 1, size) < 1:
pid = context.project_id
LOG.warn(_("Quota exceeded for %(pid)s, tried to create"
" %(size)sG volume") % locals())
raise exception.QuotaError(code="VolumeSizeTooLarge")
if image_id:
# check image existence
image_meta = self.image_service.show(context, image_id)
image_size_in_gb = int(image_meta['size']) / GB
            # check image size is not larger than volume size
if image_size_in_gb > size:
msg = _('Size of specified image is larger than volume size.')
raise exception.InvalidInput(reason=msg)
if availability_zone is None:
availability_zone = FLAGS.storage_availability_zone
if volume_type is None:
volume_type_id = None
else:
volume_type_id = volume_type.get('id', None)
options = {
'size': size,
'user_id': context.user_id,
'project_id': context.project_id,
'snapshot_id': snapshot_id,
'availability_zone': availability_zone,
'status': "creating",
'attach_status': "detached",
'display_name': name,
'display_description': description,
'volume_type_id': volume_type_id,
'metadata': metadata,
}
volume = self.db.volume_create(context, options)
rpc.cast(context,
FLAGS.scheduler_topic,
{"method": "create_volume",
"args": {"topic": FLAGS.volume_topic,
"volume_id": volume['id'],
"snapshot_id": volume['snapshot_id'],
"image_id": image_id}})
return volume
def _cast_create_volume(self, context, volume_id, snapshot_id):
        # NOTE(Rongze Zhu): It is a simple solution for bug 1008866.
        # If snapshot_id is set, make the create-volume call directly to
        # the volume host where the snapshot resides instead of passing it
        # through the scheduler, so the snapshot can be copied to the new
        # volume.
if snapshot_id and FLAGS.snapshot_same_host:
snapshot_ref = self.db.snapshot_get(context, snapshot_id)
src_volume_ref = self.db.volume_get(context,
snapshot_ref['volume_id'])
topic = rpc.queue_get_for(context,
FLAGS.volume_topic,
src_volume_ref['host'])
rpc.cast(context,
topic,
{"method": "create_volume",
"args": {"volume_id": volume_id,
"snapshot_id": snapshot_id}})
else:
rpc.cast(context,
FLAGS.scheduler_topic,
{"method": "create_volume",
"args": {"topic": FLAGS.volume_topic,
"volume_id": volume_id,
"snapshot_id": snapshot_id}})
@wrap_check_policy
def delete(self, context, volume):
volume_id = volume['id']
if not volume['host']:
# NOTE(vish): scheduling failed, so delete it
self.db.volume_destroy(context, volume_id)
return
if volume['status'] not in ["available", "error"]:
msg = _("Volume status must be available or error")
raise exception.InvalidVolume(reason=msg)
snapshots = self.db.snapshot_get_all_for_volume(context, volume_id)
if len(snapshots):
msg = _("Volume still has %d dependent snapshots") % len(snapshots)
raise exception.InvalidVolume(reason=msg)
now = timeutils.utcnow()
self.db.volume_update(context, volume_id, {'status': 'deleting',
'terminated_at': now})
host = volume['host']
rpc.cast(context,
rpc.queue_get_for(context, FLAGS.volume_topic, host),
{"method": "delete_volume",
"args": {"volume_id": volume_id}})
@wrap_check_policy
def update(self, context, volume, fields):
self.db.volume_update(context, volume['id'], fields)
def get(self, context, volume_id):
rv = self.db.volume_get(context, volume_id)
volume = dict(rv.iteritems())
check_policy(context, 'get', volume)
return volume
def get_all(self, context, search_opts=None):
check_policy(context, 'get_all')
if search_opts is None:
search_opts = {}
if (context.is_admin and 'all_tenants' in search_opts):
# Need to remove all_tenants to pass the filtering below.
del search_opts['all_tenants']
volumes = self.db.volume_get_all(context)
else:
volumes = self.db.volume_get_all_by_project(context,
context.project_id)
if search_opts:
LOG.debug(_("Searching by: %s") % str(search_opts))
def _check_metadata_match(volume, searchdict):
volume_metadata = {}
for i in volume.get('volume_metadata'):
volume_metadata[i['key']] = i['value']
for k, v in searchdict.iteritems():
if (k not in volume_metadata.keys() or
volume_metadata[k] != v):
return False
return True
# search_option to filter_name mapping.
filter_mapping = {'metadata': _check_metadata_match}
result = []
for volume in volumes:
# go over all filters in the list
for opt, values in search_opts.iteritems():
try:
filter_func = filter_mapping[opt]
except KeyError:
# no such filter - ignore it, go to next filter
continue
else:
if filter_func(volume, values):
result.append(volume)
break
volumes = result
return volumes
def get_snapshot(self, context, snapshot_id):
check_policy(context, 'get_snapshot')
rv = self.db.snapshot_get(context, snapshot_id)
return dict(rv.iteritems())
def get_all_snapshots(self, context, search_opts=None):
check_policy(context, 'get_all_snapshots')
search_opts = search_opts or {}
if (context.is_admin and 'all_tenants' in search_opts):
# Need to remove all_tenants to pass the filtering below.
del search_opts['all_tenants']
return self.db.snapshot_get_all(context)
else:
return self.db.snapshot_get_all_by_project(context,
context.project_id)
@wrap_check_policy
def check_attach(self, context, volume):
# TODO(vish): abstract status checking?
if volume['status'] != "available":
msg = _("status must be available")
raise exception.InvalidVolume(reason=msg)
if volume['attach_status'] == "attached":
msg = _("already attached")
raise exception.InvalidVolume(reason=msg)
@wrap_check_policy
def check_detach(self, context, volume):
# TODO(vish): abstract status checking?
if volume['status'] == "available":
msg = _("already detached")
raise exception.InvalidVolume(reason=msg)
def remove_from_compute(self, context, volume, instance_id, host):
"""Remove volume from specified compute host."""
rpc.call(context,
rpc.queue_get_for(context, FLAGS.compute_topic, host),
{"method": "remove_volume_connection",
"args": {'instance_id': instance_id,
'volume_id': volume['id']}})
@wrap_check_policy
def reserve_volume(self, context, volume):
self.update(context, volume, {"status": "attaching"})
@wrap_check_policy
def unreserve_volume(self, context, volume):
if volume['status'] == "attaching":
self.update(context, volume, {"status": "available"})
@wrap_check_policy
def attach(self, context, volume, instance_uuid, mountpoint):
host = volume['host']
queue = rpc.queue_get_for(context, FLAGS.volume_topic, host)
return rpc.call(context, queue,
{"method": "attach_volume",
"args": {"volume_id": volume['id'],
"instance_uuid": instance_uuid,
"mountpoint": mountpoint}})
@wrap_check_policy
def detach(self, context, volume):
host = volume['host']
queue = rpc.queue_get_for(context, FLAGS.volume_topic, host)
return rpc.call(context, queue,
{"method": "detach_volume",
"args": {"volume_id": volume['id']}})
@wrap_check_policy
def initialize_connection(self, context, volume, connector):
host = volume['host']
queue = rpc.queue_get_for(context, FLAGS.volume_topic, host)
return rpc.call(context, queue,
{"method": "initialize_connection",
"args": {"volume_id": volume['id'],
"connector": connector}})
@wrap_check_policy
def terminate_connection(self, context, volume, connector):
self.unreserve_volume(context, volume)
host = volume['host']
queue = rpc.queue_get_for(context, FLAGS.volume_topic, host)
return rpc.call(context, queue,
{"method": "terminate_connection",
"args": {"volume_id": volume['id'],
"connector": connector}})
def _create_snapshot(self, context, volume, name, description,
force=False):
check_policy(context, 'create_snapshot', volume)
if ((not force) and (volume['status'] != "available")):
msg = _("must be available")
raise exception.InvalidVolume(reason=msg)
options = {
'volume_id': volume['id'],
'user_id': context.user_id,
'project_id': context.project_id,
'status': "creating",
'progress': '0%',
'volume_size': volume['size'],
'display_name': name,
'display_description': description}
snapshot = self.db.snapshot_create(context, options)
host = volume['host']
rpc.cast(context,
rpc.queue_get_for(context, FLAGS.volume_topic, host),
{"method": "create_snapshot",
"args": {"volume_id": volume['id'],
"snapshot_id": snapshot['id']}})
return snapshot
def create_snapshot(self, context, volume, name, description):
return self._create_snapshot(context, volume, name, description,
False)
def create_snapshot_force(self, context, volume, name, description):
return self._create_snapshot(context, volume, name, description,
True)
@wrap_check_policy
def delete_snapshot(self, context, snapshot):
if snapshot['status'] not in ["available", "error"]:
msg = _("Volume Snapshot status must be available or error")
raise exception.InvalidVolume(reason=msg)
self.db.snapshot_update(context, snapshot['id'],
{'status': 'deleting'})
volume = self.db.volume_get(context, snapshot['volume_id'])
host = volume['host']
rpc.cast(context,
rpc.queue_get_for(context, FLAGS.volume_topic, host),
{"method": "delete_snapshot",
"args": {"snapshot_id": snapshot['id']}})
@wrap_check_policy
def get_volume_metadata(self, context, volume):
"""Get all metadata associated with a volume."""
rv = self.db.volume_metadata_get(context, volume['id'])
return dict(rv.iteritems())
@wrap_check_policy
def delete_volume_metadata(self, context, volume, key):
"""Delete the given metadata item from an volume."""
self.db.volume_metadata_delete(context, volume['id'], key)
@wrap_check_policy
def update_volume_metadata(self, context, volume, metadata, delete=False):
"""Updates or creates volume metadata.
If delete is True, metadata items that are not specified in the
`metadata` argument will be deleted.
"""
if delete:
_metadata = metadata
else:
            # pass the full volume dict; get_volume_metadata reads volume['id']
            _metadata = self.get_volume_metadata(context, volume)
_metadata.update(metadata)
self.db.volume_metadata_update(context, volume['id'], _metadata, True)
return _metadata
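A pure-Python sketch of the merge semantics documented above (no database involved; `merge_metadata` is an illustrative name, not part of the API): with `delete=True` the new dict replaces the old metadata wholesale, otherwise the new keys are layered over the existing ones.

```python
def merge_metadata(existing, updates, delete=False):
    """Mirror of update_volume_metadata's delete/merge behaviour."""
    if delete:
        # replacement mode: unspecified keys are dropped
        return dict(updates)
    # merge mode: start from existing metadata, overlay updates
    merged = dict(existing)
    merged.update(updates)
    return merged
```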
def get_volume_metadata_value(self, volume, key):
"""Get value of particular metadata key."""
metadata = volume.get('volume_metadata')
if metadata:
for i in volume['volume_metadata']:
if i['key'] == key:
return i['value']
return None
def _check_volume_availability(self, context, volume, force):
"""Check if the volume can be used."""
if volume['status'] not in ['available', 'in-use']:
msg = _('Volume status must be available/in-use.')
raise exception.InvalidVolume(reason=msg)
if not force and 'in-use' == volume['status']:
msg = _('Volume status is in-use.')
raise exception.InvalidVolume(reason=msg)
@wrap_check_policy
def copy_volume_to_image(self, context, volume, metadata, force):
"""Create a new image from the specified volume."""
self._check_volume_availability(context, volume, force)
recv_metadata = self.image_service.create(context, metadata)
self.update(context, volume, {'status': 'uploading'})
rpc.cast(context,
rpc.queue_get_for(context,
FLAGS.volume_topic,
volume['host']),
{"method": "copy_volume_to_image",
"args": {"volume_id": volume['id'],
"image_id": recv_metadata['id']}})
response = {"id": volume['id'],
"updated_at": volume['updated_at'],
"status": 'uploading',
"display_description": volume['display_description'],
"size": volume['size'],
"volume_type": volume['volume_type'],
"image_id": recv_metadata['id'],
"container_format": recv_metadata['container_format'],
"disk_format": recv_metadata['disk_format'],
"image_name": recv_metadata.get('name', None)
}
return response
| cinder/volume/api.py | 18,767 | Handles all requests relating to volumes. API for interacting with the volume manager. | 2,082 | en | 0.797932 |
"""
See notebook 5 for example use of show_mri_sample()
"""
import glob
import os
import random
import numpy as np
import pandas as pd
import torch
from torch.utils.data import Dataset
import matplotlib.pyplot as plt
import cv2
import scipy.ndimage as ndimage
def make_bg_transparent(im, bg_th=0.0, set_to_color=None):
# create transparency alpha channel
# convert image to RGBA
if len(im.shape) == 3:
alpha_c = (np.sum(im[:,:,:],axis=2) > bg_th).astype(im.dtype)
c1,c2,c3 = cv2.split(im)
else:
alpha_c = (im[:,:] > bg_th).astype(im.dtype)
c1,c2,c3 = im.copy(), im.copy(), im.copy()
if set_to_color is not None:
zeros = np.zeros_like(c1)
if set_to_color == 'green':
merged = np.stack([zeros,c2,zeros,alpha_c], axis=-1)
elif set_to_color == 'red':
merged = np.stack([c1,zeros,zeros,alpha_c], axis=-1)
        elif set_to_color == 'royalblue':
            # keep the blue channel; the original duplicated the 'red' branch
            merged = np.stack([zeros,zeros,c3,alpha_c], axis=-1)
elif set_to_color == 'violet':
merged = np.stack([c1,zeros,c3,alpha_c], axis=-1)
elif set_to_color == 'yellow':
merged = np.stack([c1,c2,zeros,alpha_c], axis=-1)
else:
merged = np.stack([c1,c2,c3,alpha_c], axis=-1)
return merged
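The transparency trick above boils down to: a pixel's alpha is 1 wherever its intensity exceeds `bg_th`, and 0 otherwise. A numpy-free, one-row sketch of that mask (`alpha_mask` is our name for it, not from the source):

```python
def alpha_mask(row, bg_th=0.0):
    """1 where the pixel is foreground (> bg_th), else 0 (transparent)."""
    return [1 if v > bg_th else 0 for v in row]
```

In the real function the same comparison is applied to the channel-summed image array in one vectorized step, then stacked as a fourth RGBA channel.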
def to_3d_points(im, th=1e-6, downsample=5):
xs,ys,ds = [],[],[]
if len(im.shape) == 4:
im3d = np.sum(im,axis=3)
else:
im3d = im
depth,width,height = im3d.shape
step_vol = downsample**3
for x in range(0, width - downsample, downsample):
for y in range(0, height - downsample, downsample):
for d in range(0, depth - downsample, downsample):
if (np.sum(im3d[d:d+downsample, x:x+downsample, y:y+downsample]) / step_vol) > th:
xs.append(x + (downsample//2))
ys.append(y + (downsample//2))
ds.append(d + (downsample//2))
return np.array(xs), np.array(ys), np.array(ds)
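The voxel loop above can be read as a one-dimensional recipe: split the axis into `downsample`-sized bins, keep a bin when its mean occupancy exceeds `th`, and report the bin centre. A sketch of that recipe in 1-D (the function name is illustrative; the real code applies it over x, y and depth simultaneously):

```python
def occupied_bins(values, step, th=1e-6):
    """Return centre indices of step-sized bins whose mean exceeds th."""
    centres = []
    # same bounds as the voxel loop: range(0, length - step, step)
    for i in range(0, len(values) - step, step):
        if sum(values[i:i + step]) / step > th:
            centres.append(i + step // 2)  # bin centre, as in the 3-D loop
    return centres
```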
def adjust_saturation(img, sat_scale=0.3):
hsv_im = cv2.cvtColor((img * 255).astype(np.uint8), cv2.COLOR_RGB2HSV)
(h, s, v) = cv2.split(hsv_im)
s = s*sat_scale
s = np.clip(s,0,255)
hsv_im = np.stack([h,s,v],axis=2).astype(np.uint8)
return cv2.cvtColor(hsv_im, cv2.COLOR_HSV2RGB) / 255.
def show_mri_sample(sample, pred_mask=None, pred_lbl=None, seg_downsample=None, save_fn=None):
""" Plot sample in three projections """
plt.close('all')
alpha=0.5
image_alpha=1.0
ims = sample['image'].numpy()
means = sample['mean'].numpy()
stds = sample['std'].numpy()
segs = sample['segmentation'].numpy() if 'segmentation' in sample else None
# add batch dims if missing
if ims.ndim == 4:
ims = np.expand_dims(ims, 0)
means = np.expand_dims(means, 0)
stds = np.expand_dims(stds, 0)
if segs is not None:
segs = np.expand_dims(segs, 0)
n_images = len(ims)
n_root = int(np.ceil(np.sqrt(n_images)))
n_cols = n_root * 2
n_rows = n_root * 2
    # special-case fix to get the correct grid layout with a small batch size
if n_images == 2:
n_rows = 2
fig_scale = 2
f = plt.figure(figsize=(fig_scale*n_cols,fig_scale*n_rows))
# Read additional meta from batch
brats_ids = [sample['BraTSID']] if n_images == 1 else sample['BraTSID']
labels = None
if 'label' in sample:
labels = [sample['label']] if n_images == 1 else sample['label']
def _subplot_index(index, row_off, col_off):
startrow = (index * 2)//n_cols
startcol = (index * 2)%n_cols
return (2*startrow+row_off)*n_cols + (startcol + col_off) + 1
for index in range(n_images):
im = ims[index]
        seg = segs[index] if segs is not None else None
        if seg is not None:
            seg = np.swapaxes(seg, 0,3)
            # upsample seg back to original size if it has been downsampled
            if seg_downsample is not None:
                seg = seg.repeat(seg_downsample, axis=0).repeat(seg_downsample, axis=1).repeat(seg_downsample, axis=2)
# Normalize images for visualization
im = np.swapaxes(im, 0,3) # swap depth and chan axes
im = (im * stds[index]) + means[index]
title = f'BraTSID: {brats_ids[index]}'
if labels is not None:
title += f', GT-MGMT:{labels[index]}'
if pred_lbl is not None:
title += f'\nPred-MGMT:{float(pred_lbl[index][0]):.3f}'
d,x,y,c = im.shape
coronal_ax = f.add_subplot(n_rows,n_cols, _subplot_index(index,0,0))
coronal_ax.set_title(title + ' - coronal', fontsize=8)
coronal_ax.imshow(make_bg_transparent(adjust_saturation(im[::-1,x//2,:,:])), alpha=image_alpha)
sagittal_ax = f.add_subplot(n_rows,n_cols,_subplot_index(index,0,1))
sagittal_ax.set_title(title + ' - sagittal', fontsize=8)
sagittal_ax.get_yaxis().set_visible(False)
sagittal_ax.imshow(make_bg_transparent(adjust_saturation(im[::-1,:,y//2,:])), alpha=image_alpha)
axial_ax = f.add_subplot(n_rows,n_cols,_subplot_index(index,1,0))
axial_ax.set_title(title + ' - axial', fontsize=8)
axial_ax.imshow(make_bg_transparent(adjust_saturation(im[d//2,:,:,:])), alpha=image_alpha)
proj_ax = f.add_subplot(n_rows, n_cols, _subplot_index(index,1,1), projection='3d')
proj_ax.scatter(*to_3d_points(im), color='gray', alpha=0.015, s=5, depthshade=False)
proj_ax.set_title(f'Green=GT-tumor, Red=Pred-tumor\n{title}', fontsize=6)
proj_ax.set_xticks([])
proj_ax.set_yticks([])
proj_ax.set_zticks([])
if seg is not None:
for seg_chan, color in zip(range(seg.shape[3]),['green']):
coronal_ax.imshow(make_bg_transparent(seg[::-1,x//2,:,seg_chan], set_to_color=color), alpha=alpha)
sagittal_ax.imshow(make_bg_transparent(seg[::-1,:,y//2,seg_chan], set_to_color=color), alpha=alpha)
axial_ax.imshow(make_bg_transparent(seg[d//2,:,:,seg_chan], set_to_color=color), alpha=alpha)
proj_ax.scatter(*to_3d_points(seg[:,:,:,seg_chan]), color=color, s=5, alpha=0.05)
if pred_mask is not None:
pred = np.swapaxes(pred_mask[index].cpu().numpy(), 0,3)
pred = np.clip(pred, 0, 1.)
# upsample seg back to original size if it has been downsampled
if seg_downsample is not None:
pred = pred.repeat(seg_downsample, axis=0).repeat(seg_downsample, axis=1).repeat(seg_downsample, axis=2)
for seg_chan, color in zip(range(pred.shape[3]),['red']):
coronal_ax.imshow(make_bg_transparent(pred[::-1,x//2,:, seg_chan], set_to_color=color, bg_th=0.5), alpha=alpha)
sagittal_ax.imshow(make_bg_transparent(pred[::-1,:,y//2, seg_chan], set_to_color=color, bg_th=0.5), alpha=alpha)
axial_ax.imshow(make_bg_transparent(pred[d//2,:,:, seg_chan], set_to_color=color, bg_th=0.5), alpha=alpha)
proj_ax.scatter(*to_3d_points(pred[:,:,:,seg_chan], th=0.5), color=color, s=5, alpha=0.05)
# draw axial lines
coronal_ax.plot([0,x-1],[d//2,d//2],'--',color='white', linewidth=1) # coronal horizontal
coronal_ax.plot([x//2,x//2],[0,d-1],'--',color='white', linewidth=1) # coronal vertical
sagittal_ax.plot([0,y-1],[d//2,d//2],'--',color='white', linewidth=1) # sagittal horizontal
sagittal_ax.plot([y//2,y//2],[0,d-1],'--',color='white', linewidth=1) # sagittal vertical
axial_ax.plot([0,y-1],[x//2,x//2],'--',color='white', linewidth=1) # axial horizontal
axial_ax.plot([x//2,x//2],[0,y-1],'--',color='white', linewidth=1) # axial vertical
plt.subplots_adjust(left=0.00,top=1.,right=1.,bottom=0.00, wspace=0.15, hspace=0.15)
bbox = f.get_window_extent().transformed(f.dpi_scale_trans.inverted())
width, height = bbox.width*f.dpi, bbox.height*f.dpi
width *= 1.05
height *= 1.05
#if n_images == 2:
# n_rows = 2
for row in range(0, n_rows,2):
if n_images == 2 and row > 0:
break
for col in range(0, n_cols,2):
different_color = (row//2) % 2 == (col//2) % 2
color = (1,1,1) if different_color else (0.8,0.8,0.8)
f.patches.extend([
plt.Rectangle(
(width * col / n_cols, height * (n_rows - row - 2) / n_rows),
width / max(1,n_cols//2),
height / max(1,n_rows//2),
fill=True,
color=color,
zorder=-1, # below axes
alpha=0.5,
transform=None,
figure=f)
])
if save_fn is not None:
plt.savefig(save_fn, transparent=False)
else:
plt.show() | src/seg_model_utils/visualization.py | 8,945 | Plot sample in three projections
See notebook 5 for example use of show_mri_sample()
create transparency alpha channel convert image to RGBA add batch dims if missing special case fix to get with correct with small bs Read additional meta from batch upsample seg back to original size if it has been downsampled Normalize images for visualization swap depth and chan axes upsample seg back to original size if it has been downsampled draw axial lines coronal horizontal coronal vertical sagittal horizontal sagittal vertical axial horizontal axial verticalif n_images == 2: n_rows = 2 below axes | 601 | en | 0.806733 |
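The `make_bg_transparent` helper above builds an RGBA image by thresholding the per-pixel channel sum into an alpha mask. A minimal NumPy-only sketch of that thresholding idea (the function name and sample values here are illustrative, not from the source):

```python
import numpy as np

def add_alpha_from_threshold(im, bg_th=0.1):
    # Pixels whose channel sum exceeds bg_th become opaque (alpha = 1),
    # everything else transparent (alpha = 0) -- the same mask that
    # make_bg_transparent builds before stacking channels.
    alpha = (im.sum(axis=2) > bg_th).astype(im.dtype)
    return np.dstack([im, alpha])

rgb = np.zeros((2, 2, 3), dtype=np.float32)
rgb[0, 0] = (0.5, 0.2, 0.1)            # one bright foreground pixel
rgba = add_alpha_from_threshold(rgb)
print(rgba.shape)                      # (2, 2, 4)
print(rgba[0, 0, 3], rgba[1, 1, 3])    # 1.0 0.0
```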
import glob
import pandas as pd
import os
import datetime
class DataMerge:
def __init__(self, directory):
self.directory = directory
self.__data = self.get_data_from(self.directory)
def date_to_int(self, dates):
"""
calculates number of days between 01/01/0001 and each date in dates
date has format '%m/%d/%Y'
:param dates: Pandas Series
:return: list
"""
ret = []
for date in dates:
date0 = datetime.datetime(year=1, month=1, day=1)
datex = datetime.datetime.strptime(date, '%m/%d/%Y')
ret.append((datex - date0).days)
return ret
def get_data_from(self, dir):
files = glob.glob(f'{dir}/*')
        if not files:
            raise FileNotFoundError(f'directory {dir} does not contain any .csv file')
data = None
for file in files:
if file == f'{dir}/merged_data.csv':
continue
if data is None:
data = pd.read_csv(file)
continue
            temp_data = pd.read_csv(file)
            temp_data = temp_data.dropna(axis=1)
            data = pd.concat([data, temp_data])
        data = data.drop_duplicates()
data = data.sort_values('Date', ascending=False, key=self.date_to_int)
data = data[: 408]
data.to_csv(f"{dir}/merged_data.csv", index=False)
return data
def get_data(self):
return self.__data | backend/data_merge.py | 1,449 | calculates number of days between 01/01/0001 and each date in dates
date has format '%m/%d/%Y'
:param dates: Pandas Series
:return: list | 137 | en | 0.799051 |
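`DataMerge.date_to_int` above sorts rows chronologically by mapping each `'%m/%d/%Y'` string to a day count since 01/01/0001. A single-date sketch of that conversion (the helper name is illustrative):

```python
import datetime

def days_since_year_one(date_str):
    # Mirrors DataMerge.date_to_int for one date: the number of days
    # between 01/01/0001 and the given '%m/%d/%Y' date.
    date0 = datetime.datetime(year=1, month=1, day=1)
    datex = datetime.datetime.strptime(date_str, '%m/%d/%Y')
    return (datex - date0).days

print(days_since_year_one('01/01/0001'))   # 0
print(days_since_year_one('01/02/0001'))   # 1
```

Because later dates map to strictly larger integers, passing this as a sort key orders dates chronologically.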
import jax.numpy as np
import matplotlib.pyplot as plt
def plot(vi, X,
target='vanishing',
n=1000, scale=1.5, x_max=1.0, y_max=1.0,
z_func=lambda x_, y_: 0.0,
show=False, splitshow=False):
nvars = X.shape[-1]
if nvars == 2:
_plot2d(vi, X, target=target,
n=n, scale=scale, x_max=x_max, y_max=y_max,
show=show, splitshow=splitshow)
elif nvars == 3:
_plot3d(vi, X, z_func, target=target,
n=n, scale=scale, x_max=x_max, y_max=y_max,
show=show, splitshow=splitshow)
else:
print(f'Cannot plot {nvars}-variate polynomials')
def _plot2d(vi, X, target='vanishing', n=1000, scale=1.5, x_max=1.0, y_max=1.0, show=False, splitshow=False):
## set plot range
m = np.mean(X, axis=0)
x_max = y_max = np.max(np.abs(X))
# x = np.arange(-scale*x_max, scale*x_max, resolution)
# y = np.arange(-scale*y_max, scale*y_max, resolution)
x = np.linspace(-scale*x_max, scale*x_max, 50)
y = np.linspace(-scale*y_max, scale*y_max, 50)
Z1, Z2 = np.meshgrid(x, y)
## set plot setting
npolys = 0
if target == 'vanishing':
# npolys = sum([Gt.shape[-1] for Gt in vi.basis.vanishings()])
npolys = sum([Bt.n_vanishings() for Bt in vi.basis])
# npolys = sum([len(Gt) for Gt in vi.basis.vanishings()])
elif target == 'nonvanishing':
npolys = sum([Bt.n_nonvanishings() for Bt in vi.basis])
colors = plt.cm.Dark2(np.linspace(0,1,8))
linestyles = ['solid','dashed','dashdot', 'dotted']
nfigs = min(npolys, n)
for i in range(nfigs):
f = lambda x_, y_: vi.evaluate(np.array([[x_,y_]]), target=target)[0,i]
f = np.vectorize(f)
plt.contour(Z1,Z2,f(Z1, Z2), levels=[0], colors=[colors[i%len(colors)]], linewidths=[1.], linestyles=[linestyles[i%4]])
if splitshow:
plt.plot(X[:,0], X[:,1], 'o', mfc='none', alpha=0.8)
plt.gca().set_aspect('equal', adjustable='box')
plt.show()
if not splitshow:
plt.plot(X[:,0], X[:,1], 'o', mfc='none', alpha=0.8)
plt.gca().set_aspect('equal', adjustable='box')
# plt.savefig('graph_Z.pdf')
if not splitshow and show:
plt.show()
def _plot3d(vi, X, z_func, target='vanishing', n=1000, scale=1.5, x_max=1.0, y_max=1.0, show=False, splitshow=False):
## set plot range
m = np.mean(X, axis=0)
x_max = y_max = np.max(np.abs(X))
x = np.linspace(-scale*x_max, scale*x_max, 50)
y = np.linspace(-scale*y_max, scale*y_max, 50)
Z1, Z2 = np.meshgrid(x, y)
## set plot setting
npolys = 0
if target == 'vanishing':
npolys = sum([np.asarray(Gt).shape[-1] for Gt in vi.basis.vanishings()])
# npolys = sum([len(Gt) for Gt in vi.basis.vanishings()])
elif target == 'nonvanishing':
npolys = sum([np.asarray(Ft).shape[-1] for Ft in vi.basis.nonvanishings()])
else:
print('unknown target: %s' % target)
colors = plt.cm.Dark2(np.linspace(0,1,8))
linestyles = ['solid','dashed','dashdot', 'dotted']
nfigs = min(npolys, n)
for i in range(nfigs):
f = lambda x_, y_: vi.evaluate(np.array([[x_,y_, z_func(x_,y_)]]), target=target)[0,i]
f = np.vectorize(f)
plt.contour(Z1,Z2,f(Z1, Z2), levels=[0], colors=[colors[i%len(colors)]], linewidths=[1.], linestyles=[linestyles[i%4]])
if splitshow:
plt.plot(X[:,0], X[:,1], 'o', mfc='none', alpha=0.8)
plt.gca().set_aspect('equal', adjustable='box')
plt.show()
if not splitshow:
plt.plot(X[:,0], X[:,1], 'o', mfc='none', alpha=0.8)
plt.gca().set_aspect('equal', adjustable='box')
# plt.savefig('graph_Z.pdf')
if not splitshow and show:
plt.show()
| mavi/jax/util/plot.py | 3,803 | set plot range x = np.arange(-scale*x_max, scale*x_max, resolution) y = np.arange(-scale*y_max, scale*y_max, resolution) set plot setting npolys = sum([Gt.shape[-1] for Gt in vi.basis.vanishings()]) npolys = sum([len(Gt) for Gt in vi.basis.vanishings()]) plt.savefig('graph_Z.pdf') set plot range set plot setting npolys = sum([len(Gt) for Gt in vi.basis.vanishings()]) plt.savefig('graph_Z.pdf') | 413 | en | 0.337409 |
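`_plot2d` and `_plot3d` above evaluate each polynomial on a `np.meshgrid` through `np.vectorize` and pass the grid of values to `plt.contour` with `levels=[0]`. A matplotlib-free sketch of that grid-evaluation step, using a made-up polynomial `x**2 + y**2 - 1` in place of `vi.evaluate`:

```python
import numpy as np

# Build the same kind of grid _plot2d builds before contouring.
x = np.linspace(-1.5, 1.5, 5)
y = np.linspace(-1.5, 1.5, 5)
Z1, Z2 = np.meshgrid(x, y)

# Vectorize a scalar polynomial and evaluate it over the grid; the zero
# level set of `vals` is what plt.contour(..., levels=[0]) would draw.
f = np.vectorize(lambda x_, y_: x_**2 + y_**2 - 1.0)
vals = f(Z1, Z2)
print(vals.shape)      # (5, 5)
print(vals[2, 2])      # -1.0  (grid center: x = y = 0)
```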
"""Code generation utilities"""
from .utils import SchemaInfo, is_valid_identifier, indent_docstring, indent_arglist
import textwrap
import re
class CodeSnippet(object):
"""Object whose repr() is a string of code"""
def __init__(self, code):
self.code = code
def __repr__(self):
return self.code
def _get_args(info):
"""Return the list of args & kwds for building the __init__ function"""
# TODO: - set additional properties correctly
# - handle patternProperties etc.
required = set()
kwds = set()
invalid_kwds = set()
# TODO: specialize for anyOf/oneOf?
if info.is_allOf():
# recursively call function on all children
arginfo = [_get_args(child) for child in info.allOf]
nonkeyword = all(args[0] for args in arginfo)
required = set.union(set(), *(args[1] for args in arginfo))
kwds = set.union(set(), *(args[2] for args in arginfo))
kwds -= required
invalid_kwds = set.union(set(), *(args[3] for args in arginfo))
additional = all(args[4] for args in arginfo)
elif info.is_empty() or info.is_compound():
nonkeyword = True
additional = True
elif info.is_value():
nonkeyword = True
        additional = False
elif info.is_object():
invalid_kwds = ({p for p in info.required if not is_valid_identifier(p)} |
{p for p in info.properties if not is_valid_identifier(p)})
required = {p for p in info.required if is_valid_identifier(p)}
kwds = {p for p in info.properties if is_valid_identifier(p)}
kwds -= required
nonkeyword = False
additional = True
#additional = info.additionalProperties or info.patternProperties
else:
raise ValueError("Schema object not understood")
return (nonkeyword, required, kwds, invalid_kwds, additional)
class SchemaGenerator(object):
"""Class that defines methods for generating code from schemas
Parameters
----------
classname : string
The name of the class to generate
schema : dict
The dictionary defining the schema class
rootschema : dict (optional)
The root schema for the class
basename : string (default: "SchemaBase")
The name of the base class to use in the class definition
schemarepr : CodeSnippet or object, optional
An object whose repr will be used in the place of the explicit schema.
This can be useful, for example, when the generated code should reference
a predefined schema object. The user must ensure that the schema within
the evaluated code is identical to the schema used to generate the code.
rootschemarepr : CodeSnippet or object, optional
An object whose repr will be used in the place of the explicit root
schema.
"""
schema_class_template = textwrap.dedent('''
class {classname}({basename}):
"""{docstring}"""
_schema = {schema!r}
_rootschema = {rootschema!r}
{init_code}
''')
init_template = textwrap.dedent("""
def __init__({arglist}):
super({classname}, self).__init__({super_arglist})
""").lstrip()
def _process_description(self, description):
return description
def __init__(self, classname, schema, rootschema=None,
basename='SchemaBase', schemarepr=None, rootschemarepr=None,
nodefault=()):
self.classname = classname
self.schema = schema
self.rootschema = rootschema
self.basename = basename
self.schemarepr = schemarepr
self.rootschemarepr = rootschemarepr
self.nodefault = nodefault
def schema_class(self):
"""Generate code for a schema class"""
rootschema = self.rootschema if self.rootschema is not None else self.schema
schemarepr = self.schemarepr if self.schemarepr is not None else self.schema
rootschemarepr = self.rootschemarepr
if rootschemarepr is None:
if rootschema is self.schema:
rootschemarepr = CodeSnippet('_schema')
else:
rootschemarepr = rootschema
return self.schema_class_template.format(
classname=self.classname,
basename=self.basename,
schema=schemarepr,
rootschema=rootschemarepr,
docstring=self.docstring(indent=4),
init_code=self.init_code(indent=4)
)
def docstring(self, indent=0):
# TODO: add a general description at the top, derived from the schema.
# for example, a non-object definition should list valid type, enum
# values, etc.
# TODO: use _get_args here for more information on allOf objects
info = SchemaInfo(self.schema, self.rootschema)
doc = ["{} schema wrapper".format(self.classname),
'',
info.medium_description]
if info.description:
doc += self._process_description( #remove condition description
re.sub(r"\n\{\n(\n|.)*\n\}",'',info.description)).splitlines()
if info.properties:
nonkeyword, required, kwds, invalid_kwds, additional = _get_args(info)
doc += ['',
'Attributes',
'----------',
'']
for prop in sorted(required) + sorted(kwds) + sorted(invalid_kwds):
propinfo = info.properties[prop]
doc += ["{} : {}".format(prop, propinfo.short_description),
" {}".format(self._process_description(propinfo.description))]
if len(doc) > 1:
doc += ['']
return indent_docstring(doc, indent_level=indent, width=100, lstrip=True)
def init_code(self, indent=0):
        """Return code suitable for the __init__ function of a Schema class"""
info = SchemaInfo(self.schema, rootschema=self.rootschema)
        nonkeyword, required, kwds, invalid_kwds, additional = _get_args(info)
        nodefault = set(self.nodefault)
required -= nodefault
kwds -= nodefault
args = ['self']
super_args = []
if nodefault:
args.extend(sorted(nodefault))
elif nonkeyword:
args.append('*args')
super_args.append('*args')
args.extend('{}=Undefined'.format(p)
for p in sorted(required) + sorted(kwds))
super_args.extend('{0}={0}'.format(p)
for p in sorted(nodefault) + sorted(required) + sorted(kwds))
if additional:
args.append('**kwds')
super_args.append('**kwds')
arg_indent_level = 9 + indent
super_arg_indent_level = 23 + len(self.classname) + indent
initfunc = self.init_template.format(classname=self.classname,
arglist=indent_arglist(args, indent_level=arg_indent_level),
super_arglist=indent_arglist(super_args, indent_level=super_arg_indent_level))
if indent:
initfunc = ('\n' + indent * ' ').join(initfunc.splitlines())
return initfunc
| tools/schemapi/codegen.py | 7,247 | Object whose repr() is a string of code
Class that defines methods for generating code from schemas
Parameters
----------
classname : string
The name of the class to generate
schema : dict
The dictionary defining the schema class
rootschema : dict (optional)
The root schema for the class
basename : string (default: "SchemaBase")
The name of the base class to use in the class definition
schemarepr : CodeSnippet or object, optional
An object whose repr will be used in the place of the explicit schema.
This can be useful, for example, when the generated code should reference
a predefined schema object. The user must ensure that the schema within
the evaluated code is identical to the schema used to generate the code.
rootschemarepr : CodeSnippet or object, optional
An object whose repr will be used in the place of the explicit root
schema.
Return the list of args & kwds for building the __init__ function
Return code suitablde for the __init__ function of a Schema class
Generate code for a schema class
Code generation utilities
TODO: - set additional properties correctly - handle patternProperties etc. TODO: specialize for anyOf/oneOf? recursively call function on all childrenadditional = info.additionalProperties or info.patternProperties TODO: add a general description at the top, derived from the schema. for example, a non-object definition should list valid type, enum values, etc. TODO: use _get_args here for more information on allOf objectsremove condition description | 1,555 | en | 0.560673 |
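`SchemaGenerator` above generates class source by filling `textwrap.dedent`-ed templates with `str.format`, using `!r` to embed the schema dict as a literal. A stripped-down sketch of that pattern (the `Point` schema and docstring are invented; `SchemaBase` is the file's default base name):

```python
import textwrap

# Minimal analogue of SchemaGenerator.schema_class_template.
template = textwrap.dedent('''
    class {classname}({basename}):
        """{docstring}"""
        _schema = {schema!r}
    ''')

code = template.format(classname='Point',
                       basename='SchemaBase',
                       docstring='Point schema wrapper',
                       schema={'type': 'object'})
print(code)
```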
# Copyright 2018 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# This module is largely a wrapper around `jaxlib` that performs version
# checking on import.
import jaxlib
_minimum_jaxlib_version = (0, 1, 38)
try:
from jaxlib import version as jaxlib_version
except:
# jaxlib is too old to have version number.
msg = 'This version of jax requires jaxlib version >= {}.'
raise ImportError(msg.format('.'.join(map(str, _minimum_jaxlib_version))))
version = tuple(int(x) for x in jaxlib_version.__version__.split('.'))
# Check the jaxlib version before importing anything else from jaxlib.
def _check_jaxlib_version():
if version < _minimum_jaxlib_version:
msg = 'jaxlib is version {}, but this version of jax requires version {}.'
if version == (0, 1, 23):
msg += ('\n\nA common cause of this error is that you installed jaxlib '
'using pip, but your version of pip is too old to support '
'manylinux2010 wheels. Try running:\n\n'
'pip install --upgrade pip\n'
'pip install --upgrade jax jaxlib\n')
raise ValueError(msg.format('.'.join(map(str, version)),
'.'.join(map(str, _minimum_jaxlib_version))))
_check_jaxlib_version()
try:
from jaxlib import tpu_client # pytype: disable=import-error
except:
tpu_client = None
from jaxlib import xla_client
from jaxlib import lapack
from jaxlib import pytree
from jaxlib import cusolver
try:
from jaxlib import cuda_prng
except ImportError:
cuda_prng = None
| jax/lib/__init__.py | 2,054 | Copyright 2018 Google LLC Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at https://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. This module is largely a wrapper around `jaxlib` that performs version checking on import. jaxlib is too old to have version number. Check the jaxlib version before importing anything else from jaxlib. pytype: disable=import-error | 779 | en | 0.850194 |
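The version check above works because splitting a version string into a tuple of ints compares component-wise under Python's tuple ordering. A small sketch of that comparison (the minimum version is the one hard-coded in the file):

```python
# Parse "major.minor.patch" into an int tuple, as jax/lib does for jaxlib.
def parse_version(v):
    return tuple(int(x) for x in v.split('.'))

minimum = (0, 1, 38)
print(parse_version('0.1.38') >= minimum)                # True
print(parse_version('0.1.23') < minimum)                 # True -> upgrade hint
print(parse_version('0.2.0') > parse_version('0.1.99'))  # True (not lexicographic)
```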
# -*- coding: utf-8 -*-
"""
@author:XuMing(xuming624@qq.com)
@description:
"""
import sys
sys.path.append('..')
from nlpcommon import stopwords
if __name__ == '__main__':
print(len(stopwords), stopwords)
| examples/base_demo.py | 211 | @author:XuMing(xuming624@qq.com)
@description:
-*- coding: utf-8 -*- | 71 | en | 0.394133 |
import wmi
import speedtest_cli
import threading
import signal
import os
import json
def testSpeed(urls):
speedtest_cli.shutdown_event = threading.Event()
signal.signal(signal.SIGINT, speedtest_cli.ctrl_c)
print "Start to test download speed: "
dlspeed = speedtest_cli.downloadSpeed(urls)
dlspeed = (dlspeed / 1000 / 1000)
print('Download: %0.2f M%s/s' % (dlspeed, 'B'))
return dlspeed
def setGateway(wmiObj, gateway):
ip = '192.168.8.84'
subnetmask = '255.255.255.0'
configurations = wmiObj.Win32_NetworkAdapterConfiguration(Description="Realtek PCIe GBE Family Controller", IPEnabled=True)
if len(configurations) == 0:
print "No service available"
return
configuration = configurations[0]
# ret = configuration.EnableStatic(IPAddress=[ip],SubnetMask=[subnetmask])
ret = configuration.SetGateways(DefaultIPGateway=[gateway])
return ret
def checkGatewayStatus(urls):
if not urls:
urls = ["http://www.dynamsoft.com/assets/images/logo-index-dwt.png", "http://www.dynamsoft.com/assets/images/logo-index-dnt.png", "http://www.dynamsoft.com/assets/images/logo-index-ips.png", "http://www.codepool.biz/wp-content/uploads/2015/06/django_dwt.png", "http://www.codepool.biz/wp-content/uploads/2015/07/drag_element.png"]
# Query current gateway
wmiObj = wmi.WMI()
sql = "select IPAddress,DefaultIPGateway from Win32_NetworkAdapterConfiguration where Description=\"Realtek PCIe GBE Family Controller\" and IPEnabled=TRUE"
configurations = wmiObj.query(sql)
currentGateway = None
for configuration in configurations:
currentGateway = configuration.DefaultIPGateway[0]
print "IPAddress:", configuration.IPAddress[0], "DefaultIPGateway:", currentGateway
dlspeed = testSpeed(urls)
bestChoice = (currentGateway, dlspeed)
print "Init choice: " + str(bestChoice)
gateways = ["192.168.8.1", "192.168.8.2"] # define gateways
settingReturn = 0
gateways.remove(currentGateway)
for gateway in gateways:
settingReturn = setGateway(wmiObj, gateway)
if (settingReturn[0] != 0):
print "Setting failed"
return
print "Set gateway: " + gateway
dlspeed = testSpeed(urls)
option = (gateway, dlspeed)
print "Network option: " + str(option)
if (option[1] > bestChoice[1]):
bestChoice = option
print "Best choice: " + str(bestChoice)
setGateway(wmiObj, bestChoice[0])
    try:
        raw_input("Press any key to continue: ")
    except:
        print('Finished')
def readConfigurationFile():
urls = None
config = 'config.json'
if os.path.exists(config):
with open(config) as file:
content = file.read()
try:
config_json = json.loads(content)
urls = config_json['urls']
except:
pass
return urls
def main():
urls = readConfigurationFile()
checkGatewayStatus(urls)
if __name__ == '__main__':
main()
| network.py | 3,056 | ret = configuration.EnableStatic(IPAddress=[ip],SubnetMask=[subnetmask]) Query current gateway define gateways | 110 | en | 0.46891 |
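`checkGatewayStatus` above keeps whichever `(gateway, speed)` pair measured the higher download speed. That comparison loop reduces to a `max()` keyed on the speed component (the addresses and speeds here are made up):

```python
# Measured (gateway, download speed in MB/s) pairs -- illustrative values.
options = [('192.168.8.1', 2.35), ('192.168.8.2', 4.8)]

# Pick the fastest gateway, as the bestChoice loop in checkGatewayStatus does.
best = max(options, key=lambda opt: opt[1])
print(best)   # ('192.168.8.2', 4.8)
```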
#!/usr/bin/env python
"""
Copyright 2017-2018 Fizyr (https://fizyr.com)
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
"""
import argparse
import os
import sys
import cv2
import numpy as np
from keras_retinanet.utils.transform import random_transform_generator
from keras_retinanet.utils.visualization import draw_annotations, draw_boxes, draw_caption
from keras_retinanet.utils.colors import label_color
# Allow relative imports when being executed as script.
if __name__ == "__main__" and __package__ is None:
sys.path.insert(0, os.path.join(os.path.dirname(__file__), '..', '..'))
import keras_retinanet.bin
__package__ = "keras_maskrcnn.bin"
# Change these to absolute imports if you copy this script outside the keras_retinanet package.
from ..utils.visualization import draw_mask
def create_generator(args):
# create random transform generator for augmenting training data
transform_generator = random_transform_generator(
# min_rotation=-0.1,
# max_rotation=0.1,
# min_translation=(-0.1, -0.1),
# max_translation=(0.1, 0.1),
# min_shear=-0.1,
# max_shear=0.1,
# min_scaling=(0.9, 0.9),
# max_scaling=(1.1, 1.1),
flip_x_chance=0.5,
# flip_y_chance=0.5,
)
if args.dataset_type == 'coco':
# import here to prevent unnecessary dependency on cocoapi
from ..preprocessing.coco import CocoGenerator
generator = CocoGenerator(
args.coco_path,
args.coco_set,
transform_generator=transform_generator
)
elif args.dataset_type == 'csv':
from ..preprocessing.csv_generator import CSVGenerator
generator = CSVGenerator(
args.annotations,
args.classes,
transform_generator=transform_generator
)
else:
raise ValueError('Invalid data type received: {}'.format(args.dataset_type))
return generator
def parse_args(args):
parser = argparse.ArgumentParser(description='Debug script for a RetinaNet-MaskRCNN network.')
subparsers = parser.add_subparsers(help='Arguments for specific dataset types.', dest='dataset_type')
subparsers.required = True
coco_parser = subparsers.add_parser('coco')
coco_parser.add_argument('coco_path', help='Path to dataset directory (ie. /tmp/COCO).')
coco_parser.add_argument('--coco-set', help='Name of the set to show (defaults to val2017).', default='val2017')
csv_parser = subparsers.add_parser('csv')
csv_parser.add_argument('annotations', help='Path to a CSV file containing annotations for evaluation.')
csv_parser.add_argument('classes', help='Path to a CSV file containing class label mapping.')
parser.add_argument('-l', '--loop', help='Loop forever, even if the dataset is exhausted.', action='store_true')
parser.add_argument('--no-resize', help='Disable image resizing.', dest='resize', action='store_false')
parser.add_argument('--anchors', help='Show positive anchors on the image.', action='store_true')
parser.add_argument('--annotations', help='Show annotations on the image. Green annotations have anchors, red annotations don\'t and therefore don\'t contribute to training.', action='store_true')
parser.add_argument('--masks', help='Show annotated masks on the image.', action='store_true')
parser.add_argument('--random-transform', help='Randomly transform image and annotations.', action='store_true')
return parser.parse_args(args)
def run(generator, args):
# display images, one at a time
for i in range(generator.size()):
# load the data
image = generator.load_image(i)
annotations, masks = generator.load_annotations(i)
# apply random transformations
if args.random_transform:
image, annotations, masks = generator.random_transform_group_entry(image, annotations, masks)
# resize the image and annotations
if args.resize:
image, image_scale = generator.resize_image(image)
annotations[:, :4] *= image_scale
for m in range(len(masks)):
masks[m], _ = generator.resize_image(masks[m])
# draw anchors on the image
if args.anchors:
labels, _, anchors = generator.compute_anchor_targets(image.shape, annotations, generator.num_classes())
draw_boxes(image, anchors[np.max(labels, axis=1) == 1], (255, 255, 0), thickness=1)
# draw annotations on the image
if args.annotations:
# draw annotations in red
draw_annotations(image, annotations, color=(0, 0, 255), label_to_name=generator.label_to_name)
# draw regressed anchors in green to override most red annotations
# result is that annotations without anchors are red, with anchors are green
labels, boxes, _ = generator.compute_anchor_targets(image.shape, annotations, generator.num_classes())
draw_boxes(image, boxes[np.max(labels, axis=1) == 1], (0, 255, 0))
# Draw masks over the image with random colours
if args.masks:
for m in range(len(masks)):
# crop the mask with the related bbox size, and then draw them
box = annotations[m, :4].astype(int)
mask = masks[m][box[1]:box[3], box[0]:box[2]]
draw_mask(image, box, mask, label_color(annotations[m, 4].astype(int)))
# add the label caption
caption = '{}'.format(generator.label_to_name(annotations[m, 4]))
draw_caption(image, box, caption)
cv2.imshow('Image', image)
if cv2.waitKey() == ord('q'):
return False
return True
def main(args=None):
# parse arguments
if args is None:
args = sys.argv[1:]
args = parse_args(args)
# create the generator
generator = create_generator(args)
# create the display window
cv2.namedWindow('Image', cv2.WINDOW_NORMAL)
if args.loop:
while run(generator, args):
pass
else:
run(generator, args)
if __name__ == '__main__':
main()
| keras_maskrcnn/bin/debug.py | 6,656 | Copyright 2017-2018 Fizyr (https://fizyr.com)
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
!/usr/bin/env python Allow relative imports when being executed as script. Change these to absolute imports if you copy this script outside the keras_retinanet package. create random transform generator for augmenting training data min_rotation=-0.1, max_rotation=0.1, min_translation=(-0.1, -0.1), max_translation=(0.1, 0.1), min_shear=-0.1, max_shear=0.1, min_scaling=(0.9, 0.9), max_scaling=(1.1, 1.1), flip_y_chance=0.5, import here to prevent unnecessary dependency on cocoapi display images, one at a time load the data apply random transformations resize the image and annotations draw anchors on the image draw annotations on the image draw annotations in red draw regressed anchors in green to override most red annotations result is that annotations without anchors are red, with anchors are green Draw masks over the image with random colours crop the mask with the related bbox size, and then draw them add the label caption parse arguments create the generator create the display window | 1,571 | en | 0.745693 |
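`debug.py` above filters positive anchors with the boolean mask `np.max(labels, axis=1) == 1`, i.e. it keeps anchors whose label row contains a 1 for some class. A small sketch with made-up labels and boxes:

```python
import numpy as np

# One row per anchor: a one-hot class assignment (all zeros = unassigned).
labels = np.array([[0, 1],
                   [0, 0],
                   [1, 0]])
anchors = np.array([[0, 0, 10, 10],
                    [5, 5, 15, 15],
                    [2, 2, 8, 8]])

# Keep only anchors assigned to some class -- the mask passed to draw_boxes.
positive = anchors[np.max(labels, axis=1) == 1]
print(positive.shape)   # (2, 4)
```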
"""
DataMeta
DataMeta # noqa: E501
The version of the OpenAPI document: 1.4.0
Contact: leon.kuchenbecker@uni-tuebingen.de
Generated by: https://openapi-generator.tech
"""
import re # noqa: F401
import sys # noqa: F401
from datameta_client_lib.model_utils import ( # noqa: F401
ApiTypeError,
ModelComposed,
ModelNormal,
ModelSimple,
cached_property,
change_keys_js_to_python,
convert_js_args_to_python_args,
date,
datetime,
file_type,
none_type,
validate_get_composed_info,
)
class StagedMetaDataSets(ModelNormal):
"""NOTE: This class is auto generated by OpenAPI Generator.
Ref: https://openapi-generator.tech
Do not edit the class manually.
Attributes:
allowed_values (dict): The key is the tuple path to the attribute
and the for var_name this is (var_name,). The value is a dict
with a capitalized key describing the allowed value and an allowed
value. These dicts store the allowed enum values.
attribute_map (dict): The key is attribute name
and the value is json key in definition.
discriminator_value_class_map (dict): A dict to go from the discriminator
variable value to the discriminator class name.
validations (dict): The key is the tuple path to the attribute
and the for var_name this is (var_name,). The value is a dict
that stores validations for max_length, min_length, max_items,
min_items, exclusive_maximum, inclusive_maximum, exclusive_minimum,
inclusive_minimum, and regex.
additional_properties_type (tuple): A tuple of classes accepted
as additional properties values.
"""
allowed_values = {
}
validations = {
}
additional_properties_type = None
_nullable = False
@cached_property
def openapi_types():
"""
This must be a method because a model may have properties that are
of type self, this must run after the class is loaded
Returns
openapi_types (dict): The key is attribute name
and the value is attribute type.
"""
return {
'metadataset_ids': ([str],), # noqa: E501
}
@cached_property
def discriminator():
return None
attribute_map = {
'metadataset_ids': 'metadatasetIds', # noqa: E501
}
_composed_schemas = {}
required_properties = set([
'_data_store',
'_check_type',
'_spec_property_naming',
'_path_to_item',
'_configuration',
'_visited_composed_classes',
])
@convert_js_args_to_python_args
def __init__(self, metadataset_ids, *args, **kwargs): # noqa: E501
"""StagedMetaDataSets - a model defined in OpenAPI
Args:
metadataset_ids ([str]):
Keyword Args:
_check_type (bool): if True, values for parameters in openapi_types
will be type checked and a TypeError will be
raised if the wrong type is input.
Defaults to True
_path_to_item (tuple/list): This is a list of keys or values to
drill down to the model in received_data
when deserializing a response
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_configuration (Configuration): the instance to use when
deserializing a file_type parameter.
If passed, type conversion is attempted
If omitted no type conversion is done.
_visited_composed_classes (tuple): This stores a tuple of
classes that we have traveled through so that
if we see that class again we will not use its
discriminator again.
When traveling through a discriminator, the
composed schema that is
is traveled through is added to this set.
For example if Animal has a discriminator
petType and we pass in "Dog", and the class Dog
allOf includes Animal, we move through Animal
once using the discriminator, and pick Dog.
Then in Dog, we will make an instance of the
Animal class but this time we won't travel
through its discriminator because we passed in
_visited_composed_classes = (Animal,)
"""
_check_type = kwargs.pop('_check_type', True)
_spec_property_naming = kwargs.pop('_spec_property_naming', False)
_path_to_item = kwargs.pop('_path_to_item', ())
_configuration = kwargs.pop('_configuration', None)
_visited_composed_classes = kwargs.pop('_visited_composed_classes', ())
if args:
raise ApiTypeError(
"Invalid positional arguments=%s passed to %s. Remove those invalid positional arguments." % (
args,
self.__class__.__name__,
),
path_to_item=_path_to_item,
valid_classes=(self.__class__,),
)
self._data_store = {}
self._check_type = _check_type
self._spec_property_naming = _spec_property_naming
self._path_to_item = _path_to_item
self._configuration = _configuration
self._visited_composed_classes = _visited_composed_classes + (self.__class__,)
self.metadataset_ids = metadataset_ids
for var_name, var_value in kwargs.items():
if var_name not in self.attribute_map and \
self._configuration is not None and \
self._configuration.discard_unknown_keys and \
self.additional_properties_type is None:
# discard variable.
continue
setattr(self, var_name, var_value)
| datameta_client_lib/model/staged_meta_data_sets.py | 6,553 | NOTE: This class is auto generated by OpenAPI Generator.
Ref: https://openapi-generator.tech
Do not edit the class manually.
Attributes:
allowed_values (dict): The key is the tuple path to the attribute
and the for var_name this is (var_name,). The value is a dict
with a capitalized key describing the allowed value and an allowed
value. These dicts store the allowed enum values.
attribute_map (dict): The key is attribute name
and the value is json key in definition.
discriminator_value_class_map (dict): A dict to go from the discriminator
variable value to the discriminator class name.
validations (dict): The key is the tuple path to the attribute
and the for var_name this is (var_name,). The value is a dict
that stores validations for max_length, min_length, max_items,
min_items, exclusive_maximum, inclusive_maximum, exclusive_minimum,
inclusive_minimum, and regex.
additional_properties_type (tuple): A tuple of classes accepted
as additional properties values.
StagedMetaDataSets - a model defined in OpenAPI
Args:
metadataset_ids ([str]):
Keyword Args:
_check_type (bool): if True, values for parameters in openapi_types
will be type checked and a TypeError will be
raised if the wrong type is input.
Defaults to True
_path_to_item (tuple/list): This is a list of keys or values to
drill down to the model in received_data
when deserializing a response
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_configuration (Configuration): the instance to use when
deserializing a file_type parameter.
If passed, type conversion is attempted
If omitted no type conversion is done.
_visited_composed_classes (tuple): This stores a tuple of
classes that we have traveled through so that
if we see that class again we will not use its
discriminator again.
When traveling through a discriminator, the
composed schema that is
is traveled through is added to this set.
For example if Animal has a discriminator
petType and we pass in "Dog", and the class Dog
allOf includes Animal, we move through Animal
once using the discriminator, and pick Dog.
Then in Dog, we will make an instance of the
Animal class but this time we won't travel
through its discriminator because we passed in
_visited_composed_classes = (Animal,)
This must be a method because a model may have properties that are
of type self, this must run after the class is loaded
Returns
openapi_types (dict): The key is attribute name
and the value is attribute type.
DataMeta
DataMeta # noqa: E501
The version of the OpenAPI document: 1.4.0
Contact: leon.kuchenbecker@uni-tuebingen.de
Generated by: https://openapi-generator.tech
noqa: F401 noqa: F401 noqa: F401 noqa: E501 noqa: E501 noqa: E501 discard variable. | 3,576 | en | 0.777281 |
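The generated model above keeps a snake_case-to-camelCase `attribute_map` that drives (de)serialization between pythonic attribute names and wire (JSON) names. A minimal self-contained sketch of that pattern — the helper names below are illustrative, not part of `datameta_client_lib`'s public API:

```python
# attribute_map as in the generated StagedMetaDataSets model above.
attribute_map = {"metadataset_ids": "metadatasetIds"}

def to_json_dict(obj: dict) -> dict:
    """Rename pythonic (snake_case) keys to their wire (camelCase) names."""
    return {attribute_map.get(k, k): v for k, v in obj.items()}

def from_json_dict(payload: dict) -> dict:
    """Inverse mapping: wire names back to pythonic attribute names."""
    reverse = {v: k for k, v in attribute_map.items()}
    return {reverse.get(k, k): v for k, v in payload.items()}

model = {"metadataset_ids": ["mds-1", "mds-2"]}
wire = to_json_dict(model)
print(wire)  # {'metadatasetIds': ['mds-1', 'mds-2']}
assert from_json_dict(wire) == model  # round-trips cleanly
```

The real generated class layers type checking and discriminator handling on top, but the key/value renaming is exactly this table lookup.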
# Copyright 2016 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""The Gamma distribution class."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import numpy as np
from tensorflow.contrib.distributions.python.ops import distribution # pylint: disable=line-too-long
from tensorflow.contrib.framework.python.framework import tensor_util as contrib_tensor_util # pylint: disable=line-too-long
from tensorflow.python.framework import constant_op
from tensorflow.python.framework import dtypes
from tensorflow.python.framework import ops
from tensorflow.python.framework import tensor_shape
from tensorflow.python.ops import array_ops
from tensorflow.python.ops import check_ops
from tensorflow.python.ops import control_flow_ops
from tensorflow.python.ops import math_ops
from tensorflow.python.ops import random_ops
class Gamma(distribution.Distribution):
"""The `Gamma` distribution with parameter alpha and beta.
The parameters are the shape and inverse scale parameters alpha, beta.
The PDF of this distribution is:
```pdf(x) = (beta^alpha)(x^(alpha-1))e^(-x*beta)/Gamma(alpha), x > 0```
and the CDF of this distribution is:
```cdf(x) = GammaInc(alpha, beta * x) / Gamma(alpha), x > 0```
where GammaInc is the incomplete lower Gamma function.
Examples:
```python
dist = Gamma(alpha=3.0, beta=2.0)
dist2 = Gamma(alpha=[3.0, 4.0], beta=[2.0, 3.0])
```
"""
def __init__(self,
alpha,
beta,
validate_args=True,
allow_nan_stats=False,
name="Gamma"):
"""Construct Gamma distributions with parameters `alpha` and `beta`.
The parameters `alpha` and `beta` must be shaped in a way that supports
broadcasting (e.g. `alpha + beta` is a valid operation).
Args:
alpha: Floating point tensor, the shape params of the
distribution(s).
alpha must contain only positive values.
beta: Floating point tensor, the inverse scale params of the
distribution(s).
beta must contain only positive values.
validate_args: Whether to assert that `a > 0, b > 0`, and that `x > 0` in
the methods `prob(x)` and `log_prob(x)`. If `validate_args` is `False`
and the inputs are invalid, correct behavior is not guaranteed.
allow_nan_stats: Boolean, default `False`. If `False`, raise an
exception if a statistic (e.g. mean/mode/etc...) is undefined for any
batch member. If `True`, batch members with valid parameters leading to
undefined statistics will return NaN for this statistic.
name: The name to prepend to all ops created by this distribution.
Raises:
TypeError: if `alpha` and `beta` are different dtypes.
"""
self._allow_nan_stats = allow_nan_stats
self._validate_args = validate_args
with ops.name_scope(name, values=[alpha, beta]) as scope:
self._name = scope
with ops.control_dependencies([check_ops.assert_positive(
alpha), check_ops.assert_positive(beta)] if validate_args else []):
alpha = array_ops.identity(alpha, name="alpha")
beta = array_ops.identity(beta, name="beta")
contrib_tensor_util.assert_same_float_dtype((alpha, beta))
self._broadcast_tensor = alpha + beta
self._get_batch_shape = self._broadcast_tensor.get_shape()
self._get_event_shape = tensor_shape.TensorShape([])
self._alpha = alpha
self._beta = beta
@property
def allow_nan_stats(self):
"""Boolean describing behavior when a stat is undefined for batch member."""
return self._allow_nan_stats
@property
def validate_args(self):
"""Boolean describing behavior on invalid input."""
return self._validate_args
@property
def name(self):
"""Name to prepend to all ops."""
return self._name
@property
def dtype(self):
"""dtype of samples from this distribution."""
return self._alpha.dtype
@property
def alpha(self):
"""Shape parameter."""
return self._alpha
@property
def beta(self):
"""Inverse scale parameter."""
return self._beta
def batch_shape(self, name="batch_shape"):
"""Batch dimensions of this instance as a 1-D int32 `Tensor`.
The product of the dimensions of the `batch_shape` is the number of
independent distributions of this kind the instance represents.
Args:
name: name to give to the op
Returns:
`Tensor` `batch_shape`
"""
with ops.name_scope(self.name):
with ops.name_scope(name, values=[self._broadcast_tensor]):
return array_ops.shape(self._broadcast_tensor)
def get_batch_shape(self):
"""`TensorShape` available at graph construction time.
Same meaning as `batch_shape`. May be only partially defined.
Returns:
`TensorShape` object.
"""
return self._get_batch_shape
def event_shape(self, name="event_shape"):
"""Shape of a sample from a single distribution as a 1-D int32 `Tensor`.
Args:
name: name to give to the op
Returns:
`Tensor` `event_shape`
"""
with ops.name_scope(self.name):
with ops.name_scope(name):
return constant_op.constant([], dtype=dtypes.int32)
def get_event_shape(self):
"""`TensorShape` available at graph construction time.
Same meaning as `event_shape`. May be only partially defined.
Returns:
`TensorShape` object.
"""
return self._get_event_shape
def mean(self, name="mean"):
"""Mean of each batch member."""
with ops.name_scope(self.name):
with ops.name_scope(name, values=[self._alpha, self._beta]):
return self._alpha / self._beta
def mode(self, name="mode"):
"""Mode of each batch member.
The mode of a gamma distribution is `(alpha - 1) / beta` when `alpha > 1`,
and `NaN` otherwise. If `self.allow_nan_stats` is `False`, an exception
will be raised rather than returning `NaN`.
Args:
name: A name to give this op.
Returns:
The mode for every batch member, a `Tensor` with same `dtype` as self.
"""
alpha = self._alpha
beta = self._beta
with ops.name_scope(self.name):
with ops.name_scope(name, values=[alpha, beta]):
mode_if_defined = (alpha - 1.0) / beta
if self.allow_nan_stats:
alpha_ge_1 = alpha >= 1.0
nan = np.nan * self._ones()
return math_ops.select(alpha_ge_1, mode_if_defined, nan)
else:
one = constant_op.constant(1.0, dtype=self.dtype)
return control_flow_ops.with_dependencies(
[check_ops.assert_less(
one, alpha,
message="mode not defined for components of alpha <= 1"
)], mode_if_defined)
def variance(self, name="variance"):
"""Variance of each batch member."""
with ops.name_scope(self.name):
with ops.name_scope(name, values=[self._alpha, self._beta]):
return self._alpha / math_ops.square(self._beta)
def std(self, name="std"):
"""Standard deviation of this distribution."""
with ops.name_scope(self.name):
with ops.name_scope(name, values=[self._alpha, self._beta]):
return math_ops.sqrt(self._alpha) / self._beta
def log_prob(self, x, name="log_prob"):
"""Log prob of observations in `x` under these Gamma distribution(s).
Args:
x: tensor of dtype `dtype`, must be broadcastable with `alpha` and `beta`.
name: The name to give this op.
Returns:
log_prob: tensor of dtype `dtype`, the log-PDFs of `x`.
Raises:
TypeError: if `x` and `alpha` are different dtypes.
"""
with ops.name_scope(self.name):
with ops.name_scope(name, values=[self._alpha, self._beta, x]):
alpha = self._alpha
beta = self._beta
x = ops.convert_to_tensor(x)
x = control_flow_ops.with_dependencies([check_ops.assert_positive(x)] if
self.validate_args else [], x)
contrib_tensor_util.assert_same_float_dtype(tensors=[x,],
dtype=self.dtype)
return (alpha * math_ops.log(beta) + (alpha - 1) * math_ops.log(x) -
beta * x - math_ops.lgamma(self._alpha))
def prob(self, x, name="prob"):
"""Pdf of observations in `x` under these Gamma distribution(s).
Args:
x: tensor of dtype `dtype`, must be broadcastable with `alpha` and `beta`.
name: The name to give this op.
Returns:
prob: tensor of dtype `dtype`, the PDFs of `x`
Raises:
TypeError: if `x` and `alpha` are different dtypes.
"""
return super(Gamma, self).prob(x, name)
def log_cdf(self, x, name="log_cdf"):
"""Log CDF of observations `x` under these Gamma distribution(s).
Args:
x: tensor of dtype `dtype`, must be broadcastable with `alpha` and `beta`.
name: The name to give this op.
Returns:
log_cdf: tensor of dtype `dtype`, the log-CDFs of `x`.
"""
with ops.name_scope(self.name):
with ops.name_scope(name, values=[self._alpha, self._beta, x]):
x = ops.convert_to_tensor(x)
x = control_flow_ops.with_dependencies([check_ops.assert_positive(x)] if
self.validate_args else [], x)
contrib_tensor_util.assert_same_float_dtype(tensors=[x,],
dtype=self.dtype)
# Note that igamma returns the regularized incomplete gamma function,
# which is what we want for the CDF.
return math_ops.log(math_ops.igamma(self._alpha, self._beta * x))
def cdf(self, x, name="cdf"):
"""CDF of observations `x` under these Gamma distribution(s).
Args:
x: tensor of dtype `dtype`, must be broadcastable with `alpha` and `beta`.
name: The name to give this op.
Returns:
cdf: tensor of dtype `dtype`, the CDFs of `x`.
"""
with ops.name_scope(self.name):
with ops.name_scope(name, values=[self._alpha, self._beta, x]):
return math_ops.igamma(self._alpha, self._beta * x)
def entropy(self, name="entropy"):
"""The entropy of Gamma distribution(s).
This is defined to be
```
entropy = alpha - log(beta) + log(Gamma(alpha))
+ (1-alpha)digamma(alpha)
```
where digamma(alpha) is the digamma function.
Args:
name: The name to give this op.
Returns:
entropy: tensor of dtype `dtype`, the entropy.
"""
with ops.name_scope(self.name):
with ops.name_scope(name, values=[self.alpha, self._beta]):
alpha = self._alpha
beta = self._beta
return (alpha - math_ops.log(beta) + math_ops.lgamma(alpha) +
(1 - alpha) * math_ops.digamma(alpha))
def sample_n(self, n, seed=None, name="sample_n"):
"""Draws `n` samples from the Gamma distribution(s).
See the doc for tf.random_gamma for further detail.
Args:
n: Python integer, the number of observations to sample from each
distribution.
seed: Python integer, the random seed for this operation.
name: Optional name for the operation.
Returns:
samples: a `Tensor` of shape `(n,) + self.batch_shape + self.event_shape`
with values of type `self.dtype`.
"""
with ops.name_scope(self.name, values=[n, self.alpha, self._beta]):
return random_ops.random_gamma([n],
self.alpha,
beta=self._beta,
dtype=self.dtype,
seed=seed,
name=name)
@property
def is_reparameterized(self):
return False
def _ones(self):
return array_ops.ones_like(self._alpha + self._beta, dtype=self.dtype)
@property
def is_continuous(self):
return True
| tensorflow/contrib/distributions/python/ops/gamma.py | 12,568 | The `Gamma` distribution with parameter alpha and beta.
The parameters are the shape and inverse scale parameters alpha, beta.
The PDF of this distribution is:
```pdf(x) = (beta^alpha)(x^(alpha-1))e^(-x*beta)/Gamma(alpha), x > 0```
and the CDF of this distribution is:
```cdf(x) = GammaInc(alpha, beta * x) / Gamma(alpha), x > 0```
where GammaInc is the incomplete lower Gamma function.
Examples:
```python
dist = Gamma(alpha=3.0, beta=2.0)
dist2 = Gamma(alpha=[3.0, 4.0], beta=[2.0, 3.0])
```
Construct Gamma distributions with parameters `alpha` and `beta`.
The parameters `alpha` and `beta` must be shaped in a way that supports
broadcasting (e.g. `alpha + beta` is a valid operation).
Args:
alpha: Floating point tensor, the shape params of the
distribution(s).
alpha must contain only positive values.
beta: Floating point tensor, the inverse scale params of the
distribution(s).
beta must contain only positive values.
validate_args: Whether to assert that `a > 0, b > 0`, and that `x > 0` in
the methods `prob(x)` and `log_prob(x)`. If `validate_args` is `False`
and the inputs are invalid, correct behavior is not guaranteed.
allow_nan_stats: Boolean, default `False`. If `False`, raise an
exception if a statistic (e.g. mean/mode/etc...) is undefined for any
batch member. If `True`, batch members with valid parameters leading to
undefined statistics will return NaN for this statistic.
name: The name to prepend to all ops created by this distribution.
Raises:
TypeError: if `alpha` and `beta` are different dtypes.
Boolean describing behavior when a stat is undefined for batch member.
Shape parameter.
Batch dimensions of this instance as a 1-D int32 `Tensor`.
The product of the dimensions of the `batch_shape` is the number of
independent distributions of this kind the instance represents.
Args:
name: name to give to the op
Returns:
`Tensor` `batch_shape`
Inverse scale parameter.
CDF of observations `x` under these Gamma distribution(s).
Args:
x: tensor of dtype `dtype`, must be broadcastable with `alpha` and `beta`.
name: The name to give this op.
Returns:
cdf: tensor of dtype `dtype`, the CDFs of `x`.
dtype of samples from this distribution.
The entropy of Gamma distribution(s).
This is defined to be
```
entropy = alpha - log(beta) + log(Gamma(alpha))
+ (1-alpha)digamma(alpha)
```
where digamma(alpha) is the digamma function.
Args:
name: The name to give this op.
Returns:
entropy: tensor of dtype `dtype`, the entropy.
Shape of a sample from a single distribution as a 1-D int32 `Tensor`.
Args:
name: name to give to the op
Returns:
`Tensor` `event_shape`
`TensorShape` available at graph construction time.
Same meaning as `batch_shape`. May be only partially defined.
Returns:
`TensorShape` object.
`TensorShape` available at graph construction time.
Same meaning as `event_shape`. May be only partially defined.
Returns:
`TensorShape` object.
Log CDF of observations `x` under these Gamma distribution(s).
Args:
x: tensor of dtype `dtype`, must be broadcastable with `alpha` and `beta`.
name: The name to give this op.
Returns:
log_cdf: tensor of dtype `dtype`, the log-CDFs of `x`.
Log prob of observations in `x` under these Gamma distribution(s).
Args:
x: tensor of dtype `dtype`, must be broadcastable with `alpha` and `beta`.
name: The name to give this op.
Returns:
log_prob: tensor of dtype `dtype`, the log-PDFs of `x`.
Raises:
TypeError: if `x` and `alpha` are different dtypes.
Mean of each batch member.
Mode of each batch member.
The mode of a gamma distribution is `(alpha - 1) / beta` when `alpha > 1`,
and `NaN` otherwise. If `self.allow_nan_stats` is `False`, an exception
will be raised rather than returning `NaN`.
Args:
name: A name to give this op.
Returns:
The mode for every batch member, a `Tensor` with same `dtype` as self.
Name to prepend to all ops.
Pdf of observations in `x` under these Gamma distribution(s).
Args:
x: tensor of dtype `dtype`, must be broadcastable with `alpha` and `beta`.
name: The name to give this op.
Returns:
prob: tensor of dtype `dtype`, the PDFs of `x`
Raises:
TypeError: if `x` and `alpha` are different dtypes.
Draws `n` samples from the Gamma distribution(s).
See the doc for tf.random_gamma for further detail.
Args:
n: Python integer, the number of observations to sample from each
distribution.
seed: Python integer, the random seed for this operation.
name: Optional name for the operation.
Returns:
samples: a `Tensor` of shape `(n,) + self.batch_shape + self.event_shape`
with values of type `self.dtype`.
Standard deviation of this distribution.
Boolean describing behavior on invalid input.
Variance of each batch member.
The Gamma distribution class.
Copyright 2016 The TensorFlow Authors. All Rights Reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ============================================================================== pylint: disable=line-too-long pylint: disable=line-too-long Note that igamma returns the regularized incomplete gamma function, which is what we want for the CDF. | 5,651 | en | 0.729537 |
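As a quick numerical cross-check of the PDF/CDF formulas quoted in the `Gamma` docstring above (standard library only, no TensorFlow): for integer `alpha` the Gamma CDF reduces to the closed Erlang form, so trapezoidal integration of the PDF should reproduce it. The parameter values are arbitrary.

```python
import math

def gamma_pdf(x, alpha, beta):
    # pdf(x) = beta^alpha * x^(alpha-1) * e^(-beta*x) / Gamma(alpha), x > 0
    return (beta ** alpha) * (x ** (alpha - 1)) * math.exp(-beta * x) / math.gamma(alpha)

def erlang_cdf(x, k, beta):
    # For integer shape k: cdf(x) = 1 - e^(-beta*x) * sum_{n<k} (beta*x)^n / n!
    return 1.0 - math.exp(-beta * x) * sum(
        (beta * x) ** n / math.factorial(n) for n in range(k))

alpha, beta, x = 3, 2.0, 1.5
steps = 20000
h = x / steps
# Trapezoidal rule over [0, x]; pdf(0) == 0 here since alpha > 1.
approx = sum(0.5 * h * (gamma_pdf(i * h, alpha, beta) + gamma_pdf((i + 1) * h, alpha, beta))
             for i in range(steps))
print(round(approx, 4), round(erlang_cdf(x, alpha, beta), 4))  # both ~0.5768
```

The two numbers agreeing is exactly the `cdf(x) = GammaInc(alpha, beta*x) / Gamma(alpha)` identity from the docstring, specialized to integer `alpha`.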
import logging
import sys
from abc import ABC, abstractmethod
logger = logging.getLogger(__name__)
class PaddownException(Exception):
pass
class Paddown(ABC):
@abstractmethod
def has_valid_padding(self, ciphertext: bytes) -> bool:
"""
Override this method and send off the ciphertext to check for valid padding.
:param bytes ciphertext: The ciphertext to check, send this to your padding oracle.
:rtype: True for valid padding, False otherwise.
"""
raise PaddownException("Not implemented")
def __init__(self, ciphertext: bytes, blocksize: int = 16):
if not isinstance(ciphertext, bytes):
raise Exception(f"Ciphertext {type(ciphertext)} not an instance of {bytes}")
self.ciphertext = ciphertext
self.blocksize = blocksize
def find_c_prime_at_index(self, ciphertext: bytearray, index: int):
if not isinstance(ciphertext, bytearray):
raise PaddownException(f"ciphertext not an instance of {bytearray}")
# Replace ciphertext at index with a guessed byte
ciphertext_temp = ciphertext
for c_prime in range(256):
ciphertext_temp[index] = c_prime
if self.has_valid_padding(ciphertext_temp):
return c_prime
raise PaddownException(f"No valid padding found, is .has_valid_padding(...) implemented correctly?")
def decrypt_block(self, c_i):
if not isinstance(c_i, bytearray):
raise PaddownException(f"block c_i not an instance of {bytearray}")
c_previous = bytearray(b"\x00" * self.blocksize)
intermediate = bytearray(b"\x00" * self.blocksize)
for i in range(self.blocksize):
self.progress_bar(i, self.blocksize - 1, "Decrypting ")
for j in range(i):
c_previous[(self.blocksize - 1) - j] = intermediate[(self.blocksize - 1) - j] ^ (i + 1)
c_prime = self.find_c_prime_at_index(c_previous + c_i, (self.blocksize - 1) - i)
intermediate[(self.blocksize - 1) - i] = c_prime ^ (i + 1)
logger.debug(f"intermediate: {[hex(x)[2:] for x in intermediate]}")
return intermediate
def get_intermediate(self, ciphertext) -> bytes:
key = b""
blocks = len(ciphertext) // self.blocksize
# Iterate blocks last to first
for i in range(blocks):
block_start = len(ciphertext) - (i + 1) * self.blocksize
block_end = len(ciphertext) - (i * self.blocksize)
key = self.decrypt_block(ciphertext[block_start:block_end]) + key
return key
def decrypt(self) -> bytes:
logger.debug(f"Ciphertext length: {len(self.ciphertext)}")
logger.debug(f"Blocks to decrypt: {len(self.ciphertext) // self.blocksize}")
# Convert self.ciphertext to mutable bytearray
self.ciphertext = bytearray(self.ciphertext)
key = self.get_intermediate(self.ciphertext)
plaintext = bytearray()
for i in range(len(self.ciphertext) - self.blocksize):
b = self.ciphertext[i] ^ key[i + self.blocksize]
plaintext += (b).to_bytes(1, byteorder="big")
print("\n") # print variable on new line from progress bar
return plaintext
def progress_bar(self, i, total_length, post_text):
n_bar = 100 # size of progress bar
j = i / total_length
sys.stdout.write("\r")
sys.stdout.write(f"[{'#' * int(n_bar * j):{n_bar}s}] {int(100 * j)}% {post_text}")
sys.stdout.flush()
| paddown.py | 3,558 | Override this method and send off the ciphertext to check for valid padding.
:param bytes ciphertext: The ciphertext to check, send this to your padding oracle.
:rtype: True for valid padding, False otherwise.
Replace ciphertext at index with a guessed byte Iterate blocks last to first Convert self.ciphertext to mutable bytearray print variable on new line from progress bar size of progress bar | 400 | en | 0.621989 |
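The byte-at-a-time recovery in `decrypt_block` can be exercised end-to-end against a toy oracle. The sketch below assumes a CBC-like mode (`P_i = D(C_i) XOR C_{i-1}`) with PKCS#7 padding, and stands in a plain XOR "block cipher" for AES — insecure by design, just enough for the oracle to have padding to validate. All names here are illustrative, not part of paddown's API.

```python
BLOCKSIZE = 4
KEY = bytes([0x13, 0x37, 0xAA, 0x55])

def block_decrypt(block: bytes) -> bytes:
    # Stand-in for AES block decryption: XOR with a fixed key (insecure toy).
    return bytes(b ^ k for b, k in zip(block, KEY))

def has_valid_padding(ciphertext: bytes) -> bool:
    # The oracle: CBC-decrypt the last block and check its PKCS#7 padding.
    prev, last = ciphertext[:BLOCKSIZE], ciphertext[BLOCKSIZE:]
    plain = bytes(p ^ c for p, c in zip(block_decrypt(last), prev))
    pad = plain[-1]
    return 1 <= pad <= BLOCKSIZE and plain.endswith(bytes([pad]) * pad)

def recover_intermediate(c_i: bytes) -> bytes:
    # Recover D(c_i) byte by byte -- the same idea as Paddown.decrypt_block above.
    intermediate = bytearray(BLOCKSIZE)
    for i in range(BLOCKSIZE):
        pad_val = i + 1
        prev = bytearray(BLOCKSIZE)
        for j in range(i):                      # pin the already-recovered tail bytes
            prev[-1 - j] = intermediate[-1 - j] ^ pad_val
        for guess in range(256):
            prev[-1 - i] = guess
            if not has_valid_padding(bytes(prev) + c_i):
                continue
            if i == 0:                          # rule out accidental longer padding
                probe = bytearray(prev)
                probe[-2] ^= 0xFF
                if not has_valid_padding(bytes(probe) + c_i):
                    continue
            intermediate[-1 - i] = guess ^ pad_val
            break
    return bytes(intermediate)

iv = bytes(BLOCKSIZE)
plaintext = b"hi" + bytes([2, 2])               # "hi" plus its PKCS#7 padding
cipher_block = bytes(k ^ p ^ v for k, p, v in zip(KEY, plaintext, iv))
recovered = bytes(a ^ b for a, b in zip(recover_intermediate(cipher_block), iv))
print(recovered)  # b'hi\x02\x02'
```

The extra probe at `i == 0` guards against the classic false positive where a guess happens to produce `\x02\x02` (or longer) padding instead of `\x01`; `Paddown.find_c_prime_at_index` as written accepts the first valid guess, so a real oracle subclass may want a similar check.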
"""
WSGI config for CongressionalRecord project.
This module contains the WSGI application used by Django's development server
and any production WSGI deployments. It should expose a module-level variable
named ``application``. Django's ``runserver`` and ``runfcgi`` commands discover
this application via the ``WSGI_APPLICATION`` setting.
Usually you will have the standard Django WSGI application here, but it also
might make sense to replace the whole Django WSGI application with a custom one
that later delegates to the Django one. For example, you could introduce WSGI
middleware here, or combine a Django application with an application of another
framework.
"""
import os
# We defer to a DJANGO_SETTINGS_MODULE already in the environment. This breaks
# if running multiple sites in the same mod_wsgi process. To fix this, use
# mod_wsgi daemon mode with each site in its own daemon process, or use
# os.environ["DJANGO_SETTINGS_MODULE"] = "CongressionalRecord.settings"
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "CongressionalRecord.settings")
# This application object is used by any WSGI server configured to use this
# file. This includes Django's development server, if the WSGI_APPLICATION
# setting points here.
from django.core.wsgi import get_wsgi_application
application = get_wsgi_application()
# Apply WSGI middleware here.
# from helloworld.wsgi import HelloWorldApplication
# application = HelloWorldApplication(application)
| CongressionalRecord/wsgi.py | 1,458 | WSGI config for CongressionalRecord project.
This module contains the WSGI application used by Django's development server
and any production WSGI deployments. It should expose a module-level variable
named ``application``. Django's ``runserver`` and ``runfcgi`` commands discover
this application via the ``WSGI_APPLICATION`` setting.
Usually you will have the standard Django WSGI application here, but it also
might make sense to replace the whole Django WSGI application with a custom one
that later delegates to the Django one. For example, you could introduce WSGI
middleware here, or combine a Django application with an application of another
framework.
We defer to a DJANGO_SETTINGS_MODULE already in the environment. This breaks if running multiple sites in the same mod_wsgi process. To fix this, use mod_wsgi daemon mode with each site in its own daemon process, or use os.environ["DJANGO_SETTINGS_MODULE"] = "CongressionalRecord.settings" This application object is used by any WSGI server configured to use this file. This includes Django's development server, if the WSGI_APPLICATION setting points here. Apply WSGI middleware here. from helloworld.wsgi import HelloWorldApplication application = HelloWorldApplication(application) | 1,250 | en | 0.843154 |
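The commented-out `HelloWorldApplication` hint above is the standard WSGI-middleware pattern: any callable taking `(environ, start_response)` can wrap another. A runnable stand-alone sketch, using a stub app instead of Django's so no settings module is needed (all names are illustrative):

```python
def plain_app(environ, start_response):
    # Minimal WSGI application standing in for Django's.
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"hello"]

class HeaderMiddleware:
    """Wrap any WSGI application and inject an extra response header."""
    def __init__(self, app):
        self.app = app
    def __call__(self, environ, start_response):
        def patched_start(status, headers, exc_info=None):
            return start_response(status, headers + [("X-Wrapped", "yes")], exc_info)
        return self.app(environ, patched_start)

application = HeaderMiddleware(plain_app)

# Drive it once with a fake request to show the wrapping in action.
captured = {}
def fake_start(status, headers, exc_info=None):
    captured["status"], captured["headers"] = status, headers

body = b"".join(application({"REQUEST_METHOD": "GET"}, fake_start))
print(captured["status"], dict(captured["headers"])["X-Wrapped"], body)
```

In the real module you would wrap `get_wsgi_application()`'s return value the same way, exactly as the `HelloWorldApplication` comment suggests.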
"""Basic pyon logging (with or without container)
NOTE: the functionality of this module has moved to ooi.logging.config.
currently this module is maintained for API compatability, but is implemented using the new package.
"""
import logging
from ooi.logging import config
DEFAULT_LOGGING_PATHS = ['res/config/logging.yml', 'res/config/logging.local.yml']
logging_was_configured = False
def configure_logging(logging_conf_paths, logging_config_override=None):
"""
Public call to configure and initialize logging.
@param logging_conf_paths List of paths to logging config YML files (in read order)
@param config_override Dict with config entries overriding files read
"""
global logging_was_configured
logging_was_configured = True
for path in logging_conf_paths:
try:
config.add_configuration(path)
        except Exception as e:
            print('WARNING: could not load logging configuration file %s: %s' % (path, e))
if logging_config_override:
try:
config.add_configuration(logging_config_override)
        except Exception as e:
            print('WARNING: failed to apply logging override %r: %s' % (logging_config_override, e))
# direct warnings mechanism to loggers
logging.captureWarnings(True)
def is_logging_configured():
""" allow caller to determine if logging has already been configured in this container """
global logging_was_configured
return logging_was_configured or config.get_configuration()
| pyon/core/log.py | 1,537 | direct warnings mechanism to loggers | 36 | en | 0.830046 |
"""Python wrappers around TensorFlow ops.
This file is MACHINE GENERATED! Do not edit.
Original C++ source file: boosted_trees_ops.cc
"""
import collections as _collections
import six as _six
from tensorflow.python import pywrap_tensorflow as _pywrap_tensorflow
from tensorflow.python.eager import context as _context
from tensorflow.python.eager import core as _core
from tensorflow.python.eager import execute as _execute
from tensorflow.python.framework import dtypes as _dtypes
from tensorflow.python.framework import errors as _errors
from tensorflow.python.framework import tensor_shape as _tensor_shape
from tensorflow.core.framework import op_def_pb2 as _op_def_pb2
# Needed to trigger the call to _set_call_cpp_shape_fn.
from tensorflow.python.framework import common_shapes as _common_shapes
from tensorflow.python.framework import op_def_registry as _op_def_registry
from tensorflow.python.framework import ops as _ops
from tensorflow.python.framework import op_def_library as _op_def_library
from tensorflow.python.util.tf_export import tf_export
_boosted_trees_calculate_best_gains_per_feature_outputs = ["node_ids_list",
"gains_list",
"thresholds_list",
"left_node_contribs_list",
"right_node_contribs_list"]
_BoostedTreesCalculateBestGainsPerFeatureOutput = _collections.namedtuple(
"BoostedTreesCalculateBestGainsPerFeature",
_boosted_trees_calculate_best_gains_per_feature_outputs)
def boosted_trees_calculate_best_gains_per_feature(node_id_range, stats_summary_list, l1, l2, tree_complexity, min_node_weight, max_splits, name=None):
r"""Calculates gains for each feature and returns the best possible split information for the feature.
The split information is the best threshold (bucket id), gains and left/right node contributions per node for each feature.
It is possible that not all nodes can be split on each feature. Hence, the list of possible nodes can differ between the features. Therefore, we return `node_ids_list` for each feature, containing the list of nodes that this feature can be used to split.
In this manner, the output is the best split per features and per node, so that it needs to be combined later to produce the best split for each node (among all possible features).
The length of output lists are all of the same length, `num_features`.
The output shapes are compatible in a way that the first dimension of all tensors of all lists are the same and equal to the number of possible split nodes for each feature.
Args:
node_id_range: A `Tensor` of type `int32`.
A Rank 1 tensor (shape=[2]) to specify the range [first, last) of node ids to process within `stats_summary_list`. The nodes are iterated between the two nodes specified by the tensor, as like `for node_id in range(node_id_range[0], node_id_range[1])` (Note that the last index node_id_range[1] is exclusive).
stats_summary_list: A list of at least 1 `Tensor` objects with type `float32`.
A list of Rank 3 tensor (#shape=[max_splits, bucket, 2]) for accumulated stats summary (gradient/hessian) per node per buckets for each feature. The first dimension of the tensor is the maximum number of splits, and thus not all elements of it will be used, but only the indexes specified by node_ids will be used.
l1: A `Tensor` of type `float32`.
l1 regularization factor on leaf weights, per instance based.
l2: A `Tensor` of type `float32`.
l2 regularization factor on leaf weights, per instance based.
tree_complexity: A `Tensor` of type `float32`.
adjustment to the gain, per leaf based.
min_node_weight: A `Tensor` of type `float32`.
      minimum avg of hessians in a node required before the node is considered for splitting.
max_splits: An `int` that is `>= 1`.
the number of nodes that can be split in the whole tree. Used as a dimension of output tensors.
name: A name for the operation (optional).
Returns:
A tuple of `Tensor` objects (node_ids_list, gains_list, thresholds_list, left_node_contribs_list, right_node_contribs_list).
node_ids_list: A list with the same length as `stats_summary_list` of `Tensor` objects with type `int32`.
gains_list: A list with the same length as `stats_summary_list` of `Tensor` objects with type `float32`.
thresholds_list: A list with the same length as `stats_summary_list` of `Tensor` objects with type `int32`.
left_node_contribs_list: A list with the same length as `stats_summary_list` of `Tensor` objects with type `float32`.
right_node_contribs_list: A list with the same length as `stats_summary_list` of `Tensor` objects with type `float32`.
"""
_ctx = _context._context
if _ctx is None or not _ctx._eager_context.is_eager:
if not isinstance(stats_summary_list, (list, tuple)):
raise TypeError(
"Expected list for 'stats_summary_list' argument to "
"'boosted_trees_calculate_best_gains_per_feature' Op, not %r." % stats_summary_list)
_attr_num_features = len(stats_summary_list)
max_splits = _execute.make_int(max_splits, "max_splits")
_, _, _op = _op_def_lib._apply_op_helper(
"BoostedTreesCalculateBestGainsPerFeature",
node_id_range=node_id_range, stats_summary_list=stats_summary_list,
l1=l1, l2=l2, tree_complexity=tree_complexity,
min_node_weight=min_node_weight, max_splits=max_splits, name=name)
_result = _op.outputs[:]
_inputs_flat = _op.inputs
_attrs = ("max_splits", _op.get_attr("max_splits"), "num_features",
_op.get_attr("num_features"))
_execute.record_gradient(
"BoostedTreesCalculateBestGainsPerFeature", _inputs_flat, _attrs, _result, name)
_result = [_result[:_attr_num_features]] + _result[_attr_num_features:]
_result = _result[:1] + [_result[1:1 + _attr_num_features]] + _result[1 + _attr_num_features:]
_result = _result[:2] + [_result[2:2 + _attr_num_features]] + _result[2 + _attr_num_features:]
_result = _result[:3] + [_result[3:3 + _attr_num_features]] + _result[3 + _attr_num_features:]
_result = _result[:4] + [_result[4:]]
_result = _BoostedTreesCalculateBestGainsPerFeatureOutput._make(_result)
return _result
else:
try:
_result = _pywrap_tensorflow.TFE_Py_FastPathExecute(
_ctx._context_handle, _ctx._eager_context.device_name,
"BoostedTreesCalculateBestGainsPerFeature", name,
_ctx._post_execution_callbacks, node_id_range, stats_summary_list, l1,
l2, tree_complexity, min_node_weight, "max_splits", max_splits)
_result = _BoostedTreesCalculateBestGainsPerFeatureOutput._make(_result)
return _result
except _core._FallbackException:
return boosted_trees_calculate_best_gains_per_feature_eager_fallback(
node_id_range, stats_summary_list, l1, l2, tree_complexity,
min_node_weight, max_splits=max_splits, name=name, ctx=_ctx)
except _core._NotOkStatusException as e:
if name is not None:
message = e.message + " name: " + name
else:
message = e.message
_six.raise_from(_core._status_to_exception(e.code, message), None)
def boosted_trees_calculate_best_gains_per_feature_eager_fallback(node_id_range, stats_summary_list, l1, l2, tree_complexity, min_node_weight, max_splits, name=None, ctx=None):
r"""This is the slowpath function for Eager mode.
This is for function boosted_trees_calculate_best_gains_per_feature
"""
_ctx = ctx if ctx else _context.context()
if not isinstance(stats_summary_list, (list, tuple)):
raise TypeError(
"Expected list for 'stats_summary_list' argument to "
"'boosted_trees_calculate_best_gains_per_feature' Op, not %r." % stats_summary_list)
_attr_num_features = len(stats_summary_list)
max_splits = _execute.make_int(max_splits, "max_splits")
node_id_range = _ops.convert_to_tensor(node_id_range, _dtypes.int32)
stats_summary_list = _ops.convert_n_to_tensor(stats_summary_list, _dtypes.float32)
l1 = _ops.convert_to_tensor(l1, _dtypes.float32)
l2 = _ops.convert_to_tensor(l2, _dtypes.float32)
tree_complexity = _ops.convert_to_tensor(tree_complexity, _dtypes.float32)
min_node_weight = _ops.convert_to_tensor(min_node_weight, _dtypes.float32)
_inputs_flat = [node_id_range] + list(stats_summary_list) + [l1, l2, tree_complexity, min_node_weight]
_attrs = ("max_splits", max_splits, "num_features", _attr_num_features)
_result = _execute.execute(b"BoostedTreesCalculateBestGainsPerFeature",
_attr_num_features + _attr_num_features +
_attr_num_features + _attr_num_features +
_attr_num_features, inputs=_inputs_flat,
attrs=_attrs, ctx=_ctx, name=name)
_execute.record_gradient(
"BoostedTreesCalculateBestGainsPerFeature", _inputs_flat, _attrs, _result, name)
_result = [_result[:_attr_num_features]] + _result[_attr_num_features:]
_result = _result[:1] + [_result[1:1 + _attr_num_features]] + _result[1 + _attr_num_features:]
_result = _result[:2] + [_result[2:2 + _attr_num_features]] + _result[2 + _attr_num_features:]
_result = _result[:3] + [_result[3:3 + _attr_num_features]] + _result[3 + _attr_num_features:]
_result = _result[:4] + [_result[4:]]
_result = _BoostedTreesCalculateBestGainsPerFeatureOutput._make(_result)
return _result
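The repeated slicing above is how the generated wrapper regroups the op's flat output: the kernel returns 5 * `num_features` tensors in one list, and each consecutive run of `num_features` entries becomes one field of the output namedtuple. A plain-Python sketch of that regrouping (no TensorFlow required):

```python
import collections

Output = collections.namedtuple(
    "Output", ["node_ids_list", "gains_list", "thresholds_list",
               "left_node_contribs_list", "right_node_contribs_list"])

def regroup(flat, num_features):
    # Mirrors the slicing in the wrapper: peel off num_features entries
    # at a time, wrapping each run into its own list.
    result = list(flat)
    result = [result[:num_features]] + result[num_features:]
    result = result[:1] + [result[1:1 + num_features]] + result[1 + num_features:]
    result = result[:2] + [result[2:2 + num_features]] + result[2 + num_features:]
    result = result[:3] + [result[3:3 + num_features]] + result[3 + num_features:]
    result = result[:4] + [result[4:]]
    return Output._make(result)

out = regroup(range(10), 2)  # two features, so five lists of length two
```

With ten placeholder values and `num_features=2`, `out.gains_list` is the second pair and `out.right_node_contribs_list` the last pair, exactly as the wrapper's slices arrange the real tensors.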
def boosted_trees_create_ensemble(tree_ensemble_handle, stamp_token, tree_ensemble_serialized, name=None):
r"""Creates a tree ensemble model and returns a handle to it.
Args:
tree_ensemble_handle: A `Tensor` of type `resource`.
Handle to the tree ensemble resource to be created.
stamp_token: A `Tensor` of type `int64`.
Token to use as the initial value of the resource stamp.
tree_ensemble_serialized: A `Tensor` of type `string`.
Serialized proto of the tree ensemble.
name: A name for the operation (optional).
Returns:
The created Operation.
"""
_ctx = _context._context
if _ctx is None or not _ctx._eager_context.is_eager:
_, _, _op = _op_def_lib._apply_op_helper(
"BoostedTreesCreateEnsemble",
tree_ensemble_handle=tree_ensemble_handle, stamp_token=stamp_token,
tree_ensemble_serialized=tree_ensemble_serialized, name=name)
return _op
_result = None
return _result
else:
try:
_result = _pywrap_tensorflow.TFE_Py_FastPathExecute(
_ctx._context_handle, _ctx._eager_context.device_name,
"BoostedTreesCreateEnsemble", name, _ctx._post_execution_callbacks,
tree_ensemble_handle, stamp_token, tree_ensemble_serialized)
return _result
except _core._FallbackException:
return boosted_trees_create_ensemble_eager_fallback(
tree_ensemble_handle, stamp_token, tree_ensemble_serialized,
name=name, ctx=_ctx)
except _core._NotOkStatusException as e:
if name is not None:
message = e.message + " name: " + name
else:
message = e.message
_six.raise_from(_core._status_to_exception(e.code, message), None)
def boosted_trees_create_ensemble_eager_fallback(tree_ensemble_handle, stamp_token, tree_ensemble_serialized, name=None, ctx=None):
r"""This is the slowpath function for Eager mode.
This is for function boosted_trees_create_ensemble
"""
_ctx = ctx if ctx else _context.context()
tree_ensemble_handle = _ops.convert_to_tensor(tree_ensemble_handle, _dtypes.resource)
stamp_token = _ops.convert_to_tensor(stamp_token, _dtypes.int64)
tree_ensemble_serialized = _ops.convert_to_tensor(tree_ensemble_serialized, _dtypes.string)
_inputs_flat = [tree_ensemble_handle, stamp_token, tree_ensemble_serialized]
_attrs = None
_result = _execute.execute(b"BoostedTreesCreateEnsemble", 0,
inputs=_inputs_flat, attrs=_attrs, ctx=_ctx,
name=name)
_result = None
return _result
def boosted_trees_deserialize_ensemble(tree_ensemble_handle, stamp_token, tree_ensemble_serialized, name=None):
r"""Deserializes a serialized tree ensemble config and replaces current tree
ensemble.
Args:
tree_ensemble_handle: A `Tensor` of type `resource`.
Handle to the tree ensemble.
stamp_token: A `Tensor` of type `int64`.
Token to use as the new value of the resource stamp.
tree_ensemble_serialized: A `Tensor` of type `string`.
Serialized proto of the ensemble.
name: A name for the operation (optional).
Returns:
The created Operation.
"""
_ctx = _context._context
if _ctx is None or not _ctx._eager_context.is_eager:
_, _, _op = _op_def_lib._apply_op_helper(
"BoostedTreesDeserializeEnsemble",
tree_ensemble_handle=tree_ensemble_handle, stamp_token=stamp_token,
tree_ensemble_serialized=tree_ensemble_serialized, name=name)
return _op
_result = None
return _result
else:
try:
_result = _pywrap_tensorflow.TFE_Py_FastPathExecute(
_ctx._context_handle, _ctx._eager_context.device_name,
"BoostedTreesDeserializeEnsemble", name,
_ctx._post_execution_callbacks, tree_ensemble_handle, stamp_token,
tree_ensemble_serialized)
return _result
except _core._FallbackException:
return boosted_trees_deserialize_ensemble_eager_fallback(
tree_ensemble_handle, stamp_token, tree_ensemble_serialized,
name=name, ctx=_ctx)
except _core._NotOkStatusException as e:
if name is not None:
message = e.message + " name: " + name
else:
message = e.message
_six.raise_from(_core._status_to_exception(e.code, message), None)
def boosted_trees_deserialize_ensemble_eager_fallback(tree_ensemble_handle, stamp_token, tree_ensemble_serialized, name=None, ctx=None):
r"""This is the slowpath function for Eager mode.
This is for function boosted_trees_deserialize_ensemble
"""
_ctx = ctx if ctx else _context.context()
tree_ensemble_handle = _ops.convert_to_tensor(tree_ensemble_handle, _dtypes.resource)
stamp_token = _ops.convert_to_tensor(stamp_token, _dtypes.int64)
tree_ensemble_serialized = _ops.convert_to_tensor(tree_ensemble_serialized, _dtypes.string)
_inputs_flat = [tree_ensemble_handle, stamp_token, tree_ensemble_serialized]
_attrs = None
_result = _execute.execute(b"BoostedTreesDeserializeEnsemble", 0,
inputs=_inputs_flat, attrs=_attrs, ctx=_ctx,
name=name)
_result = None
return _result
def boosted_trees_ensemble_resource_handle_op(container="", shared_name="", name=None):
r"""Creates a handle to a BoostedTreesEnsembleResource
Args:
container: An optional `string`. Defaults to `""`.
shared_name: An optional `string`. Defaults to `""`.
name: A name for the operation (optional).
Returns:
A `Tensor` of type `resource`.
"""
_ctx = _context._context
if _ctx is None or not _ctx._eager_context.is_eager:
if container is None:
container = ""
container = _execute.make_str(container, "container")
if shared_name is None:
shared_name = ""
shared_name = _execute.make_str(shared_name, "shared_name")
_, _, _op = _op_def_lib._apply_op_helper(
"BoostedTreesEnsembleResourceHandleOp", container=container,
shared_name=shared_name, name=name)
_result = _op.outputs[:]
_inputs_flat = _op.inputs
_attrs = ("container", _op.get_attr("container"), "shared_name",
_op.get_attr("shared_name"))
_execute.record_gradient(
"BoostedTreesEnsembleResourceHandleOp", _inputs_flat, _attrs, _result, name)
_result, = _result
return _result
else:
try:
_result = _pywrap_tensorflow.TFE_Py_FastPathExecute(
_ctx._context_handle, _ctx._eager_context.device_name,
"BoostedTreesEnsembleResourceHandleOp", name,
_ctx._post_execution_callbacks, "container", container, "shared_name",
shared_name)
return _result
except _core._FallbackException:
return boosted_trees_ensemble_resource_handle_op_eager_fallback(
container=container, shared_name=shared_name, name=name, ctx=_ctx)
except _core._NotOkStatusException as e:
if name is not None:
message = e.message + " name: " + name
else:
message = e.message
_six.raise_from(_core._status_to_exception(e.code, message), None)
def boosted_trees_ensemble_resource_handle_op_eager_fallback(container="", shared_name="", name=None, ctx=None):
r"""This is the slowpath function for Eager mode.
This is for function boosted_trees_ensemble_resource_handle_op
"""
_ctx = ctx if ctx else _context.context()
if container is None:
container = ""
container = _execute.make_str(container, "container")
if shared_name is None:
shared_name = ""
shared_name = _execute.make_str(shared_name, "shared_name")
_inputs_flat = []
_attrs = ("container", container, "shared_name", shared_name)
_result = _execute.execute(b"BoostedTreesEnsembleResourceHandleOp", 1,
inputs=_inputs_flat, attrs=_attrs, ctx=_ctx,
name=name)
_execute.record_gradient(
"BoostedTreesEnsembleResourceHandleOp", _inputs_flat, _attrs, _result, name)
_result, = _result
return _result
_boosted_trees_get_ensemble_states_outputs = ["stamp_token", "num_trees",
"num_finalized_trees",
"num_attempted_layers",
"last_layer_nodes_range"]
_BoostedTreesGetEnsembleStatesOutput = _collections.namedtuple(
"BoostedTreesGetEnsembleStates",
_boosted_trees_get_ensemble_states_outputs)
def boosted_trees_get_ensemble_states(tree_ensemble_handle, name=None):
r"""Retrieves the tree ensemble resource stamp token, number of trees and growing statistics.
Args:
tree_ensemble_handle: A `Tensor` of type `resource`.
Handle to the tree ensemble.
name: A name for the operation (optional).
Returns:
A tuple of `Tensor` objects (stamp_token, num_trees, num_finalized_trees, num_attempted_layers, last_layer_nodes_range).
stamp_token: A `Tensor` of type `int64`.
num_trees: A `Tensor` of type `int32`.
num_finalized_trees: A `Tensor` of type `int32`.
num_attempted_layers: A `Tensor` of type `int32`.
last_layer_nodes_range: A `Tensor` of type `int32`.
"""
_ctx = _context._context
if _ctx is None or not _ctx._eager_context.is_eager:
_, _, _op = _op_def_lib._apply_op_helper(
"BoostedTreesGetEnsembleStates",
tree_ensemble_handle=tree_ensemble_handle, name=name)
_result = _op.outputs[:]
_inputs_flat = _op.inputs
_attrs = None
_execute.record_gradient(
"BoostedTreesGetEnsembleStates", _inputs_flat, _attrs, _result, name)
_result = _BoostedTreesGetEnsembleStatesOutput._make(_result)
return _result
else:
try:
_result = _pywrap_tensorflow.TFE_Py_FastPathExecute(
_ctx._context_handle, _ctx._eager_context.device_name,
"BoostedTreesGetEnsembleStates", name, _ctx._post_execution_callbacks,
tree_ensemble_handle)
_result = _BoostedTreesGetEnsembleStatesOutput._make(_result)
return _result
except _core._FallbackException:
return boosted_trees_get_ensemble_states_eager_fallback(
tree_ensemble_handle, name=name, ctx=_ctx)
except _core._NotOkStatusException as e:
if name is not None:
message = e.message + " name: " + name
else:
message = e.message
_six.raise_from(_core._status_to_exception(e.code, message), None)
def boosted_trees_get_ensemble_states_eager_fallback(tree_ensemble_handle, name=None, ctx=None):
r"""This is the slowpath function for Eager mode.
This is for function boosted_trees_get_ensemble_states
"""
_ctx = ctx if ctx else _context.context()
tree_ensemble_handle = _ops.convert_to_tensor(tree_ensemble_handle, _dtypes.resource)
_inputs_flat = [tree_ensemble_handle]
_attrs = None
_result = _execute.execute(b"BoostedTreesGetEnsembleStates", 5,
inputs=_inputs_flat, attrs=_attrs, ctx=_ctx,
name=name)
_execute.record_gradient(
"BoostedTreesGetEnsembleStates", _inputs_flat, _attrs, _result, name)
_result = _BoostedTreesGetEnsembleStatesOutput._make(_result)
return _result
def boosted_trees_make_stats_summary(node_ids, gradients, hessians, bucketized_features_list, max_splits, num_buckets, name=None):
r"""Makes the summary of accumulated stats for the batch.
The summary stats contains gradients and hessians accumulated into the corresponding node and bucket for each example.
Args:
node_ids: A `Tensor` of type `int32`.
int32 Rank 1 Tensor containing node ids, which each example falls into for the requested layer.
gradients: A `Tensor` of type `float32`.
float32; Rank 2 Tensor (shape=[#examples, 1]) for gradients.
hessians: A `Tensor` of type `float32`.
float32; Rank 2 Tensor (shape=[#examples, 1]) for hessians.
bucketized_features_list: A list of at least 1 `Tensor` objects with type `int32`.
int32 list of Rank 1 Tensors, each containing the bucketized feature (for each feature column).
max_splits: An `int` that is `>= 1`.
int; the maximum number of splits possible in the whole tree.
num_buckets: An `int` that is `>= 1`.
      int; equal to the maximum possible value of the bucketized feature.
name: A name for the operation (optional).
Returns:
A `Tensor` of type `float32`.
"""
_ctx = _context._context
if _ctx is None or not _ctx._eager_context.is_eager:
if not isinstance(bucketized_features_list, (list, tuple)):
raise TypeError(
"Expected list for 'bucketized_features_list' argument to "
"'boosted_trees_make_stats_summary' Op, not %r." % bucketized_features_list)
_attr_num_features = len(bucketized_features_list)
max_splits = _execute.make_int(max_splits, "max_splits")
num_buckets = _execute.make_int(num_buckets, "num_buckets")
_, _, _op = _op_def_lib._apply_op_helper(
"BoostedTreesMakeStatsSummary", node_ids=node_ids,
gradients=gradients, hessians=hessians,
bucketized_features_list=bucketized_features_list,
max_splits=max_splits, num_buckets=num_buckets, name=name)
_result = _op.outputs[:]
_inputs_flat = _op.inputs
_attrs = ("max_splits", _op.get_attr("max_splits"), "num_buckets",
_op.get_attr("num_buckets"), "num_features",
_op.get_attr("num_features"))
_execute.record_gradient(
"BoostedTreesMakeStatsSummary", _inputs_flat, _attrs, _result, name)
_result, = _result
return _result
else:
try:
_result = _pywrap_tensorflow.TFE_Py_FastPathExecute(
_ctx._context_handle, _ctx._eager_context.device_name,
"BoostedTreesMakeStatsSummary", name, _ctx._post_execution_callbacks,
node_ids, gradients, hessians, bucketized_features_list, "max_splits",
max_splits, "num_buckets", num_buckets)
return _result
except _core._FallbackException:
return boosted_trees_make_stats_summary_eager_fallback(
node_ids, gradients, hessians, bucketized_features_list,
max_splits=max_splits, num_buckets=num_buckets, name=name, ctx=_ctx)
except _core._NotOkStatusException as e:
if name is not None:
message = e.message + " name: " + name
else:
message = e.message
_six.raise_from(_core._status_to_exception(e.code, message), None)
def boosted_trees_make_stats_summary_eager_fallback(node_ids, gradients, hessians, bucketized_features_list, max_splits, num_buckets, name=None, ctx=None):
r"""This is the slowpath function for Eager mode.
This is for function boosted_trees_make_stats_summary
"""
_ctx = ctx if ctx else _context.context()
if not isinstance(bucketized_features_list, (list, tuple)):
raise TypeError(
"Expected list for 'bucketized_features_list' argument to "
"'boosted_trees_make_stats_summary' Op, not %r." % bucketized_features_list)
_attr_num_features = len(bucketized_features_list)
max_splits = _execute.make_int(max_splits, "max_splits")
num_buckets = _execute.make_int(num_buckets, "num_buckets")
node_ids = _ops.convert_to_tensor(node_ids, _dtypes.int32)
gradients = _ops.convert_to_tensor(gradients, _dtypes.float32)
hessians = _ops.convert_to_tensor(hessians, _dtypes.float32)
bucketized_features_list = _ops.convert_n_to_tensor(bucketized_features_list, _dtypes.int32)
_inputs_flat = [node_ids, gradients, hessians] + list(bucketized_features_list)
_attrs = ("max_splits", max_splits, "num_buckets", num_buckets,
"num_features", _attr_num_features)
_result = _execute.execute(b"BoostedTreesMakeStatsSummary", 1,
inputs=_inputs_flat, attrs=_attrs, ctx=_ctx,
name=name)
_execute.record_gradient(
"BoostedTreesMakeStatsSummary", _inputs_flat, _attrs, _result, name)
_result, = _result
return _result
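A plain-Python sketch (the real kernel is C++) of the accumulation this op performs for a single feature: gradients and hessians are summed into a `[max_splits, num_buckets, 2]` table, indexed by the node each example fell into and by the bucket of its feature value.

```python
def make_stats_summary(node_ids, gradients, hessians, buckets,
                       max_splits, num_buckets):
    # summary[node][bucket] holds [sum of gradients, sum of hessians].
    summary = [[[0.0, 0.0] for _ in range(num_buckets)]
               for _ in range(max_splits)]
    for node, grad, hess, bucket in zip(node_ids, gradients, hessians, buckets):
        summary[node][bucket][0] += grad
        summary[node][bucket][1] += hess
    return summary

# Three examples: two land in node 0 / bucket 2, one in node 1 / bucket 0.
stats = make_stats_summary(node_ids=[0, 0, 1], gradients=[0.5, 0.25, 1.0],
                           hessians=[1.0, 1.0, 2.0], buckets=[2, 2, 0],
                           max_splits=2, num_buckets=3)
```

The first dimension is sized `max_splits` even though only the node ids present in the batch are touched, matching the note in the `stats_summary_list` description above.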
def boosted_trees_predict(tree_ensemble_handle, bucketized_features, logits_dimension, name=None):
r"""Runs multiple additive regression ensemble predictors on input instances and
computes the logits. It is designed to be used during prediction.
It traverses all the trees and calculates the final score for each instance.
Args:
tree_ensemble_handle: A `Tensor` of type `resource`.
bucketized_features: A list of at least 1 `Tensor` objects with type `int32`.
A list of rank 1 Tensors containing bucket id for each
feature.
logits_dimension: An `int`.
scalar, dimension of the logits, to be used for partial logits
shape.
name: A name for the operation (optional).
Returns:
A `Tensor` of type `float32`.
"""
_ctx = _context._context
if _ctx is None or not _ctx._eager_context.is_eager:
if not isinstance(bucketized_features, (list, tuple)):
raise TypeError(
"Expected list for 'bucketized_features' argument to "
"'boosted_trees_predict' Op, not %r." % bucketized_features)
_attr_num_bucketized_features = len(bucketized_features)
logits_dimension = _execute.make_int(logits_dimension, "logits_dimension")
_, _, _op = _op_def_lib._apply_op_helper(
"BoostedTreesPredict", tree_ensemble_handle=tree_ensemble_handle,
bucketized_features=bucketized_features,
logits_dimension=logits_dimension, name=name)
_result = _op.outputs[:]
_inputs_flat = _op.inputs
_attrs = ("num_bucketized_features",
_op.get_attr("num_bucketized_features"), "logits_dimension",
_op.get_attr("logits_dimension"))
_execute.record_gradient(
"BoostedTreesPredict", _inputs_flat, _attrs, _result, name)
_result, = _result
return _result
else:
try:
_result = _pywrap_tensorflow.TFE_Py_FastPathExecute(
_ctx._context_handle, _ctx._eager_context.device_name,
"BoostedTreesPredict", name, _ctx._post_execution_callbacks,
tree_ensemble_handle, bucketized_features, "logits_dimension",
logits_dimension)
return _result
except _core._FallbackException:
return boosted_trees_predict_eager_fallback(
tree_ensemble_handle, bucketized_features,
logits_dimension=logits_dimension, name=name, ctx=_ctx)
except _core._NotOkStatusException as e:
if name is not None:
message = e.message + " name: " + name
else:
message = e.message
_six.raise_from(_core._status_to_exception(e.code, message), None)
def boosted_trees_predict_eager_fallback(tree_ensemble_handle, bucketized_features, logits_dimension, name=None, ctx=None):
r"""This is the slowpath function for Eager mode.
This is for function boosted_trees_predict
"""
_ctx = ctx if ctx else _context.context()
if not isinstance(bucketized_features, (list, tuple)):
raise TypeError(
"Expected list for 'bucketized_features' argument to "
"'boosted_trees_predict' Op, not %r." % bucketized_features)
_attr_num_bucketized_features = len(bucketized_features)
logits_dimension = _execute.make_int(logits_dimension, "logits_dimension")
tree_ensemble_handle = _ops.convert_to_tensor(tree_ensemble_handle, _dtypes.resource)
bucketized_features = _ops.convert_n_to_tensor(bucketized_features, _dtypes.int32)
_inputs_flat = [tree_ensemble_handle] + list(bucketized_features)
_attrs = ("num_bucketized_features", _attr_num_bucketized_features,
"logits_dimension", logits_dimension)
_result = _execute.execute(b"BoostedTreesPredict", 1, inputs=_inputs_flat,
attrs=_attrs, ctx=_ctx, name=name)
_execute.record_gradient(
"BoostedTreesPredict", _inputs_flat, _attrs, _result, name)
_result, = _result
return _result
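A toy sketch of the per-tree traversal `boosted_trees_predict` performs (the node encoding here is hypothetical, not the TF proto): at each internal node the example's bucket id for the split feature is compared with the node's threshold, and the leaf values reached are summed across trees to form the logit.

```python
def predict_one(trees, bucketized_features):
    # Each tree maps node id -> either a split record or a leaf value.
    logit = 0.0
    for tree in trees:
        node = 0  # traversal always starts at the root
        while "leaf" not in tree[node]:
            split = tree[node]
            if bucketized_features[split["feature"]] <= split["threshold"]:
                node = split["left"]
            else:
                node = split["right"]
        logit += tree[node]["leaf"]
    return logit

# One tree: root splits on feature 0 at bucket threshold 1.
tree = {0: {"feature": 0, "threshold": 1, "left": 1, "right": 2},
        1: {"leaf": -0.5}, 2: {"leaf": 0.25}}
```

An example with bucket id 0 falls to the left leaf, bucket id 3 to the right one; with several trees the per-tree leaf values simply add up, which is the additive-ensemble behaviour the docstring describes.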
_boosted_trees_serialize_ensemble_outputs = ["stamp_token",
"tree_ensemble_serialized"]
_BoostedTreesSerializeEnsembleOutput = _collections.namedtuple(
"BoostedTreesSerializeEnsemble",
_boosted_trees_serialize_ensemble_outputs)
def boosted_trees_serialize_ensemble(tree_ensemble_handle, name=None):
r"""Serializes the tree ensemble to a proto.
Args:
tree_ensemble_handle: A `Tensor` of type `resource`.
Handle to the tree ensemble.
name: A name for the operation (optional).
Returns:
A tuple of `Tensor` objects (stamp_token, tree_ensemble_serialized).
stamp_token: A `Tensor` of type `int64`.
tree_ensemble_serialized: A `Tensor` of type `string`.
"""
_ctx = _context._context
if _ctx is None or not _ctx._eager_context.is_eager:
_, _, _op = _op_def_lib._apply_op_helper(
"BoostedTreesSerializeEnsemble",
tree_ensemble_handle=tree_ensemble_handle, name=name)
_result = _op.outputs[:]
_inputs_flat = _op.inputs
_attrs = None
_execute.record_gradient(
"BoostedTreesSerializeEnsemble", _inputs_flat, _attrs, _result, name)
_result = _BoostedTreesSerializeEnsembleOutput._make(_result)
return _result
else:
try:
_result = _pywrap_tensorflow.TFE_Py_FastPathExecute(
_ctx._context_handle, _ctx._eager_context.device_name,
"BoostedTreesSerializeEnsemble", name, _ctx._post_execution_callbacks,
tree_ensemble_handle)
_result = _BoostedTreesSerializeEnsembleOutput._make(_result)
return _result
except _core._FallbackException:
return boosted_trees_serialize_ensemble_eager_fallback(
tree_ensemble_handle, name=name, ctx=_ctx)
except _core._NotOkStatusException as e:
if name is not None:
message = e.message + " name: " + name
else:
message = e.message
_six.raise_from(_core._status_to_exception(e.code, message), None)
def boosted_trees_serialize_ensemble_eager_fallback(tree_ensemble_handle, name=None, ctx=None):
r"""This is the slowpath function for Eager mode.
This is for function boosted_trees_serialize_ensemble
"""
_ctx = ctx if ctx else _context.context()
tree_ensemble_handle = _ops.convert_to_tensor(tree_ensemble_handle, _dtypes.resource)
_inputs_flat = [tree_ensemble_handle]
_attrs = None
_result = _execute.execute(b"BoostedTreesSerializeEnsemble", 2,
inputs=_inputs_flat, attrs=_attrs, ctx=_ctx,
name=name)
_execute.record_gradient(
"BoostedTreesSerializeEnsemble", _inputs_flat, _attrs, _result, name)
_result = _BoostedTreesSerializeEnsembleOutput._make(_result)
return _result
_boosted_trees_training_predict_outputs = ["partial_logits", "tree_ids",
"node_ids"]
_BoostedTreesTrainingPredictOutput = _collections.namedtuple(
"BoostedTreesTrainingPredict", _boosted_trees_training_predict_outputs)
def boosted_trees_training_predict(tree_ensemble_handle, cached_tree_ids, cached_node_ids, bucketized_features, logits_dimension, name=None):
r"""Runs multiple additive regression ensemble predictors on input instances and
computes the update to cached logits. It is designed to be used during training.
It traverses the trees starting from cached tree id and cached node id and
calculates the updates to be pushed to the cache.
Args:
tree_ensemble_handle: A `Tensor` of type `resource`.
cached_tree_ids: A `Tensor` of type `int32`.
Rank 1 Tensor containing cached tree ids which is the starting
tree of prediction.
cached_node_ids: A `Tensor` of type `int32`.
Rank 1 Tensor containing cached node id which is the starting
node of prediction.
bucketized_features: A list of at least 1 `Tensor` objects with type `int32`.
A list of rank 1 Tensors containing bucket id for each
feature.
logits_dimension: An `int`.
scalar, dimension of the logits, to be used for partial logits
shape.
name: A name for the operation (optional).
Returns:
A tuple of `Tensor` objects (partial_logits, tree_ids, node_ids).
partial_logits: A `Tensor` of type `float32`.
tree_ids: A `Tensor` of type `int32`.
node_ids: A `Tensor` of type `int32`.
"""
_ctx = _context._context
if _ctx is None or not _ctx._eager_context.is_eager:
if not isinstance(bucketized_features, (list, tuple)):
raise TypeError(
"Expected list for 'bucketized_features' argument to "
"'boosted_trees_training_predict' Op, not %r." % bucketized_features)
_attr_num_bucketized_features = len(bucketized_features)
logits_dimension = _execute.make_int(logits_dimension, "logits_dimension")
_, _, _op = _op_def_lib._apply_op_helper(
"BoostedTreesTrainingPredict",
tree_ensemble_handle=tree_ensemble_handle,
cached_tree_ids=cached_tree_ids, cached_node_ids=cached_node_ids,
bucketized_features=bucketized_features,
logits_dimension=logits_dimension, name=name)
_result = _op.outputs[:]
_inputs_flat = _op.inputs
_attrs = ("num_bucketized_features",
_op.get_attr("num_bucketized_features"), "logits_dimension",
_op.get_attr("logits_dimension"))
_execute.record_gradient(
"BoostedTreesTrainingPredict", _inputs_flat, _attrs, _result, name)
_result = _BoostedTreesTrainingPredictOutput._make(_result)
return _result
else:
try:
_result = _pywrap_tensorflow.TFE_Py_FastPathExecute(
_ctx._context_handle, _ctx._eager_context.device_name,
"BoostedTreesTrainingPredict", name, _ctx._post_execution_callbacks,
tree_ensemble_handle, cached_tree_ids, cached_node_ids,
bucketized_features, "logits_dimension", logits_dimension)
_result = _BoostedTreesTrainingPredictOutput._make(_result)
return _result
except _core._FallbackException:
return boosted_trees_training_predict_eager_fallback(
tree_ensemble_handle, cached_tree_ids, cached_node_ids,
bucketized_features, logits_dimension=logits_dimension, name=name,
ctx=_ctx)
except _core._NotOkStatusException as e:
if name is not None:
message = e.message + " name: " + name
else:
message = e.message
_six.raise_from(_core._status_to_exception(e.code, message), None)
def boosted_trees_training_predict_eager_fallback(tree_ensemble_handle, cached_tree_ids, cached_node_ids, bucketized_features, logits_dimension, name=None, ctx=None):
r"""This is the slowpath function for Eager mode.
This is for function boosted_trees_training_predict
"""
_ctx = ctx if ctx else _context.context()
if not isinstance(bucketized_features, (list, tuple)):
raise TypeError(
"Expected list for 'bucketized_features' argument to "
"'boosted_trees_training_predict' Op, not %r." % bucketized_features)
_attr_num_bucketized_features = len(bucketized_features)
logits_dimension = _execute.make_int(logits_dimension, "logits_dimension")
tree_ensemble_handle = _ops.convert_to_tensor(tree_ensemble_handle, _dtypes.resource)
cached_tree_ids = _ops.convert_to_tensor(cached_tree_ids, _dtypes.int32)
cached_node_ids = _ops.convert_to_tensor(cached_node_ids, _dtypes.int32)
bucketized_features = _ops.convert_n_to_tensor(bucketized_features, _dtypes.int32)
_inputs_flat = [tree_ensemble_handle, cached_tree_ids, cached_node_ids] + list(bucketized_features)
_attrs = ("num_bucketized_features", _attr_num_bucketized_features,
"logits_dimension", logits_dimension)
_result = _execute.execute(b"BoostedTreesTrainingPredict", 3,
inputs=_inputs_flat, attrs=_attrs, ctx=_ctx,
name=name)
_execute.record_gradient(
"BoostedTreesTrainingPredict", _inputs_flat, _attrs, _result, name)
_result = _BoostedTreesTrainingPredictOutput._make(_result)
return _result


def boosted_trees_update_ensemble(tree_ensemble_handle, feature_ids, node_ids, gains, thresholds, left_node_contribs, right_node_contribs, max_depth, learning_rate, pruning_mode, name=None):
r"""Updates the tree ensemble by either adding a layer to the last tree being grown
or by starting a new tree.
Args:
tree_ensemble_handle: A `Tensor` of type `resource`.
Handle to the ensemble variable.
feature_ids: A `Tensor` of type `int32`.
Rank 1 tensor with ids for each feature. This is the real id of
the feature that will be used in the split.
node_ids: A list of `Tensor` objects with type `int32`.
List of rank 1 tensors representing the nodes for which this feature
has a split.
gains: A list with the same length as `node_ids` of `Tensor` objects with type `float32`.
List of rank 1 tensors representing the gains for each of the feature's
split.
thresholds: A list with the same length as `node_ids` of `Tensor` objects with type `int32`.
      List of rank 1 tensors representing the thresholds for each of the
feature's split.
left_node_contribs: A list with the same length as `node_ids` of `Tensor` objects with type `float32`.
List of rank 2 tensors with left leaf contribs for each of
the feature's splits. Will be added to the previous node values to constitute
the values of the left nodes.
right_node_contribs: A list with the same length as `node_ids` of `Tensor` objects with type `float32`.
List of rank 2 tensors with right leaf contribs for each
of the feature's splits. Will be added to the previous node values to constitute
the values of the right nodes.
max_depth: A `Tensor` of type `int32`. Max depth of the tree to build.
learning_rate: A `Tensor` of type `float32`.
shrinkage const for each new tree.
pruning_mode: An `int` that is `>= 0`.
0-No pruning, 1-Pre-pruning, 2-Post-pruning.
name: A name for the operation (optional).
Returns:
The created Operation.
"""
_ctx = _context._context
if _ctx is None or not _ctx._eager_context.is_eager:
if not isinstance(node_ids, (list, tuple)):
raise TypeError(
"Expected list for 'node_ids' argument to "
"'boosted_trees_update_ensemble' Op, not %r." % node_ids)
_attr_num_features = len(node_ids)
if not isinstance(gains, (list, tuple)):
raise TypeError(
"Expected list for 'gains' argument to "
"'boosted_trees_update_ensemble' Op, not %r." % gains)
if len(gains) != _attr_num_features:
raise ValueError(
"List argument 'gains' to 'boosted_trees_update_ensemble' Op with length %d "
"must match length %d of argument 'node_ids'." %
(len(gains), _attr_num_features))
if not isinstance(thresholds, (list, tuple)):
raise TypeError(
"Expected list for 'thresholds' argument to "
"'boosted_trees_update_ensemble' Op, not %r." % thresholds)
if len(thresholds) != _attr_num_features:
raise ValueError(
"List argument 'thresholds' to 'boosted_trees_update_ensemble' Op with length %d "
"must match length %d of argument 'node_ids'." %
(len(thresholds), _attr_num_features))
if not isinstance(left_node_contribs, (list, tuple)):
raise TypeError(
"Expected list for 'left_node_contribs' argument to "
"'boosted_trees_update_ensemble' Op, not %r." % left_node_contribs)
if len(left_node_contribs) != _attr_num_features:
raise ValueError(
"List argument 'left_node_contribs' to 'boosted_trees_update_ensemble' Op with length %d "
"must match length %d of argument 'node_ids'." %
(len(left_node_contribs), _attr_num_features))
if not isinstance(right_node_contribs, (list, tuple)):
raise TypeError(
"Expected list for 'right_node_contribs' argument to "
"'boosted_trees_update_ensemble' Op, not %r." % right_node_contribs)
if len(right_node_contribs) != _attr_num_features:
raise ValueError(
"List argument 'right_node_contribs' to 'boosted_trees_update_ensemble' Op with length %d "
"must match length %d of argument 'node_ids'." %
(len(right_node_contribs), _attr_num_features))
pruning_mode = _execute.make_int(pruning_mode, "pruning_mode")
_, _, _op = _op_def_lib._apply_op_helper(
"BoostedTreesUpdateEnsemble",
tree_ensemble_handle=tree_ensemble_handle, feature_ids=feature_ids,
node_ids=node_ids, gains=gains, thresholds=thresholds,
left_node_contribs=left_node_contribs,
right_node_contribs=right_node_contribs, max_depth=max_depth,
learning_rate=learning_rate, pruning_mode=pruning_mode, name=name)
return _op
else:
try:
_result = _pywrap_tensorflow.TFE_Py_FastPathExecute(
_ctx._context_handle, _ctx._eager_context.device_name,
"BoostedTreesUpdateEnsemble", name, _ctx._post_execution_callbacks,
tree_ensemble_handle, feature_ids, node_ids, gains, thresholds,
left_node_contribs, right_node_contribs, max_depth, learning_rate,
"pruning_mode", pruning_mode)
return _result
except _core._FallbackException:
return boosted_trees_update_ensemble_eager_fallback(
tree_ensemble_handle, feature_ids, node_ids, gains, thresholds,
left_node_contribs, right_node_contribs, max_depth, learning_rate,
pruning_mode=pruning_mode, name=name, ctx=_ctx)
except _core._NotOkStatusException as e:
if name is not None:
message = e.message + " name: " + name
else:
message = e.message
_six.raise_from(_core._status_to_exception(e.code, message), None)


def boosted_trees_update_ensemble_eager_fallback(tree_ensemble_handle, feature_ids, node_ids, gains, thresholds, left_node_contribs, right_node_contribs, max_depth, learning_rate, pruning_mode, name=None, ctx=None):
r"""This is the slowpath function for Eager mode.
This is for function boosted_trees_update_ensemble
"""
_ctx = ctx if ctx else _context.context()
if not isinstance(node_ids, (list, tuple)):
raise TypeError(
"Expected list for 'node_ids' argument to "
"'boosted_trees_update_ensemble' Op, not %r." % node_ids)
_attr_num_features = len(node_ids)
if not isinstance(gains, (list, tuple)):
raise TypeError(
"Expected list for 'gains' argument to "
"'boosted_trees_update_ensemble' Op, not %r." % gains)
if len(gains) != _attr_num_features:
raise ValueError(
"List argument 'gains' to 'boosted_trees_update_ensemble' Op with length %d "
"must match length %d of argument 'node_ids'." %
(len(gains), _attr_num_features))
if not isinstance(thresholds, (list, tuple)):
raise TypeError(
"Expected list for 'thresholds' argument to "
"'boosted_trees_update_ensemble' Op, not %r." % thresholds)
if len(thresholds) != _attr_num_features:
raise ValueError(
"List argument 'thresholds' to 'boosted_trees_update_ensemble' Op with length %d "
"must match length %d of argument 'node_ids'." %
(len(thresholds), _attr_num_features))
if not isinstance(left_node_contribs, (list, tuple)):
raise TypeError(
"Expected list for 'left_node_contribs' argument to "
"'boosted_trees_update_ensemble' Op, not %r." % left_node_contribs)
if len(left_node_contribs) != _attr_num_features:
raise ValueError(
"List argument 'left_node_contribs' to 'boosted_trees_update_ensemble' Op with length %d "
"must match length %d of argument 'node_ids'." %
(len(left_node_contribs), _attr_num_features))
if not isinstance(right_node_contribs, (list, tuple)):
raise TypeError(
"Expected list for 'right_node_contribs' argument to "
"'boosted_trees_update_ensemble' Op, not %r." % right_node_contribs)
if len(right_node_contribs) != _attr_num_features:
raise ValueError(
"List argument 'right_node_contribs' to 'boosted_trees_update_ensemble' Op with length %d "
"must match length %d of argument 'node_ids'." %
(len(right_node_contribs), _attr_num_features))
pruning_mode = _execute.make_int(pruning_mode, "pruning_mode")
tree_ensemble_handle = _ops.convert_to_tensor(tree_ensemble_handle, _dtypes.resource)
feature_ids = _ops.convert_to_tensor(feature_ids, _dtypes.int32)
node_ids = _ops.convert_n_to_tensor(node_ids, _dtypes.int32)
gains = _ops.convert_n_to_tensor(gains, _dtypes.float32)
thresholds = _ops.convert_n_to_tensor(thresholds, _dtypes.int32)
left_node_contribs = _ops.convert_n_to_tensor(left_node_contribs, _dtypes.float32)
right_node_contribs = _ops.convert_n_to_tensor(right_node_contribs, _dtypes.float32)
max_depth = _ops.convert_to_tensor(max_depth, _dtypes.int32)
learning_rate = _ops.convert_to_tensor(learning_rate, _dtypes.float32)
_inputs_flat = [tree_ensemble_handle, feature_ids] + list(node_ids) + list(gains) + list(thresholds) + list(left_node_contribs) + list(right_node_contribs) + [max_depth, learning_rate]
_attrs = ("pruning_mode", pruning_mode, "num_features", _attr_num_features)
_result = _execute.execute(b"BoostedTreesUpdateEnsemble", 0,
inputs=_inputs_flat, attrs=_attrs, ctx=_ctx,
name=name)
_result = None
return _result


def is_boosted_trees_ensemble_initialized(tree_ensemble_handle, name=None):
r"""Checks whether a tree ensemble has been initialized.
Args:
tree_ensemble_handle: A `Tensor` of type `resource`.
      Handle to the tree ensemble resource.
name: A name for the operation (optional).
Returns:
A `Tensor` of type `bool`.
"""
_ctx = _context._context
if _ctx is None or not _ctx._eager_context.is_eager:
_, _, _op = _op_def_lib._apply_op_helper(
"IsBoostedTreesEnsembleInitialized",
tree_ensemble_handle=tree_ensemble_handle, name=name)
_result = _op.outputs[:]
_inputs_flat = _op.inputs
_attrs = None
_execute.record_gradient(
"IsBoostedTreesEnsembleInitialized", _inputs_flat, _attrs, _result, name)
_result, = _result
return _result
else:
try:
_result = _pywrap_tensorflow.TFE_Py_FastPathExecute(
_ctx._context_handle, _ctx._eager_context.device_name,
"IsBoostedTreesEnsembleInitialized", name,
_ctx._post_execution_callbacks, tree_ensemble_handle)
return _result
except _core._FallbackException:
return is_boosted_trees_ensemble_initialized_eager_fallback(
tree_ensemble_handle, name=name, ctx=_ctx)
except _core._NotOkStatusException as e:
if name is not None:
message = e.message + " name: " + name
else:
message = e.message
_six.raise_from(_core._status_to_exception(e.code, message), None)


def is_boosted_trees_ensemble_initialized_eager_fallback(tree_ensemble_handle, name=None, ctx=None):
r"""This is the slowpath function for Eager mode.
This is for function is_boosted_trees_ensemble_initialized
"""
_ctx = ctx if ctx else _context.context()
tree_ensemble_handle = _ops.convert_to_tensor(tree_ensemble_handle, _dtypes.resource)
_inputs_flat = [tree_ensemble_handle]
_attrs = None
_result = _execute.execute(b"IsBoostedTreesEnsembleInitialized", 1,
inputs=_inputs_flat, attrs=_attrs, ctx=_ctx,
name=name)
_execute.record_gradient(
"IsBoostedTreesEnsembleInitialized", _inputs_flat, _attrs, _result, name)
_result, = _result
return _result
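# The wrappers above receive op outputs as a flat list and then reshape them:
# multi-output ops (e.g. BoostedTreesTrainingPredict) pack the list into a
# namedtuple via `._make`, while single-output ops unpack with
# `_result, = _result`. A small standalone illustration; `_DemoOutput` is a
# hypothetical stand-in for types like _BoostedTreesTrainingPredictOutput.

```python
import collections

# Hypothetical output type mirroring the generated namedtuples.
_DemoOutput = collections.namedtuple(
    "_DemoOutput", ["partial_logits", "tree_ids", "node_ids"])

flat_results = [0.5, 2, 7]                # flat list as returned by execute()
packed = _DemoOutput._make(flat_results)  # multi-output: wrap in a namedtuple

single_result, = [True]                   # single-output: unpack 1-elem list
```
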


def _InitOpDefLibrary(op_list_proto_bytes):
op_list = _op_def_pb2.OpList()
op_list.ParseFromString(op_list_proto_bytes)
_op_def_registry.register_op_list(op_list)
op_def_lib = _op_def_library.OpDefLibrary()
op_def_lib.add_op_list(op_list)
return op_def_lib
# op {
# name: "BoostedTreesCalculateBestGainsPerFeature"
# input_arg {
# name: "node_id_range"
# type: DT_INT32
# }
# input_arg {
# name: "stats_summary_list"
# type: DT_FLOAT
# number_attr: "num_features"
# }
# input_arg {
# name: "l1"
# type: DT_FLOAT
# }
# input_arg {
# name: "l2"
# type: DT_FLOAT
# }
# input_arg {
# name: "tree_complexity"
# type: DT_FLOAT
# }
# input_arg {
# name: "min_node_weight"
# type: DT_FLOAT
# }
# output_arg {
# name: "node_ids_list"
# type: DT_INT32
# number_attr: "num_features"
# }
# output_arg {
# name: "gains_list"
# type: DT_FLOAT
# number_attr: "num_features"
# }
# output_arg {
# name: "thresholds_list"
# type: DT_INT32
# number_attr: "num_features"
# }
# output_arg {
# name: "left_node_contribs_list"
# type: DT_FLOAT
# number_attr: "num_features"
# }
# output_arg {
# name: "right_node_contribs_list"
# type: DT_FLOAT
# number_attr: "num_features"
# }
# attr {
# name: "max_splits"
# type: "int"
# has_minimum: true
# minimum: 1
# }
# attr {
# name: "num_features"
# type: "int"
# has_minimum: true
# minimum: 1
# }
# }
# op {
# name: "BoostedTreesCreateEnsemble"
# input_arg {
# name: "tree_ensemble_handle"
# type: DT_RESOURCE
# }
# input_arg {
# name: "stamp_token"
# type: DT_INT64
# }
# input_arg {
# name: "tree_ensemble_serialized"
# type: DT_STRING
# }
# is_stateful: true
# }
# op {
# name: "BoostedTreesDeserializeEnsemble"
# input_arg {
# name: "tree_ensemble_handle"
# type: DT_RESOURCE
# }
# input_arg {
# name: "stamp_token"
# type: DT_INT64
# }
# input_arg {
# name: "tree_ensemble_serialized"
# type: DT_STRING
# }
# is_stateful: true
# }
# op {
# name: "BoostedTreesEnsembleResourceHandleOp"
# output_arg {
# name: "resource"
# type: DT_RESOURCE
# }
# attr {
# name: "container"
# type: "string"
# default_value {
# s: ""
# }
# }
# attr {
# name: "shared_name"
# type: "string"
# default_value {
# s: ""
# }
# }
# is_stateful: true
# }
# op {
# name: "BoostedTreesGetEnsembleStates"
# input_arg {
# name: "tree_ensemble_handle"
# type: DT_RESOURCE
# }
# output_arg {
# name: "stamp_token"
# type: DT_INT64
# }
# output_arg {
# name: "num_trees"
# type: DT_INT32
# }
# output_arg {
# name: "num_finalized_trees"
# type: DT_INT32
# }
# output_arg {
# name: "num_attempted_layers"
# type: DT_INT32
# }
# output_arg {
# name: "last_layer_nodes_range"
# type: DT_INT32
# }
# is_stateful: true
# }
# op {
# name: "BoostedTreesMakeStatsSummary"
# input_arg {
# name: "node_ids"
# type: DT_INT32
# }
# input_arg {
# name: "gradients"
# type: DT_FLOAT
# }
# input_arg {
# name: "hessians"
# type: DT_FLOAT
# }
# input_arg {
# name: "bucketized_features_list"
# type: DT_INT32
# number_attr: "num_features"
# }
# output_arg {
# name: "stats_summary"
# type: DT_FLOAT
# }
# attr {
# name: "max_splits"
# type: "int"
# has_minimum: true
# minimum: 1
# }
# attr {
# name: "num_buckets"
# type: "int"
# has_minimum: true
# minimum: 1
# }
# attr {
# name: "num_features"
# type: "int"
# has_minimum: true
# minimum: 1
# }
# }
# op {
# name: "BoostedTreesPredict"
# input_arg {
# name: "tree_ensemble_handle"
# type: DT_RESOURCE
# }
# input_arg {
# name: "bucketized_features"
# type: DT_INT32
# number_attr: "num_bucketized_features"
# }
# output_arg {
# name: "logits"
# type: DT_FLOAT
# }
# attr {
# name: "num_bucketized_features"
# type: "int"
# has_minimum: true
# minimum: 1
# }
# attr {
# name: "logits_dimension"
# type: "int"
# }
# is_stateful: true
# }
# op {
# name: "BoostedTreesSerializeEnsemble"
# input_arg {
# name: "tree_ensemble_handle"
# type: DT_RESOURCE
# }
# output_arg {
# name: "stamp_token"
# type: DT_INT64
# }
# output_arg {
# name: "tree_ensemble_serialized"
# type: DT_STRING
# }
# is_stateful: true
# }
# op {
# name: "BoostedTreesTrainingPredict"
# input_arg {
# name: "tree_ensemble_handle"
# type: DT_RESOURCE
# }
# input_arg {
# name: "cached_tree_ids"
# type: DT_INT32
# }
# input_arg {
# name: "cached_node_ids"
# type: DT_INT32
# }
# input_arg {
# name: "bucketized_features"
# type: DT_INT32
# number_attr: "num_bucketized_features"
# }
# output_arg {
# name: "partial_logits"
# type: DT_FLOAT
# }
# output_arg {
# name: "tree_ids"
# type: DT_INT32
# }
# output_arg {
# name: "node_ids"
# type: DT_INT32
# }
# attr {
# name: "num_bucketized_features"
# type: "int"
# has_minimum: true
# minimum: 1
# }
# attr {
# name: "logits_dimension"
# type: "int"
# }
# is_stateful: true
# }
# op {
# name: "BoostedTreesUpdateEnsemble"
# input_arg {
# name: "tree_ensemble_handle"
# type: DT_RESOURCE
# }
# input_arg {
# name: "feature_ids"
# type: DT_INT32
# }
# input_arg {
# name: "node_ids"
# type: DT_INT32
# number_attr: "num_features"
# }
# input_arg {
# name: "gains"
# type: DT_FLOAT
# number_attr: "num_features"
# }
# input_arg {
# name: "thresholds"
# type: DT_INT32
# number_attr: "num_features"
# }
# input_arg {
# name: "left_node_contribs"
# type: DT_FLOAT
# number_attr: "num_features"
# }
# input_arg {
# name: "right_node_contribs"
# type: DT_FLOAT
# number_attr: "num_features"
# }
# input_arg {
# name: "max_depth"
# type: DT_INT32
# }
# input_arg {
# name: "learning_rate"
# type: DT_FLOAT
# }
# attr {
# name: "pruning_mode"
# type: "int"
# has_minimum: true
# }
# attr {
# name: "num_features"
# type: "int"
# has_minimum: true
# }
# is_stateful: true
# }
# op {
# name: "IsBoostedTreesEnsembleInitialized"
# input_arg {
# name: "tree_ensemble_handle"
# type: DT_RESOURCE
# }
# output_arg {
# name: "is_initialized"
# type: DT_BOOL
# }
# is_stateful: true
# }
_op_def_lib = _InitOpDefLibrary(b"\n\206\003\n(BoostedTreesCalculateBestGainsPerFeature\022\021\n\rnode_id_range\030\003\022$\n\022stats_summary_list\030\001*\014num_features\022\006\n\002l1\030\001\022\006\n\002l2\030\001\022\023\n\017tree_complexity\030\001\022\023\n\017min_node_weight\030\001\032\037\n\rnode_ids_list\030\003*\014num_features\032\034\n\ngains_list\030\001*\014num_features\032!\n\017thresholds_list\030\003*\014num_features\032)\n\027left_node_contribs_list\030\001*\014num_features\032*\n\030right_node_contribs_list\030\001*\014num_features\"\025\n\nmax_splits\022\003int(\0010\001\"\027\n\014num_features\022\003int(\0010\001\nh\n\032BoostedTreesCreateEnsemble\022\030\n\024tree_ensemble_handle\030\024\022\017\n\013stamp_token\030\t\022\034\n\030tree_ensemble_serialized\030\007\210\001\001\nm\n\037BoostedTreesDeserializeEnsemble\022\030\n\024tree_ensemble_handle\030\024\022\017\n\013stamp_token\030\t\022\034\n\030tree_ensemble_serialized\030\007\210\001\001\nk\n$BoostedTreesEnsembleResourceHandleOp\032\014\n\010resource\030\024\"\027\n\tcontainer\022\006string\032\002\022\000\"\031\n\013shared_name\022\006string\032\002\022\000\210\001\001\n\253\001\n\035BoostedTreesGetEnsembleStates\022\030\n\024tree_ensemble_handle\030\024\032\017\n\013stamp_token\030\t\032\r\n\tnum_trees\030\003\032\027\n\023num_finalized_trees\030\003\032\030\n\024num_attempted_layers\030\003\032\032\n\026last_layer_nodes_range\030\003\210\001\001\n\320\001\n\034BoostedTreesMakeStatsSummary\022\014\n\010node_ids\030\003\022\r\n\tgradients\030\001\022\014\n\010hessians\030\001\022*\n\030bucketized_features_list\030\003*\014num_features\032\021\n\rstats_summary\030\001\"\025\n\nmax_splits\022\003int(\0010\001\"\026\n\013num_buckets\022\003int(\0010\001\"\027\n\014num_features\022\003int(\0010\001\n\255\001\n\023BoostedTreesPredict\022\030\n\024tree_ensemble_handle\030\024\0220\n\023bucketized_features\030\003*\027num_bucketized_features\032\n\n\006logits\030\001\"\"\n\027num_bucketized_features\022\003int(\0010\001\"\027\n\020logits_dimension\022\003int\210\001\001\nk\n\035BoostedTreesSerializeEnsemble\022\030\n\024tree_ensemble_handle\030\024\032\017\n\013stamp_token\030\t\032\034\n\030tree_ensemble_serialized\030\007\210\001\001\n\203\002\n\033BoostedTreesTrainingPredict\022\030\n\024tree_ensemble_handle\030\024\022\023\n\017cached_tree_ids\030\003\022\023\n\017cached_node_ids\030\003\0220\n\023bucketized_features\030\003*\027num_bucketized_features\032\022\n\016partial_logits\030\001\032\014\n\010tree_ids\030\003\032\014\n\010node_ids\030\003\"\"\n\027num_bucketized_features\022\003int(\0010\001\"\027\n\020logits_dimension\022\003int\210\001\001\n\272\002\n\032BoostedTreesUpdateEnsemble\022\030\n\024tree_ensemble_handle\030\024\022\017\n\013feature_ids\030\003\022\032\n\010node_ids\030\003*\014num_features\022\027\n\005gains\030\001*\014num_features\022\034\n\nthresholds\030\003*\014num_features\022$\n\022left_node_contribs\030\001*\014num_features\022%\n\023right_node_contribs\030\001*\014num_features\022\r\n\tmax_depth\030\003\022\021\n\rlearning_rate\030\001\"\025\n\014pruning_mode\022\003int(\001\"\025\n\014num_features\022\003int(\001\210\001\001\nT\n!IsBoostedTreesEnsembleInitialized\022\030\n\024tree_ensemble_handle\030\024\032\022\n\016is_initialized\030\n\210\001\001")
#!/usr/bin/env python
"""JIP module that handles job profiles.
A job profile contains all compute-cluster and execution related meta-data of a
job, such as the number of threads reserved for the job or the time limit.
Profiles can be named and stored in the user configuration.
In addition, hierarchical updates of profiles can be applied. For example, a
default profile can be loaded from the configuration. This profile can then be
refined by a pipeline script or command line options.
This enables you to start with a *hard-coded* profile in your tool
implementation and then gradually modify and change the profile when the
tool is embedded in another pipeline or from the command line at execution
or submission time.
.. note:: The interpretation of some of the profile properties depends on
the cluster implementation.
The following properties are supported by a profile and can be maintained
and updated.
General properties
------------------
The following properties are considered *general* and usually always
used and interpreted, independent of where and how you execute the tool
or pipeline:
name
You can assign an arbitrary name to your profiles. This name
will be used either as a job name, if the profile is applied
to a tool, or as a pipeline name if applied to a pipeline.
prefix
A name prefix that is applied to all embedded jobs. This can
be useful if, in a pipeline context, you want to allow your
tools to keep their own names, but you want to prefix all tools
that are part of a single pipeline.
threads
The number of threads or compute slots allocated by the execution.
Although this property and its interpretation also depends on
the cluster or grid implementation, this is considered a general
property that is also considered when you execute a pipeline or
tool outside of a compute grid.
working_dir or dir
The working directory for a job. This is initialized to the
current working directory of the process that creates the profile.
temp
A boolean property that you can use to *mark* a job as temporary.
Temporary jobs are treated specially in a pipeline execution.
You can find more information about temporary jobs in the
:class:`~jip.pipelines.Pipeline` documentation.
env
Dictionary that can be used to extend the job's shell environment.
description
Optional field that describes the profile and can be used to
describe custom profiles in the user configuration
Cluster/Grid specific properties
--------------------------------
The following properties can be set or modified, but their interpretation
depends on the cluster implementation and the capabilities of the cluster:
tasks
Number of tasks assigned to a single job
tasks_per_node
If multiple nodes are reserved by a single job, this is the
number of tasks assigned to each node.
nodes
Number of nodes requested by the job
queue
The *queue* the job is sent to
priority
A priority assigned to a job
environment
The name of the *environment* assigned to a job. This is **not**
the shell environment, but an arbitrary name that is used, for
example, in the *Sun Grid Engine* implementation to identify
the *parallel environment* the job is submitted to.
account
Name of the account for this job
mem
The memory limit for the job. This is stored here as a string
and passed on *as is* to the cluster implementation
time
The time limit for the job. Here, the time limit is specified
as a string and passed on to the cluster implementation *as is*.
out
Path to the ``stdout`` log file for this job
log
Path to the ``stderr`` log file for this job
err
Path to the ``stderr`` log file for this job (alias for ``log``)
extra
This is an array that takes additional options that are
used when the submission command is constructed.
.. note:: Most of the
"""
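The hierarchical refinement described above (a default profile refined by a pipeline script, then by command line options) can be sketched with a toy class. This is a minimal, self-contained illustration with hypothetical names, not the actual jip API; it mirrors the semantics of `Profile.update`, where `overwrite=False` only fills fields that are still unset:

```python
# Minimal sketch of hierarchical profile updates (hypothetical names,
# not the jip implementation).
class MiniProfile:
    FIELDS = ("threads", "queue", "time", "mem")

    def __init__(self, **kwargs):
        for f in self.FIELDS:
            setattr(self, f, kwargs.get(f))

    def update(self, other, overwrite=True):
        # take values from `other` unless they are None; without
        # overwrite, only fill fields that are still unset here
        for f in self.FIELDS:
            value = getattr(other, f)
            if value is not None and (overwrite or getattr(self, f) is None):
                setattr(self, f, value)

# a default profile, e.g. loaded from the configuration ...
base = MiniProfile(threads=1, queue="short", time="01:00:00")
# ... refined by a pipeline script ...
base.update(MiniProfile(threads=8))
# ... and finally merged with further values that must not clobber
# anything set explicitly before
base.update(MiniProfile(threads=2, mem="4G"), overwrite=False)

print(base.threads, base.queue, base.mem)  # 8 short 4G
```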
import collections
import fnmatch
import re
import os
import json
import logging
import jip.utils
from jip.templates import render_template
log = logging.getLogger("jip.profile")
#: global specs
specs = None
class Profile(object):
"""A Profile contains cluster and runtime specific information about
a job.
"""
def __init__(self, name=None, threads=None, nodes=None, tasks=None,
tasks_per_node=None, environment=None, time=None, queue=None,
priority=None, log=None, out=None, account=None, mem=0,
extra=None, profile=None, prefix=None, temp=False, _load=True,
env=None, tool_name=None, working_dir=None, description=None,
specs=None, _name=None, **kwargs):
self._name = name if not _name else _name # render_template(name)
self.environment = render_template(environment)
self.nodes = render_template(nodes)
self.threads = render_template(threads)
self.tasks = render_template(tasks)
self.tasks_per_node = render_template(tasks_per_node)
self.profile = render_template(profile)
self.queue = render_template(queue)
self.time = render_template(time)
self.mem = render_template(mem)
self.priority = render_template(priority)
self.log = log
self.out = out
self.account = render_template(account)
self.prefix = render_template(prefix)
self.description = description
self.env = env
self.temp = temp
self.extra = extra
self.tool_name = tool_name
self.working_dir = working_dir
if self.working_dir is None and kwargs.get('dir', None):
self.working_dir = kwargs['dir']
self.specs = specs if specs else {}
if profile is not None and _load:
self.load(profile)
def apply_to_pipeline(self, pipeline):
"""Apply this profile to the pipeline
:param pipeline: the pipeline
:type pipeline: :class:`jip.pipeline.Pipeline`
"""
for node in pipeline.nodes():
self.apply_to_node(node)
def apply_to_node(self, node):
# check if there is a matching spec for the node
node_profile = self.specs.get(node.name, None)
if not node_profile:
node_profile = self.specs.get(node._name, None)
# check via regexp
for spec_name, spec in self.specs.items():
if fnmatch.fnmatch(node.name, spec_name):
#if re.match(spec_name, node.name):
if not node_profile:
node_profile = spec()
else:
node_profile.update(spec)
if node_profile:
node._job.update(node_profile)
if node._pipeline_profile:
node._pipeline_profile.update(node_profile)
# apply global profile, don't overwrite
node._job.update(self, overwrite=False)
if node._pipeline_profile:
node._pipeline_profile.update(self, overwrite=False)
@property
def err(self):
"""Set the jobs error log file
:getter: access the jobs error log file
:setter: set the jobs error log file
:type: string
"""
return self.log
@err.setter
def err(self, value):
self.log = value
@property
def dir(self):
"""Set the jobs working directory
:getter: access the jobs working directory
:setter: set the jobs working directory
:type: string
"""
return self.working_dir
@dir.setter
def dir(self, value):
self.working_dir = value
@property
def name(self):
"""Set the jobs name
:getter: access the jobs name
:setter: set the jobs name
:type: string
"""
return self._name
@name.setter
def name(self, name):
self._name = name
def load(self, profile_name):
"""Set this profiles values to the values loaded from the profile
stored under the given name. An exception is raised if no profile of
that name could be found.
:param profile_name: the name of the profile that will be loaded
:type profile_name: string
"""
import jip
profiles = jip.config.get('profiles', {})
if profile_name not in profiles:
raise ValueError("Profile %s not found!" % profile_name)
profile = profiles[profile_name]
self.threads = profile.get('threads', self.threads)
self.nodes = profile.get('nodes', self.nodes)
self.tasks = profile.get('tasks', self.tasks)
self.tasks_per_node = profile.get('tasks_per_node',
self.tasks_per_node)
self.environment = profile.get('environment', self.environment)
self.time = profile.get('time', self.time)
self.queue = profile.get('queue', self.queue)
self.priority = profile.get('priority', self.priority)
self.log = profile.get('log', self.log)
self.out = profile.get('out', self.out)
self.account = profile.get('account', self.account)
self.mem = profile.get('mem', self.mem)
self.extra = profile.get('extra', self.extra)
self.env = profile.get('env', self.env)
self.description = profile.get('description', self.description)
def load_args(self, args):
"""Update this profile from the given dictionary of command line
arguments. The argument names must match the profile attributes
"""
for k, v in args.items():
k = re.sub("^-+", "", k)
k = re.sub("-", "_", k)
if v and hasattr(self, k):
# check for multiple values
for single in v.split(" "):
tup = single.split("=")
if len(tup) == 1:
setattr(self, k, single)
else:
# find or create a spec for the given key
spec_profile = self.specs.get(tup[0], Profile())
setattr(spec_profile, k, tup[1])
self.specs[tup[0]] = spec_profile
def _render_job_name(self, job):
ctx = {}
for o in job.tool.options:
ctx[o.name] = o
name = job.name
if not name:
name = self.name
if not name:
name = job.tool.name
return render_template(
"%s%s" % ("" if not self.prefix else self.prefix, name), **ctx
)
def _render(self, job, name):
ctx = {}
for o in job.tool.options:
ctx[o.name] = o
ctx['name'] = self.name
ctx['job'] = self
return render_template(
"%s%s" % ("" if not self.prefix else self.prefix, name), **ctx
)
def apply_overwrite(self, job):
"""Apply the profile and overwrite all settings that are set
in this profile
"""
log.debug("Profiles | Overwriting job profile to %s", job)
if self.name:
job.name = self._render_job_name(job)
if self.threads:
job.threads = int(self.threads)
if self.nodes is not None:
job.nodes = self.nodes
if self.tasks is not None:
job.tasks = self.tasks
if self.tasks_per_node is not None:
job.tasks_per_node = self.tasks_per_node
if self.environment is not None:
job.environment = self.environment
if self.queue is not None:
job.queue = self.queue
if self.priority is not None:
job.priority = self.priority
if self.time is not None:
job.max_time = jip.utils.parse_time(self.time)
if self.mem is not None:
job.max_memory = jip.utils.parse_mem(self.mem)
if self.log is not None:
job.stderr = self._render(job, self.log)
if self.out is not None:
job.stdout = self._render(job, self.out)
if self.account is not None:
job.account = self.account
if self.temp is not None:
job.temp = self.temp
if self.extra is not None:
job.extra = self.extra
if self.working_dir is not None:
job.working_directory = os.path.abspath(self.working_dir)
# make log files absolute
if job.stdout and not job.stdout.startswith("/"):
job.stdout = os.path.join(job.working_directory, job.stdout)
if job.stderr and not job.stderr.startswith("/"):
job.stderr = os.path.join(job.working_directory, job.stderr)
# load environment
if self.env:
current = os.environ.copy()
if job.env:
current.update(job.env)
rendered = {}
for k, v in self.env.items():
rendered[k] = render_template(v, **current)
job.env.update(rendered)
if hasattr(job, 'pipe_to'):
for child in job.pipe_to:
self.apply_overwrite(child)
# check specs
for spec_name, spec in self.specs.items():
if fnmatch.fnmatch(job.name, spec_name):
spec.apply_overwrite(job)
def apply(self, job, pipeline=False, overwrite=False):
"""Apply this profile to the given job."""
log.debug("Profiles | Applying job profile to %s", job)
if overwrite:
self.apply_overwrite(job)
return
# set the job name or the pipeline name
# if this is a job or a pipeline
if not pipeline:
job.name = self._render_job_name(job)
elif self.name is not None:
log.info("Apply pipeline name to job: %s %s", job, self.name)
job.pipeline = self._render(job, self.name)
if self.threads and job.threads is None:
job.threads = int(self.threads)
if self.nodes is not None and job.nodes is None:
job.nodes = self.nodes
if self.tasks is not None and job.tasks is None:
job.tasks = self.tasks
if self.tasks_per_node is not None and job.tasks_per_node is None:
job.tasks_per_node = self.tasks_per_node
if self.environment is not None and job.environment is None:
job.environment = self.environment
if self.queue is not None and job.queue is None:
job.queue = self.queue
if self.priority is not None and job.priority is None:
job.priority = self.priority
if self.time is not None and job.max_time is None:
job.max_time = jip.utils.parse_time(self.time)
if self.mem is not None:
if job.max_memory is None:
job.max_memory = 0
job.max_memory += jip.utils.parse_mem(self.mem)
if self.log is not None and job.stderr is None:
job.stderr = self._render(job, self.log)
if self.out is not None and job.stdout is None:
job.stdout = self._render(job, self.out)
if self.account is not None and job.account is None:
job.account = self.account
if self.temp is not None and job.temp is None:
job.temp = self.temp
if self.extra is not None and job.extra is None:
job.extra = self.extra
if self.working_dir is not None and job.working_directory is None:
job.working_directory = os.path.abspath(self.working_dir)
# make log files absolute
if job.stdout and not job.stdout.startswith("/"):
job.stdout = os.path.join(job.working_directory, job.stdout)
if job.stderr and not job.stderr.startswith("/"):
job.stderr = os.path.join(job.working_directory, job.stderr)
# load environment
if self.env:
current = os.environ.copy()
if job.env:
current.update(job.env)
rendered = {}
for k, v in self.env.items():
rendered[k] = render_template(v, **current)
job.env.update(rendered)
if hasattr(job, 'pipe_to'):
for child in job.pipe_to:
self.apply(child)
def update(self, profile, overwrite=True):
"""Update this profile from a given profile. All values that are
not None in the other profile are applied to this
profile
:param profile: the other profile
:type profile: :class:`Profile`
:param overwrite: if True, value will be set regardless. Otherwise, the
new value will only be applied if the old value
is None
"""
attrs = ["environment", "nodes", "threads",
"tasks", "tasks_per_node", "queue",
"time", "mem", "priority", "log", "out",
"account", "prefix", "env", "temp", "extra", "working_dir"]
for attr in attrs:
other = profile.__getattribute__(attr)
if other is not None and (overwrite or
self.__getattribute__(attr) is None):
setattr(self, attr, other)
def merge(self, master):
"""Merge this profile with the given master profile.
Currently this merges the working directory of jobs
:param master: the master profile
"""
self.working_dir = master.working_dir if self.working_dir is None\
else self.working_dir
def __call__(self, name=None, threads=None, nodes=None, tasks=None,
tasks_per_node=None, environment=None, time=None, queue=None,
priority=None, log=None, out=None, err=None, account=None,
mem=None, profile=None, prefix=None, temp=None, extra=None,
dir=None, description=None, env=None):
clone = self.__class__(
name=name if name is not None else self._name,
threads=threads if threads is not None else self.threads,
tasks=tasks if tasks is not None else self.tasks,
tasks_per_node=tasks_per_node if tasks_per_node is not None else
self.tasks_per_node,
environment=environment if environment is not None
else self.environment,
env=env if env is not None else self.env,
nodes=nodes if nodes is not None else self.nodes,
profile=profile if profile is not None else self.profile,
queue=queue if queue is not None else self.queue,
time=time if time is not None else self.time,
priority=priority if priority is not None else self.priority,
log=log if log is not None else
(err if err is not None else self.log),
out=out if out is not None else self.out,
account=account if account is not None else self.account,
mem=mem if mem is not None else self.mem,
prefix=prefix if prefix is not None else self.prefix,
temp=temp if temp is not None else self.temp,
extra=extra if extra is not None else self.extra,
working_dir=dir if dir is not None else self.working_dir,
description=description if description is not None
else self.description,
_load=False
)
for name, spec in self.specs.items():
clone.specs[name] = spec()
return clone
def __repr__(self):
return str(vars(self))
@classmethod
def from_job(cls, job):
"""Create a profile based on a given job. All properties
are set according to the given job, except the job's temp state,
which is kept unmodified.
:param job: the job
:returns: new profile generated from the job
"""
profile = cls()
profile.threads = job.threads if job.threads > 0 else None
profile.nodes = job.nodes
profile.tasks = job.tasks
profile.tasks_per_node = job.tasks_per_node
profile.environment = job.environment
profile.queue = job.queue
profile.priority = job.priority
profile.time = job.max_time
profile.mem = job.max_memory
profile.log = job.stderr
profile.out = job.stdout
profile.account = job.account
profile.extra = job.extra
profile.working_dir = job.working_directory
profile.env = job.env
return profile
@classmethod
def from_file(cls, file_name):
"""Load a profile from a json file
:param file_name: the name of the input file
"""
with open(file_name) as of:
try:
data = json.load(of)
except ValueError:
log.error("Malformed json file %s", file_name)
raise jip.ValidationError('jip.profiles', "Malformed json file %s" % (file_name))
return cls.from_dict(data)
@classmethod
def from_dict(cls, data):
"""Load a profile from a dictionary"""
profile = cls()
# apply all the params
for k, v in data.items():
if k != 'jobs':
profile.__setattr__(k, v)
if "jobs" in data:
for name, spec in data["jobs"].items():
profile.specs[name] = cls.from_dict(spec)
return profile
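The dictionary layout `from_dict` expects is a set of top-level profile attributes plus an optional ``jobs`` mapping of per-tool specs, keyed by tool name or by an fnmatch pattern as used in `apply_to_node` and `apply_overwrite`. A hypothetical spec (all values made up) might look like:

```python
import json

# Hypothetical jip.specs content: top-level attributes apply globally,
# entries under "jobs" become per-tool specs (name or fnmatch pattern).
sample = json.loads("""
{
    "threads": 4,
    "queue": "long",
    "jobs": {
        "align*": {"threads": 16, "mem": "32G"},
        "sort":   {"time": "02:00:00"}
    }
}
""")

print(sample["jobs"]["align*"]["threads"])  # 16
```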
def get(name='default', tool=None):
"""Load a profile by name. If tool is specified, the specs are
searched for the tool and, if found, the specs are applied.
"""
# check the name for specs
s = name.split(' ')
p = Profile()
for ss in s:
tup = ss.split("=")
if len(tup) == 1:
# update global
l = Profile(profile=tup[0])
p.update(l)
else:
# update or create spec
spec = p.specs.get(tup[0], Profile())
spec.update(Profile(profile=tup[1]))
p.specs[tup[0]] = spec
return p
def get_specs(path=None):
"""Load specs from the default locations and then update them from the
spec file at the given path, if specified.
:param path: optional path to an additional spec file
"""
def load_json(jf):
with open(jf) as of:
try:
data = json.load(of)
except ValueError:
log.error("Malformed json file %s", jf)
raise jip.ValidationError('jip.profiles', "Malformed json file %s" % (jf))
return data
global specs
cwd = os.path.join(os.getcwd(), "jip.specs")
home = os.path.join(os.getenv("HOME", ""), ".jip/jip.specs")
specs = {}
if os.path.exists(home):
specs = _update(specs, load_json(home))
if os.path.exists(cwd):
specs = _update(specs, load_json(cwd))
if path and os.path.exists(path):
specs = _update(specs, load_json(path))
return specs
def _update(config, other):
for k, v in other.items():
if isinstance(v, collections.abc.Mapping):
r = _update(config.get(k, {}), v)
config[k] = r
else:
config[k] = other[k]
return config
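`_update` performs a recursive merge: nested mappings are merged key by key, and scalar values from the newer spec win. A slightly defensive standalone equivalent (plain dicts, hypothetical data) behaves like this:

```python
# Standalone sketch of the recursive merge in _update: nested dicts
# are merged key by key, scalars from `other` win.
def deep_merge(config, other):
    for key, value in other.items():
        if isinstance(value, dict) and isinstance(config.get(key), dict):
            deep_merge(config[key], value)
        else:
            config[key] = value
    return config

home = {"threads": 2, "jobs": {"sort": {"mem": "4G"}}}
local = {"jobs": {"sort": {"mem": "16G"}, "align": {"time": "01:00:00"}}}
merged = deep_merge(home, local)
print(merged["threads"], merged["jobs"]["sort"]["mem"])  # 2 16G
```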
| jip/profiles.py | 23,630 | A Profile contains cluster and runtime specific information about
a job.
Apply this profile to the given job.
Apply the profile and overwrite all settings that are set
in this profile
Apply this profile to the pipeline
:param pipeline: the pipeline
:type pipeline: :class:`jip.pipeline.Pipeline`
Set the jobs working directory
:getter: access the jobs working directory
:setter: set the jobs working directory
:type: string
Set the jobs error log file
:getter: access the jobs name
:setter: set the jobs name
:type: string
Load a profile from a dictionary
Load a profile from a json file
:param file_name: the name of the input file
Create a profile based on a given job. All properties
are set according to the given job, except the jobs temp state,
which will be kept unmodified.
:param job: the job
:returns: new profile generated from the job
Load a profile by name. If tool is specified, the specs are
searched to the tool and if found, the specs are applied.
Load specs form default locations and then update from specs in given
path if specified.
:param path: optional path to an additional spec file
Set this profiles values to the values loaded from the profile
stored under the given name. An exception is raised if no profile of
that name could be found.
:param profile_name: the name of the profile that will be loaded
:type profile_name: string
Update this profile from the given dictionary of command line
arguments. The argument names must match the profile attributes
Merge this profile with the given master profile.
Currently this merges the working directory of jobs
:param master: the master profile
Set the jobs name
:getter: access the jobs name
:setter: set the jobs name
:type: string
Update this profile from a given profile. All values that are
not None in the other profile are applied to this
profile
:param profile: the other profile
:type profile: :class:`Profile`
:param overwrite: if True, value will be set regardless. Otherwise, the
new value will only be applied if the old value
is None
JIP module that handles job profiles.
A job profile contains all compute-cluster and execution related meta-data of a
job, such as the number of threads reserved for the job or the time limit.
Profiles can be named and stored in the user configuration.
In addition, hierarchical updates of profiles can be applied. For example, a
default profile can be loaded from the configuration. This profile can then be
refined by a pipeline script or command line options.
This enable you to start with a *hard-coded* profile in your tool
implementation and then gradually modify and change the profile when the
tool is embedded in another pipeline or from the command line at execution
or submission time.
.. note:: Please note that the interpretation of some of the profiles
properties depends on the cluster implementation.
The following properties are supported by a profile and can be maintained
and updated.
General properties
------------------
The following properties are considered *general* and usually always
used and interpreted, independent of where and how you execute the tool
or pipeline:
name
You can assign an arbitrary name to your profiles. This name
will be used either as a job name, if the profile is applied
to a tool, or as a pipeline name if applied to a pipeline.
prefix
A name prefix that is applied to all embedded jobs. This can
be useful if, in a pipeline context, you want to allow your
tool to take their own name, but you want to prefix all tools
that are part of a single pipeline.
threads
The number of threads or compute slots allocated by the execution.
Although this property and its interpretation also depends on
the cluster or grid implementation, this is considered a general
property that is also considered when you execute a pipeline or
tool outside of a compute grid.
working_dir or dir
The working directory for a job. This is initialized to the
current working directory of the process that creates the profile.
temp
A boolean property that you can used to *mark* a job as temporary.
Temporary jobs are treated specially in a pipeline execution.
You can find more information about temporary jobs in the
:class:`~jip.pipelines.Pipeline` documentation.
env
Dictionary that can be used to extend the jobs shell environment
description
Optional field that describes the profile and can be used to
describe custom profiles in the user configuration
Cluster/Grid specific properties
--------------------------------
The following properties can be set or modified, but their interpretation
depends on the cluster implementation and the capabilities of the cluster:
tasks
Number of tasks assigned to a single job
tasks_per_node
If multiple nodes are reserved by a single job, this is the
number of tasks assigned to each node.
nodes
Number of nodes requested by the job
queue
The *queue* the job is sent to
priority
A priority assigned to a job
environment
The name of the *environment* assigned to a job. This is **not**
the shell environment, but an arbitrary name that is used, for
example, in the *Sun Grid Engine* implementation to identify
the *parallel environment* the job is submitted to.
account
Name of the account for this job
mem
The memory limit for the job. This is stored here as a string
and passed on *as is* to the cluster implementation
time
The time limit for the job. Here, the time limit is specified
as a string and passed on to the cluster implementation *as is*.
out
Path to the ``stdout`` log file for this job
log
path to the ``stderr`` log file for this job
err
path to the ``stderr`` log file for this job
extra
This is an array that takes additional options that are
used when the submission command is constructed.
.. note:: Most of the
!/usr/bin/env python: global specs render_template(name) check if there is a matching spec for the node check via regexpif re.match(spec_name, node.name): apply global profile, don't overwrite check for multiple values find or create a spec for the given key make log files absolute load environment check specs set the job name or the pipeline name if this is a job or a pipeline make log files absolute load environment apply all the params check the name for specs update global update or create spec | 6,968 | en | 0.85788 |
#!/usr/bin/env python
# Copyright 2016 Intel
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import os
repository_tags = """
========================
Team and repository tags
========================
.. image:: https://governance.openstack.org/tc/badges/syntribos.svg
:target: https://governance.openstack.org/tc/reference/tags/index.html
.. image:: https://img.shields.io/badge/docs-latest-brightgreen.svg?style=flat
:target: https://docs.openstack.org/syntribos/latest/
.. image:: https://img.shields.io/pypi/v/syntribos.svg
:target: https://pypi.python.org/pypi/syntribos/
.. image:: https://img.shields.io/pypi/pyversions/syntribos.svg
:target: https://pypi.python.org/pypi/syntribos/
.. image:: https://img.shields.io/pypi/wheel/syntribos.svg
:target: https://pypi.python.org/pypi/syntribos/
.. image:: https://img.shields.io/irc/%23openstack-security.png
:target: https://webchat.freenode.net/?channels=openstack-security
"""
def find_docs():
"""Yields files as per the whitelist."""
loc = "../doc/source/{}.rst"
whitelist = [
"about", "installation",
"configuration", "commands",
"running", "logging",
"test-anatomy", "unittests",
"contributing"]
for fname in whitelist:
fpath = loc.format(fname)
if os.path.isfile(fpath):
yield fpath
def concat_docs():
"""Concatenates files yielded by the generator `find_docs`."""
file_path = os.path.dirname(os.path.realpath(__file__))
head, tail = os.path.split(file_path)
outfile = head + "/README.rst"
if not os.path.isfile(outfile):
print("../README.rst not found, exiting!")
exit(1)
with open(outfile, 'w') as readme_handle:
readme_handle.write(repository_tags)
for doc in find_docs():
with open(doc, 'r') as doc_handle:
for line in doc_handle:
readme_handle.write(line)
readme_handle.write("\n")
if __name__ == '__main__':
"""Generate README.rst from docs."""
concat_docs()
print("\nREADME.rst created!\n")
| scripts/readme.py | 2,623 | Concatinates files yielded by the generator `find_docs`.
Yields files as per the whitelist.
!/usr/bin/env python Copyright 2016 Intel Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. | 655 | en | 0.844662 |
# -*- coding: utf-8 -*-
"""
Benchmark Results
Updated: 18.02.2022 (6618fa3c36b0c9f3a9d7a21bcdb00bf4fd258ee8)
------------------------------------------------------------------------------------------
| Model | Batch Size | Epochs | KNN Test Accuracy | Time | Peak GPU Usage |
------------------------------------------------------------------------------------------
| BarlowTwins | 128 | 200 | 0.835 | 193.4 Min | 2.2 GByte |
| BYOL | 128 | 200 | 0.872 | 217.0 Min | 2.3 GByte |
| DINO | 128 | 200 | 0.868 | 220.7 Min | 2.3 GByte |
| Moco | 128 | 200 | 0.838 | 229.5 Min | 2.3 GByte |
| NNCLR | 128 | 200 | 0.838 | 198.7 Min | 2.2 GByte |
| SimCLR | 128 | 200 | 0.822 | 182.7 Min | 2.2 GByte |
| SimSiam | 128 | 200 | 0.779 | 182.6 Min | 2.3 GByte |
| SwaV | 128 | 200 | 0.806 | 182.4 Min | 2.2 GByte |
------------------------------------------------------------------------------------------
| BarlowTwins | 512 | 200 | 0.827 | 160.7 Min | 7.5 GByte |
| BYOL | 512 | 200 | 0.872 | 188.5 Min | 7.7 GByte |
| DINO | 512 | 200 | 0.862 | 191.1 Min | 7.5 GByte |
| Moco (*) | 512 | 200 | 0.850 | 196.8 Min | 7.8 GByte |
| NNCLR (*) | 512 | 200 | 0.836 | 164.7 Min | 7.6 GByte |
| SimCLR | 512 | 200 | 0.828 | 158.2 Min | 7.5 GByte |
| SimSiam | 512 | 200 | 0.814 | 159.0 Min | 7.6 GByte |
| SwaV | 512 | 200 | 0.833 | 158.4 Min | 7.5 GByte |
------------------------------------------------------------------------------------------
| BarlowTwins | 512 | 800 | 0.857 | 641.5 Min | 7.5 GByte |
| BYOL | 512 | 800 | 0.911 | 754.2 Min | 7.8 GByte |
| DINO | 512 | 800 | 0.884 | 765.5 Min | 7.6 GByte |
| Moco (*) | 512 | 800 | 0.900 | 787.7 Min | 7.8 GByte |
| NNCLR (*) | 512 | 800 | 0.896 | 659.2 Min | 7.6 GByte |
| SimCLR | 512 | 800 | 0.875 | 632.5 Min | 7.5 GByte |
| SimSiam | 512 | 800 | 0.906 | 636.5 Min | 7.6 GByte |
| SwaV | 512 | 800 | 0.881 | 634.9 Min | 7.5 GByte |
------------------------------------------------------------------------------------------
(*): Memory bank size increased from 4096 to 8192 to keep the memory bank
from changing too quickly with the larger batch size.
The benchmarks were created on a single NVIDIA RTX A6000.
Note that this benchmark also supports a multi-GPU setup. If you run it on
a system with multiple GPUs, make sure you kill all the processes when
killing the application. Due to the way we set up this benchmark, the
distributed processes might keep running if one of the nodes is killed.
If you know how to fix this, don't hesitate to create an issue or PR :)
"""
import copy
import os
import time
import lightly
import numpy as np
import pytorch_lightning as pl
import torch
import torch.nn as nn
import torchvision
from lightly.models import modules
from lightly.models.modules import heads
from lightly.models import utils
from lightly.utils import BenchmarkModule
from pytorch_lightning.loggers import TensorBoardLogger
logs_root_dir = os.path.join(os.getcwd(), 'benchmark_logs')
# set max_epochs to 800 for long run (takes around 10h on a single V100)
max_epochs = 1
num_workers = 8
knn_k = 200
knn_t = 0.1
classes = 10
# Set to True to enable Distributed Data Parallel training.
distributed = True
# Set to True to enable Synchronized Batch Norm (requires distributed=True).
# If enabled the batch norm is calculated over all gpus, otherwise the batch
# norm is only calculated from samples on the same gpu.
sync_batchnorm = False
# Set to True to gather features from all gpus before calculating
# the loss (requires distributed=True).
# If enabled then the loss on every gpu is calculated with features from all
# gpus, otherwise only features from the same gpu are used.
gather_distributed = True
# benchmark
n_runs = 1 # optional, increase to create multiple runs and report mean + std
batch_size = 512
lr_factor = batch_size / 128 # scales the learning rate linearly with batch size
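The linear scaling rule above (`lr_factor = batch_size / 128`) combines with the `CosineAnnealingLR` schedulers configured further down; a minimal, framework-free sketch of the resulting learning-rate curve (the helper name is illustrative, and `eta_min = 0` is assumed, matching the defaults used here):

```python
import math

def cosine_lr(base_lr: float, epoch: int, max_epochs: int) -> float:
    """Cosine-annealed learning rate as CosineAnnealingLR computes it with
    eta_min = 0: starts at base_lr and decays to 0 at max_epochs."""
    return base_lr * 0.5 * (1.0 + math.cos(math.pi * epoch / max_epochs))

# 6e-2 scaled linearly for batch size 512, as in the SGD optimizers below
base_lr = 6e-2 * (512 / 128)
print(round(cosine_lr(base_lr, 0, 200), 3))    # 0.24 at the start
print(round(cosine_lr(base_lr, 100, 200), 3))  # 0.12 halfway
print(round(cosine_lr(base_lr, 200, 200), 3))  # 0.0 at the end
```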
# use a GPU if available
#gpus = torch.cuda.device_count() if torch.cuda.is_available() else 0
gpus = 4 if torch.cuda.is_available() else 0
print(gpus)
if distributed:
distributed_backend = 'ddp'
# reduce batch size for distributed training
batch_size = batch_size // gpus
else:
distributed_backend = None
# limit to single gpu if not using distributed training
gpus = min(gpus, 1)
# Adapted from our MoCo Tutorial on CIFAR-10
#
# Replace the path with the location of your CIFAR-10 dataset.
# We assume we have a train folder with subfolders
# for each class and .png images inside.
#
# You can download `CIFAR-10 in folders from kaggle
# <https://www.kaggle.com/swaroopkml/cifar10-pngs-in-folders>`_.
# The dataset structure should be like this:
# cifar10/train/
# L airplane/
# L 10008_airplane.png
# L ...
# L automobile/
# L bird/
# L cat/
# L deer/
# L dog/
# L frog/
# L horse/
# L ship/
# L truck/
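The folder layout described in the comment above can be scaffolded with the standard library; a minimal sketch that only creates the directories (extracting the actual `.png` files from the CIFAR-10 archives is left out, and the helper name is illustrative):

```python
import os
import tempfile

CIFAR10_CLASSES = [
    "airplane", "automobile", "bird", "cat", "deer",
    "dog", "frog", "horse", "ship", "truck",
]

def make_class_folders(root):
    """Create the train/<class>/ layout that LightlyDataset expects."""
    created = []
    for name in sorted(CIFAR10_CLASSES):
        path = os.path.join(root, "train", name)
        os.makedirs(path, exist_ok=True)
        created.append(path)
    return created

root = tempfile.mkdtemp()
folders = make_class_folders(root)
print(len(folders))  # 10 class folders under <root>/train/
```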
path_to_train = './data/cifar10/train/'
path_to_test = './data/cifar10/test/'
# Use SimCLR augmentations, additionally, disable blur for cifar10
collate_fn = lightly.data.SimCLRCollateFunction(
input_size=32,
gaussian_blur=0.,
)
# Multi crop augmentation for SwAV, additionally, disable blur for cifar10
swav_collate_fn = lightly.data.SwaVCollateFunction(
crop_sizes=[32],
crop_counts=[2], # 2 crops @ 32x32px
crop_min_scales=[0.14],
gaussian_blur=0,
)
# Multi crop augmentation for DINO, additionally, disable blur for cifar10
dino_collate_fn = lightly.data.DINOCollateFunction(
global_crop_size=32,
n_local_views=0,
gaussian_blur=(0, 0, 0),
)
# No additional augmentations for the test set
test_transforms = torchvision.transforms.Compose([
torchvision.transforms.ToTensor(),
torchvision.transforms.Normalize(
mean=lightly.data.collate.imagenet_normalize['mean'],
std=lightly.data.collate.imagenet_normalize['std'],
)
])
dataset_train_ssl = lightly.data.LightlyDataset(
input_dir=path_to_train
)
# we use test transformations for getting the feature for kNN on train data
dataset_train_kNN = lightly.data.LightlyDataset(
input_dir=path_to_train,
transform=test_transforms
)
dataset_test = lightly.data.LightlyDataset(
input_dir=path_to_test,
transform=test_transforms
)
def get_data_loaders(batch_size: int, model):
    """Helper method to create dataloaders for ssl, kNN train and kNN test.

    Args:
        batch_size: Desired batch size for all dataloaders.
        model: The benchmark model class, used to pick the matching
            collate function (multi-crop for SwaV and DINO).
    """
col_fn = collate_fn
    # the call site passes the model class itself, so check with issubclass
    if issubclass(model, SwaVModel):
        col_fn = swav_collate_fn
    elif issubclass(model, DINOModel):
        col_fn = dino_collate_fn
dataloader_train_ssl = torch.utils.data.DataLoader(
dataset_train_ssl,
batch_size=batch_size,
shuffle=True,
collate_fn=col_fn,
drop_last=True,
num_workers=num_workers
)
dataloader_train_kNN = torch.utils.data.DataLoader(
dataset_train_kNN,
batch_size=batch_size,
shuffle=False,
drop_last=False,
num_workers=num_workers
)
dataloader_test = torch.utils.data.DataLoader(
dataset_test,
batch_size=batch_size,
shuffle=False,
drop_last=False,
num_workers=num_workers
)
return dataloader_train_ssl, dataloader_train_kNN, dataloader_test
class MocoModel(BenchmarkModule):
def __init__(self, dataloader_kNN, num_classes):
super().__init__(dataloader_kNN, num_classes)
# create a ResNet backbone and remove the classification head
num_splits = 0 if sync_batchnorm else 8
resnet = lightly.models.ResNetGenerator('resnet-18', num_splits=num_splits)
self.backbone = nn.Sequential(
*list(resnet.children())[:-1],
nn.AdaptiveAvgPool2d(1)
)
# create a moco model based on ResNet
self.projection_head = heads.MoCoProjectionHead(512, 512, 128)
self.backbone_momentum = copy.deepcopy(self.backbone)
self.projection_head_momentum = copy.deepcopy(self.projection_head)
utils.deactivate_requires_grad(self.backbone_momentum)
utils.deactivate_requires_grad(self.projection_head_momentum)
# create our loss with the optional memory bank
self.criterion = lightly.loss.NTXentLoss(
temperature=0.1,
memory_bank_size=4096,
)
def forward(self, x):
x = self.backbone(x).flatten(start_dim=1)
return self.projection_head(x)
def training_step(self, batch, batch_idx):
(x0, x1), _, _ = batch
# update momentum
utils.update_momentum(self.backbone, self.backbone_momentum, 0.99)
utils.update_momentum(self.projection_head, self.projection_head_momentum, 0.99)
def step(x0_, x1_):
x1_, shuffle = utils.batch_shuffle(x1_, distributed=distributed)
x0_ = self.backbone(x0_).flatten(start_dim=1)
x0_ = self.projection_head(x0_)
x1_ = self.backbone_momentum(x1_).flatten(start_dim=1)
x1_ = self.projection_head_momentum(x1_)
x1_ = utils.batch_unshuffle(x1_, shuffle, distributed=distributed)
return x0_, x1_
# We use a symmetric loss (model trains faster at little compute overhead)
# https://colab.research.google.com/github/facebookresearch/moco/blob/colab-notebook/colab/moco_cifar10_demo.ipynb
loss_1 = self.criterion(*step(x0, x1))
loss_2 = self.criterion(*step(x1, x0))
loss = 0.5 * (loss_1 + loss_2)
self.log('train_loss_ssl', loss)
return loss
def configure_optimizers(self):
params = list(self.backbone.parameters()) + list(self.projection_head.parameters())
optim = torch.optim.SGD(
params,
lr=6e-2 * lr_factor,
momentum=0.9,
weight_decay=5e-4,
)
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optim, max_epochs)
return [optim], [scheduler]
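`MocoModel` above (and `BYOLModel`/`DINOModel` below) maintain their momentum branches with `utils.update_momentum`, an exponential moving average of the student weights; the rule can be sketched without PyTorch, with plain lists standing in for parameter tensors (function name mirrors the lightly helper, but this is a simplified illustration):

```python
def update_momentum(params, momentum_params, m=0.99):
    """EMA update: p_momentum <- m * p_momentum + (1 - m) * p.
    With m = 0.99 (as in the training steps here) the momentum branch
    trails the student slowly, stabilizing the training targets."""
    for i, (p, p_m) in enumerate(zip(params, momentum_params)):
        momentum_params[i] = m * p_m + (1.0 - m) * p

student = [1.0, 2.0]
teacher = [0.0, 0.0]
update_momentum(student, teacher, m=0.9)
print(teacher)  # teacher moved 10% of the way toward the student
```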
class SimCLRModel(BenchmarkModule):
def __init__(self, dataloader_kNN, num_classes):
super().__init__(dataloader_kNN, num_classes)
# create a ResNet backbone and remove the classification head
resnet = lightly.models.ResNetGenerator('resnet-18')
self.backbone = nn.Sequential(
*list(resnet.children())[:-1],
nn.AdaptiveAvgPool2d(1)
)
self.projection_head = heads.SimCLRProjectionHead(512, 512, 128)
self.criterion = lightly.loss.NTXentLoss()
def forward(self, x):
x = self.backbone(x).flatten(start_dim=1)
z = self.projection_head(x)
return z
def training_step(self, batch, batch_index):
(x0, x1), _, _ = batch
z0 = self.forward(x0)
z1 = self.forward(x1)
loss = self.criterion(z0, z1)
self.log('train_loss_ssl', loss)
return loss
def configure_optimizers(self):
optim = torch.optim.SGD(
self.parameters(),
lr=6e-2 * lr_factor,
momentum=0.9,
weight_decay=5e-4
)
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optim, max_epochs)
return [optim], [scheduler]
class SimSiamModel(BenchmarkModule):
def __init__(self, dataloader_kNN, num_classes):
super().__init__(dataloader_kNN, num_classes)
# create a ResNet backbone and remove the classification head
resnet = lightly.models.ResNetGenerator('resnet-18')
self.backbone = nn.Sequential(
*list(resnet.children())[:-1],
nn.AdaptiveAvgPool2d(1)
)
self.prediction_head = heads.SimSiamPredictionHead(2048, 512, 2048)
# use a 2-layer projection head for cifar10 as described in the paper
self.projection_head = heads.ProjectionHead([
(
512,
2048,
nn.BatchNorm1d(2048),
nn.ReLU(inplace=True)
),
(
2048,
2048,
nn.BatchNorm1d(2048),
None
)
])
self.criterion = lightly.loss.NegativeCosineSimilarity()
def forward(self, x):
f = self.backbone(x).flatten(start_dim=1)
z = self.projection_head(f)
p = self.prediction_head(z)
z = z.detach()
return z, p
def training_step(self, batch, batch_idx):
(x0, x1), _, _ = batch
z0, p0 = self.forward(x0)
z1, p1 = self.forward(x1)
loss = 0.5 * (self.criterion(z0, p1) + self.criterion(z1, p0))
self.log('train_loss_ssl', loss)
return loss
def configure_optimizers(self):
optim = torch.optim.SGD(
self.parameters(),
lr=6e-2, # no lr-scaling, results in better training stability
momentum=0.9,
weight_decay=5e-4
)
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optim, max_epochs)
return [optim], [scheduler]
class BarlowTwinsModel(BenchmarkModule):
def __init__(self, dataloader_kNN, num_classes):
super().__init__(dataloader_kNN, num_classes)
# create a ResNet backbone and remove the classification head
resnet = lightly.models.ResNetGenerator('resnet-18')
self.backbone = nn.Sequential(
*list(resnet.children())[:-1],
nn.AdaptiveAvgPool2d(1)
)
# use a 2-layer projection head for cifar10 as described in the paper
self.projection_head = heads.ProjectionHead([
(
512,
2048,
nn.BatchNorm1d(2048),
nn.ReLU(inplace=True)
),
(
2048,
2048,
None,
None
)
])
self.criterion = lightly.loss.BarlowTwinsLoss(gather_distributed=gather_distributed)
def forward(self, x):
x = self.backbone(x).flatten(start_dim=1)
z = self.projection_head(x)
return z
def training_step(self, batch, batch_index):
(x0, x1), _, _ = batch
z0 = self.forward(x0)
z1 = self.forward(x1)
loss = self.criterion(z0, z1)
self.log('train_loss_ssl', loss)
return loss
def configure_optimizers(self):
optim = torch.optim.SGD(
self.parameters(),
lr=6e-2 * lr_factor,
momentum=0.9,
weight_decay=5e-4
)
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optim, max_epochs)
return [optim], [scheduler]
class BYOLModel(BenchmarkModule):
def __init__(self, dataloader_kNN, num_classes):
super().__init__(dataloader_kNN, num_classes)
# create a ResNet backbone and remove the classification head
resnet = lightly.models.ResNetGenerator('resnet-18')
self.backbone = nn.Sequential(
*list(resnet.children())[:-1],
nn.AdaptiveAvgPool2d(1)
)
# create a byol model based on ResNet
self.projection_head = heads.BYOLProjectionHead(512, 1024, 256)
self.prediction_head = heads.BYOLProjectionHead(256, 1024, 256)
self.backbone_momentum = copy.deepcopy(self.backbone)
self.projection_head_momentum = copy.deepcopy(self.projection_head)
utils.deactivate_requires_grad(self.backbone_momentum)
utils.deactivate_requires_grad(self.projection_head_momentum)
self.criterion = lightly.loss.NegativeCosineSimilarity()
def forward(self, x):
y = self.backbone(x).flatten(start_dim=1)
z = self.projection_head(y)
p = self.prediction_head(z)
return p
def forward_momentum(self, x):
y = self.backbone_momentum(x).flatten(start_dim=1)
z = self.projection_head_momentum(y)
z = z.detach()
return z
def training_step(self, batch, batch_idx):
utils.update_momentum(self.backbone, self.backbone_momentum, m=0.99)
utils.update_momentum(self.projection_head, self.projection_head_momentum, m=0.99)
(x0, x1), _, _ = batch
p0 = self.forward(x0)
z0 = self.forward_momentum(x0)
p1 = self.forward(x1)
z1 = self.forward_momentum(x1)
loss = 0.5 * (self.criterion(p0, z1) + self.criterion(p1, z0))
self.log('train_loss_ssl', loss)
return loss
def configure_optimizers(self):
params = list(self.backbone.parameters()) \
+ list(self.projection_head.parameters()) \
+ list(self.prediction_head.parameters())
optim = torch.optim.SGD(
params,
lr=6e-2 * lr_factor,
momentum=0.9,
weight_decay=5e-4,
)
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optim, max_epochs)
return [optim], [scheduler]
class SwaVModel(BenchmarkModule):
def __init__(self, dataloader_kNN, num_classes):
super().__init__(dataloader_kNN, num_classes)
# create a ResNet backbone and remove the classification head
resnet = lightly.models.ResNetGenerator('resnet-18')
self.backbone = nn.Sequential(
*list(resnet.children())[:-1],
nn.AdaptiveAvgPool2d(1)
)
self.projection_head = heads.SwaVProjectionHead(512, 512, 128)
self.prototypes = heads.SwaVPrototypes(128, 512) # use 512 prototypes
self.criterion = lightly.loss.SwaVLoss(sinkhorn_gather_distributed=gather_distributed)
def forward(self, x):
x = self.backbone(x).flatten(start_dim=1)
x = self.projection_head(x)
x = nn.functional.normalize(x, dim=1, p=2)
return self.prototypes(x)
def training_step(self, batch, batch_idx):
# normalize the prototypes so they are on the unit sphere
self.prototypes.normalize()
# the multi-crop dataloader returns a list of image crops where the
# first two items are the high resolution crops and the rest are low
# resolution crops
multi_crops, _, _ = batch
multi_crop_features = [self.forward(x) for x in multi_crops]
# split list of crop features into high and low resolution
high_resolution_features = multi_crop_features[:2]
low_resolution_features = multi_crop_features[2:]
# calculate the SwaV loss
loss = self.criterion(
high_resolution_features,
low_resolution_features
)
self.log('train_loss_ssl', loss)
return loss
def configure_optimizers(self):
optim = torch.optim.Adam(
self.parameters(),
lr=1e-3 * lr_factor,
weight_decay=1e-6,
)
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optim, max_epochs)
return [optim], [scheduler]
class NNCLRModel(BenchmarkModule):
def __init__(self, dataloader_kNN, num_classes):
super().__init__(dataloader_kNN, num_classes)
# create a ResNet backbone and remove the classification head
resnet = lightly.models.ResNetGenerator('resnet-18')
self.backbone = nn.Sequential(
*list(resnet.children())[:-1],
nn.AdaptiveAvgPool2d(1)
)
self.prediction_head = heads.NNCLRPredictionHead(256, 4096, 256)
# use only a 2-layer projection head for cifar10
self.projection_head = heads.ProjectionHead([
(
512,
2048,
nn.BatchNorm1d(2048),
nn.ReLU(inplace=True)
),
(
2048,
256,
nn.BatchNorm1d(256),
None
)
])
self.criterion = lightly.loss.NTXentLoss()
self.memory_bank = modules.NNMemoryBankModule(size=4096)
def forward(self, x):
y = self.backbone(x).flatten(start_dim=1)
z = self.projection_head(y)
p = self.prediction_head(z)
z = z.detach()
return z, p
def training_step(self, batch, batch_idx):
(x0, x1), _, _ = batch
z0, p0 = self.forward(x0)
z1, p1 = self.forward(x1)
z0 = self.memory_bank(z0, update=False)
z1 = self.memory_bank(z1, update=True)
        loss = 0.5 * (self.criterion(z0, p1) + self.criterion(z1, p0))
        self.log('train_loss_ssl', loss)
        return loss
def configure_optimizers(self):
optim = torch.optim.SGD(
self.parameters(),
lr=6e-2 * lr_factor,
momentum=0.9,
weight_decay=5e-4,
)
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optim, max_epochs)
return [optim], [scheduler]
class DINOModel(BenchmarkModule):
def __init__(self, dataloader_kNN, num_classes):
super().__init__(dataloader_kNN, num_classes)
# create a ResNet backbone and remove the classification head
resnet = lightly.models.ResNetGenerator('resnet-18')
self.backbone = nn.Sequential(
*list(resnet.children())[:-1],
nn.AdaptiveAvgPool2d(1)
)
self.head = self._build_projection_head()
self.teacher_backbone = copy.deepcopy(self.backbone)
self.teacher_head = self._build_projection_head()
utils.deactivate_requires_grad(self.teacher_backbone)
utils.deactivate_requires_grad(self.teacher_head)
self.criterion = lightly.loss.DINOLoss(output_dim=2048)
def _build_projection_head(self):
head = heads.DINOProjectionHead(512, 2048, 256, 2048, batch_norm=True)
# use only 2 layers for cifar10
head.layers = heads.ProjectionHead([
(512, 2048, nn.BatchNorm1d(2048), nn.GELU()),
(2048, 256, None, None),
]).layers
return head
def forward(self, x):
y = self.backbone(x).flatten(start_dim=1)
z = self.head(y)
return z
def forward_teacher(self, x):
y = self.teacher_backbone(x).flatten(start_dim=1)
z = self.teacher_head(y)
return z
def training_step(self, batch, batch_idx):
utils.update_momentum(self.backbone, self.teacher_backbone, m=0.99)
utils.update_momentum(self.head, self.teacher_head, m=0.99)
views, _, _ = batch
views = [view.to(self.device) for view in views]
global_views = views[:2]
teacher_out = [self.forward_teacher(view) for view in global_views]
student_out = [self.forward(view) for view in views]
loss = self.criterion(teacher_out, student_out, epoch=self.current_epoch)
self.log('train_loss_ssl', loss)
return loss
def configure_optimizers(self):
param = list(self.backbone.parameters()) \
+ list(self.head.parameters())
optim = torch.optim.SGD(
param,
lr=6e-2 * lr_factor,
momentum=0.9,
weight_decay=5e-4,
)
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optim, max_epochs)
return [optim], [scheduler]
models = [
BarlowTwinsModel,
BYOLModel,
DINOModel,
MocoModel,
NNCLRModel,
SimCLRModel,
SimSiamModel,
SwaVModel,
]
bench_results = dict()
experiment_version = None
# loop through configurations and train models
for BenchmarkModel in models:
runs = []
model_name = BenchmarkModel.__name__.replace('Model', '')
for seed in range(n_runs):
pl.seed_everything(seed)
dataloader_train_ssl, dataloader_train_kNN, dataloader_test = get_data_loaders(
batch_size=batch_size,
model=BenchmarkModel,
)
benchmark_model = BenchmarkModel(dataloader_train_kNN, classes)
# Save logs to: {CWD}/benchmark_logs/cifar10/{experiment_version}/{model_name}/
# If multiple runs are specified a subdirectory for each run is created.
sub_dir = model_name if n_runs <= 1 else f'{model_name}/run{seed}'
logger = TensorBoardLogger(
save_dir=os.path.join(logs_root_dir, 'cifar10'),
name='',
sub_dir=sub_dir,
version=experiment_version,
)
if experiment_version is None:
# Save results of all models under same version directory
experiment_version = logger.version
checkpoint_callback = pl.callbacks.ModelCheckpoint(
dirpath=os.path.join(logger.log_dir, 'checkpoints')
)
trainer = pl.Trainer(
max_epochs=max_epochs,
gpus=gpus,
default_root_dir=logs_root_dir,
strategy=distributed_backend,
sync_batchnorm=sync_batchnorm,
logger=logger,
callbacks=[checkpoint_callback]
)
start = time.time()
trainer.fit(
benchmark_model,
train_dataloaders=dataloader_train_ssl,
val_dataloaders=dataloader_test
)
end = time.time()
run = {
'model': model_name,
'batch_size': batch_size,
'epochs': max_epochs,
'max_accuracy': benchmark_model.max_accuracy,
'runtime': end - start,
'gpu_memory_usage': torch.cuda.max_memory_allocated(),
'seed': seed,
}
runs.append(run)
print(run)
# delete model and trainer + free up cuda memory
del benchmark_model
del trainer
torch.cuda.reset_peak_memory_stats()
torch.cuda.empty_cache()
bench_results[model_name] = runs
# print results table
header = (
f"| {'Model':<13} | {'Batch Size':>10} | {'Epochs':>6} "
f"| {'KNN Test Accuracy':>18} | {'Time':>10} | {'Peak GPU Usage':>14} |"
)
print('-' * len(header))
print(header)
print('-' * len(header))
for model, results in bench_results.items():
runtime = np.array([result['runtime'] for result in results])
runtime = runtime.mean() / 60 # convert to min
accuracy = np.array([result['max_accuracy'] for result in results])
gpu_memory_usage = np.array([result['gpu_memory_usage'] for result in results])
gpu_memory_usage = gpu_memory_usage.max() / (1024**3) # convert to gbyte
if len(accuracy) > 1:
accuracy_msg = f"{accuracy.mean():>8.3f} +- {accuracy.std():>4.3f}"
else:
accuracy_msg = f"{accuracy.mean():>18.3f}"
print(
f"| {model:<13} | {batch_size:>10} | {max_epochs:>6} "
f"| {accuracy_msg} | {runtime:>6.1f} Min "
f"| {gpu_memory_usage:>8.1f} GByte |",
flush=True
)
print('-' * len(header))
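The `BenchmarkModule` base class scores each model with a kNN classifier over frozen features (`knn_k = 200`, `knn_t = 0.1` above). A pure-Python sketch of the cosine-similarity, temperature-weighted voting this is based on (function names are hypothetical; the real benchmark runs it over the full CIFAR-10 feature bank with k = 200):

```python
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def knn_predict(feature, bank, labels, num_classes, k=2, t=0.1):
    """Weight the k nearest bank features by exp(similarity / t) and sum
    the weights per class; lower t sharpens the vote toward the closest
    neighbors. Returns the winning class index."""
    top = sorted(
        ((cosine(feature, f), y) for f, y in zip(bank, labels)),
        reverse=True,
    )[:k]
    scores = [0.0] * num_classes
    for sim, y in top:
        scores[y] += math.exp(sim / t)
    return max(range(num_classes), key=scores.__getitem__)

bank = [[1.0, 0.0], [0.9, 0.1], [0.0, 1.0], [0.1, 0.9]]
labels = [0, 0, 1, 1]
print(knn_predict([0.8, 0.2], bank, labels, num_classes=2))  # class 0
```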
| docs/source/getting_started/benchmarks/cifar10_benchmark.py | 27,350 |
from core.models import Item, Listing, PromoCode, Address, UserProfile
from core.zipcode import zipcodes
from datetime import datetime, timedelta
from decimal import Decimal
from django import forms
from django.core.files.base import ContentFile
from django.core.files.images import get_image_dimensions
from io import BytesIO
from PIL import Image
class ItemListingForm(forms.ModelForm):
title = forms.CharField(widget=forms.TextInput(attrs={'class': 'validate'}), label="Title", max_length=100)
description = forms.CharField(widget=forms.Textarea(attrs={'class': 'materialize-textarea validate'}),
label="Description")
category = forms.ChoiceField(widget=forms.Select(attrs={'class': 'form-control'}), choices=Item.CATEGORY_CHOICES)
price = forms.DecimalField(widget=forms.NumberInput(attrs={'class': 'validate', 'onchange': 'change()'}),
label='Buy now price')
zipcode = forms.IntegerField(widget=forms.NumberInput(attrs={'class': 'validate'}), label='Pickup zipcode')
# For image cropping purposes
crop_x = forms.IntegerField(widget=forms.NumberInput(attrs={'class': 'crop-params'}))
crop_y = forms.IntegerField(widget=forms.NumberInput(attrs={'class': 'crop-params'}))
crop_height = forms.IntegerField(widget=forms.NumberInput(attrs={'class': 'crop-params'}))
crop_width = forms.IntegerField(widget=forms.NumberInput(attrs={'class': 'crop-params'}))
# Make sure starting offer is at least $5.00
def clean_price(self):
price = self.cleaned_data['price']
if price < 5:
raise forms.ValidationError("The minimum price is $5.00.")
return price
# Make sure a category is chosen
def clean_category(self):
category = self.cleaned_data['category']
        if category == '0':
raise forms.ValidationError("You must choose a category for your item.")
return category
# Make sure shipping zip code is one we deliver to
def clean_zipcode(self):
zip_code = self.cleaned_data['zipcode']
if zip_code not in zipcodes():
raise forms.ValidationError("Unfortunately, Circa is not yet available in that zip code.")
return zip_code
def clean_crop_width(self):
width = int(self.cleaned_data['crop_width'])
height = int(self.cleaned_data['crop_height'])
if width < 450 or height < 450:
raise forms.ValidationError("Your cropped image must be at least 450 by 450.")
if width != height:
raise forms.ValidationError("Width and height must match.")
return width
def __init__(self, *args, **kwargs):
self.seller = kwargs.pop('seller')
super().__init__(*args, **kwargs)
def save(self, commit=True):
item = super().save(commit=False)
self.process_image(item)
listing = Listing.objects.create(
price=self.cleaned_data['price'],
zipcode=self.cleaned_data['zipcode']
)
item.listing = listing
item.seller = self.seller
item.save()
return item
def process_image(self, item):
image = Image.open(item.photo)
left = int(self.cleaned_data['crop_x'])
top = int(self.cleaned_data['crop_y'])
width = int(self.cleaned_data['crop_width'])
height = int(self.cleaned_data['crop_height'])
box = (left, top, left+width, top+height)
image = image.crop(box)
f = BytesIO()
try:
image.save(f, format='jpeg')
s = f.getvalue()
item.photo.save(item.photo.name, ContentFile(s))
finally:
f.close()
class Meta:
model = Item
        fields = ['title', 'description', 'category', 'photo']
class PromoForm(forms.Form):
code = forms.CharField()
def __init__(self, *args, **kwargs):
self.user = kwargs.pop('user') # Grabs current user
self.listing = kwargs.pop('listing') # Grabs listing
super(PromoForm, self).__init__(*args, **kwargs)
def clean_code(self):
found = False
promo_code = self.cleaned_data['code']
if PromoCode.objects.all().count() == 0:
raise forms.ValidationError("Sorry, that code isn't valid.")
codes = PromoCode.objects.all()
for promotional_code in codes:
if promotional_code.code == promo_code:
if promotional_code.redeemed:
raise forms.ValidationError("Sorry, promo code already used.")
elif promotional_code.user != self.user:
raise forms.ValidationError("Sorry, that's not your code!")
else:
found = True
break
if not found:
raise forms.ValidationError("Sorry, that code is not valid.")
return promo_code
def save(self):
promo = PromoCode.objects.filter(code=self.cleaned_data['code'])[0]
promo.listing = self.listing
promo.save()
self.listing.save()
class AddressForm(forms.Form):
address_line_1 = forms.CharField()
address_line_2 = forms.CharField(required=False)
city = forms.CharField()
# Must be changed when we branch to different states!
state = forms.CharField(widget=forms.HiddenInput())
INITIAL_STATE = 'GA'
zipcode = forms.CharField()
special_instructions = forms.CharField(required=False)
def __init__(self, *args, **kwargs):
self.user = kwargs.pop('user')
super().__init__(*args, **kwargs)
def save(self):
if not hasattr(self.user, 'userprofile'):
UserProfile.objects.create(user=self.user)
address = Address.objects.create(
address_line_1=self.cleaned_data['address_line_1'],
address_line_2=self.cleaned_data['address_line_2'],
city=self.cleaned_data['city'],
state=self.cleaned_data['state'],
zipcode=self.cleaned_data['zipcode'],
special_instructions=self.cleaned_data['special_instructions']
)
self.user.userprofile.address = address
self.user.userprofile.save()
class EditListingForm(forms.Form):
# Information for Item
title = forms.CharField(widget=forms.TextInput(attrs={'class': 'validate'}), label="Title", max_length=100)
description = forms.CharField(widget=forms.Textarea(attrs={'class': 'materialize-textarea validate'}),
label="Description")
category = forms.ChoiceField(widget=forms.Select(attrs={'class': 'form-control'}), choices=Item.CATEGORY_CHOICES)
# Information for Listing
price = forms.DecimalField(widget=forms.NumberInput(attrs={'class': 'validate'}))
zipcode = forms.IntegerField(widget=forms.NumberInput(attrs={'class': 'validate'}), label='Pickup zipcode')
def __init__(self, *args, **kwargs):
self.listing = kwargs.pop('listing') # Grabs current listing
super(EditListingForm, self).__init__(*args, **kwargs)
# Make sure starting offer is at least $5.00, and that no offers have yet been made
def clean_price(self):
price = Decimal(self.cleaned_data['price'])
if price < 5:
raise forms.ValidationError("The minimum price is $5.00.")
return price
# Make sure a category is chosen
def clean_category(self):
category = self.cleaned_data['category']
        if category == '0':
raise forms.ValidationError("You must choose a category for your item.")
return category
# make sure shipping zip code is one we deliver to
def clean_zipcode(self):
zip_code = self.cleaned_data['zipcode']
if zip_code not in zipcodes():
raise forms.ValidationError("Unfortunately, Circa is not yet available in that zip code.")
return zip_code
def save(self):
self.listing.item.title = self.cleaned_data['title']
self.listing.item.description = self.cleaned_data['description']
self.listing.item.category = self.cleaned_data['category']
self.listing.price = self.cleaned_data['price']
self.listing.zipcode = self.cleaned_data['zipcode']
self.listing.item.save()
self.listing.save()
# This is a special form used to get a user's email if they did not provide one via Facebook
class EmailRequestForm(forms.Form):
email = forms.EmailField()
#!/usr/bin/env python3
# Copyright (c) Facebook, Inc. and its affiliates.
# This source code is licensed under the MIT license found in the
# LICENSE file in the root directory of this source tree.
from parlai.core.teachers import FbDeprecatedDialogTeacher
from .build import build
from parlai.utils.data import DatatypeHelper
import copy
import os
def _path(opt, filtered):
# Build the data if it doesn't exist.
build(opt)
dt = opt['datatype'].split(':')[0]
return os.path.join(opt['datapath'], 'CornellMovie', dt + filtered + '.txt')
class DefaultTeacher(FbDeprecatedDialogTeacher):
def __init__(self, opt, shared=None):
opt = copy.deepcopy(opt)
opt['datafile'] = _path(opt, '')
opt['cands_datafile'] = opt['datafile']
self.fold = DatatypeHelper.fold(opt['datatype'])
super().__init__(opt, shared)
def num_examples(self):
if self.fold == 'train':
return 133125
elif self.fold == 'valid':
return 16759
elif self.fold == 'test':
return 16611
def num_episodes(self):
if self.fold == 'train':
return 66478
elif self.fold == 'valid':
return 8310
elif self.fold == 'test':
return 8309
class DoubleTeacher(DefaultTeacher):
"""
This version creates text-label pairs from the perspective of both speakers.
"""
def num_examples(self):
if self.fold == 'train':
return 176975
elif self.fold == 'valid':
return 22349
elif self.fold == 'test':
return 22013
def num_episodes(self):
if self.fold == 'train':
return 102401
elif self.fold == 'valid':
return 12806
elif self.fold == 'test':
return 12790
def _rebuild(self, entries):
new_list = []
if len(entries) > 0:
# add all ( y_t => x_(t+1) ) pairs
new_list.extend(
[
(entries[i][1][0], [entries[i + 1][0]])
for i in range(len(entries) - 1)
]
)
return new_list
def _is_valid(self, entry):
if entry[0] == '' or entry[1] is None:
return False
return True
def setup_data(self, path):
"""
Adds additional perspectives. For example, in the conversation:
x1 y1
x2 y2
x3
Creates the additional dialog:
y1 x2
y2 x3
"""
# this shows conversations in both directions
alternate = []
for entry, new in super().setup_data(path):
if new:
for i, e in enumerate(self._rebuild(alternate)):
if self._is_valid(e):
yield e, i == 0
alternate.clear()
alternate.append(entry)
if self._is_valid(entry):
yield entry, new
if alternate:
for i, e in enumerate(self._rebuild(alternate)):
if self._is_valid(e):
yield e, i == 0
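The reversed-perspective pairing in `_rebuild` above is easy to exercise in isolation; a minimal standalone sketch of the same list comprehension (the free function name `rebuild` is hypothetical):

```python
def rebuild(entries):
    """Given a list of (text, labels) turns, emit the reversed-perspective
    pairs (label_t -> text_{t+1}), as DoubleTeacher._rebuild does."""
    return [
        (entries[i][1][0], [entries[i + 1][0]])
        for i in range(len(entries) - 1)
    ]
```

A conversation of n turns thus yields n - 1 flipped pairs, and a single turn yields none, which is why `DoubleTeacher.num_examples` roughly doubles the default teacher's counts.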
# This Source Code Form is subject to the terms of the Mozilla Public
# License, v. 2.0. If a copy of the MPL was not distributed with this
# file, You can obtain one at http://mozilla.org/MPL/2.0/.
from nose.tools import eq_, assert_raises
from socorrolib.lib import MissingArgumentError
from socorro.external.postgresql.field import Field
from .unittestbase import PostgreSQLTestCase
class IntegrationTestField(PostgreSQLTestCase):
'''Test socorro.external.postgresql.field.Field class. '''
def setUp(self):
super(IntegrationTestField, self).setUp()
cursor = self.connection.cursor()
cursor.execute('''
INSERT INTO data_dictionary
(raw_field, transforms, product)
VALUES
(
'field1',
'{}',
'WaterWolf'
),
(
'field2',
'{"processor": "some notes"}',
'WaterWolf'
);
''')
self.connection.commit()
def tearDown(self):
'''Clean up the database, delete tables and functions. '''
cursor = self.connection.cursor()
cursor.execute('''
TRUNCATE data_dictionary CASCADE
''')
self.connection.commit()
super(IntegrationTestField, self).tearDown()
def test_get(self):
api = Field(config=self.config)
# expect a result
res = api.get(name='field1')
res_expected = {
'name': 'field1',
'transforms': {},
'product': 'WaterWolf'
}
eq_(res, res_expected)
# expect a result
res = api.get(name='field2')
res_expected = {
'name': 'field2',
'transforms': {'processor': 'some notes'},
'product': 'WaterWolf'
}
eq_(res, res_expected)
# expect no result
res = api.get(name='i-do-not-exist')
res_expected = {
'name': None,
'transforms': None,
'product': None
}
eq_(res, res_expected)
# expect a failure
assert_raises(MissingArgumentError, api.get)
'''
Custom interpolation methods for representing approximations to functions.
It also includes wrapper classes to enforce standard methods across classes.
Each interpolation class must have a distance() method that compares itself to
another instance; this is used in HARK.core's solve() method to check for solution
convergence. The interpolator classes currently in this module inherit their
distance method from HARKobject.
'''
from __future__ import division, print_function
from __future__ import absolute_import
from builtins import range
import numpy as np
from .core import HARKobject
from copy import deepcopy
def _isscalar(x):
'''
    Check whether x is a scalar type or a 0-dimensional array.
Parameters
----------
x : anything
An input to be checked for scalar-ness.
Returns
-------
is_scalar : boolean
True if the input is a scalar, False otherwise.
'''
return np.isscalar(x) or hasattr(x, 'shape') and x.shape == ()
class HARKinterpolator1D(HARKobject):
'''
A wrapper class for 1D interpolation methods in HARK.
'''
distance_criteria = []
def __call__(self,x):
'''
Evaluates the interpolated function at the given input.
Parameters
----------
x : np.array or float
Real values to be evaluated in the interpolated function.
Returns
-------
y : np.array or float
The interpolated function evaluated at x: y = f(x), with the same
shape as x.
'''
z = np.asarray(x)
return (self._evaluate(z.flatten())).reshape(z.shape)
def derivative(self,x):
'''
Evaluates the derivative of the interpolated function at the given input.
Parameters
----------
x : np.array or float
Real values to be evaluated in the interpolated function.
Returns
-------
dydx : np.array or float
The interpolated function's first derivative evaluated at x:
dydx = f'(x), with the same shape as x.
'''
z = np.asarray(x)
return (self._der(z.flatten())).reshape(z.shape)
def eval_with_derivative(self,x):
'''
Evaluates the interpolated function and its derivative at the given input.
Parameters
----------
x : np.array or float
Real values to be evaluated in the interpolated function.
Returns
-------
y : np.array or float
The interpolated function evaluated at x: y = f(x), with the same
shape as x.
dydx : np.array or float
The interpolated function's first derivative evaluated at x:
dydx = f'(x), with the same shape as x.
'''
z = np.asarray(x)
y, dydx = self._evalAndDer(z.flatten())
return y.reshape(z.shape), dydx.reshape(z.shape)
def _evaluate(self,x):
'''
Interpolated function evaluator, to be defined in subclasses.
'''
raise NotImplementedError()
def _der(self,x):
'''
Interpolated function derivative evaluator, to be defined in subclasses.
'''
raise NotImplementedError()
def _evalAndDer(self,x):
'''
Interpolated function and derivative evaluator, to be defined in subclasses.
'''
raise NotImplementedError()
class HARKinterpolator2D(HARKobject):
'''
A wrapper class for 2D interpolation methods in HARK.
'''
distance_criteria = []
def __call__(self,x,y):
'''
Evaluates the interpolated function at the given input.
Parameters
----------
x : np.array or float
Real values to be evaluated in the interpolated function.
y : np.array or float
Real values to be evaluated in the interpolated function; must be
the same size as x.
Returns
-------
fxy : np.array or float
The interpolated function evaluated at x,y: fxy = f(x,y), with the
same shape as x and y.
'''
xa = np.asarray(x)
ya = np.asarray(y)
return (self._evaluate(xa.flatten(),ya.flatten())).reshape(xa.shape)
def derivativeX(self,x,y):
'''
Evaluates the partial derivative of interpolated function with respect
to x (the first argument) at the given input.
Parameters
----------
x : np.array or float
Real values to be evaluated in the interpolated function.
y : np.array or float
Real values to be evaluated in the interpolated function; must be
the same size as x.
Returns
-------
dfdx : np.array or float
The derivative of the interpolated function with respect to x, eval-
uated at x,y: dfdx = f_x(x,y), with the same shape as x and y.
'''
xa = np.asarray(x)
ya = np.asarray(y)
return (self._derX(xa.flatten(),ya.flatten())).reshape(xa.shape)
def derivativeY(self,x,y):
'''
Evaluates the partial derivative of interpolated function with respect
to y (the second argument) at the given input.
Parameters
----------
x : np.array or float
Real values to be evaluated in the interpolated function.
y : np.array or float
Real values to be evaluated in the interpolated function; must be
the same size as x.
Returns
-------
dfdy : np.array or float
The derivative of the interpolated function with respect to y, eval-
uated at x,y: dfdx = f_y(x,y), with the same shape as x and y.
'''
xa = np.asarray(x)
ya = np.asarray(y)
return (self._derY(xa.flatten(),ya.flatten())).reshape(xa.shape)
def _evaluate(self,x,y):
'''
Interpolated function evaluator, to be defined in subclasses.
'''
raise NotImplementedError()
def _derX(self,x,y):
'''
Interpolated function x-derivative evaluator, to be defined in subclasses.
'''
raise NotImplementedError()
def _derY(self,x,y):
'''
Interpolated function y-derivative evaluator, to be defined in subclasses.
'''
raise NotImplementedError()
class HARKinterpolator3D(HARKobject):
'''
A wrapper class for 3D interpolation methods in HARK.
'''
distance_criteria = []
def __call__(self,x,y,z):
'''
Evaluates the interpolated function at the given input.
Parameters
----------
x : np.array or float
Real values to be evaluated in the interpolated function.
y : np.array or float
Real values to be evaluated in the interpolated function; must be
the same size as x.
z : np.array or float
Real values to be evaluated in the interpolated function; must be
the same size as x.
Returns
-------
fxyz : np.array or float
The interpolated function evaluated at x,y,z: fxyz = f(x,y,z), with
the same shape as x, y, and z.
'''
xa = np.asarray(x)
ya = np.asarray(y)
za = np.asarray(z)
return (self._evaluate(xa.flatten(),ya.flatten(),za.flatten())).reshape(xa.shape)
def derivativeX(self,x,y,z):
'''
Evaluates the partial derivative of the interpolated function with respect
to x (the first argument) at the given input.
Parameters
----------
x : np.array or float
Real values to be evaluated in the interpolated function.
y : np.array or float
Real values to be evaluated in the interpolated function; must be
the same size as x.
z : np.array or float
Real values to be evaluated in the interpolated function; must be
the same size as x.
Returns
-------
dfdx : np.array or float
The derivative with respect to x of the interpolated function evaluated
at x,y,z: dfdx = f_x(x,y,z), with the same shape as x, y, and z.
'''
xa = np.asarray(x)
ya = np.asarray(y)
za = np.asarray(z)
return (self._derX(xa.flatten(),ya.flatten(),za.flatten())).reshape(xa.shape)
def derivativeY(self,x,y,z):
'''
Evaluates the partial derivative of the interpolated function with respect
to y (the second argument) at the given input.
Parameters
----------
x : np.array or float
Real values to be evaluated in the interpolated function.
y : np.array or float
Real values to be evaluated in the interpolated function; must be
the same size as x.
z : np.array or float
Real values to be evaluated in the interpolated function; must be
the same size as x.
Returns
-------
dfdy : np.array or float
The derivative with respect to y of the interpolated function evaluated
at x,y,z: dfdy = f_y(x,y,z), with the same shape as x, y, and z.
'''
xa = np.asarray(x)
ya = np.asarray(y)
za = np.asarray(z)
return (self._derY(xa.flatten(),ya.flatten(),za.flatten())).reshape(xa.shape)
def derivativeZ(self,x,y,z):
'''
Evaluates the partial derivative of the interpolated function with respect
to z (the third argument) at the given input.
Parameters
----------
x : np.array or float
Real values to be evaluated in the interpolated function.
y : np.array or float
Real values to be evaluated in the interpolated function; must be
the same size as x.
z : np.array or float
Real values to be evaluated in the interpolated function; must be
the same size as x.
Returns
-------
dfdz : np.array or float
The derivative with respect to z of the interpolated function evaluated
at x,y,z: dfdz = f_z(x,y,z), with the same shape as x, y, and z.
'''
xa = np.asarray(x)
ya = np.asarray(y)
za = np.asarray(z)
return (self._derZ(xa.flatten(),ya.flatten(),za.flatten())).reshape(xa.shape)
def _evaluate(self,x,y,z):
'''
Interpolated function evaluator, to be defined in subclasses.
'''
raise NotImplementedError()
def _derX(self,x,y,z):
'''
Interpolated function x-derivative evaluator, to be defined in subclasses.
'''
raise NotImplementedError()
def _derY(self,x,y,z):
'''
Interpolated function y-derivative evaluator, to be defined in subclasses.
'''
raise NotImplementedError()
def _derZ(self,x,y,z):
'''
        Interpolated function z-derivative evaluator, to be defined in subclasses.
'''
raise NotImplementedError()
class HARKinterpolator4D(HARKobject):
'''
A wrapper class for 4D interpolation methods in HARK.
'''
distance_criteria = []
def __call__(self,w,x,y,z):
'''
Evaluates the interpolated function at the given input.
Parameters
----------
w : np.array or float
Real values to be evaluated in the interpolated function.
x : np.array or float
Real values to be evaluated in the interpolated function; must be
the same size as w.
y : np.array or float
Real values to be evaluated in the interpolated function; must be
the same size as w.
z : np.array or float
Real values to be evaluated in the interpolated function; must be
the same size as w.
Returns
-------
fwxyz : np.array or float
The interpolated function evaluated at w,x,y,z: fwxyz = f(w,x,y,z),
with the same shape as w, x, y, and z.
'''
wa = np.asarray(w)
xa = np.asarray(x)
ya = np.asarray(y)
za = np.asarray(z)
return (self._evaluate(wa.flatten(),xa.flatten(),ya.flatten(),za.flatten())).reshape(wa.shape)
def derivativeW(self,w,x,y,z):
'''
Evaluates the partial derivative with respect to w (the first argument)
of the interpolated function at the given input.
Parameters
----------
w : np.array or float
Real values to be evaluated in the interpolated function.
x : np.array or float
Real values to be evaluated in the interpolated function; must be
the same size as w.
y : np.array or float
Real values to be evaluated in the interpolated function; must be
the same size as w.
z : np.array or float
Real values to be evaluated in the interpolated function; must be
the same size as w.
Returns
-------
dfdw : np.array or float
The derivative with respect to w of the interpolated function eval-
uated at w,x,y,z: dfdw = f_w(w,x,y,z), with the same shape as inputs.
'''
wa = np.asarray(w)
xa = np.asarray(x)
ya = np.asarray(y)
za = np.asarray(z)
return (self._derW(wa.flatten(),xa.flatten(),ya.flatten(),za.flatten())).reshape(wa.shape)
def derivativeX(self,w,x,y,z):
'''
Evaluates the partial derivative with respect to x (the second argument)
of the interpolated function at the given input.
Parameters
----------
w : np.array or float
Real values to be evaluated in the interpolated function.
x : np.array or float
Real values to be evaluated in the interpolated function; must be
the same size as w.
y : np.array or float
Real values to be evaluated in the interpolated function; must be
the same size as w.
z : np.array or float
Real values to be evaluated in the interpolated function; must be
the same size as w.
Returns
-------
dfdx : np.array or float
The derivative with respect to x of the interpolated function eval-
uated at w,x,y,z: dfdx = f_x(w,x,y,z), with the same shape as inputs.
'''
wa = np.asarray(w)
xa = np.asarray(x)
ya = np.asarray(y)
za = np.asarray(z)
return (self._derX(wa.flatten(),xa.flatten(),ya.flatten(),za.flatten())).reshape(wa.shape)
def derivativeY(self,w,x,y,z):
'''
Evaluates the partial derivative with respect to y (the third argument)
of the interpolated function at the given input.
Parameters
----------
w : np.array or float
Real values to be evaluated in the interpolated function.
x : np.array or float
Real values to be evaluated in the interpolated function; must be
the same size as w.
y : np.array or float
Real values to be evaluated in the interpolated function; must be
the same size as w.
z : np.array or float
Real values to be evaluated in the interpolated function; must be
the same size as w.
Returns
-------
dfdy : np.array or float
The derivative with respect to y of the interpolated function eval-
uated at w,x,y,z: dfdy = f_y(w,x,y,z), with the same shape as inputs.
'''
wa = np.asarray(w)
xa = np.asarray(x)
ya = np.asarray(y)
za = np.asarray(z)
return (self._derY(wa.flatten(),xa.flatten(),ya.flatten(),za.flatten())).reshape(wa.shape)
def derivativeZ(self,w,x,y,z):
'''
Evaluates the partial derivative with respect to z (the fourth argument)
of the interpolated function at the given input.
Parameters
----------
w : np.array or float
Real values to be evaluated in the interpolated function.
x : np.array or float
Real values to be evaluated in the interpolated function; must be
the same size as w.
y : np.array or float
Real values to be evaluated in the interpolated function; must be
the same size as w.
z : np.array or float
Real values to be evaluated in the interpolated function; must be
the same size as w.
Returns
-------
dfdz : np.array or float
The derivative with respect to z of the interpolated function eval-
uated at w,x,y,z: dfdz = f_z(w,x,y,z), with the same shape as inputs.
'''
wa = np.asarray(w)
xa = np.asarray(x)
ya = np.asarray(y)
za = np.asarray(z)
return (self._derZ(wa.flatten(),xa.flatten(),ya.flatten(),za.flatten())).reshape(wa.shape)
def _evaluate(self,w,x,y,z):
'''
Interpolated function evaluator, to be defined in subclasses.
'''
raise NotImplementedError()
def _derW(self,w,x,y,z):
'''
Interpolated function w-derivative evaluator, to be defined in subclasses.
'''
raise NotImplementedError()
def _derX(self,w,x,y,z):
'''
        Interpolated function x-derivative evaluator, to be defined in subclasses.
'''
raise NotImplementedError()
def _derY(self,w,x,y,z):
'''
        Interpolated function y-derivative evaluator, to be defined in subclasses.
'''
raise NotImplementedError()
def _derZ(self,w,x,y,z):
'''
        Interpolated function z-derivative evaluator, to be defined in subclasses.
'''
raise NotImplementedError()
class IdentityFunction(HARKobject):
'''
A fairly trivial interpolator that simply returns one of its arguments. Useful for avoiding
numeric error in extreme cases.
'''
distance_criteria = ['i_dim']
def __init__(self,i_dim=0,n_dims=1):
'''
Constructor for a new IdentityFunction.
Parameters
----------
i_dim : int
Index of the dimension on which the identity is defined. f(*x) = x[i]
n_dims : int
Total number of input dimensions for this function.
Returns
-------
None
'''
self.i_dim = i_dim
self.n_dims = n_dims
def __call__(self,*args):
'''
Evaluate the identity function.
'''
return args[self.i_dim]
def derivative(self,*args):
'''
Returns the derivative of the function with respect to the first dimension.
'''
        if self.i_dim == 0:
            return np.ones_like(args[0])
        else:
            return np.zeros_like(args[0])
def derivativeX(self,*args):
'''
Returns the derivative of the function with respect to the X dimension.
This is the first input whenever n_dims < 4 and the second input otherwise.
'''
if self.n_dims >= 4:
j = 1
else:
j = 0
        if self.i_dim == j:
            return np.ones_like(args[0])
        else:
            return np.zeros_like(args[0])
def derivativeY(self,*args):
'''
Returns the derivative of the function with respect to the Y dimension.
This is the second input whenever n_dims < 4 and the third input otherwise.
'''
if self.n_dims >= 4:
j = 2
else:
j = 1
        if self.i_dim == j:
            return np.ones_like(args[0])
        else:
            return np.zeros_like(args[0])
def derivativeZ(self,*args):
'''
Returns the derivative of the function with respect to the Z dimension.
This is the third input whenever n_dims < 4 and the fourth input otherwise.
'''
if self.n_dims >= 4:
j = 3
else:
j = 2
        if self.i_dim == j:
            return np.ones_like(args[0])
        else:
            return np.zeros_like(args[0])
def derivativeW(self,*args):
'''
Returns the derivative of the function with respect to the W dimension.
This should only exist when n_dims >= 4.
'''
if self.n_dims >= 4:
j = 0
else:
assert False, "Derivative with respect to W can't be called when n_dims < 4!"
        if self.i_dim == j:
            return np.ones_like(args[0])
        else:
            return np.zeros_like(args[0])
class ConstantFunction(HARKobject):
'''
A class for representing trivial functions that return the same real output for any input. This
is convenient for models where an object might be a (non-trivial) function, but in some variations
that object is just a constant number. Rather than needing to make a (Bi/Tri/Quad)-
LinearInterpolation with trivial state grids and the same f_value in every entry, ConstantFunction
allows the user to quickly make a constant/trivial function. This comes up, e.g., in models
with endogenous pricing of insurance contracts; a contract's premium might depend on some state
variables of the individual, but in some variations the premium of a contract is just a number.
'''
convergence_criteria = ['value']
def __init__(self,value):
'''
Make a new ConstantFunction object.
Parameters
----------
value : float
The constant value that the function returns.
Returns
-------
None
'''
self.value = float(value)
def __call__(self,*args):
'''
Evaluate the constant function. The first input must exist and should be an array.
Returns an array of identical shape to args[0] (if it exists).
'''
if len(args) > 0: # If there is at least one argument, return appropriately sized array
if _isscalar(args[0]):
return self.value
else:
shape = args[0].shape
return self.value*np.ones(shape)
else: # Otherwise, return a single instance of the constant value
return self.value
def _der(self,*args):
'''
Evaluate the derivative of the function. The first input must exist and should be an array.
Returns an array of identical shape to args[0] (if it exists). This is an array of zeros.
'''
if len(args) > 0:
if _isscalar(args[0]):
return 0.0
else:
shape = args[0].shape
return np.zeros(shape)
else:
return 0.0
# All other derivatives are also zero everywhere, so these methods just point to derivative
derivative = _der
derivativeX = derivative
derivativeY = derivative
derivativeZ = derivative
derivativeW = derivative
derivativeXX= derivative
class LinearInterp(HARKinterpolator1D):
'''
A "from scratch" 1D linear interpolation class. Allows for linear or decay
extrapolation (approaching a limiting linear function from below).
'''
distance_criteria = ['x_list','y_list']
def __init__(self,x_list,y_list,intercept_limit=None,slope_limit=None,lower_extrap=False):
'''
The interpolation constructor to make a new linear spline interpolation.
Parameters
----------
x_list : np.array
List of x values composing the grid.
y_list : np.array
List of y values, representing f(x) at the points in x_list.
intercept_limit : float
Intercept of limiting linear function.
slope_limit : float
Slope of limiting linear function.
lower_extrap : boolean
Indicator for whether lower extrapolation is allowed. False means
f(x) = NaN for x < min(x_list); True means linear extrapolation.
Returns
-------
new instance of LinearInterp
NOTE: When no input is given for the limiting linear function, linear
extrapolation is used above the highest gridpoint.
'''
# Make the basic linear spline interpolation
self.x_list = np.array(x_list)
self.y_list = np.array(y_list)
self.lower_extrap = lower_extrap
self.x_n = self.x_list.size
# Make a decay extrapolation
if intercept_limit is not None and slope_limit is not None:
slope_at_top = (y_list[-1] - y_list[-2])/(x_list[-1] - x_list[-2])
level_diff = intercept_limit + slope_limit*x_list[-1] - y_list[-1]
slope_diff = slope_limit - slope_at_top
self.decay_extrap_A = level_diff
self.decay_extrap_B = -slope_diff/level_diff
self.intercept_limit = intercept_limit
self.slope_limit = slope_limit
self.decay_extrap = True
else:
self.decay_extrap = False
def _evalOrDer(self,x,_eval,_Der):
'''
        Returns the level and/or first derivative of the function at each value in
        x. Only called internally by HARKinterpolator1D.eval_with_derivative (etc).
        Parameters
        ----------
        x : scalar or np.array
            Set of points where we want to evaluate the interpolated function and/or its derivative.
_eval : boolean
Indicator for whether to evalute the level of the interpolated function.
_Der : boolean
Indicator for whether to evaluate the derivative of the interpolated function.
Returns
-------
A list including the level and/or derivative of the interpolated function where requested.
'''
i = np.maximum(np.searchsorted(self.x_list[:-1],x),1)
alpha = (x-self.x_list[i-1])/(self.x_list[i]-self.x_list[i-1])
if _eval:
y = (1.-alpha)*self.y_list[i-1] + alpha*self.y_list[i]
if _Der:
dydx = (self.y_list[i] - self.y_list[i-1])/(self.x_list[i] - self.x_list[i-1])
if not self.lower_extrap:
below_lower_bound = x < self.x_list[0]
if _eval:
y[below_lower_bound] = np.nan
if _Der:
dydx[below_lower_bound] = np.nan
if self.decay_extrap:
above_upper_bound = x > self.x_list[-1]
x_temp = x[above_upper_bound] - self.x_list[-1]
if _eval:
y[above_upper_bound] = self.intercept_limit + \
self.slope_limit*x[above_upper_bound] - \
self.decay_extrap_A*np.exp(-self.decay_extrap_B*x_temp)
if _Der:
dydx[above_upper_bound] = self.slope_limit + \
self.decay_extrap_B*self.decay_extrap_A*\
np.exp(-self.decay_extrap_B*x_temp)
output = []
if _eval:
output += [y,]
if _Der:
output += [dydx,]
return output
def _evaluate(self,x,return_indices = False):
'''
Returns the level of the interpolated function at each value in x. Only
called internally by HARKinterpolator1D.__call__ (etc).
'''
return self._evalOrDer(x,True,False)[0]
def _der(self,x):
'''
Returns the first derivative of the interpolated function at each value
in x. Only called internally by HARKinterpolator1D.derivative (etc).
'''
return self._evalOrDer(x,False,True)[0]
def _evalAndDer(self,x):
'''
Returns the level and first derivative of the function at each value in
        x. Only called internally by HARKinterpolator1D.eval_with_derivative (etc).
'''
y,dydx = self._evalOrDer(x,True,True)
return y,dydx
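The bracketing-and-blending step in `_evalOrDer` above can be illustrated with a pure-Python scalar sketch, using the standard library's `bisect_left` in place of `np.searchsorted` (the function name `linterp` is hypothetical, not part of HARK):

```python
from bisect import bisect_left

def linterp(x_list, y_list, x):
    """Scalar sketch of LinearInterp._evalOrDer: locate the bracketing
    segment, clamping so the end segments extrapolate linearly, then
    return the affine combination of the segment's endpoint values."""
    n = len(x_list)
    # Search x_list[:-1] and force i >= 1, exactly as the class does with
    # np.searchsorted(self.x_list[:-1], x) and np.maximum(..., 1).
    i = max(bisect_left(x_list, x, 0, n - 1), 1)
    alpha = (x - x_list[i - 1]) / (x_list[i] - x_list[i - 1])
    return (1.0 - alpha) * y_list[i - 1] + alpha * y_list[i]
```

Inside the grid, `alpha` lies in [0, 1] and the formula is interpolation; outside the grid, `alpha` escapes that range and the same expression becomes linear extrapolation from the nearest segment, which is why the class only has to special-case the NaN lower bound and the decay upper bound.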
class CubicInterp(HARKinterpolator1D):
'''
An interpolating function using piecewise cubic splines. Matches level and
slope of 1D function at gridpoints, smoothly interpolating in between.
    Extrapolation above the highest gridpoint approaches a limiting linear
    function if desired (linear extrapolation is also available).
'''
distance_criteria = ['x_list','y_list','dydx_list']
def __init__(self,x_list,y_list,dydx_list,intercept_limit=None,slope_limit=None,lower_extrap=False):
'''
The interpolation constructor to make a new cubic spline interpolation.
Parameters
----------
x_list : np.array
List of x values composing the grid.
y_list : np.array
List of y values, representing f(x) at the points in x_list.
dydx_list : np.array
List of dydx values, representing f'(x) at the points in x_list
intercept_limit : float
Intercept of limiting linear function.
slope_limit : float
Slope of limiting linear function.
lower_extrap : boolean
Indicator for whether lower extrapolation is allowed. False means
f(x) = NaN for x < min(x_list); True means linear extrapolation.
Returns
-------
new instance of CubicInterp
NOTE: When no input is given for the limiting linear function, linear
extrapolation is used above the highest gridpoint.
'''
self.x_list = np.asarray(x_list)
self.y_list = np.asarray(y_list)
self.dydx_list = np.asarray(dydx_list)
self.n = len(x_list)
# Define lower extrapolation as linear function (or just NaN)
if lower_extrap:
self.coeffs = [[y_list[0],dydx_list[0],0,0]]
else:
self.coeffs = [[np.nan,np.nan,np.nan,np.nan]]
# Calculate interpolation coefficients on segments mapped to [0,1]
for i in range(self.n-1):
x0 = x_list[i]
y0 = y_list[i]
x1 = x_list[i+1]
y1 = y_list[i+1]
Span = x1 - x0
dydx0 = dydx_list[i]*Span
dydx1 = dydx_list[i+1]*Span
            temp = [y0, dydx0, 3*(y1 - y0) - 2*dydx0 - dydx1, 2*(y0 - y1) + dydx0 + dydx1]
self.coeffs.append(temp)
# Calculate extrapolation coefficients as a decay toward limiting function y = mx+b
if slope_limit is None and intercept_limit is None:
slope_limit = dydx_list[-1]
intercept_limit = y_list[-1] - slope_limit*x_list[-1]
gap = slope_limit*x1 + intercept_limit - y1
slope = slope_limit - dydx_list[self.n-1]
if (gap != 0) and (slope <= 0):
temp = [intercept_limit, slope_limit, gap, slope/gap]
elif slope > 0:
temp = [intercept_limit, slope_limit, 0, 0] # fixing a problem when slope is positive
else:
temp = [intercept_limit, slope_limit, gap, 0]
self.coeffs.append(temp)
self.coeffs = np.array(self.coeffs)
def _evaluate(self,x):
'''
Returns the level of the interpolated function at each value in x. Only
called internally by HARKinterpolator1D.__call__ (etc).
'''
if _isscalar(x):
pos = np.searchsorted(self.x_list,x)
if pos == 0:
y = self.coeffs[0,0] + self.coeffs[0,1]*(x - self.x_list[0])
elif (pos < self.n):
alpha = (x - self.x_list[pos-1])/(self.x_list[pos] - self.x_list[pos-1])
y = self.coeffs[pos,0] + alpha*(self.coeffs[pos,1] + alpha*(self.coeffs[pos,2] + alpha*self.coeffs[pos,3]))
else:
alpha = x - self.x_list[self.n-1]
y = self.coeffs[pos,0] + x*self.coeffs[pos,1] - self.coeffs[pos,2]*np.exp(alpha*self.coeffs[pos,3])
else:
m = len(x)
pos = np.searchsorted(self.x_list,x)
y = np.zeros(m)
if y.size > 0:
out_bot = pos == 0
out_top = pos == self.n
in_bnds = np.logical_not(np.logical_or(out_bot, out_top))
# Do the "in bounds" evaluation points
i = pos[in_bnds]
coeffs_in = self.coeffs[i,:]
alpha = (x[in_bnds] - self.x_list[i-1])/(self.x_list[i] - self.x_list[i-1])
y[in_bnds] = coeffs_in[:,0] + alpha*(coeffs_in[:,1] + alpha*(coeffs_in[:,2] + alpha*coeffs_in[:,3]))
# Do the "out of bounds" evaluation points
y[out_bot] = self.coeffs[0,0] + self.coeffs[0,1]*(x[out_bot] - self.x_list[0])
alpha = x[out_top] - self.x_list[self.n-1]
y[out_top] = self.coeffs[self.n,0] + x[out_top]*self.coeffs[self.n,1] - self.coeffs[self.n,2]*np.exp(alpha*self.coeffs[self.n,3])
return y
def _der(self,x):
'''
Returns the first derivative of the interpolated function at each value
in x. Only called internally by HARKinterpolator1D.derivative (etc).
'''
if _isscalar(x):
pos = np.searchsorted(self.x_list,x)
if pos == 0:
dydx = self.coeffs[0,1]
elif (pos < self.n):
alpha = (x - self.x_list[pos-1])/(self.x_list[pos] - self.x_list[pos-1])
dydx = (self.coeffs[pos,1] + alpha*(2*self.coeffs[pos,2] + alpha*3*self.coeffs[pos,3]))/(self.x_list[pos] - self.x_list[pos-1])
else:
alpha = x - self.x_list[self.n-1]
dydx = self.coeffs[pos,1] - self.coeffs[pos,2]*self.coeffs[pos,3]*np.exp(alpha*self.coeffs[pos,3])
else:
m = len(x)
pos = np.searchsorted(self.x_list,x)
dydx = np.zeros(m)
if dydx.size > 0:
out_bot = pos == 0
out_top = pos == self.n
in_bnds = np.logical_not(np.logical_or(out_bot, out_top))
# Do the "in bounds" evaluation points
i = pos[in_bnds]
coeffs_in = self.coeffs[i,:]
alpha = (x[in_bnds] - self.x_list[i-1])/(self.x_list[i] - self.x_list[i-1])
dydx[in_bnds] = (coeffs_in[:,1] + alpha*(2*coeffs_in[:,2] + alpha*3*coeffs_in[:,3]))/(self.x_list[i] - self.x_list[i-1])
# Do the "out of bounds" evaluation points
dydx[out_bot] = self.coeffs[0,1]
alpha = x[out_top] - self.x_list[self.n-1]
dydx[out_top] = self.coeffs[self.n,1] - self.coeffs[self.n,2]*self.coeffs[self.n,3]*np.exp(alpha*self.coeffs[self.n,3])
return dydx
def _evalAndDer(self,x):
'''
Returns the level and first derivative of the function at each value in
x. Only called internally by HARKinterpolator1D.eval_and_der (etc).
'''
if _isscalar(x):
pos = np.searchsorted(self.x_list,x)
if pos == 0:
y = self.coeffs[0,0] + self.coeffs[0,1]*(x - self.x_list[0])
dydx = self.coeffs[0,1]
elif (pos < self.n):
alpha = (x - self.x_list[pos-1])/(self.x_list[pos] - self.x_list[pos-1])
y = self.coeffs[pos,0] + alpha*(self.coeffs[pos,1] + alpha*(self.coeffs[pos,2] + alpha*self.coeffs[pos,3]))
dydx = (self.coeffs[pos,1] + alpha*(2*self.coeffs[pos,2] + alpha*3*self.coeffs[pos,3]))/(self.x_list[pos] - self.x_list[pos-1])
else:
alpha = x - self.x_list[self.n-1]
y = self.coeffs[pos,0] + x*self.coeffs[pos,1] - self.coeffs[pos,2]*np.exp(alpha*self.coeffs[pos,3])
dydx = self.coeffs[pos,1] - self.coeffs[pos,2]*self.coeffs[pos,3]*np.exp(alpha*self.coeffs[pos,3])
else:
m = len(x)
pos = np.searchsorted(self.x_list,x)
y = np.zeros(m)
dydx = np.zeros(m)
if y.size > 0:
out_bot = pos == 0
out_top = pos == self.n
in_bnds = np.logical_not(np.logical_or(out_bot, out_top))
# Do the "in bounds" evaluation points
i = pos[in_bnds]
coeffs_in = self.coeffs[i,:]
alpha = (x[in_bnds] - self.x_list[i-1])/(self.x_list[i] - self.x_list[i-1])
y[in_bnds] = coeffs_in[:,0] + alpha*(coeffs_in[:,1] + alpha*(coeffs_in[:,2] + alpha*coeffs_in[:,3]))
dydx[in_bnds] = (coeffs_in[:,1] + alpha*(2*coeffs_in[:,2] + alpha*3*coeffs_in[:,3]))/(self.x_list[i] - self.x_list[i-1])
# Do the "out of bounds" evaluation points
y[out_bot] = self.coeffs[0,0] + self.coeffs[0,1]*(x[out_bot] - self.x_list[0])
dydx[out_bot] = self.coeffs[0,1]
alpha = x[out_top] - self.x_list[self.n-1]
y[out_top] = self.coeffs[self.n,0] + x[out_top]*self.coeffs[self.n,1] - self.coeffs[self.n,2]*np.exp(alpha*self.coeffs[self.n,3])
dydx[out_top] = self.coeffs[self.n,1] - self.coeffs[self.n,2]*self.coeffs[self.n,3]*np.exp(alpha*self.coeffs[self.n,3])
return y, dydx
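# The segment construction in __init__ above is a cubic Hermite spline: each
# interval [x0, x1] is mapped to alpha in [0, 1] and fit with
# y(alpha) = c0 + c1*alpha + c2*alpha**2 + c3*alpha**3, matching the level and
# the span-rescaled slope at both endpoints. A minimal standalone sketch of one
# segment (plain numpy, not using CubicInterp itself; the cubic f(x) = x**3 is
# a hypothetical test case, chosen because Hermite interpolation reproduces any
# cubic exactly):

```python
import numpy as np

def hermite_segment(x, x0, x1, y0, y1, dydx0, dydx1):
    """Evaluate the cubic Hermite interpolant for [x0, x1] at x."""
    span = x1 - x0
    d0, d1 = dydx0 * span, dydx1 * span  # slopes rescaled to alpha units
    c0, c1 = y0, d0
    c2 = 3*(y1 - y0) - 2*d0 - d1
    c3 = 2*(y0 - y1) + d0 + d1
    alpha = (x - x0) / span
    return c0 + alpha*(c1 + alpha*(c2 + alpha*c3))

# f(x) = x**3 on [1, 2]: endpoint levels 1 and 8, endpoint slopes 3 and 12
assert np.isclose(hermite_segment(1.3, 1.0, 2.0, 1.0, 8.0, 3.0, 12.0), 1.3**3)
```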
class BilinearInterp(HARKinterpolator2D):
'''
Bilinear full (or tensor) grid interpolation of a function f(x,y).
'''
distance_criteria = ['x_list','y_list','f_values']
def __init__(self,f_values,x_list,y_list,xSearchFunc=None,ySearchFunc=None):
'''
Constructor to make a new bilinear interpolation.
Parameters
----------
f_values : numpy.array
An array of size (x_n,y_n) such that f_values[i,j] = f(x_list[i],y_list[j])
x_list : numpy.array
An array of x values, with length designated x_n.
y_list : numpy.array
An array of y values, with length designated y_n.
xSearchFunc : function
An optional function that returns the reference location for x values:
indices = xSearchFunc(x_list,x). Default is np.searchsorted
ySearchFunc : function
An optional function that returns the reference location for y values:
indices = ySearchFunc(y_list,y). Default is np.searchsorted
Returns
-------
new instance of BilinearInterp
'''
self.f_values = f_values
self.x_list = x_list
self.y_list = y_list
self.x_n = x_list.size
self.y_n = y_list.size
if xSearchFunc is None:
xSearchFunc = np.searchsorted
if ySearchFunc is None:
ySearchFunc = np.searchsorted
self.xSearchFunc = xSearchFunc
self.ySearchFunc = ySearchFunc
def _evaluate(self,x,y):
'''
Returns the level of the interpolated function at each value in x,y.
Only called internally by HARKinterpolator2D.__call__ (etc).
'''
if _isscalar(x):
x_pos = max(min(self.xSearchFunc(self.x_list,x),self.x_n-1),1)
y_pos = max(min(self.ySearchFunc(self.y_list,y),self.y_n-1),1)
else:
x_pos = self.xSearchFunc(self.x_list,x)
x_pos[x_pos < 1] = 1
x_pos[x_pos > self.x_n-1] = self.x_n-1
y_pos = self.ySearchFunc(self.y_list,y)
y_pos[y_pos < 1] = 1
y_pos[y_pos > self.y_n-1] = self.y_n-1
alpha = (x - self.x_list[x_pos-1])/(self.x_list[x_pos] - self.x_list[x_pos-1])
beta = (y - self.y_list[y_pos-1])/(self.y_list[y_pos] - self.y_list[y_pos-1])
f = (
(1-alpha)*(1-beta)*self.f_values[x_pos-1,y_pos-1]
+ (1-alpha)*beta*self.f_values[x_pos-1,y_pos]
+ alpha*(1-beta)*self.f_values[x_pos,y_pos-1]
+ alpha*beta*self.f_values[x_pos,y_pos])
return f
def _derX(self,x,y):
'''
Returns the derivative with respect to x of the interpolated function
at each value in x,y. Only called internally by HARKinterpolator2D.derivativeX.
'''
if _isscalar(x):
x_pos = max(min(self.xSearchFunc(self.x_list,x),self.x_n-1),1)
y_pos = max(min(self.ySearchFunc(self.y_list,y),self.y_n-1),1)
else:
x_pos = self.xSearchFunc(self.x_list,x)
x_pos[x_pos < 1] = 1
x_pos[x_pos > self.x_n-1] = self.x_n-1
y_pos = self.ySearchFunc(self.y_list,y)
y_pos[y_pos < 1] = 1
y_pos[y_pos > self.y_n-1] = self.y_n-1
beta = (y - self.y_list[y_pos-1])/(self.y_list[y_pos] - self.y_list[y_pos-1])
dfdx = (
((1-beta)*self.f_values[x_pos,y_pos-1]
+ beta*self.f_values[x_pos,y_pos]) -
((1-beta)*self.f_values[x_pos-1,y_pos-1]
+ beta*self.f_values[x_pos-1,y_pos]))/(self.x_list[x_pos] - self.x_list[x_pos-1])
return dfdx
def _derY(self,x,y):
'''
Returns the derivative with respect to y of the interpolated function
at each value in x,y. Only called internally by HARKinterpolator2D.derivativeY.
'''
if _isscalar(x):
x_pos = max(min(self.xSearchFunc(self.x_list,x),self.x_n-1),1)
y_pos = max(min(self.ySearchFunc(self.y_list,y),self.y_n-1),1)
else:
x_pos = self.xSearchFunc(self.x_list,x)
x_pos[x_pos < 1] = 1
x_pos[x_pos > self.x_n-1] = self.x_n-1
y_pos = self.ySearchFunc(self.y_list,y)
y_pos[y_pos < 1] = 1
y_pos[y_pos > self.y_n-1] = self.y_n-1
alpha = (x - self.x_list[x_pos-1])/(self.x_list[x_pos] - self.x_list[x_pos-1])
dfdy = (
((1-alpha)*self.f_values[x_pos-1,y_pos]
+ alpha*self.f_values[x_pos,y_pos]) -
((1-alpha)*self.f_values[x_pos-1,y_pos-1]
+ alpha*self.f_values[x_pos,y_pos-1]))/(self.y_list[y_pos] - self.y_list[y_pos-1])
return dfdy
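# The four weights used above -- (1-alpha)(1-beta), (1-alpha)*beta,
# alpha*(1-beta), alpha*beta -- make interior evaluation a convex combination
# of the four surrounding grid values, and the scheme is exact for any function
# of the form a + b*x + c*y + d*x*y. A standalone sketch (plain numpy, not
# using BilinearInterp; the grids and test function are hypothetical):

```python
import numpy as np

def bilerp(x, y, xg, yg, f):
    """Bilinear interpolation of tabulated f[i, j] = f(xg[i], yg[j])."""
    i = int(np.clip(np.searchsorted(xg, x), 1, len(xg) - 1))
    j = int(np.clip(np.searchsorted(yg, y), 1, len(yg) - 1))
    a = (x - xg[i-1]) / (xg[i] - xg[i-1])
    b = (y - yg[j-1]) / (yg[j] - yg[j-1])
    return ((1-a)*(1-b)*f[i-1, j-1] + (1-a)*b*f[i-1, j]
            + a*(1-b)*f[i, j-1] + a*b*f[i, j])

xg = np.array([0.0, 1.0, 2.0])
yg = np.array([0.0, 1.0, 3.0])
F = 2.0*xg[:, None] + 3.0*yg[None, :] + xg[:, None]*yg[None, :]
assert np.isclose(bilerp(0.4, 2.0, xg, yg, F), 2*0.4 + 3*2.0 + 0.4*2.0)
```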
class TrilinearInterp(HARKinterpolator3D):
'''
Trilinear full (or tensor) grid interpolation of a function f(x,y,z).
'''
distance_criteria = ['f_values','x_list','y_list','z_list']
def __init__(self,f_values,x_list,y_list,z_list,xSearchFunc=None,ySearchFunc=None,zSearchFunc=None):
'''
Constructor to make a new trilinear interpolation.
Parameters
----------
f_values : numpy.array
An array of size (x_n,y_n,z_n) such that f_values[i,j,k] =
f(x_list[i],y_list[j],z_list[k])
x_list : numpy.array
An array of x values, with length designated x_n.
y_list : numpy.array
An array of y values, with length designated y_n.
z_list : numpy.array
An array of z values, with length designated z_n.
xSearchFunc : function
An optional function that returns the reference location for x values:
indices = xSearchFunc(x_list,x). Default is np.searchsorted
ySearchFunc : function
An optional function that returns the reference location for y values:
indices = ySearchFunc(y_list,y). Default is np.searchsorted
zSearchFunc : function
An optional function that returns the reference location for z values:
indices = zSearchFunc(z_list,z). Default is np.searchsorted
Returns
-------
new instance of TrilinearInterp
'''
self.f_values = f_values
self.x_list = x_list
self.y_list = y_list
self.z_list = z_list
self.x_n = x_list.size
self.y_n = y_list.size
self.z_n = z_list.size
if xSearchFunc is None:
xSearchFunc = np.searchsorted
if ySearchFunc is None:
ySearchFunc = np.searchsorted
if zSearchFunc is None:
zSearchFunc = np.searchsorted
self.xSearchFunc = xSearchFunc
self.ySearchFunc = ySearchFunc
self.zSearchFunc = zSearchFunc
def _evaluate(self,x,y,z):
'''
Returns the level of the interpolated function at each value in x,y,z.
Only called internally by HARKinterpolator3D.__call__ (etc).
'''
if _isscalar(x):
x_pos = max(min(self.xSearchFunc(self.x_list,x),self.x_n-1),1)
y_pos = max(min(self.ySearchFunc(self.y_list,y),self.y_n-1),1)
z_pos = max(min(self.zSearchFunc(self.z_list,z),self.z_n-1),1)
else:
x_pos = self.xSearchFunc(self.x_list,x)
x_pos[x_pos < 1] = 1
x_pos[x_pos > self.x_n-1] = self.x_n-1
y_pos = self.ySearchFunc(self.y_list,y)
y_pos[y_pos < 1] = 1
y_pos[y_pos > self.y_n-1] = self.y_n-1
z_pos = self.zSearchFunc(self.z_list,z)
z_pos[z_pos < 1] = 1
z_pos[z_pos > self.z_n-1] = self.z_n-1
alpha = (x - self.x_list[x_pos-1])/(self.x_list[x_pos] - self.x_list[x_pos-1])
beta = (y - self.y_list[y_pos-1])/(self.y_list[y_pos] - self.y_list[y_pos-1])
gamma = (z - self.z_list[z_pos-1])/(self.z_list[z_pos] - self.z_list[z_pos-1])
f = (
(1-alpha)*(1-beta)*(1-gamma)*self.f_values[x_pos-1,y_pos-1,z_pos-1]
+ (1-alpha)*(1-beta)*gamma*self.f_values[x_pos-1,y_pos-1,z_pos]
+ (1-alpha)*beta*(1-gamma)*self.f_values[x_pos-1,y_pos,z_pos-1]
+ (1-alpha)*beta*gamma*self.f_values[x_pos-1,y_pos,z_pos]
+ alpha*(1-beta)*(1-gamma)*self.f_values[x_pos,y_pos-1,z_pos-1]
+ alpha*(1-beta)*gamma*self.f_values[x_pos,y_pos-1,z_pos]
+ alpha*beta*(1-gamma)*self.f_values[x_pos,y_pos,z_pos-1]
+ alpha*beta*gamma*self.f_values[x_pos,y_pos,z_pos])
return f
def _derX(self,x,y,z):
'''
Returns the derivative with respect to x of the interpolated function
at each value in x,y,z. Only called internally by HARKinterpolator3D.derivativeX.
'''
if _isscalar(x):
x_pos = max(min(self.xSearchFunc(self.x_list,x),self.x_n-1),1)
y_pos = max(min(self.ySearchFunc(self.y_list,y),self.y_n-1),1)
z_pos = max(min(self.zSearchFunc(self.z_list,z),self.z_n-1),1)
else:
x_pos = self.xSearchFunc(self.x_list,x)
x_pos[x_pos < 1] = 1
x_pos[x_pos > self.x_n-1] = self.x_n-1
y_pos = self.ySearchFunc(self.y_list,y)
y_pos[y_pos < 1] = 1
y_pos[y_pos > self.y_n-1] = self.y_n-1
z_pos = self.zSearchFunc(self.z_list,z)
z_pos[z_pos < 1] = 1
z_pos[z_pos > self.z_n-1] = self.z_n-1
beta = (y - self.y_list[y_pos-1])/(self.y_list[y_pos] - self.y_list[y_pos-1])
gamma = (z - self.z_list[z_pos-1])/(self.z_list[z_pos] - self.z_list[z_pos-1])
dfdx = (
( (1-beta)*(1-gamma)*self.f_values[x_pos,y_pos-1,z_pos-1]
+ (1-beta)*gamma*self.f_values[x_pos,y_pos-1,z_pos]
+ beta*(1-gamma)*self.f_values[x_pos,y_pos,z_pos-1]
+ beta*gamma*self.f_values[x_pos,y_pos,z_pos]) -
( (1-beta)*(1-gamma)*self.f_values[x_pos-1,y_pos-1,z_pos-1]
+ (1-beta)*gamma*self.f_values[x_pos-1,y_pos-1,z_pos]
+ beta*(1-gamma)*self.f_values[x_pos-1,y_pos,z_pos-1]
+ beta*gamma*self.f_values[x_pos-1,y_pos,z_pos]))/(self.x_list[x_pos] - self.x_list[x_pos-1])
return dfdx
def _derY(self,x,y,z):
'''
Returns the derivative with respect to y of the interpolated function
at each value in x,y,z. Only called internally by HARKinterpolator3D.derivativeY.
'''
if _isscalar(x):
x_pos = max(min(self.xSearchFunc(self.x_list,x),self.x_n-1),1)
y_pos = max(min(self.ySearchFunc(self.y_list,y),self.y_n-1),1)
z_pos = max(min(self.zSearchFunc(self.z_list,z),self.z_n-1),1)
else:
x_pos = self.xSearchFunc(self.x_list,x)
x_pos[x_pos < 1] = 1
x_pos[x_pos > self.x_n-1] = self.x_n-1
y_pos = self.ySearchFunc(self.y_list,y)
y_pos[y_pos < 1] = 1
y_pos[y_pos > self.y_n-1] = self.y_n-1
z_pos = self.zSearchFunc(self.z_list,z)
z_pos[z_pos < 1] = 1
z_pos[z_pos > self.z_n-1] = self.z_n-1
alpha = (x - self.x_list[x_pos-1])/(self.x_list[x_pos] - self.x_list[x_pos-1])
gamma = (z - self.z_list[z_pos-1])/(self.z_list[z_pos] - self.z_list[z_pos-1])
dfdy = (
( (1-alpha)*(1-gamma)*self.f_values[x_pos-1,y_pos,z_pos-1]
+ (1-alpha)*gamma*self.f_values[x_pos-1,y_pos,z_pos]
+ alpha*(1-gamma)*self.f_values[x_pos,y_pos,z_pos-1]
+ alpha*gamma*self.f_values[x_pos,y_pos,z_pos]) -
( (1-alpha)*(1-gamma)*self.f_values[x_pos-1,y_pos-1,z_pos-1]
+ (1-alpha)*gamma*self.f_values[x_pos-1,y_pos-1,z_pos]
+ alpha*(1-gamma)*self.f_values[x_pos,y_pos-1,z_pos-1]
+ alpha*gamma*self.f_values[x_pos,y_pos-1,z_pos]))/(self.y_list[y_pos] - self.y_list[y_pos-1])
return dfdy
def _derZ(self,x,y,z):
'''
Returns the derivative with respect to z of the interpolated function
at each value in x,y,z. Only called internally by HARKinterpolator3D.derivativeZ.
'''
if _isscalar(x):
x_pos = max(min(self.xSearchFunc(self.x_list,x),self.x_n-1),1)
y_pos = max(min(self.ySearchFunc(self.y_list,y),self.y_n-1),1)
z_pos = max(min(self.zSearchFunc(self.z_list,z),self.z_n-1),1)
else:
x_pos = self.xSearchFunc(self.x_list,x)
x_pos[x_pos < 1] = 1
x_pos[x_pos > self.x_n-1] = self.x_n-1
y_pos = self.ySearchFunc(self.y_list,y)
y_pos[y_pos < 1] = 1
y_pos[y_pos > self.y_n-1] = self.y_n-1
z_pos = self.zSearchFunc(self.z_list,z)
z_pos[z_pos < 1] = 1
z_pos[z_pos > self.z_n-1] = self.z_n-1
alpha = (x - self.x_list[x_pos-1])/(self.x_list[x_pos] - self.x_list[x_pos-1])
beta = (y - self.y_list[y_pos-1])/(self.y_list[y_pos] - self.y_list[y_pos-1])
dfdz = (
( (1-alpha)*(1-beta)*self.f_values[x_pos-1,y_pos-1,z_pos]
+ (1-alpha)*beta*self.f_values[x_pos-1,y_pos,z_pos]
+ alpha*(1-beta)*self.f_values[x_pos,y_pos-1,z_pos]
+ alpha*beta*self.f_values[x_pos,y_pos,z_pos]) -
( (1-alpha)*(1-beta)*self.f_values[x_pos-1,y_pos-1,z_pos-1]
+ (1-alpha)*beta*self.f_values[x_pos-1,y_pos,z_pos-1]
+ alpha*(1-beta)*self.f_values[x_pos,y_pos-1,z_pos-1]
+ alpha*beta*self.f_values[x_pos,y_pos,z_pos-1]))/(self.z_list[z_pos] - self.z_list[z_pos-1])
return dfdz
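# Because x_pos, y_pos, and z_pos are clipped into [1, n-1] above, queries
# outside the grid reuse the boundary cell with a fraction below 0 or above 1,
# which amounts to (multi)linear extrapolation. A standalone sketch (plain
# numpy, not using TrilinearInterp; the common grid and test function are
# hypothetical -- a linear f is extrapolated exactly):

```python
import numpy as np

def trilerp(x, y, z, g, f):
    """Trilinear interpolation of f[i, j, k] = f(g[i], g[j], g[k]) on a common grid g."""
    def frac(v):
        p = int(np.clip(np.searchsorted(g, v), 1, len(g) - 1))
        return p, (v - g[p-1]) / (g[p] - g[p-1])
    i, a = frac(x); j, b = frac(y); k, c = frac(z)
    out = 0.0
    for di, wi in ((0, 1-a), (1, a)):          # corner weights along each axis
        for dj, wj in ((0, 1-b), (1, b)):
            for dk, wk in ((0, 1-c), (1, c)):
                out += wi*wj*wk*f[i-1+di, j-1+dj, k-1+dk]
    return out

g = np.array([0.0, 1.0, 2.0])
F = g[:, None, None] + 2.0*g[None, :, None] + 3.0*g[None, None, :]
# z = 5.0 lies beyond the grid: the clipped indices give linear extrapolation
assert np.isclose(trilerp(0.5, 1.5, 5.0, g, F), 0.5 + 2*1.5 + 3*5.0)
```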
class QuadlinearInterp(HARKinterpolator4D):
'''
Quadlinear full (or tensor) grid interpolation of a function f(w,x,y,z).
'''
distance_criteria = ['f_values','w_list','x_list','y_list','z_list']
def __init__(self,f_values,w_list,x_list,y_list,z_list,wSearchFunc=None,xSearchFunc=None,ySearchFunc=None,zSearchFunc=None):
'''
Constructor to make a new quadlinear interpolation.
Parameters
----------
f_values : numpy.array
An array of size (w_n,x_n,y_n,z_n) such that f_values[i,j,k,l] =
f(w_list[i],x_list[j],y_list[k],z_list[l])
w_list : numpy.array
            An array of w values, with length designated w_n.
x_list : numpy.array
An array of x values, with length designated x_n.
y_list : numpy.array
An array of y values, with length designated y_n.
z_list : numpy.array
An array of z values, with length designated z_n.
wSearchFunc : function
An optional function that returns the reference location for w values:
indices = wSearchFunc(w_list,w). Default is np.searchsorted
xSearchFunc : function
An optional function that returns the reference location for x values:
indices = xSearchFunc(x_list,x). Default is np.searchsorted
ySearchFunc : function
An optional function that returns the reference location for y values:
indices = ySearchFunc(y_list,y). Default is np.searchsorted
zSearchFunc : function
An optional function that returns the reference location for z values:
indices = zSearchFunc(z_list,z). Default is np.searchsorted
Returns
-------
new instance of QuadlinearInterp
'''
self.f_values = f_values
self.w_list = w_list
self.x_list = x_list
self.y_list = y_list
self.z_list = z_list
self.w_n = w_list.size
self.x_n = x_list.size
self.y_n = y_list.size
self.z_n = z_list.size
if wSearchFunc is None:
wSearchFunc = np.searchsorted
if xSearchFunc is None:
xSearchFunc = np.searchsorted
if ySearchFunc is None:
ySearchFunc = np.searchsorted
if zSearchFunc is None:
zSearchFunc = np.searchsorted
self.wSearchFunc = wSearchFunc
self.xSearchFunc = xSearchFunc
self.ySearchFunc = ySearchFunc
self.zSearchFunc = zSearchFunc
def _evaluate(self,w,x,y,z):
'''
        Returns the level of the interpolated function at each value in w,x,y,z.
Only called internally by HARKinterpolator4D.__call__ (etc).
'''
if _isscalar(w):
w_pos = max(min(self.wSearchFunc(self.w_list,w),self.w_n-1),1)
x_pos = max(min(self.xSearchFunc(self.x_list,x),self.x_n-1),1)
y_pos = max(min(self.ySearchFunc(self.y_list,y),self.y_n-1),1)
z_pos = max(min(self.zSearchFunc(self.z_list,z),self.z_n-1),1)
else:
w_pos = self.wSearchFunc(self.w_list,w)
w_pos[w_pos < 1] = 1
w_pos[w_pos > self.w_n-1] = self.w_n-1
x_pos = self.xSearchFunc(self.x_list,x)
x_pos[x_pos < 1] = 1
x_pos[x_pos > self.x_n-1] = self.x_n-1
y_pos = self.ySearchFunc(self.y_list,y)
y_pos[y_pos < 1] = 1
y_pos[y_pos > self.y_n-1] = self.y_n-1
z_pos = self.zSearchFunc(self.z_list,z)
z_pos[z_pos < 1] = 1
z_pos[z_pos > self.z_n-1] = self.z_n-1
i = w_pos # for convenience
j = x_pos
k = y_pos
l = z_pos
alpha = (w - self.w_list[i-1])/(self.w_list[i] - self.w_list[i-1])
beta = (x - self.x_list[j-1])/(self.x_list[j] - self.x_list[j-1])
gamma = (y - self.y_list[k-1])/(self.y_list[k] - self.y_list[k-1])
delta = (z - self.z_list[l-1])/(self.z_list[l] - self.z_list[l-1])
f = (
(1-alpha)*((1-beta)*((1-gamma)*(1-delta)*self.f_values[i-1,j-1,k-1,l-1]
+ (1-gamma)*delta*self.f_values[i-1,j-1,k-1,l]
+ gamma*(1-delta)*self.f_values[i-1,j-1,k,l-1]
+ gamma*delta*self.f_values[i-1,j-1,k,l])
+ beta*((1-gamma)*(1-delta)*self.f_values[i-1,j,k-1,l-1]
+ (1-gamma)*delta*self.f_values[i-1,j,k-1,l]
+ gamma*(1-delta)*self.f_values[i-1,j,k,l-1]
+ gamma*delta*self.f_values[i-1,j,k,l]))
+ alpha*((1-beta)*((1-gamma)*(1-delta)*self.f_values[i,j-1,k-1,l-1]
+ (1-gamma)*delta*self.f_values[i,j-1,k-1,l]
+ gamma*(1-delta)*self.f_values[i,j-1,k,l-1]
+ gamma*delta*self.f_values[i,j-1,k,l])
+ beta*((1-gamma)*(1-delta)*self.f_values[i,j,k-1,l-1]
+ (1-gamma)*delta*self.f_values[i,j,k-1,l]
+ gamma*(1-delta)*self.f_values[i,j,k,l-1]
+ gamma*delta*self.f_values[i,j,k,l])))
return f
def _derW(self,w,x,y,z):
'''
Returns the derivative with respect to w of the interpolated function
at each value in w,x,y,z. Only called internally by HARKinterpolator4D.derivativeW.
'''
if _isscalar(w):
w_pos = max(min(self.wSearchFunc(self.w_list,w),self.w_n-1),1)
x_pos = max(min(self.xSearchFunc(self.x_list,x),self.x_n-1),1)
y_pos = max(min(self.ySearchFunc(self.y_list,y),self.y_n-1),1)
z_pos = max(min(self.zSearchFunc(self.z_list,z),self.z_n-1),1)
else:
w_pos = self.wSearchFunc(self.w_list,w)
w_pos[w_pos < 1] = 1
w_pos[w_pos > self.w_n-1] = self.w_n-1
x_pos = self.xSearchFunc(self.x_list,x)
x_pos[x_pos < 1] = 1
x_pos[x_pos > self.x_n-1] = self.x_n-1
y_pos = self.ySearchFunc(self.y_list,y)
y_pos[y_pos < 1] = 1
y_pos[y_pos > self.y_n-1] = self.y_n-1
z_pos = self.zSearchFunc(self.z_list,z)
z_pos[z_pos < 1] = 1
z_pos[z_pos > self.z_n-1] = self.z_n-1
i = w_pos # for convenience
j = x_pos
k = y_pos
l = z_pos
beta = (x - self.x_list[j-1])/(self.x_list[j] - self.x_list[j-1])
gamma = (y - self.y_list[k-1])/(self.y_list[k] - self.y_list[k-1])
delta = (z - self.z_list[l-1])/(self.z_list[l] - self.z_list[l-1])
dfdw = (
( (1-beta)*(1-gamma)*(1-delta)*self.f_values[i,j-1,k-1,l-1]
+ (1-beta)*(1-gamma)*delta*self.f_values[i,j-1,k-1,l]
+ (1-beta)*gamma*(1-delta)*self.f_values[i,j-1,k,l-1]
+ (1-beta)*gamma*delta*self.f_values[i,j-1,k,l]
+ beta*(1-gamma)*(1-delta)*self.f_values[i,j,k-1,l-1]
+ beta*(1-gamma)*delta*self.f_values[i,j,k-1,l]
+ beta*gamma*(1-delta)*self.f_values[i,j,k,l-1]
+ beta*gamma*delta*self.f_values[i,j,k,l] ) -
( (1-beta)*(1-gamma)*(1-delta)*self.f_values[i-1,j-1,k-1,l-1]
+ (1-beta)*(1-gamma)*delta*self.f_values[i-1,j-1,k-1,l]
+ (1-beta)*gamma*(1-delta)*self.f_values[i-1,j-1,k,l-1]
+ (1-beta)*gamma*delta*self.f_values[i-1,j-1,k,l]
+ beta*(1-gamma)*(1-delta)*self.f_values[i-1,j,k-1,l-1]
+ beta*(1-gamma)*delta*self.f_values[i-1,j,k-1,l]
+ beta*gamma*(1-delta)*self.f_values[i-1,j,k,l-1]
+ beta*gamma*delta*self.f_values[i-1,j,k,l] )
)/(self.w_list[i] - self.w_list[i-1])
return dfdw
def _derX(self,w,x,y,z):
'''
Returns the derivative with respect to x of the interpolated function
at each value in w,x,y,z. Only called internally by HARKinterpolator4D.derivativeX.
'''
if _isscalar(w):
w_pos = max(min(self.wSearchFunc(self.w_list,w),self.w_n-1),1)
x_pos = max(min(self.xSearchFunc(self.x_list,x),self.x_n-1),1)
y_pos = max(min(self.ySearchFunc(self.y_list,y),self.y_n-1),1)
z_pos = max(min(self.zSearchFunc(self.z_list,z),self.z_n-1),1)
else:
w_pos = self.wSearchFunc(self.w_list,w)
w_pos[w_pos < 1] = 1
w_pos[w_pos > self.w_n-1] = self.w_n-1
x_pos = self.xSearchFunc(self.x_list,x)
x_pos[x_pos < 1] = 1
x_pos[x_pos > self.x_n-1] = self.x_n-1
y_pos = self.ySearchFunc(self.y_list,y)
y_pos[y_pos < 1] = 1
y_pos[y_pos > self.y_n-1] = self.y_n-1
z_pos = self.zSearchFunc(self.z_list,z)
z_pos[z_pos < 1] = 1
z_pos[z_pos > self.z_n-1] = self.z_n-1
i = w_pos # for convenience
j = x_pos
k = y_pos
l = z_pos
alpha = (w - self.w_list[i-1])/(self.w_list[i] - self.w_list[i-1])
gamma = (y - self.y_list[k-1])/(self.y_list[k] - self.y_list[k-1])
delta = (z - self.z_list[l-1])/(self.z_list[l] - self.z_list[l-1])
dfdx = (
( (1-alpha)*(1-gamma)*(1-delta)*self.f_values[i-1,j,k-1,l-1]
+ (1-alpha)*(1-gamma)*delta*self.f_values[i-1,j,k-1,l]
+ (1-alpha)*gamma*(1-delta)*self.f_values[i-1,j,k,l-1]
+ (1-alpha)*gamma*delta*self.f_values[i-1,j,k,l]
+ alpha*(1-gamma)*(1-delta)*self.f_values[i,j,k-1,l-1]
+ alpha*(1-gamma)*delta*self.f_values[i,j,k-1,l]
+ alpha*gamma*(1-delta)*self.f_values[i,j,k,l-1]
+ alpha*gamma*delta*self.f_values[i,j,k,l] ) -
( (1-alpha)*(1-gamma)*(1-delta)*self.f_values[i-1,j-1,k-1,l-1]
+ (1-alpha)*(1-gamma)*delta*self.f_values[i-1,j-1,k-1,l]
+ (1-alpha)*gamma*(1-delta)*self.f_values[i-1,j-1,k,l-1]
+ (1-alpha)*gamma*delta*self.f_values[i-1,j-1,k,l]
+ alpha*(1-gamma)*(1-delta)*self.f_values[i,j-1,k-1,l-1]
+ alpha*(1-gamma)*delta*self.f_values[i,j-1,k-1,l]
+ alpha*gamma*(1-delta)*self.f_values[i,j-1,k,l-1]
+ alpha*gamma*delta*self.f_values[i,j-1,k,l] )
)/(self.x_list[j] - self.x_list[j-1])
return dfdx
def _derY(self,w,x,y,z):
'''
Returns the derivative with respect to y of the interpolated function
at each value in w,x,y,z. Only called internally by HARKinterpolator4D.derivativeY.
'''
if _isscalar(w):
w_pos = max(min(self.wSearchFunc(self.w_list,w),self.w_n-1),1)
x_pos = max(min(self.xSearchFunc(self.x_list,x),self.x_n-1),1)
y_pos = max(min(self.ySearchFunc(self.y_list,y),self.y_n-1),1)
z_pos = max(min(self.zSearchFunc(self.z_list,z),self.z_n-1),1)
else:
w_pos = self.wSearchFunc(self.w_list,w)
w_pos[w_pos < 1] = 1
w_pos[w_pos > self.w_n-1] = self.w_n-1
x_pos = self.xSearchFunc(self.x_list,x)
x_pos[x_pos < 1] = 1
x_pos[x_pos > self.x_n-1] = self.x_n-1
y_pos = self.ySearchFunc(self.y_list,y)
y_pos[y_pos < 1] = 1
y_pos[y_pos > self.y_n-1] = self.y_n-1
z_pos = self.zSearchFunc(self.z_list,z)
z_pos[z_pos < 1] = 1
z_pos[z_pos > self.z_n-1] = self.z_n-1
i = w_pos # for convenience
j = x_pos
k = y_pos
l = z_pos
alpha = (w - self.w_list[i-1])/(self.w_list[i] - self.w_list[i-1])
beta = (x - self.x_list[j-1])/(self.x_list[j] - self.x_list[j-1])
delta = (z - self.z_list[l-1])/(self.z_list[l] - self.z_list[l-1])
dfdy = (
( (1-alpha)*(1-beta)*(1-delta)*self.f_values[i-1,j-1,k,l-1]
+ (1-alpha)*(1-beta)*delta*self.f_values[i-1,j-1,k,l]
+ (1-alpha)*beta*(1-delta)*self.f_values[i-1,j,k,l-1]
+ (1-alpha)*beta*delta*self.f_values[i-1,j,k,l]
+ alpha*(1-beta)*(1-delta)*self.f_values[i,j-1,k,l-1]
+ alpha*(1-beta)*delta*self.f_values[i,j-1,k,l]
+ alpha*beta*(1-delta)*self.f_values[i,j,k,l-1]
+ alpha*beta*delta*self.f_values[i,j,k,l] ) -
( (1-alpha)*(1-beta)*(1-delta)*self.f_values[i-1,j-1,k-1,l-1]
+ (1-alpha)*(1-beta)*delta*self.f_values[i-1,j-1,k-1,l]
+ (1-alpha)*beta*(1-delta)*self.f_values[i-1,j,k-1,l-1]
+ (1-alpha)*beta*delta*self.f_values[i-1,j,k-1,l]
+ alpha*(1-beta)*(1-delta)*self.f_values[i,j-1,k-1,l-1]
+ alpha*(1-beta)*delta*self.f_values[i,j-1,k-1,l]
+ alpha*beta*(1-delta)*self.f_values[i,j,k-1,l-1]
+ alpha*beta*delta*self.f_values[i,j,k-1,l] )
)/(self.y_list[k] - self.y_list[k-1])
return dfdy
def _derZ(self,w,x,y,z):
'''
Returns the derivative with respect to z of the interpolated function
at each value in w,x,y,z. Only called internally by HARKinterpolator4D.derivativeZ.
'''
if _isscalar(w):
w_pos = max(min(self.wSearchFunc(self.w_list,w),self.w_n-1),1)
x_pos = max(min(self.xSearchFunc(self.x_list,x),self.x_n-1),1)
y_pos = max(min(self.ySearchFunc(self.y_list,y),self.y_n-1),1)
z_pos = max(min(self.zSearchFunc(self.z_list,z),self.z_n-1),1)
else:
w_pos = self.wSearchFunc(self.w_list,w)
w_pos[w_pos < 1] = 1
w_pos[w_pos > self.w_n-1] = self.w_n-1
x_pos = self.xSearchFunc(self.x_list,x)
x_pos[x_pos < 1] = 1
x_pos[x_pos > self.x_n-1] = self.x_n-1
y_pos = self.ySearchFunc(self.y_list,y)
y_pos[y_pos < 1] = 1
y_pos[y_pos > self.y_n-1] = self.y_n-1
z_pos = self.zSearchFunc(self.z_list,z)
z_pos[z_pos < 1] = 1
z_pos[z_pos > self.z_n-1] = self.z_n-1
i = w_pos # for convenience
j = x_pos
k = y_pos
l = z_pos
alpha = (w - self.w_list[i-1])/(self.w_list[i] - self.w_list[i-1])
beta = (x - self.x_list[j-1])/(self.x_list[j] - self.x_list[j-1])
gamma = (y - self.y_list[k-1])/(self.y_list[k] - self.y_list[k-1])
dfdz = (
( (1-alpha)*(1-beta)*(1-gamma)*self.f_values[i-1,j-1,k-1,l]
+ (1-alpha)*(1-beta)*gamma*self.f_values[i-1,j-1,k,l]
+ (1-alpha)*beta*(1-gamma)*self.f_values[i-1,j,k-1,l]
+ (1-alpha)*beta*gamma*self.f_values[i-1,j,k,l]
+ alpha*(1-beta)*(1-gamma)*self.f_values[i,j-1,k-1,l]
+ alpha*(1-beta)*gamma*self.f_values[i,j-1,k,l]
+ alpha*beta*(1-gamma)*self.f_values[i,j,k-1,l]
+ alpha*beta*gamma*self.f_values[i,j,k,l] ) -
( (1-alpha)*(1-beta)*(1-gamma)*self.f_values[i-1,j-1,k-1,l-1]
+ (1-alpha)*(1-beta)*gamma*self.f_values[i-1,j-1,k,l-1]
+ (1-alpha)*beta*(1-gamma)*self.f_values[i-1,j,k-1,l-1]
+ (1-alpha)*beta*gamma*self.f_values[i-1,j,k,l-1]
+ alpha*(1-beta)*(1-gamma)*self.f_values[i,j-1,k-1,l-1]
+ alpha*(1-beta)*gamma*self.f_values[i,j-1,k,l-1]
+ alpha*beta*(1-gamma)*self.f_values[i,j,k-1,l-1]
+ alpha*beta*gamma*self.f_values[i,j,k,l-1] )
)/(self.z_list[l] - self.z_list[l-1])
return dfdz
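# The 16 products of one-dimensional weights used above sum to one for any
# alpha, beta, gamma, delta, because each factor pair (1-t, t) sums to one;
# when all four fractions lie in [0, 1] the weights are also nonnegative, so
# interior evaluation is a convex combination of the 16 surrounding corner
# values. A quick standalone check of that identity (plain numpy):

```python
import numpy as np

rng = np.random.default_rng(0)
a, b, g, d = rng.random(4)  # interior fractions alpha, beta, gamma, delta
weights = np.array([wa*wb*wg*wd
                    for wa in (1-a, a)
                    for wb in (1-b, b)
                    for wg in (1-g, g)
                    for wd in (1-d, d)])
assert len(weights) == 16
assert np.isclose(weights.sum(), 1.0) and (weights >= 0).all()
```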
class LowerEnvelope(HARKinterpolator1D):
'''
The lower envelope of a finite set of 1D functions, each of which can be of
any class that has the methods __call__, derivative, and eval_with_derivative.
Generally: it combines HARKinterpolator1Ds.
'''
distance_criteria = ['functions']
def __init__(self,*functions):
'''
        Constructor to make a new lower envelope interpolation.
Parameters
----------
*functions : function
Any number of real functions; often instances of HARKinterpolator1D
Returns
-------
new instance of LowerEnvelope
'''
self.functions = []
for function in functions:
self.functions.append(function)
self.funcCount = len(self.functions)
def _evaluate(self,x):
'''
Returns the level of the function at each value in x as the minimum among
all of the functions. Only called internally by HARKinterpolator1D.__call__.
'''
if _isscalar(x):
y = np.nanmin([f(x) for f in self.functions])
else:
m = len(x)
fx = np.zeros((m,self.funcCount))
for j in range(self.funcCount):
fx[:,j] = self.functions[j](x)
y = np.nanmin(fx,axis=1)
return y
def _der(self,x):
'''
Returns the first derivative of the function at each value in x. Only
called internally by HARKinterpolator1D.derivative.
'''
y,dydx = self.eval_with_derivative(x)
return dydx # Sadly, this is the fastest / most convenient way...
def _evalAndDer(self,x):
'''
Returns the level and first derivative of the function at each value in
x. Only called internally by HARKinterpolator1D.eval_and_der.
'''
m = len(x)
fx = np.zeros((m,self.funcCount))
for j in range(self.funcCount):
fx[:,j] = self.functions[j](x)
fx[np.isnan(fx)] = np.inf
i = np.argmin(fx,axis=1)
y = fx[np.arange(m),i]
dydx = np.zeros_like(y)
for j in range(self.funcCount):
c = i == j
dydx[c] = self.functions[j].derivative(x[c])
return y,dydx
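# The envelope logic above evaluates every component function, takes the
# pointwise minimum, and differentiates only the selected branch. A standalone
# sketch with two hypothetical component functions (plain numpy, not using
# LowerEnvelope itself):

```python
import numpy as np

f1, df1 = (lambda x: x),       (lambda x: np.ones_like(x))
f2, df2 = (lambda x: 2.0 - x), (lambda x: -np.ones_like(x))

x = np.array([0.25, 1.5])
fx = np.column_stack([f1(x), f2(x)])
i = np.argmin(fx, axis=1)                # index of the lowest function at each x
y = fx[np.arange(len(x)), i]             # envelope level
dydx = np.where(i == 0, df1(x), df2(x))  # derivative of the selected branch

assert np.allclose(y, [0.25, 0.5]) and np.allclose(dydx, [1.0, -1.0])
```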
class UpperEnvelope(HARKinterpolator1D):
'''
The upper envelope of a finite set of 1D functions, each of which can be of
any class that has the methods __call__, derivative, and eval_with_derivative.
Generally: it combines HARKinterpolator1Ds.
'''
distance_criteria = ['functions']
def __init__(self,*functions):
'''
        Constructor to make a new upper envelope interpolation.
Parameters
----------
*functions : function
Any number of real functions; often instances of HARKinterpolator1D
Returns
-------
new instance of UpperEnvelope
'''
self.functions = []
for function in functions:
self.functions.append(function)
self.funcCount = len(self.functions)
def _evaluate(self,x):
'''
Returns the level of the function at each value in x as the maximum among
all of the functions. Only called internally by HARKinterpolator1D.__call__.
'''
if _isscalar(x):
y = np.nanmax([f(x) for f in self.functions])
else:
m = len(x)
fx = np.zeros((m,self.funcCount))
for j in range(self.funcCount):
fx[:,j] = self.functions[j](x)
y = np.nanmax(fx,axis=1)
return y
def _der(self,x):
'''
Returns the first derivative of the function at each value in x. Only
called internally by HARKinterpolator1D.derivative.
'''
y,dydx = self.eval_with_derivative(x)
return dydx # Sadly, this is the fastest / most convenient way...
def _evalAndDer(self,x):
'''
Returns the level and first derivative of the function at each value in
x. Only called internally by HARKinterpolator1D.eval_and_der.
'''
m = len(x)
fx = np.zeros((m,self.funcCount))
for j in range(self.funcCount):
fx[:,j] = self.functions[j](x)
        fx[np.isnan(fx)] = -np.inf # for a *maximum*, NaN must map to -inf so argmax never selects it
i = np.argmax(fx,axis=1)
y = fx[np.arange(m),i]
dydx = np.zeros_like(y)
for j in range(self.funcCount):
c = i == j
dydx[c] = self.functions[j].derivative(x[c])
return y,dydx
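# For an upper envelope, any NaN evaluations (e.g. from a component function
# that is undefined below its grid) should be treated as -inf so that argmax
# never selects them. A quick standalone check of that convention (plain
# numpy; the fx values are hypothetical):

```python
import numpy as np

fx = np.array([[np.nan, 1.0],
               [3.0,    2.0]])  # rows: evaluation points; cols: functions
fx[np.isnan(fx)] = -np.inf      # NaN must lose every comparison in a max
i = np.argmax(fx, axis=1)
y = fx[np.arange(2), i]
assert list(i) == [1, 0] and list(y) == [1.0, 3.0]
```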
class LowerEnvelope2D(HARKinterpolator2D):
'''
The lower envelope of a finite set of 2D functions, each of which can be of
any class that has the methods __call__, derivativeX, and derivativeY.
Generally: it combines HARKinterpolator2Ds.
'''
distance_criteria = ['functions']
def __init__(self,*functions):
'''
        Constructor to make a new lower envelope interpolation.
Parameters
----------
*functions : function
Any number of real functions; often instances of HARKinterpolator2D
Returns
-------
new instance of LowerEnvelope2D
'''
self.functions = []
for function in functions:
self.functions.append(function)
self.funcCount = len(self.functions)
def _evaluate(self,x,y):
'''
Returns the level of the function at each value in (x,y) as the minimum
among all of the functions. Only called internally by
HARKinterpolator2D.__call__.
'''
if _isscalar(x):
            f = np.nanmin([fn(x,y) for fn in self.functions])
else:
m = len(x)
temp = np.zeros((m,self.funcCount))
for j in range(self.funcCount):
temp[:,j] = self.functions[j](x,y)
f = np.nanmin(temp,axis=1)
return f
def _derX(self,x,y):
'''
Returns the first derivative of the function with respect to X at each
        value in (x,y). Only called internally by HARKinterpolator2D.derivativeX.
'''
m = len(x)
temp = np.zeros((m,self.funcCount))
for j in range(self.funcCount):
temp[:,j] = self.functions[j](x,y)
temp[np.isnan(temp)] = np.inf
i = np.argmin(temp,axis=1)
dfdx = np.zeros_like(x)
for j in range(self.funcCount):
c = i == j
dfdx[c] = self.functions[j].derivativeX(x[c],y[c])
return dfdx
def _derY(self,x,y):
'''
Returns the first derivative of the function with respect to Y at each
        value in (x,y). Only called internally by HARKinterpolator2D.derivativeY.
'''
m = len(x)
temp = np.zeros((m,self.funcCount))
for j in range(self.funcCount):
temp[:,j] = self.functions[j](x,y)
temp[np.isnan(temp)] = np.inf
i = np.argmin(temp,axis=1)
dfdy = np.zeros_like(x)
for j in range(self.funcCount):
c = i == j
dfdy[c] = self.functions[j].derivativeY(x[c],y[c])
return dfdy
class LowerEnvelope3D(HARKinterpolator3D):
'''
The lower envelope of a finite set of 3D functions, each of which can be of
any class that has the methods __call__, derivativeX, derivativeY, and
    derivativeZ. Generally, it combines HARKinterpolator3Ds.
'''
distance_criteria = ['functions']
def __init__(self,*functions):
'''
        Constructor to make a new lower envelope interpolation.
Parameters
----------
*functions : function
Any number of real functions; often instances of HARKinterpolator3D
Returns
-------
None
'''
self.functions = []
for function in functions:
self.functions.append(function)
self.funcCount = len(self.functions)
def _evaluate(self,x,y,z):
'''
Returns the level of the function at each value in (x,y,z) as the minimum
among all of the functions. Only called internally by
HARKinterpolator3D.__call__.
'''
if _isscalar(x):
f = np.nanmin([f(x,y,z) for f in self.functions])
else:
m = len(x)
temp = np.zeros((m,self.funcCount))
for j in range(self.funcCount):
temp[:,j] = self.functions[j](x,y,z)
f = np.nanmin(temp,axis=1)
return f
def _derX(self,x,y,z):
'''
Returns the first derivative of the function with respect to X at each
        value in (x,y,z). Only called internally by HARKinterpolator3D.derivativeX.
'''
m = len(x)
temp = np.zeros((m,self.funcCount))
for j in range(self.funcCount):
temp[:,j] = self.functions[j](x,y,z)
temp[np.isnan(temp)] = np.inf
i = np.argmin(temp,axis=1)
dfdx = np.zeros_like(x)
for j in range(self.funcCount):
c = i == j
dfdx[c] = self.functions[j].derivativeX(x[c],y[c],z[c])
return dfdx
def _derY(self,x,y,z):
'''
Returns the first derivative of the function with respect to Y at each
        value in (x,y,z). Only called internally by HARKinterpolator3D.derivativeY.
'''
m = len(x)
temp = np.zeros((m,self.funcCount))
for j in range(self.funcCount):
temp[:,j] = self.functions[j](x,y,z)
temp[np.isnan(temp)] = np.inf
i = np.argmin(temp,axis=1)
dfdy = np.zeros_like(x)
for j in range(self.funcCount):
c = i == j
dfdy[c] = self.functions[j].derivativeY(x[c],y[c],z[c])
return dfdy
def _derZ(self,x,y,z):
'''
Returns the first derivative of the function with respect to Z at each
        value in (x,y,z). Only called internally by HARKinterpolator3D.derivativeZ.
'''
m = len(x)
temp = np.zeros((m,self.funcCount))
for j in range(self.funcCount):
temp[:,j] = self.functions[j](x,y,z)
temp[np.isnan(temp)] = np.inf
i = np.argmin(temp,axis=1)
dfdz = np.zeros_like(x)
for j in range(self.funcCount):
c = i == j
dfdz[c] = self.functions[j].derivativeZ(x[c],y[c],z[c])
return dfdz
class VariableLowerBoundFunc2D(HARKobject):
'''
A class for representing a function with two real inputs whose lower bound
in the first input depends on the second input. Useful for managing curved
natural borrowing constraints, as occurs in the persistent shocks model.
'''
distance_criteria = ['func','lowerBound']
def __init__(self,func,lowerBound):
'''
Make a new instance of VariableLowerBoundFunc2D.
Parameters
----------
func : function
A function f: (R_+ x R) --> R representing the function of interest
shifted by its lower bound in the first input.
lowerBound : function
The lower bound in the first input of the function of interest, as
a function of the second input.
Returns
-------
None
'''
self.func = func
self.lowerBound = lowerBound
def __call__(self,x,y):
'''
Evaluate the function at given state space points.
Parameters
----------
x : np.array
First input values.
y : np.array
Second input values; should be of same shape as x.
Returns
-------
f_out : np.array
Function evaluated at (x,y), of same shape as inputs.
'''
xShift = self.lowerBound(y)
f_out = self.func(x-xShift,y)
return f_out
def derivativeX(self,x,y):
'''
Evaluate the first derivative with respect to x of the function at given
state space points.
Parameters
----------
x : np.array
First input values.
y : np.array
Second input values; should be of same shape as x.
Returns
-------
dfdx_out : np.array
First derivative of function with respect to the first input,
evaluated at (x,y), of same shape as inputs.
'''
xShift = self.lowerBound(y)
dfdx_out = self.func.derivativeX(x-xShift,y)
return dfdx_out
def derivativeY(self,x,y):
'''
Evaluate the first derivative with respect to y of the function at given
state space points.
Parameters
----------
x : np.array
First input values.
y : np.array
Second input values; should be of same shape as x.
Returns
-------
dfdy_out : np.array
First derivative of function with respect to the second input,
evaluated at (x,y), of same shape as inputs.
'''
xShift,xShiftDer = self.lowerBound.eval_with_derivative(y)
dfdy_out = self.func.derivativeY(x-xShift,y) - xShiftDer*self.func.derivativeX(x-xShift,y)
return dfdy_out
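The y-derivative above is just the chain rule for g(x,y) = f(x - b(y), y): dg/dy = f_y - b'(y) * f_x, where b is the lower bound. A minimal standalone check (the names f_x, f_y, b, b_prime are illustrative, not HARK API):

```python
def shifted_dfdy(f_x, f_y, b, b_prime, x, y):
    # dg/dy for g(x, y) = f(x - b(y), y), by the chain rule.
    u = x - b(y)                          # shifted first argument
    return f_y(u, y) - b_prime(y) * f_x(u, y)

# Worked example: f(u, y) = u * y with bound b(y) = y**2,
# so g(x, y) = (x - y**2) * y and dg/dy = x - 3 * y**2.
val = shifted_dfdy(f_x=lambda u, y: y,    # df/du
                   f_y=lambda u, y: u,    # df/dy
                   b=lambda y: y**2,
                   b_prime=lambda y: 2.0 * y,
                   x=2.0, y=1.0)
```

At (x, y) = (2, 1) the closed form gives 2 - 3 = -1, matching the chain-rule evaluation.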
class VariableLowerBoundFunc3D(HARKobject):
'''
A class for representing a function with three real inputs whose lower bound
in the first input depends on the second input. Useful for managing curved
natural borrowing constraints.
'''
distance_criteria = ['func','lowerBound']
def __init__(self,func,lowerBound):
'''
Make a new instance of VariableLowerBoundFunc3D.
Parameters
----------
func : function
A function f: (R_+ x R^2) --> R representing the function of interest
shifted by its lower bound in the first input.
lowerBound : function
The lower bound in the first input of the function of interest, as
a function of the second input.
Returns
-------
None
'''
self.func = func
self.lowerBound = lowerBound
def __call__(self,x,y,z):
'''
Evaluate the function at given state space points.
Parameters
----------
x : np.array
First input values.
y : np.array
Second input values; should be of same shape as x.
z : np.array
Third input values; should be of same shape as x.
Returns
-------
f_out : np.array
Function evaluated at (x,y,z), of same shape as inputs.
'''
xShift = self.lowerBound(y)
f_out = self.func(x-xShift,y,z)
return f_out
def derivativeX(self,x,y,z):
'''
Evaluate the first derivative with respect to x of the function at given
state space points.
Parameters
----------
x : np.array
First input values.
y : np.array
Second input values; should be of same shape as x.
z : np.array
Third input values; should be of same shape as x.
Returns
-------
dfdx_out : np.array
First derivative of function with respect to the first input,
evaluated at (x,y,z), of same shape as inputs.
'''
xShift = self.lowerBound(y)
dfdx_out = self.func.derivativeX(x-xShift,y,z)
return dfdx_out
def derivativeY(self,x,y,z):
'''
Evaluate the first derivative with respect to y of the function at given
state space points.
Parameters
----------
x : np.array
First input values.
y : np.array
Second input values; should be of same shape as x.
z : np.array
Third input values; should be of same shape as x.
Returns
-------
dfdy_out : np.array
First derivative of function with respect to the second input,
evaluated at (x,y,z), of same shape as inputs.
'''
xShift,xShiftDer = self.lowerBound.eval_with_derivative(y)
dfdy_out = self.func.derivativeY(x-xShift,y,z) - \
xShiftDer*self.func.derivativeX(x-xShift,y,z)
return dfdy_out
def derivativeZ(self,x,y,z):
'''
Evaluate the first derivative with respect to z of the function at given
state space points.
Parameters
----------
x : np.array
First input values.
y : np.array
Second input values; should be of same shape as x.
z : np.array
Third input values; should be of same shape as x.
Returns
-------
dfdz_out : np.array
First derivative of function with respect to the third input,
evaluated at (x,y,z), of same shape as inputs.
'''
xShift = self.lowerBound(y)
dfdz_out = self.func.derivativeZ(x-xShift,y,z)
return dfdz_out
class LinearInterpOnInterp1D(HARKinterpolator2D):
'''
A 2D interpolator that linearly interpolates among a list of 1D interpolators.
'''
distance_criteria = ['xInterpolators','y_list']
def __init__(self,xInterpolators,y_values):
'''
Constructor for the class, generating an approximation to a function of
the form f(x,y) using interpolations over f(x,y_0) for a fixed grid of
y_0 values.
Parameters
----------
xInterpolators : [HARKinterpolator1D]
A list of 1D interpolations over the x variable. The nth element of
xInterpolators represents f(x,y_values[n]).
y_values: numpy.array
An array of y values equal in length to xInterpolators.
Returns
-------
new instance of LinearInterpOnInterp1D
'''
self.xInterpolators = xInterpolators
self.y_list = y_values
self.y_n = y_values.size
def _evaluate(self,x,y):
'''
Returns the level of the interpolated function at each value in x,y.
Only called internally by HARKinterpolator2D.__call__ (etc).
'''
if _isscalar(x):
y_pos = max(min(np.searchsorted(self.y_list,y),self.y_n-1),1)
alpha = (y - self.y_list[y_pos-1])/(self.y_list[y_pos] - self.y_list[y_pos-1])
f = (1-alpha)*self.xInterpolators[y_pos-1](x) + alpha*self.xInterpolators[y_pos](x)
else:
m = len(x)
y_pos = np.searchsorted(self.y_list,y)
y_pos[y_pos > self.y_n-1] = self.y_n-1
y_pos[y_pos < 1] = 1
f = np.zeros(m) + np.nan
if y.size > 0:
for i in range(1,self.y_n):
c = y_pos == i
if np.any(c):
alpha = (y[c] - self.y_list[i-1])/(self.y_list[i] - self.y_list[i-1])
f[c] = (1-alpha)*self.xInterpolators[i-1](x[c]) + alpha*self.xInterpolators[i](x[c])
return f
def _derX(self,x,y):
'''
Returns the derivative with respect to x of the interpolated function
at each value in x,y. Only called internally by HARKinterpolator2D.derivativeX.
'''
if _isscalar(x):
y_pos = max(min(np.searchsorted(self.y_list,y),self.y_n-1),1)
alpha = (y - self.y_list[y_pos-1])/(self.y_list[y_pos] - self.y_list[y_pos-1])
dfdx = (1-alpha)*self.xInterpolators[y_pos-1]._der(x) + alpha*self.xInterpolators[y_pos]._der(x)
else:
m = len(x)
y_pos = np.searchsorted(self.y_list,y)
y_pos[y_pos > self.y_n-1] = self.y_n-1
y_pos[y_pos < 1] = 1
dfdx = np.zeros(m) + np.nan
if y.size > 0:
for i in range(1,self.y_n):
c = y_pos == i
if np.any(c):
alpha = (y[c] - self.y_list[i-1])/(self.y_list[i] - self.y_list[i-1])
dfdx[c] = (1-alpha)*self.xInterpolators[i-1]._der(x[c]) + alpha*self.xInterpolators[i]._der(x[c])
return dfdx
def _derY(self,x,y):
'''
Returns the derivative with respect to y of the interpolated function
at each value in x,y. Only called internally by HARKinterpolator2D.derivativeY.
'''
if _isscalar(x):
y_pos = max(min(np.searchsorted(self.y_list,y),self.y_n-1),1)
dfdy = (self.xInterpolators[y_pos](x) - self.xInterpolators[y_pos-1](x))/(self.y_list[y_pos] - self.y_list[y_pos-1])
else:
m = len(x)
y_pos = np.searchsorted(self.y_list,y)
y_pos[y_pos > self.y_n-1] = self.y_n-1
y_pos[y_pos < 1] = 1
dfdy = np.zeros(m) + np.nan
if y.size > 0:
for i in range(1,self.y_n):
c = y_pos == i
if np.any(c):
dfdy[c] = (self.xInterpolators[i](x[c]) - self.xInterpolators[i-1](x[c]))/(self.y_list[i] - self.y_list[i-1])
return dfdy
class BilinearInterpOnInterp1D(HARKinterpolator3D):
'''
A 3D interpolator that bilinearly interpolates among a list of lists of 1D
interpolators.
'''
distance_criteria = ['xInterpolators','y_list','z_list']
def __init__(self,xInterpolators,y_values,z_values):
'''
Constructor for the class, generating an approximation to a function of
the form f(x,y,z) using interpolations over f(x,y_0,z_0) for a fixed grid
of y_0 and z_0 values.
Parameters
----------
xInterpolators : [[HARKinterpolator1D]]
A list of lists of 1D interpolations over the x variable. The i,j-th
element of xInterpolators represents f(x,y_values[i],z_values[j]).
y_values: numpy.array
An array of y values equal in length to xInterpolators.
z_values: numpy.array
An array of z values equal in length to xInterpolators[0].
Returns
-------
new instance of BilinearInterpOnInterp1D
'''
self.xInterpolators = xInterpolators
self.y_list = y_values
self.y_n = y_values.size
self.z_list = z_values
self.z_n = z_values.size
def _evaluate(self,x,y,z):
'''
Returns the level of the interpolated function at each value in x,y,z.
Only called internally by HARKinterpolator3D.__call__ (etc).
'''
if _isscalar(x):
y_pos = max(min(np.searchsorted(self.y_list,y),self.y_n-1),1)
z_pos = max(min(np.searchsorted(self.z_list,z),self.z_n-1),1)
alpha = (y - self.y_list[y_pos-1])/(self.y_list[y_pos] - self.y_list[y_pos-1])
beta = (z - self.z_list[z_pos-1])/(self.z_list[z_pos] - self.z_list[z_pos-1])
f = ((1-alpha)*(1-beta)*self.xInterpolators[y_pos-1][z_pos-1](x)
+ (1-alpha)*beta*self.xInterpolators[y_pos-1][z_pos](x)
+ alpha*(1-beta)*self.xInterpolators[y_pos][z_pos-1](x)
+ alpha*beta*self.xInterpolators[y_pos][z_pos](x))
else:
m = len(x)
y_pos = np.searchsorted(self.y_list,y)
y_pos[y_pos > self.y_n-1] = self.y_n-1
y_pos[y_pos < 1] = 1
z_pos = np.searchsorted(self.z_list,z)
z_pos[z_pos > self.z_n-1] = self.z_n-1
z_pos[z_pos < 1] = 1
f = np.zeros(m) + np.nan
for i in range(1,self.y_n):
for j in range(1,self.z_n):
c = np.logical_and(i == y_pos, j == z_pos)
if np.any(c):
alpha = (y[c] - self.y_list[i-1])/(self.y_list[i] - self.y_list[i-1])
beta = (z[c] - self.z_list[j-1])/(self.z_list[j] - self.z_list[j-1])
f[c] = (
(1-alpha)*(1-beta)*self.xInterpolators[i-1][j-1](x[c])
+ (1-alpha)*beta*self.xInterpolators[i-1][j](x[c])
+ alpha*(1-beta)*self.xInterpolators[i][j-1](x[c])
+ alpha*beta*self.xInterpolators[i][j](x[c]))
return f
def _derX(self,x,y,z):
'''
Returns the derivative with respect to x of the interpolated function
at each value in x,y,z. Only called internally by HARKinterpolator3D.derivativeX.
'''
if _isscalar(x):
y_pos = max(min(np.searchsorted(self.y_list,y),self.y_n-1),1)
z_pos = max(min(np.searchsorted(self.z_list,z),self.z_n-1),1)
alpha = (y - self.y_list[y_pos-1])/(self.y_list[y_pos] - self.y_list[y_pos-1])
beta = (z - self.z_list[z_pos-1])/(self.z_list[z_pos] - self.z_list[z_pos-1])
dfdx = ((1-alpha)*(1-beta)*self.xInterpolators[y_pos-1][z_pos-1]._der(x)
+ (1-alpha)*beta*self.xInterpolators[y_pos-1][z_pos]._der(x)
+ alpha*(1-beta)*self.xInterpolators[y_pos][z_pos-1]._der(x)
+ alpha*beta*self.xInterpolators[y_pos][z_pos]._der(x))
else:
m = len(x)
y_pos = np.searchsorted(self.y_list,y)
y_pos[y_pos > self.y_n-1] = self.y_n-1
y_pos[y_pos < 1] = 1
z_pos = np.searchsorted(self.z_list,z)
z_pos[z_pos > self.z_n-1] = self.z_n-1
z_pos[z_pos < 1] = 1
dfdx = np.zeros(m) + np.nan
for i in range(1,self.y_n):
for j in range(1,self.z_n):
c = np.logical_and(i == y_pos, j == z_pos)
if np.any(c):
alpha = (y[c] - self.y_list[i-1])/(self.y_list[i] - self.y_list[i-1])
beta = (z[c] - self.z_list[j-1])/(self.z_list[j] - self.z_list[j-1])
dfdx[c] = (
(1-alpha)*(1-beta)*self.xInterpolators[i-1][j-1]._der(x[c])
+ (1-alpha)*beta*self.xInterpolators[i-1][j]._der(x[c])
+ alpha*(1-beta)*self.xInterpolators[i][j-1]._der(x[c])
+ alpha*beta*self.xInterpolators[i][j]._der(x[c]))
return dfdx
def _derY(self,x,y,z):
'''
Returns the derivative with respect to y of the interpolated function
at each value in x,y,z. Only called internally by HARKinterpolator3D.derivativeY.
'''
if _isscalar(x):
y_pos = max(min(np.searchsorted(self.y_list,y),self.y_n-1),1)
z_pos = max(min(np.searchsorted(self.z_list,z),self.z_n-1),1)
beta = (z - self.z_list[z_pos-1])/(self.z_list[z_pos] - self.z_list[z_pos-1])
dfdy = (((1-beta)*self.xInterpolators[y_pos][z_pos-1](x) + beta*self.xInterpolators[y_pos][z_pos](x))
- ((1-beta)*self.xInterpolators[y_pos-1][z_pos-1](x) + beta*self.xInterpolators[y_pos-1][z_pos](x)))/(self.y_list[y_pos] - self.y_list[y_pos-1])
else:
m = len(x)
y_pos = np.searchsorted(self.y_list,y)
y_pos[y_pos > self.y_n-1] = self.y_n-1
y_pos[y_pos < 1] = 1
z_pos = np.searchsorted(self.z_list,z)
z_pos[z_pos > self.z_n-1] = self.z_n-1
z_pos[z_pos < 1] = 1
dfdy = np.zeros(m) + np.nan
for i in range(1,self.y_n):
for j in range(1,self.z_n):
c = np.logical_and(i == y_pos, j == z_pos)
if np.any(c):
beta = (z[c] - self.z_list[j-1])/(self.z_list[j] - self.z_list[j-1])
dfdy[c] = (((1-beta)*self.xInterpolators[i][j-1](x[c]) + beta*self.xInterpolators[i][j](x[c]))
- ((1-beta)*self.xInterpolators[i-1][j-1](x[c]) + beta*self.xInterpolators[i-1][j](x[c])))/(self.y_list[i] - self.y_list[i-1])
return dfdy
def _derZ(self,x,y,z):
'''
Returns the derivative with respect to z of the interpolated function
at each value in x,y,z. Only called internally by HARKinterpolator3D.derivativeZ.
'''
if _isscalar(x):
y_pos = max(min(np.searchsorted(self.y_list,y),self.y_n-1),1)
z_pos = max(min(np.searchsorted(self.z_list,z),self.z_n-1),1)
alpha = (y - self.y_list[y_pos-1])/(self.y_list[y_pos] - self.y_list[y_pos-1])
dfdz = (((1-alpha)*self.xInterpolators[y_pos-1][z_pos](x) + alpha*self.xInterpolators[y_pos][z_pos](x))
- ((1-alpha)*self.xInterpolators[y_pos-1][z_pos-1](x) + alpha*self.xInterpolators[y_pos][z_pos-1](x)))/(self.z_list[z_pos] - self.z_list[z_pos-1])
else:
m = len(x)
y_pos = np.searchsorted(self.y_list,y)
y_pos[y_pos > self.y_n-1] = self.y_n-1
y_pos[y_pos < 1] = 1
z_pos = np.searchsorted(self.z_list,z)
z_pos[z_pos > self.z_n-1] = self.z_n-1
z_pos[z_pos < 1] = 1
dfdz = np.zeros(m) + np.nan
for i in range(1,self.y_n):
for j in range(1,self.z_n):
c = np.logical_and(i == y_pos, j == z_pos)
if np.any(c):
alpha = (y[c] - self.y_list[i-1])/(self.y_list[i] - self.y_list[i-1])
dfdz[c] = (((1-alpha)*self.xInterpolators[i-1][j](x[c]) + alpha*self.xInterpolators[i][j](x[c]))
- ((1-alpha)*self.xInterpolators[i-1][j-1](x[c]) + alpha*self.xInterpolators[i][j-1](x[c])))/(self.z_list[j] - self.z_list[j-1])
return dfdz
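The four-term combinations above all reduce to one bracketing search per dimension plus bilinear weights. A standalone sketch of that pattern (names bracket, bilinear_on_interp1d, x_funcs are illustrative, not HARK API):

```python
import numpy as np

def bracket(grid, v):
    # Upper bracketing index and weight on a sorted grid, clamped so that
    # out-of-range points extrapolate from the edge segment.
    grid = np.asarray(grid, dtype=float)
    pos = int(np.clip(np.searchsorted(grid, v), 1, grid.size - 1))
    alpha = (v - grid[pos - 1]) / (grid[pos] - grid[pos - 1])
    return pos, alpha

def bilinear_on_interp1d(x_funcs, y_grid, z_grid, x, y, z):
    # f(x, y, z): bilinear combination in (y, z) of the 1D sections x_funcs[i][j].
    j, a = bracket(y_grid, y)
    k, b = bracket(z_grid, z)
    return ((1 - a) * (1 - b) * x_funcs[j - 1][k - 1](x)
            + (1 - a) * b * x_funcs[j - 1][k](x)
            + a * (1 - b) * x_funcs[j][k - 1](x)
            + a * b * x_funcs[j][k](x))

# Sections of the linear function f(x, y, z) = x + y + z on a unit (y, z) grid.
y_grid, z_grid = [0.0, 1.0], [0.0, 1.0]
x_funcs = [[lambda x, yi=yi, zj=zj: x + yi + zj for zj in z_grid] for yi in y_grid]
```

Bilinear interpolation reproduces a linear function exactly, so the test point recovers x + y + z.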
class TrilinearInterpOnInterp1D(HARKinterpolator4D):
'''
A 4D interpolator that trilinearly interpolates among a list of lists of 1D interpolators.
'''
distance_criteria = ['wInterpolators','x_list','y_list','z_list']
def __init__(self,wInterpolators,x_values,y_values,z_values):
'''
Constructor for the class, generating an approximation to a function of
the form f(w,x,y,z) using interpolations over f(w,x_0,y_0,z_0) for a fixed
grid of y_0 and z_0 values.
Parameters
----------
wInterpolators : [[[HARKinterpolator1D]]]
A list of lists of lists of 1D interpolations over the x variable.
The i,j,k-th element of wInterpolators represents f(w,x_values[i],y_values[j],z_values[k]).
x_values: numpy.array
An array of x values equal in length to wInterpolators.
y_values: numpy.array
An array of y values equal in length to wInterpolators[0].
z_values: numpy.array
An array of z values equal in length to wInterpolators[0][0]
Returns
-------
new instance of TrilinearInterpOnInterp1D
'''
self.wInterpolators = wInterpolators
self.x_list = x_values
self.x_n = x_values.size
self.y_list = y_values
self.y_n = y_values.size
self.z_list = z_values
self.z_n = z_values.size
def _evaluate(self,w,x,y,z):
'''
Returns the level of the interpolated function at each value in w,x,y,z.
Only called internally by HARKinterpolator4D.__call__ (etc).
'''
if _isscalar(w):
x_pos = max(min(np.searchsorted(self.x_list,x),self.x_n-1),1)
y_pos = max(min(np.searchsorted(self.y_list,y),self.y_n-1),1)
z_pos = max(min(np.searchsorted(self.z_list,z),self.z_n-1),1)
alpha = (x - self.x_list[x_pos-1])/(self.x_list[x_pos] - self.x_list[x_pos-1])
beta = (y - self.y_list[y_pos-1])/(self.y_list[y_pos] - self.y_list[y_pos-1])
gamma = (z - self.z_list[z_pos-1])/(self.z_list[z_pos] - self.z_list[z_pos-1])
f = (
(1-alpha)*(1-beta)*(1-gamma)*self.wInterpolators[x_pos-1][y_pos-1][z_pos-1](w)
+ (1-alpha)*(1-beta)*gamma*self.wInterpolators[x_pos-1][y_pos-1][z_pos](w)
+ (1-alpha)*beta*(1-gamma)*self.wInterpolators[x_pos-1][y_pos][z_pos-1](w)
+ (1-alpha)*beta*gamma*self.wInterpolators[x_pos-1][y_pos][z_pos](w)
+ alpha*(1-beta)*(1-gamma)*self.wInterpolators[x_pos][y_pos-1][z_pos-1](w)
+ alpha*(1-beta)*gamma*self.wInterpolators[x_pos][y_pos-1][z_pos](w)
+ alpha*beta*(1-gamma)*self.wInterpolators[x_pos][y_pos][z_pos-1](w)
+ alpha*beta*gamma*self.wInterpolators[x_pos][y_pos][z_pos](w))
else:
m = len(x)
x_pos = np.searchsorted(self.x_list,x)
x_pos[x_pos > self.x_n-1] = self.x_n-1
            x_pos[x_pos < 1] = 1
y_pos = np.searchsorted(self.y_list,y)
y_pos[y_pos > self.y_n-1] = self.y_n-1
y_pos[y_pos < 1] = 1
z_pos = np.searchsorted(self.z_list,z)
z_pos[z_pos > self.z_n-1] = self.z_n-1
z_pos[z_pos < 1] = 1
f = np.zeros(m) + np.nan
for i in range(1,self.x_n):
for j in range(1,self.y_n):
for k in range(1,self.z_n):
c = np.logical_and(np.logical_and(i == x_pos, j == y_pos),k == z_pos)
if np.any(c):
alpha = (x[c] - self.x_list[i-1])/(self.x_list[i] - self.x_list[i-1])
beta = (y[c] - self.y_list[j-1])/(self.y_list[j] - self.y_list[j-1])
gamma = (z[c] - self.z_list[k-1])/(self.z_list[k] - self.z_list[k-1])
f[c] = (
(1-alpha)*(1-beta)*(1-gamma)*self.wInterpolators[i-1][j-1][k-1](w[c])
+ (1-alpha)*(1-beta)*gamma*self.wInterpolators[i-1][j-1][k](w[c])
+ (1-alpha)*beta*(1-gamma)*self.wInterpolators[i-1][j][k-1](w[c])
+ (1-alpha)*beta*gamma*self.wInterpolators[i-1][j][k](w[c])
+ alpha*(1-beta)*(1-gamma)*self.wInterpolators[i][j-1][k-1](w[c])
+ alpha*(1-beta)*gamma*self.wInterpolators[i][j-1][k](w[c])
+ alpha*beta*(1-gamma)*self.wInterpolators[i][j][k-1](w[c])
+ alpha*beta*gamma*self.wInterpolators[i][j][k](w[c]))
return f
def _derW(self,w,x,y,z):
'''
Returns the derivative with respect to w of the interpolated function
at each value in w,x,y,z. Only called internally by HARKinterpolator4D.derivativeW.
'''
if _isscalar(w):
x_pos = max(min(np.searchsorted(self.x_list,x),self.x_n-1),1)
y_pos = max(min(np.searchsorted(self.y_list,y),self.y_n-1),1)
z_pos = max(min(np.searchsorted(self.z_list,z),self.z_n-1),1)
alpha = (x - self.x_list[x_pos-1])/(self.x_list[x_pos] - self.x_list[x_pos-1])
beta = (y - self.y_list[y_pos-1])/(self.y_list[y_pos] - self.y_list[y_pos-1])
gamma = (z - self.z_list[z_pos-1])/(self.z_list[z_pos] - self.z_list[z_pos-1])
dfdw = (
(1-alpha)*(1-beta)*(1-gamma)*self.wInterpolators[x_pos-1][y_pos-1][z_pos-1]._der(w)
+ (1-alpha)*(1-beta)*gamma*self.wInterpolators[x_pos-1][y_pos-1][z_pos]._der(w)
+ (1-alpha)*beta*(1-gamma)*self.wInterpolators[x_pos-1][y_pos][z_pos-1]._der(w)
+ (1-alpha)*beta*gamma*self.wInterpolators[x_pos-1][y_pos][z_pos]._der(w)
+ alpha*(1-beta)*(1-gamma)*self.wInterpolators[x_pos][y_pos-1][z_pos-1]._der(w)
+ alpha*(1-beta)*gamma*self.wInterpolators[x_pos][y_pos-1][z_pos]._der(w)
+ alpha*beta*(1-gamma)*self.wInterpolators[x_pos][y_pos][z_pos-1]._der(w)
+ alpha*beta*gamma*self.wInterpolators[x_pos][y_pos][z_pos]._der(w))
else:
m = len(x)
x_pos = np.searchsorted(self.x_list,x)
x_pos[x_pos > self.x_n-1] = self.x_n-1
            x_pos[x_pos < 1] = 1
y_pos = np.searchsorted(self.y_list,y)
y_pos[y_pos > self.y_n-1] = self.y_n-1
y_pos[y_pos < 1] = 1
z_pos = np.searchsorted(self.z_list,z)
z_pos[z_pos > self.z_n-1] = self.z_n-1
z_pos[z_pos < 1] = 1
dfdw = np.zeros(m) + np.nan
for i in range(1,self.x_n):
for j in range(1,self.y_n):
for k in range(1,self.z_n):
c = np.logical_and(np.logical_and(i == x_pos, j == y_pos),k == z_pos)
if np.any(c):
alpha = (x[c] - self.x_list[i-1])/(self.x_list[i] - self.x_list[i-1])
beta = (y[c] - self.y_list[j-1])/(self.y_list[j] - self.y_list[j-1])
gamma = (z[c] - self.z_list[k-1])/(self.z_list[k] - self.z_list[k-1])
dfdw[c] = (
(1-alpha)*(1-beta)*(1-gamma)*self.wInterpolators[i-1][j-1][k-1]._der(w[c])
+ (1-alpha)*(1-beta)*gamma*self.wInterpolators[i-1][j-1][k]._der(w[c])
+ (1-alpha)*beta*(1-gamma)*self.wInterpolators[i-1][j][k-1]._der(w[c])
+ (1-alpha)*beta*gamma*self.wInterpolators[i-1][j][k]._der(w[c])
+ alpha*(1-beta)*(1-gamma)*self.wInterpolators[i][j-1][k-1]._der(w[c])
+ alpha*(1-beta)*gamma*self.wInterpolators[i][j-1][k]._der(w[c])
+ alpha*beta*(1-gamma)*self.wInterpolators[i][j][k-1]._der(w[c])
+ alpha*beta*gamma*self.wInterpolators[i][j][k]._der(w[c]))
return dfdw
def _derX(self,w,x,y,z):
'''
Returns the derivative with respect to x of the interpolated function
at each value in w,x,y,z. Only called internally by HARKinterpolator4D.derivativeX.
'''
if _isscalar(w):
x_pos = max(min(np.searchsorted(self.x_list,x),self.x_n-1),1)
y_pos = max(min(np.searchsorted(self.y_list,y),self.y_n-1),1)
z_pos = max(min(np.searchsorted(self.z_list,z),self.z_n-1),1)
beta = (y - self.y_list[y_pos-1])/(self.y_list[y_pos] - self.y_list[y_pos-1])
gamma = (z - self.z_list[z_pos-1])/(self.z_list[z_pos] - self.z_list[z_pos-1])
dfdx = (
((1-beta)*(1-gamma)*self.wInterpolators[x_pos][y_pos-1][z_pos-1](w)
+ (1-beta)*gamma*self.wInterpolators[x_pos][y_pos-1][z_pos](w)
+ beta*(1-gamma)*self.wInterpolators[x_pos][y_pos][z_pos-1](w)
+ beta*gamma*self.wInterpolators[x_pos][y_pos][z_pos](w)) -
((1-beta)*(1-gamma)*self.wInterpolators[x_pos-1][y_pos-1][z_pos-1](w)
+ (1-beta)*gamma*self.wInterpolators[x_pos-1][y_pos-1][z_pos](w)
+ beta*(1-gamma)*self.wInterpolators[x_pos-1][y_pos][z_pos-1](w)
+ beta*gamma*self.wInterpolators[x_pos-1][y_pos][z_pos](w)))/(self.x_list[x_pos] - self.x_list[x_pos-1])
else:
m = len(x)
x_pos = np.searchsorted(self.x_list,x)
x_pos[x_pos > self.x_n-1] = self.x_n-1
            x_pos[x_pos < 1] = 1
y_pos = np.searchsorted(self.y_list,y)
y_pos[y_pos > self.y_n-1] = self.y_n-1
y_pos[y_pos < 1] = 1
z_pos = np.searchsorted(self.z_list,z)
z_pos[z_pos > self.z_n-1] = self.z_n-1
z_pos[z_pos < 1] = 1
dfdx = np.zeros(m) + np.nan
for i in range(1,self.x_n):
for j in range(1,self.y_n):
for k in range(1,self.z_n):
c = np.logical_and(np.logical_and(i == x_pos, j == y_pos),k == z_pos)
if np.any(c):
beta = (y[c] - self.y_list[j-1])/(self.y_list[j] - self.y_list[j-1])
gamma = (z[c] - self.z_list[k-1])/(self.z_list[k] - self.z_list[k-1])
dfdx[c] = (
((1-beta)*(1-gamma)*self.wInterpolators[i][j-1][k-1](w[c])
+ (1-beta)*gamma*self.wInterpolators[i][j-1][k](w[c])
+ beta*(1-gamma)*self.wInterpolators[i][j][k-1](w[c])
+ beta*gamma*self.wInterpolators[i][j][k](w[c])) -
((1-beta)*(1-gamma)*self.wInterpolators[i-1][j-1][k-1](w[c])
+ (1-beta)*gamma*self.wInterpolators[i-1][j-1][k](w[c])
+ beta*(1-gamma)*self.wInterpolators[i-1][j][k-1](w[c])
+ beta*gamma*self.wInterpolators[i-1][j][k](w[c])))/(self.x_list[i] - self.x_list[i-1])
return dfdx
def _derY(self,w,x,y,z):
'''
Returns the derivative with respect to y of the interpolated function
at each value in w,x,y,z. Only called internally by HARKinterpolator4D.derivativeY.
'''
if _isscalar(w):
x_pos = max(min(np.searchsorted(self.x_list,x),self.x_n-1),1)
y_pos = max(min(np.searchsorted(self.y_list,y),self.y_n-1),1)
z_pos = max(min(np.searchsorted(self.z_list,z),self.z_n-1),1)
            alpha = (x - self.x_list[x_pos-1])/(self.x_list[x_pos] - self.x_list[x_pos-1])
gamma = (z - self.z_list[z_pos-1])/(self.z_list[z_pos] - self.z_list[z_pos-1])
dfdy = (
((1-alpha)*(1-gamma)*self.wInterpolators[x_pos-1][y_pos][z_pos-1](w)
+ (1-alpha)*gamma*self.wInterpolators[x_pos-1][y_pos][z_pos](w)
+ alpha*(1-gamma)*self.wInterpolators[x_pos][y_pos][z_pos-1](w)
+ alpha*gamma*self.wInterpolators[x_pos][y_pos][z_pos](w)) -
((1-alpha)*(1-gamma)*self.wInterpolators[x_pos-1][y_pos-1][z_pos-1](w)
+ (1-alpha)*gamma*self.wInterpolators[x_pos-1][y_pos-1][z_pos](w)
+ alpha*(1-gamma)*self.wInterpolators[x_pos][y_pos-1][z_pos-1](w)
+ alpha*gamma*self.wInterpolators[x_pos][y_pos-1][z_pos](w)))/(self.y_list[y_pos] - self.y_list[y_pos-1])
else:
m = len(x)
x_pos = np.searchsorted(self.x_list,x)
x_pos[x_pos > self.x_n-1] = self.x_n-1
            x_pos[x_pos < 1] = 1
y_pos = np.searchsorted(self.y_list,y)
y_pos[y_pos > self.y_n-1] = self.y_n-1
y_pos[y_pos < 1] = 1
z_pos = np.searchsorted(self.z_list,z)
z_pos[z_pos > self.z_n-1] = self.z_n-1
z_pos[z_pos < 1] = 1
dfdy = np.zeros(m) + np.nan
for i in range(1,self.x_n):
for j in range(1,self.y_n):
for k in range(1,self.z_n):
c = np.logical_and(np.logical_and(i == x_pos, j == y_pos),k == z_pos)
if np.any(c):
alpha = (x[c] - self.x_list[i-1])/(self.x_list[i] - self.x_list[i-1])
gamma = (z[c] - self.z_list[k-1])/(self.z_list[k] - self.z_list[k-1])
dfdy[c] = (
((1-alpha)*(1-gamma)*self.wInterpolators[i-1][j][k-1](w[c])
+ (1-alpha)*gamma*self.wInterpolators[i-1][j][k](w[c])
+ alpha*(1-gamma)*self.wInterpolators[i][j][k-1](w[c])
+ alpha*gamma*self.wInterpolators[i][j][k](w[c])) -
((1-alpha)*(1-gamma)*self.wInterpolators[i-1][j-1][k-1](w[c])
+ (1-alpha)*gamma*self.wInterpolators[i-1][j-1][k](w[c])
+ alpha*(1-gamma)*self.wInterpolators[i][j-1][k-1](w[c])
+ alpha*gamma*self.wInterpolators[i][j-1][k](w[c])))/(self.y_list[j] - self.y_list[j-1])
return dfdy
def _derZ(self,w,x,y,z):
'''
Returns the derivative with respect to z of the interpolated function
at each value in w,x,y,z. Only called internally by HARKinterpolator4D.derivativeZ.
'''
if _isscalar(w):
x_pos = max(min(np.searchsorted(self.x_list,x),self.x_n-1),1)
y_pos = max(min(np.searchsorted(self.y_list,y),self.y_n-1),1)
z_pos = max(min(np.searchsorted(self.z_list,z),self.z_n-1),1)
            alpha = (x - self.x_list[x_pos-1])/(self.x_list[x_pos] - self.x_list[x_pos-1])
beta = (y - self.y_list[y_pos-1])/(self.y_list[y_pos] - self.y_list[y_pos-1])
dfdz = (
((1-alpha)*(1-beta)*self.wInterpolators[x_pos-1][y_pos-1][z_pos](w)
+ (1-alpha)*beta*self.wInterpolators[x_pos-1][y_pos][z_pos](w)
+ alpha*(1-beta)*self.wInterpolators[x_pos][y_pos-1][z_pos](w)
+ alpha*beta*self.wInterpolators[x_pos][y_pos][z_pos](w)) -
((1-alpha)*(1-beta)*self.wInterpolators[x_pos-1][y_pos-1][z_pos-1](w)
+ (1-alpha)*beta*self.wInterpolators[x_pos-1][y_pos][z_pos-1](w)
+ alpha*(1-beta)*self.wInterpolators[x_pos][y_pos-1][z_pos-1](w)
+ alpha*beta*self.wInterpolators[x_pos][y_pos][z_pos-1](w)))/(self.z_list[z_pos] - self.z_list[z_pos-1])
else:
m = len(x)
x_pos = np.searchsorted(self.x_list,x)
x_pos[x_pos > self.x_n-1] = self.x_n-1
            x_pos[x_pos < 1] = 1
y_pos = np.searchsorted(self.y_list,y)
y_pos[y_pos > self.y_n-1] = self.y_n-1
y_pos[y_pos < 1] = 1
z_pos = np.searchsorted(self.z_list,z)
z_pos[z_pos > self.z_n-1] = self.z_n-1
z_pos[z_pos < 1] = 1
dfdz = np.zeros(m) + np.nan
for i in range(1,self.x_n):
for j in range(1,self.y_n):
for k in range(1,self.z_n):
c = np.logical_and(np.logical_and(i == x_pos, j == y_pos),k == z_pos)
if np.any(c):
alpha = (x[c] - self.x_list[i-1])/(self.x_list[i] - self.x_list[i-1])
beta = (y[c] - self.y_list[j-1])/(self.y_list[j] - self.y_list[j-1])
dfdz[c] = (
((1-alpha)*(1-beta)*self.wInterpolators[i-1][j-1][k](w[c])
+ (1-alpha)*beta*self.wInterpolators[i-1][j][k](w[c])
+ alpha*(1-beta)*self.wInterpolators[i][j-1][k](w[c])
+ alpha*beta*self.wInterpolators[i][j][k](w[c])) -
((1-alpha)*(1-beta)*self.wInterpolators[i-1][j-1][k-1](w[c])
+ (1-alpha)*beta*self.wInterpolators[i-1][j][k-1](w[c])
+ alpha*(1-beta)*self.wInterpolators[i][j-1][k-1](w[c])
+ alpha*beta*self.wInterpolators[i][j][k-1](w[c])))/(self.z_list[k] - self.z_list[k-1])
return dfdz
class LinearInterpOnInterp2D(HARKinterpolator3D):
'''
A 3D interpolation method that linearly interpolates between "layers" of
arbitrary 2D interpolations. Useful for models with two endogenous state
variables and one exogenous state variable when solving with the endogenous
    grid method. NOTE: this class should not be used with a fully exogenous 3D
    grid, as it will be significantly slower than TrilinearInterp.
'''
distance_criteria = ['xyInterpolators','z_list']
def __init__(self,xyInterpolators,z_values):
'''
Constructor for the class, generating an approximation to a function of
the form f(x,y,z) using interpolations over f(x,y,z_0) for a fixed grid
of z_0 values.
Parameters
----------
xyInterpolators : [HARKinterpolator2D]
A list of 2D interpolations over the x and y variables. The nth
element of xyInterpolators represents f(x,y,z_values[n]).
z_values: numpy.array
An array of z values equal in length to xyInterpolators.
Returns
-------
new instance of LinearInterpOnInterp2D
'''
self.xyInterpolators = xyInterpolators
self.z_list = z_values
self.z_n = z_values.size
def _evaluate(self,x,y,z):
'''
Returns the level of the interpolated function at each value in x,y,z.
Only called internally by HARKinterpolator3D.__call__ (etc).
'''
if _isscalar(x):
z_pos = max(min(np.searchsorted(self.z_list,z),self.z_n-1),1)
alpha = (z - self.z_list[z_pos-1])/(self.z_list[z_pos] - self.z_list[z_pos-1])
f = (1-alpha)*self.xyInterpolators[z_pos-1](x,y) + alpha*self.xyInterpolators[z_pos](x,y)
else:
m = len(x)
z_pos = np.searchsorted(self.z_list,z)
z_pos[z_pos > self.z_n-1] = self.z_n-1
z_pos[z_pos < 1] = 1
f = np.zeros(m) + np.nan
if x.size > 0:
for i in range(1,self.z_n):
c = z_pos == i
if np.any(c):
alpha = (z[c] - self.z_list[i-1])/(self.z_list[i] - self.z_list[i-1])
f[c] = (1-alpha)*self.xyInterpolators[i-1](x[c],y[c]) + alpha*self.xyInterpolators[i](x[c],y[c])
return f
def _derX(self,x,y,z):
'''
Returns the derivative with respect to x of the interpolated function
at each value in x,y,z. Only called internally by HARKinterpolator3D.derivativeX.
'''
if _isscalar(x):
z_pos = max(min(np.searchsorted(self.z_list,z),self.z_n-1),1)
alpha = (z - self.z_list[z_pos-1])/(self.z_list[z_pos] - self.z_list[z_pos-1])
dfdx = (1-alpha)*self.xyInterpolators[z_pos-1].derivativeX(x,y) + alpha*self.xyInterpolators[z_pos].derivativeX(x,y)
else:
m = len(x)
z_pos = np.searchsorted(self.z_list,z)
z_pos[z_pos > self.z_n-1] = self.z_n-1
z_pos[z_pos < 1] = 1
dfdx = np.zeros(m) + np.nan
if x.size > 0:
for i in range(1,self.z_n):
c = z_pos == i
if np.any(c):
alpha = (z[c] - self.z_list[i-1])/(self.z_list[i] - self.z_list[i-1])
dfdx[c] = (1-alpha)*self.xyInterpolators[i-1].derivativeX(x[c],y[c]) + alpha*self.xyInterpolators[i].derivativeX(x[c],y[c])
return dfdx
def _derY(self,x,y,z):
'''
Returns the derivative with respect to y of the interpolated function
at each value in x,y,z. Only called internally by HARKinterpolator3D.derivativeY.
'''
if _isscalar(x):
z_pos = max(min(np.searchsorted(self.z_list,z),self.z_n-1),1)
alpha = (z - self.z_list[z_pos-1])/(self.z_list[z_pos] - self.z_list[z_pos-1])
dfdy = (1-alpha)*self.xyInterpolators[z_pos-1].derivativeY(x,y) + alpha*self.xyInterpolators[z_pos].derivativeY(x,y)
else:
m = len(x)
z_pos = np.searchsorted(self.z_list,z)
z_pos[z_pos > self.z_n-1] = self.z_n-1
z_pos[z_pos < 1] = 1
dfdy = np.zeros(m) + np.nan
if x.size > 0:
for i in range(1,self.z_n):
c = z_pos == i
if np.any(c):
alpha = (z[c] - self.z_list[i-1])/(self.z_list[i] - self.z_list[i-1])
dfdy[c] = (1-alpha)*self.xyInterpolators[i-1].derivativeY(x[c],y[c]) + alpha*self.xyInterpolators[i].derivativeY(x[c],y[c])
return dfdy
def _derZ(self,x,y,z):
'''
Returns the derivative with respect to z of the interpolated function
at each value in x,y,z. Only called internally by HARKinterpolator3D.derivativeZ.
'''
if _isscalar(x):
z_pos = max(min(np.searchsorted(self.z_list,z),self.z_n-1),1)
            dfdz = (self.xyInterpolators[z_pos](x,y) - self.xyInterpolators[z_pos-1](x,y))/(self.z_list[z_pos] - self.z_list[z_pos-1])
else:
m = len(x)
z_pos = np.searchsorted(self.z_list,z)
z_pos[z_pos > self.z_n-1] = self.z_n-1
z_pos[z_pos < 1] = 1
dfdz = np.zeros(m) + np.nan
if x.size > 0:
for i in range(1,self.z_n):
c = z_pos == i
if np.any(c):
dfdz[c] = (self.xyInterpolators[i](x[c],y[c]) - self.xyInterpolators[i-1](x[c],y[c]))/(self.z_list[i] - self.z_list[i-1])
return dfdz
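# Illustrative sketch (not part of the original class above): linearly
# interpolating between two fixed "layers" f(x,y,z0) and f(x,y,z1), as
# LinearInterpOnInterp2D does between arbitrary 2D interpolants. The layer
# function here is a hypothetical stand-in; the z-interpolation is exact
# in this case because the stand-in happens to be linear in z.
def _demoLinearOnLayers(x,y,z,z0=0.0,z1=1.0):
    layer = lambda x,y,z_k : x*y + z_k  # stand-in for a 2D interpolant at z=z_k
    alpha = (z - z0)/(z1 - z0)          # relative position of z between layers
    return (1.0-alpha)*layer(x,y,z0) + alpha*layer(x,y,z1)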
class BilinearInterpOnInterp2D(HARKinterpolator4D):
'''
A 4D interpolation method that bilinearly interpolates among "layers" of
arbitrary 2D interpolations. Useful for models with two endogenous state
variables and two exogenous state variables when solving with the endogenous
    grid method. NOTE: should not be used when the full 4D grid is exogenous,
    as it will be significantly slower than QuadlinearInterp.
'''
distance_criteria = ['wxInterpolators','y_list','z_list']
def __init__(self,wxInterpolators,y_values,z_values):
'''
Constructor for the class, generating an approximation to a function of
the form f(w,x,y,z) using interpolations over f(w,x,y_0,z_0) for a fixed
grid of y_0 and z_0 values.
Parameters
----------
wxInterpolators : [[HARKinterpolator2D]]
A list of lists of 2D interpolations over the w and x variables.
The i,j-th element of wxInterpolators represents
f(w,x,y_values[i],z_values[j]).
y_values: numpy.array
An array of y values equal in length to wxInterpolators.
z_values: numpy.array
An array of z values equal in length to wxInterpolators[0].
Returns
-------
new instance of BilinearInterpOnInterp2D
'''
self.wxInterpolators = wxInterpolators
self.y_list = y_values
self.y_n = y_values.size
self.z_list = z_values
self.z_n = z_values.size
def _evaluate(self,w,x,y,z):
'''
        Returns the level of the interpolated function at each value in w,x,y,z.
Only called internally by HARKinterpolator4D.__call__ (etc).
'''
if _isscalar(x):
y_pos = max(min(np.searchsorted(self.y_list,y),self.y_n-1),1)
z_pos = max(min(np.searchsorted(self.z_list,z),self.z_n-1),1)
alpha = (y - self.y_list[y_pos-1])/(self.y_list[y_pos] - self.y_list[y_pos-1])
beta = (z - self.z_list[z_pos-1])/(self.z_list[z_pos] - self.z_list[z_pos-1])
f = ((1-alpha)*(1-beta)*self.wxInterpolators[y_pos-1][z_pos-1](w,x)
+ (1-alpha)*beta*self.wxInterpolators[y_pos-1][z_pos](w,x)
+ alpha*(1-beta)*self.wxInterpolators[y_pos][z_pos-1](w,x)
+ alpha*beta*self.wxInterpolators[y_pos][z_pos](w,x))
else:
m = len(x)
y_pos = np.searchsorted(self.y_list,y)
y_pos[y_pos > self.y_n-1] = self.y_n-1
y_pos[y_pos < 1] = 1
z_pos = np.searchsorted(self.z_list,z)
z_pos[z_pos > self.z_n-1] = self.z_n-1
z_pos[z_pos < 1] = 1
f = np.zeros(m) + np.nan
for i in range(1,self.y_n):
for j in range(1,self.z_n):
c = np.logical_and(i == y_pos, j == z_pos)
if np.any(c):
alpha = (y[c] - self.y_list[i-1])/(self.y_list[i] - self.y_list[i-1])
beta = (z[c] - self.z_list[j-1])/(self.z_list[j] - self.z_list[j-1])
f[c] = (
(1-alpha)*(1-beta)*self.wxInterpolators[i-1][j-1](w[c],x[c])
+ (1-alpha)*beta*self.wxInterpolators[i-1][j](w[c],x[c])
+ alpha*(1-beta)*self.wxInterpolators[i][j-1](w[c],x[c])
+ alpha*beta*self.wxInterpolators[i][j](w[c],x[c]))
return f
def _derW(self,w,x,y,z):
'''
Returns the derivative with respect to w of the interpolated function
at each value in w,x,y,z. Only called internally by HARKinterpolator4D.derivativeW.
'''
# This may look strange, as we call the derivativeX() method to get the
# derivative with respect to w, but that's just a quirk of 4D interpolations
# beginning with w rather than x. The derivative wrt the first dimension
# of an element of wxInterpolators is the w-derivative of the main function.
if _isscalar(x):
y_pos = max(min(np.searchsorted(self.y_list,y),self.y_n-1),1)
z_pos = max(min(np.searchsorted(self.z_list,z),self.z_n-1),1)
alpha = (y - self.y_list[y_pos-1])/(self.y_list[y_pos] - self.y_list[y_pos-1])
beta = (z - self.z_list[z_pos-1])/(self.z_list[z_pos] - self.z_list[z_pos-1])
dfdw = ((1-alpha)*(1-beta)*self.wxInterpolators[y_pos-1][z_pos-1].derivativeX(w,x)
+ (1-alpha)*beta*self.wxInterpolators[y_pos-1][z_pos].derivativeX(w,x)
+ alpha*(1-beta)*self.wxInterpolators[y_pos][z_pos-1].derivativeX(w,x)
+ alpha*beta*self.wxInterpolators[y_pos][z_pos].derivativeX(w,x))
else:
m = len(x)
y_pos = np.searchsorted(self.y_list,y)
y_pos[y_pos > self.y_n-1] = self.y_n-1
y_pos[y_pos < 1] = 1
z_pos = np.searchsorted(self.z_list,z)
z_pos[z_pos > self.z_n-1] = self.z_n-1
z_pos[z_pos < 1] = 1
dfdw = np.zeros(m) + np.nan
for i in range(1,self.y_n):
for j in range(1,self.z_n):
c = np.logical_and(i == y_pos, j == z_pos)
if np.any(c):
alpha = (y[c] - self.y_list[i-1])/(self.y_list[i] - self.y_list[i-1])
beta = (z[c] - self.z_list[j-1])/(self.z_list[j] - self.z_list[j-1])
dfdw[c] = (
(1-alpha)*(1-beta)*self.wxInterpolators[i-1][j-1].derivativeX(w[c],x[c])
+ (1-alpha)*beta*self.wxInterpolators[i-1][j].derivativeX(w[c],x[c])
+ alpha*(1-beta)*self.wxInterpolators[i][j-1].derivativeX(w[c],x[c])
+ alpha*beta*self.wxInterpolators[i][j].derivativeX(w[c],x[c]))
return dfdw
def _derX(self,w,x,y,z):
'''
Returns the derivative with respect to x of the interpolated function
at each value in w,x,y,z. Only called internally by HARKinterpolator4D.derivativeX.
'''
# This may look strange, as we call the derivativeY() method to get the
# derivative with respect to x, but that's just a quirk of 4D interpolations
# beginning with w rather than x. The derivative wrt the second dimension
# of an element of wxInterpolators is the x-derivative of the main function.
if _isscalar(x):
y_pos = max(min(np.searchsorted(self.y_list,y),self.y_n-1),1)
z_pos = max(min(np.searchsorted(self.z_list,z),self.z_n-1),1)
alpha = (y - self.y_list[y_pos-1])/(self.y_list[y_pos] - self.y_list[y_pos-1])
beta = (z - self.z_list[z_pos-1])/(self.z_list[z_pos] - self.z_list[z_pos-1])
dfdx = ((1-alpha)*(1-beta)*self.wxInterpolators[y_pos-1][z_pos-1].derivativeY(w,x)
+ (1-alpha)*beta*self.wxInterpolators[y_pos-1][z_pos].derivativeY(w,x)
+ alpha*(1-beta)*self.wxInterpolators[y_pos][z_pos-1].derivativeY(w,x)
+ alpha*beta*self.wxInterpolators[y_pos][z_pos].derivativeY(w,x))
else:
m = len(x)
y_pos = np.searchsorted(self.y_list,y)
y_pos[y_pos > self.y_n-1] = self.y_n-1
y_pos[y_pos < 1] = 1
z_pos = np.searchsorted(self.z_list,z)
z_pos[z_pos > self.z_n-1] = self.z_n-1
z_pos[z_pos < 1] = 1
dfdx = np.zeros(m) + np.nan
for i in range(1,self.y_n):
for j in range(1,self.z_n):
c = np.logical_and(i == y_pos, j == z_pos)
if np.any(c):
alpha = (y[c] - self.y_list[i-1])/(self.y_list[i] - self.y_list[i-1])
beta = (z[c] - self.z_list[j-1])/(self.z_list[j] - self.z_list[j-1])
dfdx[c] = (
(1-alpha)*(1-beta)*self.wxInterpolators[i-1][j-1].derivativeY(w[c],x[c])
+ (1-alpha)*beta*self.wxInterpolators[i-1][j].derivativeY(w[c],x[c])
+ alpha*(1-beta)*self.wxInterpolators[i][j-1].derivativeY(w[c],x[c])
+ alpha*beta*self.wxInterpolators[i][j].derivativeY(w[c],x[c]))
return dfdx
def _derY(self,w,x,y,z):
'''
Returns the derivative with respect to y of the interpolated function
at each value in w,x,y,z. Only called internally by HARKinterpolator4D.derivativeY.
'''
if _isscalar(x):
y_pos = max(min(np.searchsorted(self.y_list,y),self.y_n-1),1)
z_pos = max(min(np.searchsorted(self.z_list,z),self.z_n-1),1)
beta = (z - self.z_list[z_pos-1])/(self.z_list[z_pos] - self.z_list[z_pos-1])
dfdy = (((1-beta)*self.wxInterpolators[y_pos][z_pos-1](w,x) + beta*self.wxInterpolators[y_pos][z_pos](w,x))
- ((1-beta)*self.wxInterpolators[y_pos-1][z_pos-1](w,x) + beta*self.wxInterpolators[y_pos-1][z_pos](w,x)))/(self.y_list[y_pos] - self.y_list[y_pos-1])
else:
m = len(x)
y_pos = np.searchsorted(self.y_list,y)
y_pos[y_pos > self.y_n-1] = self.y_n-1
y_pos[y_pos < 1] = 1
z_pos = np.searchsorted(self.z_list,z)
z_pos[z_pos > self.z_n-1] = self.z_n-1
z_pos[z_pos < 1] = 1
dfdy = np.zeros(m) + np.nan
for i in range(1,self.y_n):
for j in range(1,self.z_n):
c = np.logical_and(i == y_pos, j == z_pos)
if np.any(c):
beta = (z[c] - self.z_list[j-1])/(self.z_list[j] - self.z_list[j-1])
dfdy[c] = (((1-beta)*self.wxInterpolators[i][j-1](w[c],x[c]) + beta*self.wxInterpolators[i][j](w[c],x[c]))
- ((1-beta)*self.wxInterpolators[i-1][j-1](w[c],x[c]) + beta*self.wxInterpolators[i-1][j](w[c],x[c])))/(self.y_list[i] - self.y_list[i-1])
return dfdy
def _derZ(self,w,x,y,z):
'''
Returns the derivative with respect to z of the interpolated function
at each value in w,x,y,z. Only called internally by HARKinterpolator4D.derivativeZ.
'''
if _isscalar(x):
y_pos = max(min(np.searchsorted(self.y_list,y),self.y_n-1),1)
z_pos = max(min(np.searchsorted(self.z_list,z),self.z_n-1),1)
alpha = (y - self.y_list[y_pos-1])/(self.y_list[y_pos] - self.y_list[y_pos-1])
dfdz = (((1-alpha)*self.wxInterpolators[y_pos-1][z_pos](w,x) + alpha*self.wxInterpolators[y_pos][z_pos](w,x))
- ((1-alpha)*self.wxInterpolators[y_pos-1][z_pos-1](w,x) + alpha*self.wxInterpolators[y_pos][z_pos-1](w,x)))/(self.z_list[z_pos] - self.z_list[z_pos-1])
else:
m = len(x)
y_pos = np.searchsorted(self.y_list,y)
y_pos[y_pos > self.y_n-1] = self.y_n-1
y_pos[y_pos < 1] = 1
z_pos = np.searchsorted(self.z_list,z)
z_pos[z_pos > self.z_n-1] = self.z_n-1
z_pos[z_pos < 1] = 1
dfdz = np.zeros(m) + np.nan
for i in range(1,self.y_n):
for j in range(1,self.z_n):
c = np.logical_and(i == y_pos, j == z_pos)
if np.any(c):
alpha = (y[c] - self.y_list[i-1])/(self.y_list[i] - self.y_list[i-1])
dfdz[c] = (((1-alpha)*self.wxInterpolators[i-1][j](w[c],x[c]) + alpha*self.wxInterpolators[i][j](w[c],x[c]))
- ((1-alpha)*self.wxInterpolators[i-1][j-1](w[c],x[c]) + alpha*self.wxInterpolators[i][j-1](w[c],x[c])))/(self.z_list[j] - self.z_list[j-1])
return dfdz
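# Illustrative sketch (not part of the original class above): the four
# bilinear weights that _evaluate places on the surrounding (y,z) layers.
# Inside a sector they are nonnegative and always sum to one.
def _demoBilinearWeights(y,z,y0,y1,z0,z1):
    alpha = (y - y0)/(y1 - y0)  # relative position between the two y layers
    beta = (z - z0)/(z1 - z0)   # relative position between the two z layers
    return ((1-alpha)*(1-beta), (1-alpha)*beta, alpha*(1-beta), alpha*beta)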
class Curvilinear2DInterp(HARKinterpolator2D):
'''
A 2D interpolation method for curvilinear or "warped grid" interpolation, as
in White (2015). Used for models with two endogenous states that are solved
with the endogenous grid method.
'''
distance_criteria = ['f_values','x_values','y_values']
def __init__(self,f_values,x_values,y_values):
'''
Constructor for 2D curvilinear interpolation for a function f(x,y)
Parameters
----------
f_values: numpy.array
A 2D array of function values such that f_values[i,j] =
f(x_values[i,j],y_values[i,j]).
x_values: numpy.array
A 2D array of x values of the same size as f_values.
y_values: numpy.array
A 2D array of y values of the same size as f_values.
Returns
-------
new instance of Curvilinear2DInterp
'''
self.f_values = f_values
self.x_values = x_values
self.y_values = y_values
my_shape = f_values.shape
self.x_n = my_shape[0]
self.y_n = my_shape[1]
self.updatePolarity()
def updatePolarity(self):
'''
Fills in the polarity attribute of the interpolation, determining whether
the "plus" (True) or "minus" (False) solution of the system of equations
should be used for each sector. Needs to be called in __init__.
Parameters
----------
none
Returns
-------
none
'''
# Grab a point known to be inside each sector: the midway point between
# the lower left and upper right vertex of each sector
x_temp = 0.5*(self.x_values[0:(self.x_n-1),0:(self.y_n-1)] + self.x_values[1:self.x_n,1:self.y_n])
y_temp = 0.5*(self.y_values[0:(self.x_n-1),0:(self.y_n-1)] + self.y_values[1:self.x_n,1:self.y_n])
size = (self.x_n-1)*(self.y_n-1)
x_temp = np.reshape(x_temp,size)
y_temp = np.reshape(y_temp,size)
y_pos = np.tile(np.arange(0,self.y_n-1),self.x_n-1)
x_pos = np.reshape(np.tile(np.arange(0,self.x_n-1),(self.y_n-1,1)).transpose(),size)
# Set the polarity of all sectors to "plus", then test each sector
self.polarity = np.ones((self.x_n-1,self.y_n-1),dtype=bool)
alpha, beta = self.findCoords(x_temp,y_temp,x_pos,y_pos)
polarity = np.logical_and(
np.logical_and(alpha > 0, alpha < 1),
np.logical_and(beta > 0, beta < 1))
# Update polarity: if (alpha,beta) not in the unit square, then that
# sector must use the "minus" solution instead
self.polarity = np.reshape(polarity,(self.x_n-1,self.y_n-1))
def findSector(self,x,y):
'''
Finds the quadrilateral "sector" for each (x,y) point in the input.
Only called as a subroutine of _evaluate().
Parameters
----------
x : np.array
Values whose sector should be found.
y : np.array
Values whose sector should be found. Should be same size as x.
Returns
-------
x_pos : np.array
Sector x-coordinates for each point of the input, of the same size.
y_pos : np.array
Sector y-coordinates for each point of the input, of the same size.
'''
# Initialize the sector guess
m = x.size
x_pos_guess = (np.ones(m)*self.x_n/2).astype(int)
y_pos_guess = (np.ones(m)*self.y_n/2).astype(int)
# Define a function that checks whether a set of points violates a linear
# boundary defined by (x_bound_1,y_bound_1) and (x_bound_2,y_bound_2),
# where the latter is *COUNTER CLOCKWISE* from the former. Returns
# 1 if the point is outside the boundary and 0 otherwise.
violationCheck = lambda x_check,y_check,x_bound_1,y_bound_1,x_bound_2,y_bound_2 : (
(y_bound_2 - y_bound_1)*x_check - (x_bound_2 - x_bound_1)*y_check > x_bound_1*y_bound_2 - y_bound_1*x_bound_2 ) + 0
# Identify the correct sector for each point to be evaluated
these = np.ones(m,dtype=bool)
max_loops = self.x_n + self.y_n
loops = 0
while np.any(these) and loops < max_loops:
# Get coordinates for the four vertices: (xA,yA),...,(xD,yD)
x_temp = x[these]
y_temp = y[these]
xA = self.x_values[x_pos_guess[these],y_pos_guess[these]]
xB = self.x_values[x_pos_guess[these]+1,y_pos_guess[these]]
xC = self.x_values[x_pos_guess[these],y_pos_guess[these]+1]
xD = self.x_values[x_pos_guess[these]+1,y_pos_guess[these]+1]
yA = self.y_values[x_pos_guess[these],y_pos_guess[these]]
yB = self.y_values[x_pos_guess[these]+1,y_pos_guess[these]]
yC = self.y_values[x_pos_guess[these],y_pos_guess[these]+1]
yD = self.y_values[x_pos_guess[these]+1,y_pos_guess[these]+1]
# Check the "bounding box" for the sector: is this guess plausible?
move_down = (y_temp < np.minimum(yA,yB)) + 0
move_right = (x_temp > np.maximum(xB,xD)) + 0
move_up = (y_temp > np.maximum(yC,yD)) + 0
move_left = (x_temp < np.minimum(xA,xC)) + 0
# Check which boundaries are violated (and thus where to look next)
c = (move_down + move_right + move_up + move_left) == 0
move_down[c] = violationCheck(x_temp[c],y_temp[c],xA[c],yA[c],xB[c],yB[c])
move_right[c] = violationCheck(x_temp[c],y_temp[c],xB[c],yB[c],xD[c],yD[c])
move_up[c] = violationCheck(x_temp[c],y_temp[c],xD[c],yD[c],xC[c],yC[c])
move_left[c] = violationCheck(x_temp[c],y_temp[c],xC[c],yC[c],xA[c],yA[c])
# Update the sector guess based on the violations
x_pos_next = x_pos_guess[these] - move_left + move_right
x_pos_next[x_pos_next < 0] = 0
x_pos_next[x_pos_next > (self.x_n-2)] = self.x_n-2
y_pos_next = y_pos_guess[these] - move_down + move_up
y_pos_next[y_pos_next < 0] = 0
y_pos_next[y_pos_next > (self.y_n-2)] = self.y_n-2
# Check which sectors have not changed, and mark them as complete
no_move = np.array(np.logical_and(x_pos_guess[these] == x_pos_next, y_pos_guess[these] == y_pos_next))
x_pos_guess[these] = x_pos_next
y_pos_guess[these] = y_pos_next
temp = these.nonzero()
these[temp[0][no_move]] = False
# Move to the next iteration of the search
loops += 1
# Return the output
x_pos = x_pos_guess
y_pos = y_pos_guess
return x_pos, y_pos
def findCoords(self,x,y,x_pos,y_pos):
'''
Calculates the relative coordinates (alpha,beta) for each point (x,y),
given the sectors (x_pos,y_pos) in which they reside. Only called as
a subroutine of __call__().
Parameters
----------
x : np.array
Values whose sector should be found.
y : np.array
Values whose sector should be found. Should be same size as x.
x_pos : np.array
Sector x-coordinates for each point in (x,y), of the same size.
y_pos : np.array
Sector y-coordinates for each point in (x,y), of the same size.
Returns
-------
alpha : np.array
Relative "horizontal" position of the input in their respective sectors.
beta : np.array
Relative "vertical" position of the input in their respective sectors.
'''
# Calculate relative coordinates in the sector for each point
xA = self.x_values[x_pos,y_pos]
xB = self.x_values[x_pos+1,y_pos]
xC = self.x_values[x_pos,y_pos+1]
xD = self.x_values[x_pos+1,y_pos+1]
yA = self.y_values[x_pos,y_pos]
yB = self.y_values[x_pos+1,y_pos]
yC = self.y_values[x_pos,y_pos+1]
yD = self.y_values[x_pos+1,y_pos+1]
polarity = 2.0*self.polarity[x_pos,y_pos] - 1.0
a = xA
b = (xB-xA)
c = (xC-xA)
d = (xA-xB-xC+xD)
e = yA
f = (yB-yA)
g = (yC-yA)
h = (yA-yB-yC+yD)
denom = (d*g-h*c)
mu = (h*b-d*f)/denom
tau = (h*(a-x) - d*(e-y))/denom
zeta = a - x + c*tau
eta = b + c*mu + d*tau
theta = d*mu
alpha = (-eta + polarity*np.sqrt(eta**2.0 - 4.0*zeta*theta))/(2.0*theta)
beta = mu*alpha + tau
# Alternate method if there are sectors that are "too regular"
z = np.logical_or(np.isnan(alpha),np.isnan(beta)) # These points weren't able to identify coordinates
if np.any(z):
these = np.isclose(f/b,(yD-yC)/(xD-xC)) # iso-beta lines have equal slope
if np.any(these):
kappa = f[these]/b[these]
int_bot = yA[these] - kappa*xA[these]
int_top = yC[these] - kappa*xC[these]
int_these = y[these] - kappa*x[these]
beta_temp = (int_these-int_bot)/(int_top-int_bot)
x_left = beta_temp*xC[these] + (1.0-beta_temp)*xA[these]
x_right = beta_temp*xD[these] + (1.0-beta_temp)*xB[these]
alpha_temp= (x[these]-x_left)/(x_right-x_left)
beta[these] = beta_temp
alpha[these] = alpha_temp
#print(np.sum(np.isclose(g/c,(yD-yB)/(xD-xB))))
return alpha, beta
def _evaluate(self,x,y):
'''
Returns the level of the interpolated function at each value in x,y.
Only called internally by HARKinterpolator2D.__call__ (etc).
'''
x_pos, y_pos = self.findSector(x,y)
alpha, beta = self.findCoords(x,y,x_pos,y_pos)
# Calculate the function at each point using bilinear interpolation
f = (
(1-alpha)*(1-beta)*self.f_values[x_pos,y_pos]
+ (1-alpha)*beta*self.f_values[x_pos,y_pos+1]
+ alpha*(1-beta)*self.f_values[x_pos+1,y_pos]
+ alpha*beta*self.f_values[x_pos+1,y_pos+1])
return f
def _derX(self,x,y):
'''
Returns the derivative with respect to x of the interpolated function
at each value in x,y. Only called internally by HARKinterpolator2D.derivativeX.
'''
x_pos, y_pos = self.findSector(x,y)
alpha, beta = self.findCoords(x,y,x_pos,y_pos)
# Get four corners data for each point
xA = self.x_values[x_pos,y_pos]
xB = self.x_values[x_pos+1,y_pos]
xC = self.x_values[x_pos,y_pos+1]
xD = self.x_values[x_pos+1,y_pos+1]
yA = self.y_values[x_pos,y_pos]
yB = self.y_values[x_pos+1,y_pos]
yC = self.y_values[x_pos,y_pos+1]
yD = self.y_values[x_pos+1,y_pos+1]
fA = self.f_values[x_pos,y_pos]
fB = self.f_values[x_pos+1,y_pos]
fC = self.f_values[x_pos,y_pos+1]
fD = self.f_values[x_pos+1,y_pos+1]
# Calculate components of the alpha,beta --> x,y delta translation matrix
alpha_x = (1-beta)*(xB-xA) + beta*(xD-xC)
alpha_y = (1-beta)*(yB-yA) + beta*(yD-yC)
beta_x = (1-alpha)*(xC-xA) + alpha*(xD-xB)
beta_y = (1-alpha)*(yC-yA) + alpha*(yD-yB)
# Invert the delta translation matrix into x,y --> alpha,beta
det = alpha_x*beta_y - beta_x*alpha_y
x_alpha = beta_y/det
x_beta = -alpha_y/det
# Calculate the derivative of f w.r.t. alpha and beta
dfda = (1-beta)*(fB-fA) + beta*(fD-fC)
dfdb = (1-alpha)*(fC-fA) + alpha*(fD-fB)
# Calculate the derivative with respect to x (and return it)
dfdx = x_alpha*dfda + x_beta*dfdb
return dfdx
def _derY(self,x,y):
'''
Returns the derivative with respect to y of the interpolated function
        at each value in x,y. Only called internally by HARKinterpolator2D.derivativeY.
'''
x_pos, y_pos = self.findSector(x,y)
alpha, beta = self.findCoords(x,y,x_pos,y_pos)
# Get four corners data for each point
xA = self.x_values[x_pos,y_pos]
xB = self.x_values[x_pos+1,y_pos]
xC = self.x_values[x_pos,y_pos+1]
xD = self.x_values[x_pos+1,y_pos+1]
yA = self.y_values[x_pos,y_pos]
yB = self.y_values[x_pos+1,y_pos]
yC = self.y_values[x_pos,y_pos+1]
yD = self.y_values[x_pos+1,y_pos+1]
fA = self.f_values[x_pos,y_pos]
fB = self.f_values[x_pos+1,y_pos]
fC = self.f_values[x_pos,y_pos+1]
fD = self.f_values[x_pos+1,y_pos+1]
# Calculate components of the alpha,beta --> x,y delta translation matrix
alpha_x = (1-beta)*(xB-xA) + beta*(xD-xC)
alpha_y = (1-beta)*(yB-yA) + beta*(yD-yC)
beta_x = (1-alpha)*(xC-xA) + alpha*(xD-xB)
beta_y = (1-alpha)*(yC-yA) + alpha*(yD-yB)
# Invert the delta translation matrix into x,y --> alpha,beta
det = alpha_x*beta_y - beta_x*alpha_y
y_alpha = -beta_x/det
y_beta = alpha_x/det
# Calculate the derivative of f w.r.t. alpha and beta
dfda = (1-beta)*(fB-fA) + beta*(fD-fC)
dfdb = (1-alpha)*(fC-fA) + alpha*(fD-fB)
        # Calculate the derivative with respect to y (and return it)
dfdy = y_alpha*dfda + y_beta*dfdb
return dfdy
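# Illustrative sketch (not part of the original class above): on an
# axis-aligned rectangular sector, the (alpha,beta) coordinates that
# findCoords recovers by solving a quadratic reduce to simple linear
# fractions, because the cross-product terms d and h vanish.
def _demoRectCoords(x,y,xA,xB,yA,yC):
    alpha = (x - xA)/(xB - xA)  # relative horizontal position in the sector
    beta = (y - yA)/(yC - yA)   # relative vertical position in the sector
    return alpha, beta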
###############################################################################
## Functions used in discrete choice models with T1EV taste shocks ############
###############################################################################
def calcLogSumChoiceProbs(Vals, sigma):
'''
Returns the final optimal value and choice probabilities given the choice
specific value functions `Vals`. Probabilities are degenerate if sigma == 0.0.
Parameters
----------
    Vals : numpy.array
A numpy.array that holds choice specific values at common grid points.
sigma : float
A number that controls the variance of the taste shocks
Returns
-------
    V : numpy.array
        A numpy.array that holds the integrated value function.
    Probs : numpy.array
        A numpy.array that holds the discrete choice probabilities.
'''
# Assumes that NaNs have been replaced by -numpy.inf or similar
if sigma == 0.0:
# We could construct a linear index here and use unravel_index.
Pflat = np.argmax(Vals, axis=0)
V = np.zeros(Vals[0].shape)
Probs = np.zeros(Vals.shape)
for i in range(Vals.shape[0]):
optimalIndices = Pflat == i
V[optimalIndices] = Vals[i][optimalIndices]
Probs[i][optimalIndices] = 1
return V, Probs
# else we have a taste shock
maxV = np.max(Vals, axis=0)
    # calculate maxV + sigma*log(sum_{i=1}^J exp((V[i]-maxV)/sigma))
sumexp = np.sum(np.exp((Vals-maxV)/sigma), axis=0)
LogSumV = np.log(sumexp)
LogSumV = maxV + sigma*LogSumV
Probs = np.exp((Vals-LogSumV)/sigma)
return LogSumV, Probs
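# Self-contained numerical sketch of the max-shifted log-sum-exp identity used
# above: V = maxV + sigma*log(sum_i exp((Vals[i]-maxV)/sigma)). Subtracting
# maxV before exponentiating avoids overflow without changing the result, and
# the implied choice probabilities always sum to one.
def _demoLogSum(Vals,sigma):
    maxV = np.max(Vals,axis=0)
    V = maxV + sigma*np.log(np.sum(np.exp((Vals-maxV)/sigma),axis=0))
    Probs = np.exp((Vals-V)/sigma)
    return V, Probs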
def calcChoiceProbs(Vals, sigma):
'''
Returns the choice probabilities given the choice specific value functions
`Vals`. Probabilities are degenerate if sigma == 0.0.
Parameters
----------
    Vals : numpy.array
A numpy.array that holds choice specific values at common grid points.
sigma : float
A number that controls the variance of the taste shocks
Returns
-------
    Probs : numpy.array
        A numpy.array that holds the discrete choice probabilities.
'''
# Assumes that NaNs have been replaced by -numpy.inf or similar
if sigma == 0.0:
# We could construct a linear index here and use unravel_index.
Pflat = np.argmax(Vals, axis=0)
Probs = np.zeros(Vals.shape)
for i in range(Vals.shape[0]):
Probs[i][Pflat==i] = 1
return Probs
maxV = np.max(Vals, axis=0)
Probs = np.divide(np.exp((Vals-maxV)/sigma), np.sum(np.exp((Vals-maxV)/sigma), axis=0))
return Probs
def calcLogSum(Vals, sigma):
'''
Returns the optimal value given the choice specific value functions Vals.
Parameters
----------
    Vals : numpy.array
A numpy.array that holds choice specific values at common grid points.
sigma : float
A number that controls the variance of the taste shocks
Returns
-------
    V : numpy.array
        A numpy.array that holds the integrated value function.
'''
# Assumes that NaNs have been replaced by -numpy.inf or similar
if sigma == 0.0:
# We could construct a linear index here and use unravel_index.
V = np.amax(Vals, axis=0)
return V
# else we have a taste shock
maxV = np.max(Vals, axis=0)
    # calculate maxV + sigma*log(sum_{i=1}^J exp((V[i]-maxV)/sigma))
sumexp = np.sum(np.exp((Vals-maxV)/sigma), axis=0)
LogSumV = np.log(sumexp)
LogSumV = maxV + sigma*LogSumV
return LogSumV
def main():
print("Sorry, HARK.interpolation doesn't actually do much on its own.")
print("To see some examples of its interpolation methods in action, look at any")
print("of the model modules in /ConsumptionSavingModel. In the future, running")
print("this module will show examples of each interpolation class.")
    from time import perf_counter as clock  # time.clock was removed in Python 3.8
import matplotlib.pyplot as plt
RNG = np.random.RandomState(123)
if False:
x = np.linspace(1,20,39)
y = np.log(x)
dydx = 1.0/x
f = CubicInterp(x,y,dydx)
x_test = np.linspace(0,30,200)
y_test = f(x_test)
plt.plot(x_test,y_test)
plt.show()
if False:
f = lambda x,y : 3.0*x**2.0 + x*y + 4.0*y**2.0
dfdx = lambda x,y : 6.0*x + y
dfdy = lambda x,y : x + 8.0*y
y_list = np.linspace(0,5,100,dtype=float)
xInterpolators = []
xInterpolators_alt = []
for y in y_list:
this_x_list = np.sort((RNG.rand(100)*5.0))
this_interpolation = LinearInterp(this_x_list,f(this_x_list,y*np.ones(this_x_list.size)))
that_interpolation = CubicInterp(this_x_list,f(this_x_list,y*np.ones(this_x_list.size)),dfdx(this_x_list,y*np.ones(this_x_list.size)))
xInterpolators.append(this_interpolation)
xInterpolators_alt.append(that_interpolation)
g = LinearInterpOnInterp1D(xInterpolators,y_list)
h = LinearInterpOnInterp1D(xInterpolators_alt,y_list)
rand_x = RNG.rand(100)*5.0
rand_y = RNG.rand(100)*5.0
z = (f(rand_x,rand_y) - g(rand_x,rand_y))/f(rand_x,rand_y)
q = (dfdx(rand_x,rand_y) - g.derivativeX(rand_x,rand_y))/dfdx(rand_x,rand_y)
r = (dfdy(rand_x,rand_y) - g.derivativeY(rand_x,rand_y))/dfdy(rand_x,rand_y)
#print(z)
#print(q)
#print(r)
        z = (f(rand_x,rand_y) - h(rand_x,rand_y))/f(rand_x,rand_y)
        q = (dfdx(rand_x,rand_y) - h.derivativeX(rand_x,rand_y))/dfdx(rand_x,rand_y)
        r = (dfdy(rand_x,rand_y) - h.derivativeY(rand_x,rand_y))/dfdy(rand_x,rand_y)
print(z)
#print(q)
#print(r)
if False:
f = lambda x,y,z : 3.0*x**2.0 + x*y + 4.0*y**2.0 - 5*z**2.0 + 1.5*x*z
dfdx = lambda x,y,z : 6.0*x + y + 1.5*z
dfdy = lambda x,y,z : x + 8.0*y
dfdz = lambda x,y,z : -10.0*z + 1.5*x
y_list = np.linspace(0,5,51,dtype=float)
z_list = np.linspace(0,5,51,dtype=float)
xInterpolators = []
for y in y_list:
temp = []
for z in z_list:
this_x_list = np.sort((RNG.rand(100)*5.0))
this_interpolation = LinearInterp(this_x_list,f(this_x_list,y*np.ones(this_x_list.size),z*np.ones(this_x_list.size)))
temp.append(this_interpolation)
xInterpolators.append(deepcopy(temp))
g = BilinearInterpOnInterp1D(xInterpolators,y_list,z_list)
rand_x = RNG.rand(1000)*5.0
rand_y = RNG.rand(1000)*5.0
rand_z = RNG.rand(1000)*5.0
z = (f(rand_x,rand_y,rand_z) - g(rand_x,rand_y,rand_z))/f(rand_x,rand_y,rand_z)
q = (dfdx(rand_x,rand_y,rand_z) - g.derivativeX(rand_x,rand_y,rand_z))/dfdx(rand_x,rand_y,rand_z)
r = (dfdy(rand_x,rand_y,rand_z) - g.derivativeY(rand_x,rand_y,rand_z))/dfdy(rand_x,rand_y,rand_z)
p = (dfdz(rand_x,rand_y,rand_z) - g.derivativeZ(rand_x,rand_y,rand_z))/dfdz(rand_x,rand_y,rand_z)
z.sort()
if False:
f = lambda w,x,y,z : 4.0*w*z - 2.5*w*x + w*y + 6.0*x*y - 10.0*x*z + 3.0*y*z - 7.0*z + 4.0*x + 2.0*y - 5.0*w
dfdw = lambda w,x,y,z : 4.0*z - 2.5*x + y - 5.0
dfdx = lambda w,x,y,z : -2.5*w + 6.0*y - 10.0*z + 4.0
dfdy = lambda w,x,y,z : w + 6.0*x + 3.0*z + 2.0
dfdz = lambda w,x,y,z : 4.0*w - 10.0*x + 3.0*y - 7
x_list = np.linspace(0,5,16,dtype=float)
y_list = np.linspace(0,5,16,dtype=float)
z_list = np.linspace(0,5,16,dtype=float)
wInterpolators = []
for x in x_list:
temp = []
for y in y_list:
temptemp = []
for z in z_list:
this_w_list = np.sort((RNG.rand(16)*5.0))
this_interpolation = LinearInterp(this_w_list,f(this_w_list,x*np.ones(this_w_list.size),y*np.ones(this_w_list.size),z*np.ones(this_w_list.size)))
temptemp.append(this_interpolation)
temp.append(deepcopy(temptemp))
wInterpolators.append(deepcopy(temp))
g = TrilinearInterpOnInterp1D(wInterpolators,x_list,y_list,z_list)
N = 20000
rand_w = RNG.rand(N)*5.0
rand_x = RNG.rand(N)*5.0
rand_y = RNG.rand(N)*5.0
rand_z = RNG.rand(N)*5.0
t_start = clock()
z = (f(rand_w,rand_x,rand_y,rand_z) - g(rand_w,rand_x,rand_y,rand_z))/f(rand_w,rand_x,rand_y,rand_z)
q = (dfdw(rand_w,rand_x,rand_y,rand_z) - g.derivativeW(rand_w,rand_x,rand_y,rand_z))/dfdw(rand_w,rand_x,rand_y,rand_z)
r = (dfdx(rand_w,rand_x,rand_y,rand_z) - g.derivativeX(rand_w,rand_x,rand_y,rand_z))/dfdx(rand_w,rand_x,rand_y,rand_z)
p = (dfdy(rand_w,rand_x,rand_y,rand_z) - g.derivativeY(rand_w,rand_x,rand_y,rand_z))/dfdy(rand_w,rand_x,rand_y,rand_z)
s = (dfdz(rand_w,rand_x,rand_y,rand_z) - g.derivativeZ(rand_w,rand_x,rand_y,rand_z))/dfdz(rand_w,rand_x,rand_y,rand_z)
t_end = clock()
z.sort()
print(z)
print(t_end-t_start)
if False:
f = lambda x,y : 3.0*x**2.0 + x*y + 4.0*y**2.0
dfdx = lambda x,y : 6.0*x + y
dfdy = lambda x,y : x + 8.0*y
x_list = np.linspace(0,5,101,dtype=float)
y_list = np.linspace(0,5,101,dtype=float)
x_temp,y_temp = np.meshgrid(x_list,y_list,indexing='ij')
g = BilinearInterp(f(x_temp,y_temp),x_list,y_list)
rand_x = RNG.rand(100)*5.0
rand_y = RNG.rand(100)*5.0
z = (f(rand_x,rand_y) - g(rand_x,rand_y))/f(rand_x,rand_y)
q = (f(x_temp,y_temp) - g(x_temp,y_temp))/f(x_temp,y_temp)
#print(z)
#print(q)
if False:
f = lambda x,y,z : 3.0*x**2.0 + x*y + 4.0*y**2.0 - 5*z**2.0 + 1.5*x*z
dfdx = lambda x,y,z : 6.0*x + y + 1.5*z
dfdy = lambda x,y,z : x + 8.0*y
dfdz = lambda x,y,z : -10.0*z + 1.5*x
x_list = np.linspace(0,5,11,dtype=float)
y_list = np.linspace(0,5,11,dtype=float)
z_list = np.linspace(0,5,101,dtype=float)
x_temp,y_temp,z_temp = np.meshgrid(x_list,y_list,z_list,indexing='ij')
g = TrilinearInterp(f(x_temp,y_temp,z_temp),x_list,y_list,z_list)
rand_x = RNG.rand(1000)*5.0
rand_y = RNG.rand(1000)*5.0
rand_z = RNG.rand(1000)*5.0
z = (f(rand_x,rand_y,rand_z) - g(rand_x,rand_y,rand_z))/f(rand_x,rand_y,rand_z)
q = (dfdx(rand_x,rand_y,rand_z) - g.derivativeX(rand_x,rand_y,rand_z))/dfdx(rand_x,rand_y,rand_z)
r = (dfdy(rand_x,rand_y,rand_z) - g.derivativeY(rand_x,rand_y,rand_z))/dfdy(rand_x,rand_y,rand_z)
p = (dfdz(rand_x,rand_y,rand_z) - g.derivativeZ(rand_x,rand_y,rand_z))/dfdz(rand_x,rand_y,rand_z)
p.sort()
plt.plot(p)
if False:
f = lambda w,x,y,z : 4.0*w*z - 2.5*w*x + w*y + 6.0*x*y - 10.0*x*z + 3.0*y*z - 7.0*z + 4.0*x + 2.0*y - 5.0*w
dfdw = lambda w,x,y,z : 4.0*z - 2.5*x + y - 5.0
dfdx = lambda w,x,y,z : -2.5*w + 6.0*y - 10.0*z + 4.0
dfdy = lambda w,x,y,z : w + 6.0*x + 3.0*z + 2.0
dfdz = lambda w,x,y,z : 4.0*w - 10.0*x + 3.0*y - 7
w_list = np.linspace(0,5,16,dtype=float)
x_list = np.linspace(0,5,16,dtype=float)
y_list = np.linspace(0,5,16,dtype=float)
z_list = np.linspace(0,5,16,dtype=float)
w_temp,x_temp,y_temp,z_temp = np.meshgrid(w_list,x_list,y_list,z_list,indexing='ij')
mySearch = lambda trash,x : np.floor(x/5*32).astype(int)
g = QuadlinearInterp(f(w_temp,x_temp,y_temp,z_temp),w_list,x_list,y_list,z_list)
N = 1000000
rand_w = RNG.rand(N)*5.0
rand_x = RNG.rand(N)*5.0
rand_y = RNG.rand(N)*5.0
rand_z = RNG.rand(N)*5.0
t_start = clock()
z = (f(rand_w,rand_x,rand_y,rand_z) - g(rand_w,rand_x,rand_y,rand_z))/f(rand_w,rand_x,rand_y,rand_z)
t_end = clock()
#print(z)
print(t_end-t_start)
if False:
f = lambda x,y : 3.0*x**2.0 + x*y + 4.0*y**2.0
dfdx = lambda x,y : 6.0*x + y
dfdy = lambda x,y : x + 8.0*y
warp_factor = 0.01
x_list = np.linspace(0,5,71,dtype=float)
y_list = np.linspace(0,5,51,dtype=float)
x_temp,y_temp = np.meshgrid(x_list,y_list,indexing='ij')
x_adj = x_temp + warp_factor*(RNG.rand(x_list.size,y_list.size) - 0.5)
y_adj = y_temp + warp_factor*(RNG.rand(x_list.size,y_list.size) - 0.5)
g = Curvilinear2DInterp(f(x_adj,y_adj),x_adj,y_adj)
rand_x = RNG.rand(1000)*5.0
rand_y = RNG.rand(1000)*5.0
t_start = clock()
z = (f(rand_x,rand_y) - g(rand_x,rand_y))/f(rand_x,rand_y)
q = (dfdx(rand_x,rand_y) - g.derivativeX(rand_x,rand_y))/dfdx(rand_x,rand_y)
r = (dfdy(rand_x,rand_y) - g.derivativeY(rand_x,rand_y))/dfdy(rand_x,rand_y)
t_end = clock()
z.sort()
q.sort()
r.sort()
#print(z)
print(t_end-t_start)
if False:
f = lambda x,y,z : 3.0*x**2.0 + x*y + 4.0*y**2.0 - 5*z**2.0 + 1.5*x*z
dfdx = lambda x,y,z : 6.0*x + y + 1.5*z
dfdy = lambda x,y,z : x + 8.0*y
dfdz = lambda x,y,z : -10.0*z + 1.5*x
warp_factor = 0.01
x_list = np.linspace(0,5,11,dtype=float)
y_list = np.linspace(0,5,11,dtype=float)
z_list = np.linspace(0,5,101,dtype=float)
x_temp,y_temp = np.meshgrid(x_list,y_list,indexing='ij')
xyInterpolators = []
for j in range(z_list.size):
x_adj = x_temp + warp_factor*(RNG.rand(x_list.size,y_list.size) - 0.5)
y_adj = y_temp + warp_factor*(RNG.rand(x_list.size,y_list.size) - 0.5)
z_temp = z_list[j]*np.ones(x_adj.shape)
thisInterp = Curvilinear2DInterp(f(x_adj,y_adj,z_temp),x_adj,y_adj)
xyInterpolators.append(thisInterp)
g = LinearInterpOnInterp2D(xyInterpolators,z_list)
N = 1000
rand_x = RNG.rand(N)*5.0
rand_y = RNG.rand(N)*5.0
rand_z = RNG.rand(N)*5.0
z = (f(rand_x,rand_y,rand_z) - g(rand_x,rand_y,rand_z))/f(rand_x,rand_y,rand_z)
p = (dfdz(rand_x,rand_y,rand_z) - g.derivativeZ(rand_x,rand_y,rand_z))/dfdz(rand_x,rand_y,rand_z)
p.sort()
plt.plot(p)
if False:
f = lambda w,x,y,z : 4.0*w*z - 2.5*w*x + w*y + 6.0*x*y - 10.0*x*z + 3.0*y*z - 7.0*z + 4.0*x + 2.0*y - 5.0*w
dfdw = lambda w,x,y,z : 4.0*z - 2.5*x + y - 5.0
dfdx = lambda w,x,y,z : -2.5*w + 6.0*y - 10.0*z + 4.0
dfdy = lambda w,x,y,z : w + 6.0*x + 3.0*z + 2.0
dfdz = lambda w,x,y,z : 4.0*w - 10.0*x + 3.0*y - 7
warp_factor = 0.1
w_list = np.linspace(0,5,16,dtype=float)
x_list = np.linspace(0,5,16,dtype=float)
y_list = np.linspace(0,5,16,dtype=float)
z_list = np.linspace(0,5,16,dtype=float)
w_temp,x_temp = np.meshgrid(w_list,x_list,indexing='ij')
wxInterpolators = []
for i in range(y_list.size):
temp = []
for j in range(z_list.size):
w_adj = w_temp + warp_factor*(RNG.rand(w_list.size,x_list.size) - 0.5)
x_adj = x_temp + warp_factor*(RNG.rand(w_list.size,x_list.size) - 0.5)
y_temp = y_list[i]*np.ones(w_adj.shape)
z_temp = z_list[j]*np.ones(w_adj.shape)
thisInterp = Curvilinear2DInterp(f(w_adj,x_adj,y_temp,z_temp),w_adj,x_adj)
temp.append(thisInterp)
wxInterpolators.append(temp)
g = BilinearInterpOnInterp2D(wxInterpolators,y_list,z_list)
N = 1000000
rand_w = RNG.rand(N)*5.0
rand_x = RNG.rand(N)*5.0
rand_y = RNG.rand(N)*5.0
rand_z = RNG.rand(N)*5.0
t_start = clock()
z = (f(rand_w,rand_x,rand_y,rand_z) - g(rand_w,rand_x,rand_y,rand_z))/f(rand_w,rand_x,rand_y,rand_z)
t_end = clock()
z.sort()
print(z)
print(t_end-t_start)
if __name__ == '__main__':
main()
| HARK/interpolation.py | 159,390 | Bilinear full (or tensor) grid interpolation of a function f(x,y).
A 3D interpolator that bilinearly interpolates among a list of lists of 1D
interpolators.
A 4D interpolation method that bilinearly interpolates among "layers" of
arbitrary 2D interpolations. Useful for models with two endogenous state
variables and two exogenous state variables when solving with the endogenous
grid method. NOTE: should not be used if an exogenous 4D grid is used, will
be significantly slower than QuadlinearInterp.
A class for representing trivial functions that return the same real output for any input. This
is convenient for models where an object might be a (non-trivial) function, but in some variations
that object is just a constant number. Rather than needing to make a (Bi/Tri/Quad)-
LinearInterpolation with trivial state grids and the same f_value in every entry, ConstantFunction
allows the user to quickly make a constant/trivial function. This comes up, e.g., in models
with endogenous pricing of insurance contracts; a contract's premium might depend on some state
variables of the individual, but in some variations the premium of a contract is just a number.
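The idea above can be sketched in a few lines (a minimal illustration based on the description, not HARK's actual ConstantFunction class; the class name here is made up):

```python
import numpy as np

class ConstantFunctionSketch:
    """Callable that returns the same constant for any input, shaped like args[0]."""
    def __init__(self, value):
        self.value = value

    def __call__(self, *args):
        # Return an array of the constant with the same shape as the first input.
        return self.value * np.ones_like(np.asarray(args[0], dtype=float))

    def derivative(self, *args):
        # The derivative of a constant function is zero everywhere.
        return np.zeros_like(np.asarray(args[0], dtype=float))

# E.g., an insurance premium that is just a number in this model variation.
premium = ConstantFunctionSketch(2.5)
out = premium(np.linspace(0.0, 1.0, 5), np.zeros(5))   # extra arguments are ignored
```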
An interpolating function using piecewise cubic splines. Matches level and
slope of 1D function at gridpoints, smoothly interpolating in between.
Extrapolation above highest gridpoint approaches a limiting linear function
if desired (linear extrapolation is also available).
A 2D interpolation method for curvilinear or "warped grid" interpolation, as
in White (2015). Used for models with two endogenous states that are solved
with the endogenous grid method.
A wrapper class for 1D interpolation methods in HARK.
A wrapper class for 2D interpolation methods in HARK.
A wrapper class for 3D interpolation methods in HARK.
A wrapper class for 4D interpolation methods in HARK.
A fairly trivial interpolator that simply returns one of its arguments. Useful for avoiding
numeric error in extreme cases.
A "from scratch" 1D linear interpolation class. Allows for linear or decay
extrapolation (approaching a limiting linear function from below).
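For the simple linear case, the core technique is the same piecewise-linear lookup that numpy's `np.interp` performs (a sketch of the technique only; HARK's LinearInterp adds the decay-extrapolation option described above):

```python
import numpy as np

x_grid = np.linspace(0.0, 5.0, 11)
y_grid = 2.0 * x_grid + 1.0          # f(x) = 2x + 1 is linear, so interpolation is exact

x_query = np.array([0.25, 1.7, 4.9])
y_hat = np.interp(x_query, x_grid, y_grid)
```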
A 2D interpolator that linearly interpolates among a list of 1D interpolators.
A 3D interpolation method that linearly interpolates between "layers" of
arbitrary 2D interpolations. Useful for models with two endogenous state
variables and one exogenous state variable when solving with the endogenous
grid method. NOTE: should not be used if an exogenous 3D grid is used, will
be significantly slower than TrilinearInterp.
The lower envelope of a finite set of 1D functions, each of which can be of
any class that has the methods __call__, derivative, and eval_with_derivative.
Generally: it combines HARKinterpolator1Ds.
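On a common grid, the lower envelope is just a pointwise minimum (a numpy sketch of the idea; the HARK class additionally tracks which function attains the minimum so it can dispatch derivative calls):

```python
import numpy as np

x = np.linspace(-2.0, 2.0, 201)
f1 = x**2                # a parabola
f2 = np.abs(x) + 0.5     # a shifted V shape

envelope = np.minimum(f1, f2)
# Index of the binding function at each point (used for derivatives).
which = np.argmin(np.stack([f1, f2]), axis=0)
```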
The lower envelope of a finite set of 2D functions, each of which can be of
any class that has the methods __call__, derivativeX, and derivativeY.
Generally: it combines HARKinterpolator2Ds.
The lower envelope of a finite set of 3D functions, each of which can be of
any class that has the methods __call__, derivativeX, derivativeY, and
derivativeZ. Generally: it combines HARKinterpolator3Ds.
Quadlinear full (or tensor) grid interpolation of a function f(w,x,y,z).
Trilinear full (or tensor) grid interpolation of a function f(x,y,z).
A 4D interpolator that trilinearly interpolates among a list of lists of 1D interpolators.
The upper envelope of a finite set of 1D functions, each of which can be of
any class that has the methods __call__, derivative, and eval_with_derivative.
Generally: it combines HARKinterpolator1Ds.
A class for representing a function with two real inputs whose lower bound
in the first input depends on the second input. Useful for managing curved
natural borrowing constraints, as occurs in the persistent shocks model.
A class for representing a function with three real inputs whose lower bound
in the first input depends on the second input. Useful for managing curved
natural borrowing constraints.
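The wrapper idea can be sketched as evaluating the shifted function after subtracting the state-dependent lower bound (the helper and function names below are hypothetical, a sketch based on the description rather than the HARK implementation):

```python
import numpy as np

def make_var_lower_bound(func, lower_bound):
    """Return g(x, y) = func(x - lower_bound(y), y), where func is defined on
    the shifted first argument and lower_bound gives the curved constraint."""
    def g(x, y):
        return func(x - lower_bound(y), y)
    return g

f_shifted = lambda a, y: np.sqrt(a) + y    # defined for shifted input a = x - bound(y) >= 0
bound = lambda y: -0.5 * y                 # e.g. a natural borrowing constraint
g = make_var_lower_bound(f_shifted, bound)

# Evaluating at x = 3.0, y = 2.0 shifts the first input to a = 3.0 - (-1.0) = 4.0.
val = g(3.0, 2.0)                          # sqrt(4.0) + 2.0 = 4.0
```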
Evaluates the interpolated function at the given input.
Parameters
----------
x : np.array or float
Real values to be evaluated in the interpolated function.
Returns
-------
y : np.array or float
The interpolated function evaluated at x: y = f(x), with the same
shape as x.
Evaluates the interpolated function at the given input.
Parameters
----------
x : np.array or float
Real values to be evaluated in the interpolated function.
y : np.array or float
Real values to be evaluated in the interpolated function; must be
the same size as x.
Returns
-------
fxy : np.array or float
The interpolated function evaluated at x,y: fxy = f(x,y), with the
same shape as x and y.
Evaluates the interpolated function at the given input.
Parameters
----------
x : np.array or float
Real values to be evaluated in the interpolated function.
y : np.array or float
Real values to be evaluated in the interpolated function; must be
the same size as x.
z : np.array or float
Real values to be evaluated in the interpolated function; must be
the same size as x.
Returns
-------
fxyz : np.array or float
The interpolated function evaluated at x,y,z: fxyz = f(x,y,z), with
the same shape as x, y, and z.
Evaluates the interpolated function at the given input.
Parameters
----------
w : np.array or float
Real values to be evaluated in the interpolated function.
x : np.array or float
Real values to be evaluated in the interpolated function; must be
the same size as w.
y : np.array or float
Real values to be evaluated in the interpolated function; must be
the same size as w.
z : np.array or float
Real values to be evaluated in the interpolated function; must be
the same size as w.
Returns
-------
fwxyz : np.array or float
The interpolated function evaluated at w,x,y,z: fwxyz = f(w,x,y,z),
with the same shape as w, x, y, and z.
Evaluate the identity function.
Evaluate the constant function. The first input must exist and should be an array.
Returns an array of identical shape to args[0] (if it exists).
Evaluate the function at given state space points.
Parameters
----------
x : np.array
First input values.
y : np.array
Second input values; should be of same shape as x.
Returns
-------
f_out : np.array
Function evaluated at (x,y), of same shape as inputs.
Evaluate the function at given state space points.
Parameters
----------
x : np.array
First input values.
y : np.array
Second input values; should be of same shape as x.
z : np.array
Third input values; should be of same shape as x.
Returns
-------
f_out : np.array
Function evaluated at (x,y,z), of same shape as inputs.
Constructor for a new IdentityFunction.
Parameters
----------
i_dim : int
Index of the dimension on which the identity is defined. f(*x) = x[i]
n_dims : int
Total number of input dimensions for this function.
Returns
-------
None
Make a new ConstantFunction object.
Parameters
----------
value : float
The constant value that the function returns.
Returns
-------
None
The interpolation constructor to make a new linear spline interpolation.
Parameters
----------
x_list : np.array
List of x values composing the grid.
y_list : np.array
List of y values, representing f(x) at the points in x_list.
intercept_limit : float
Intercept of limiting linear function.
slope_limit : float
Slope of limiting linear function.
lower_extrap : boolean
Indicator for whether lower extrapolation is allowed. False means
f(x) = NaN for x < min(x_list); True means linear extrapolation.
Returns
-------
new instance of LinearInterp
NOTE: When no input is given for the limiting linear function, linear
extrapolation is used above the highest gridpoint.
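The decay extrapolation mentioned above can be sketched numerically: above the last gridpoint, the gap between the limiting linear function and the interpolant shrinks exponentially. The decay-rate formula below is an assumption chosen so that both the level and the slope match at the last gridpoint; HARK's internal formula may differ.

```python
import numpy as np

x_grid = np.array([0.0, 1.0, 2.0])
y_grid = np.array([0.0, 0.8, 1.5])
intercept_limit, slope_limit = 0.5, 0.6      # limiting line L(x) = 0.5 + 0.6*x

x_n, y_n = x_grid[-1], y_grid[-1]
gap = (intercept_limit + slope_limit * x_n) - y_n                  # L(x_n) - f(x_n) > 0
slope_n = (y_grid[-1] - y_grid[-2]) / (x_grid[-1] - x_grid[-2])    # last-segment slope
decay = (slope_n - slope_limit) / gap        # assumed: chosen so the slope matches at x_n

def extrap(x):
    # Above the last gridpoint: limiting line minus an exponentially decaying gap.
    return (intercept_limit + slope_limit * x) - gap * np.exp(-decay * (x - x_n))

val_at_edge = extrap(x_n)    # equals y_n, since the gap is exactly `gap` at x_n
val_far = extrap(50.0)       # approaches the limiting line from below
```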
The interpolation constructor to make a new cubic spline interpolation.
Parameters
----------
x_list : np.array
List of x values composing the grid.
y_list : np.array
List of y values, representing f(x) at the points in x_list.
dydx_list : np.array
List of dydx values, representing f'(x) at the points in x_list
intercept_limit : float
Intercept of limiting linear function.
slope_limit : float
Slope of limiting linear function.
lower_extrap : boolean
Indicator for whether lower extrapolation is allowed. False means
f(x) = NaN for x < min(x_list); True means linear extrapolation.
Returns
-------
new instance of CubicInterp
NOTE: When no input is given for the limiting linear function, linear
extrapolation is used above the highest gridpoint.
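Matching both level and slope at the gridpoints is cubic Hermite interpolation; on a single interval it can be written with the standard Hermite basis (a from-scratch sketch of the technique, not HARK's CubicInterp itself):

```python
import numpy as np

def hermite_segment(x, x0, x1, y0, y1, d0, d1):
    """Cubic Hermite interpolation on [x0, x1], matching levels y0, y1 and
    slopes d0, d1 at the two endpoints."""
    h = x1 - x0
    t = (x - x0) / h
    h00 = 2*t**3 - 3*t**2 + 1
    h10 = t**3 - 2*t**2 + t
    h01 = -2*t**3 + 3*t**2
    h11 = t**3 - t**2
    return h00*y0 + h10*h*d0 + h01*y1 + h11*h*d1

# A cubic polynomial is reproduced exactly when endpoint levels and slopes are exact.
f  = lambda x: x**3 - 2.0*x + 1.0
df = lambda x: 3.0*x**2 - 2.0
xs = np.linspace(0.0, 2.0, 9)
vals = hermite_segment(xs, 0.0, 2.0, f(0.0), f(2.0), df(0.0), df(2.0))
```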
Constructor to make a new bilinear interpolation.
Parameters
----------
f_values : numpy.array
An array of size (x_n,y_n) such that f_values[i,j] = f(x_list[i],y_list[j])
x_list : numpy.array
An array of x values, with length designated x_n.
y_list : numpy.array
An array of y values, with length designated y_n.
xSearchFunc : function
An optional function that returns the reference location for x values:
indices = xSearchFunc(x_list,x). Default is np.searchsorted
ySearchFunc : function
An optional function that returns the reference location for y values:
indices = ySearchFunc(y_list,y). Default is np.searchsorted
Returns
-------
new instance of BilinearInterp
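The bilinear formula on one grid cell can be written out directly (a sketch of the tensor-grid technique; HARK's BilinearInterp vectorizes this over arrays and supports custom search functions, as described above):

```python
import numpy as np

def bilinear(f_vals, x_list, y_list, x, y):
    """Bilinear interpolation of f_vals[i, j] = f(x_list[i], y_list[j])."""
    # Locate the grid cell containing (x, y); clip to stay inside the grid.
    i = np.clip(np.searchsorted(x_list, x) - 1, 0, len(x_list) - 2)
    j = np.clip(np.searchsorted(y_list, y) - 1, 0, len(y_list) - 2)
    a = (x - x_list[i]) / (x_list[i+1] - x_list[i])
    b = (y - y_list[j]) / (y_list[j+1] - y_list[j])
    return ((1-a)*(1-b)*f_vals[i, j]   + a*(1-b)*f_vals[i+1, j]
          + (1-a)*b    *f_vals[i, j+1] + a*b    *f_vals[i+1, j+1])

# Bilinear interpolation is exact for f(x, y) = c0 + c1*x + c2*y + c3*x*y.
f = lambda x, y: 1.0 + 2.0*x - 3.0*y + 0.5*x*y
x_list = np.linspace(0.0, 5.0, 6)
y_list = np.linspace(0.0, 5.0, 6)
fv = f(x_list[:, None], y_list[None, :])
approx = bilinear(fv, x_list, y_list, 2.3, 4.1)
```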
Constructor to make a new trilinear interpolation.
Parameters
----------
f_values : numpy.array
An array of size (x_n,y_n,z_n) such that f_values[i,j,k] =
f(x_list[i],y_list[j],z_list[k])
x_list : numpy.array
An array of x values, with length designated x_n.
y_list : numpy.array
An array of y values, with length designated y_n.
z_list : numpy.array
An array of z values, with length designated z_n.
xSearchFunc : function
An optional function that returns the reference location for x values:
indices = xSearchFunc(x_list,x). Default is np.searchsorted
ySearchFunc : function
An optional function that returns the reference location for y values:
indices = ySearchFunc(y_list,y). Default is np.searchsorted
zSearchFunc : function
An optional function that returns the reference location for z values:
indices = zSearchFunc(z_list,z). Default is np.searchsorted
Returns
-------
new instance of TrilinearInterp
Constructor to make a new quadlinear interpolation.
Parameters
----------
f_values : numpy.array
An array of size (w_n,x_n,y_n,z_n) such that f_values[i,j,k,l] =
f(w_list[i],x_list[j],y_list[k],z_list[l])
w_list : numpy.array
An array of w values, with length designated w_n.
x_list : numpy.array
An array of x values, with length designated x_n.
y_list : numpy.array
An array of y values, with length designated y_n.
z_list : numpy.array
An array of z values, with length designated z_n.
wSearchFunc : function
An optional function that returns the reference location for w values:
indices = wSearchFunc(w_list,w). Default is np.searchsorted
xSearchFunc : function
An optional function that returns the reference location for x values:
indices = xSearchFunc(x_list,x). Default is np.searchsorted
ySearchFunc : function
An optional function that returns the reference location for y values:
indices = ySearchFunc(y_list,y). Default is np.searchsorted
zSearchFunc : function
An optional function that returns the reference location for z values:
indices = zSearchFunc(z_list,z). Default is np.searchsorted
Returns
-------
new instance of QuadlinearInterp
Constructor to make a new lower envelope interpolation.
Parameters
----------
*functions : function
Any number of real functions; often instances of HARKinterpolator1D
Returns
-------
new instance of LowerEnvelope
Constructor to make a new upper envelope interpolation.
Parameters
----------
*functions : function
Any number of real functions; often instances of HARKinterpolator1D
Returns
-------
new instance of UpperEnvelope
Constructor to make a new lower envelope interpolation.
Parameters
----------
*functions : function
Any number of real functions; often instances of HARKinterpolator2D
Returns
-------
new instance of LowerEnvelope2D
Constructor to make a new lower envelope interpolation.
Parameters
----------
*functions : function
Any number of real functions; often instances of HARKinterpolator3D
Returns
-------
None
Make a new instance of VariableLowerBoundFunc2D.
Parameters
----------
func : function
A function f: (R_+ x R) --> R representing the function of interest
shifted by its lower bound in the first input.
lowerBound : function
The lower bound in the first input of the function of interest, as
a function of the second input.
Returns
-------
None
Make a new instance of VariableLowerBoundFunc3D.
Parameters
----------
func : function
A function f: (R_+ x R^2) --> R representing the function of interest
shifted by its lower bound in the first input.
lowerBound : function
The lower bound in the first input of the function of interest, as
a function of the second input.
Returns
-------
None
Constructor for the class, generating an approximation to a function of
the form f(x,y) using interpolations over f(x,y_0) for a fixed grid of
y_0 values.
Parameters
----------
xInterpolators : [HARKinterpolator1D]
A list of 1D interpolations over the x variable. The nth element of
xInterpolators represents f(x,y_values[n]).
y_values: numpy.array
An array of y values equal in length to xInterpolators.
Returns
-------
new instance of LinearInterpOnInterp1D
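The construction can be sketched with `np.interp` standing in for the 1D interpolators: evaluate the two x-interpolators that bracket y, then blend them linearly (a sketch of the technique, not the HARK class):

```python
import numpy as np

x_grid = np.linspace(0.0, 5.0, 6)
y_vals = np.array([0.0, 1.0, 2.0])
f = lambda x, y: x**2 + x*y           # linear in y for fixed x, so blending is exact

# One "interpolator" per y gridpoint: here just the tabulated values f(x_grid, y0).
x_interps = [f(x_grid, y0) for y0 in y_vals]

def eval_on_interp1d(x, y):
    # Find the bracketing y gridpoints and the blending weight.
    j = max(min(np.searchsorted(y_vals, y, side='right') - 1, len(y_vals) - 2), 0)
    alpha = (y - y_vals[j]) / (y_vals[j+1] - y_vals[j])
    lo = np.interp(x, x_grid, x_interps[j])
    hi = np.interp(x, x_grid, x_interps[j+1])
    return (1 - alpha) * lo + alpha * hi

val = eval_on_interp1d(3.0, 0.5)      # x on the grid and f linear in y -> exact: 10.5
```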
Constructor for the class, generating an approximation to a function of
the form f(x,y,z) using interpolations over f(x,y_0,z_0) for a fixed grid
of y_0 and z_0 values.
Parameters
----------
xInterpolators : [[HARKinterpolator1D]]
A list of lists of 1D interpolations over the x variable. The i,j-th
element of xInterpolators represents f(x,y_values[i],z_values[j]).
y_values: numpy.array
An array of y values equal in length to xInterpolators.
z_values: numpy.array
An array of z values equal in length to xInterpolators[0].
Returns
-------
new instance of BilinearInterpOnInterp1D
Constructor for the class, generating an approximation to a function of
the form f(w,x,y,z) using interpolations over f(w,x_0,y_0,z_0) for a fixed
grid of y_0 and z_0 values.
Parameters
----------
wInterpolators : [[[HARKinterpolator1D]]]
A list of lists of lists of 1D interpolations over the x variable.
The i,j,k-th element of wInterpolators represents f(w,x_values[i],y_values[j],z_values[k]).
x_values: numpy.array
An array of x values equal in length to wInterpolators.
y_values: numpy.array
An array of y values equal in length to wInterpolators[0].
z_values: numpy.array
An array of z values equal in length to wInterpolators[0][0]
Returns
-------
new instance of TrilinearInterpOnInterp1D
Constructor for the class, generating an approximation to a function of
the form f(x,y,z) using interpolations over f(x,y,z_0) for a fixed grid
of z_0 values.
Parameters
----------
xyInterpolators : [HARKinterpolator2D]
A list of 2D interpolations over the x and y variables. The nth
element of xyInterpolators represents f(x,y,z_values[n]).
z_values: numpy.array
An array of z values equal in length to xyInterpolators.
Returns
-------
new instance of LinearInterpOnInterp2D
Constructor for the class, generating an approximation to a function of
the form f(w,x,y,z) using interpolations over f(w,x,y_0,z_0) for a fixed
grid of y_0 and z_0 values.
Parameters
----------
wxInterpolators : [[HARKinterpolator2D]]
A list of lists of 2D interpolations over the w and x variables.
The i,j-th element of wxInterpolators represents
f(w,x,y_values[i],z_values[j]).
y_values: numpy.array
An array of y values equal in length to wxInterpolators.
z_values: numpy.array
An array of z values equal in length to wxInterpolators[0].
Returns
-------
new instance of BilinearInterpOnInterp2D
Constructor for 2D curvilinear interpolation for a function f(x,y)
Parameters
----------
f_values: numpy.array
A 2D array of function values such that f_values[i,j] =
f(x_values[i,j],y_values[i,j]).
x_values: numpy.array
A 2D array of x values of the same size as f_values.
y_values: numpy.array
A 2D array of y values of the same size as f_values.
Returns
-------
new instance of Curvilinear2DInterp
Interpolated function derivative evaluator, to be defined in subclasses.
Evaluate the derivative of the function. The first input must exist and should be an array.
Returns an array of identical shape to args[0] (if it exists). This is an array of zeros.
Returns the first derivative of the interpolated function at each value
in x. Only called internally by HARKinterpolator1D.derivative (etc).
Returns the first derivative of the interpolated function at each value
in x. Only called internally by HARKinterpolator1D.derivative (etc).
Returns the first derivative of the function at each value in x. Only
called internally by HARKinterpolator1D.derivative.
Returns the first derivative of the function at each value in x. Only
called internally by HARKinterpolator1D.derivative.
Interpolated function w-derivative evaluator, to be defined in subclasses.
Returns the derivative with respect to w of the interpolated function
at each value in w,x,y,z. Only called internally by HARKinterpolator4D.derivativeW.
Returns the derivative with respect to w of the interpolated function
at each value in w,x,y,z. Only called internally by HARKinterpolator4D.derivativeW.
Returns the derivative with respect to w of the interpolated function
at each value in w,x,y,z. Only called internally by HARKinterpolator4D.derivativeW.
Interpolated function x-derivative evaluator, to be defined in subclasses.
Interpolated function x-derivative evaluator, to be defined in subclasses.
Interpolated function w-derivative evaluator, to be defined in subclasses.
Returns the derivative with respect to x of the interpolated function
at each value in x,y. Only called internally by HARKinterpolator2D.derivativeX.
Returns the derivative with respect to x of the interpolated function
at each value in x,y,z. Only called internally by HARKinterpolator3D.derivativeX.
Returns the derivative with respect to x of the interpolated function
at each value in w,x,y,z. Only called internally by HARKinterpolator4D.derivativeX.
Returns the first derivative of the function with respect to X at each
value in (x,y). Only called internally by HARKinterpolator2D._derX.
Returns the first derivative of the function with respect to X at each
value in (x,y,z). Only called internally by HARKinterpolator3D._derX.
Returns the derivative with respect to x of the interpolated function
at each value in x,y. Only called internally by HARKinterpolator2D.derivativeX.
Returns the derivative with respect to x of the interpolated function
at each value in x,y,z. Only called internally by HARKinterpolator3D.derivativeX.
Returns the derivative with respect to x of the interpolated function
at each value in w,x,y,z. Only called internally by HARKinterpolator4D.derivativeX.
Returns the derivative with respect to x of the interpolated function
at each value in x,y,z. Only called internally by HARKinterpolator3D.derivativeX.
Returns the derivative with respect to x of the interpolated function
at each value in w,x,y,z. Only called internally by HARKinterpolator4D.derivativeX.
Returns the derivative with respect to x of the interpolated function
at each value in x,y. Only called internally by HARKinterpolator2D.derivativeX.
Interpolated function y-derivative evaluator, to be defined in subclasses.
Interpolated function y-derivative evaluator, to be defined in subclasses.
Interpolated function w-derivative evaluator, to be defined in subclasses.
Returns the derivative with respect to y of the interpolated function
at each value in x,y. Only called internally by HARKinterpolator2D.derivativeY.
Returns the derivative with respect to y of the interpolated function
at each value in x,y,z. Only called internally by HARKinterpolator3D.derivativeY.
Returns the derivative with respect to y of the interpolated function
at each value in w,x,y,z. Only called internally by HARKinterpolator4D.derivativeY.
Returns the first derivative of the function with respect to Y at each
value in (x,y). Only called internally by HARKinterpolator2D._derY.
Returns the first derivative of the function with respect to Y at each
value in (x,y,z). Only called internally by HARKinterpolator3D._derY.
Returns the derivative with respect to y of the interpolated function
at each value in x,y. Only called internally by HARKinterpolator2D.derivativeY.
Returns the derivative with respect to y of the interpolated function
at each value in x,y,z. Only called internally by HARKinterpolator3D.derivativeY.
Returns the derivative with respect to y of the interpolated function
at each value in w,x,y,z. Only called internally by HARKinterpolator4D.derivativeY.
Returns the derivative with respect to y of the interpolated function
at each value in x,y,z. Only called internally by HARKinterpolator3D.derivativeY.
Returns the derivative with respect to y of the interpolated function
at each value in w,x,y,z. Only called internally by HARKinterpolator4D.derivativeY.
Returns the derivative with respect to y of the interpolated function
at each value in x,y. Only called internally by HARKinterpolator2D.derivativeY.
Interpolated function y-derivative evaluator, to be defined in subclasses.
Interpolated function w-derivative evaluator, to be defined in subclasses.
Returns the derivative with respect to z of the interpolated function
at each value in x,y,z. Only called internally by HARKinterpolator3D.derivativeZ.
Returns the derivative with respect to z of the interpolated function
at each value in w,x,y,z. Only called internally by HARKinterpolator4D.derivativeZ.
Returns the first derivative of the function with respect to Z at each
value in (x,y,z). Only called internally by HARKinterpolator3D._derZ.
Returns the derivative with respect to z of the interpolated function
at each value in x,y,z. Only called internally by HARKinterpolator3D.derivativeZ.
Returns the derivative with respect to z of the interpolated function
at each value in w,x,y,z. Only called internally by HARKinterpolator4D.derivativeZ.
Returns the derivative with respect to z of the interpolated function
at each value in x,y,z. Only called internally by HARKinterpolator3D.derivativeZ.
Returns the derivative with respect to z of the interpolated function
at each value in w,x,y,z. Only called internally by HARKinterpolator4D.derivativeZ.
Interpolated function and derivative evaluator, to be defined in subclasses.
Returns the level and first derivative of the function at each value in
x. Only called internally by HARKinterpolator1D.eval_and_der (etc).
Returns the level and first derivative of the function at each value in
x. Only called internally by HARKinterpolator1D.eval_and_der (etc).
Returns the level and first derivative of the function at each value in
x. Only called internally by HARKinterpolator1D.eval_and_der.
Returns the level and first derivative of the function at each value in
x. Only called internally by HARKinterpolator1D.eval_and_der.
Returns the level and/or first derivative of the function at each value in
x. Only called internally by HARKinterpolator1D.eval_and_der (etc).
Parameters
----------
x_list : scalar or np.array
Set of points where we want to evaluate the interpolated function and/or its derivative.
_eval : boolean
Indicator for whether to evaluate the level of the interpolated function.
_Der : boolean
Indicator for whether to evaluate the derivative of the interpolated function.
Returns
-------
A list including the level and/or derivative of the interpolated function where requested.
Interpolated function evaluator, to be defined in subclasses.
Interpolated function evaluator, to be defined in subclasses.
Interpolated function evaluator, to be defined in subclasses.
Interpolated function evaluator, to be defined in subclasses.
Returns the level of the interpolated function at each value in x. Only
called internally by HARKinterpolator1D.__call__ (etc).
Returns the level of the interpolated function at each value in x. Only
called internally by HARKinterpolator1D.__call__ (etc).
Returns the level of the interpolated function at each value in x,y.
Only called internally by HARKinterpolator2D.__call__ (etc).
Returns the level of the interpolated function at each value in x,y,z.
Only called internally by HARKinterpolator3D.__call__ (etc).
Returns the level of the interpolated function at each value in w,x,y,z.
Only called internally by HARKinterpolator4D.__call__ (etc).
Returns the level of the function at each value in x as the minimum among
all of the functions. Only called internally by HARKinterpolator1D.__call__.
Returns the level of the function at each value in x as the maximum among
all of the functions. Only called internally by HARKinterpolator1D.__call__.
Returns the level of the function at each value in (x,y) as the minimum
among all of the functions. Only called internally by
HARKinterpolator2D.__call__.
Returns the level of the function at each value in (x,y,z) as the minimum
among all of the functions. Only called internally by
HARKinterpolator3D.__call__.
Returns the level of the interpolated function at each value in x,y.
Only called internally by HARKinterpolator2D.__call__ (etc).
Returns the level of the interpolated function at each value in x,y,z.
Only called internally by HARKinterpolator3D.__call__ (etc).
Returns the level of the interpolated function at each value in w,x,y,z.
Only called internally by HARKinterpolator4D.__call__ (etc).
Returns the level of the interpolated function at each value in x,y,z.
Only called internally by HARKinterpolator3D.__call__ (etc).
Returns the level of the interpolated function at each value in w,x,y,z.
Only called internally by HARKinterpolator4D.__call__ (etc).
Returns the level of the interpolated function at each value in x,y.
Only called internally by HARKinterpolator2D.__call__ (etc).
Check whether x is of a scalar type or is a 0-dim array.
Parameters
----------
x : anything
An input to be checked for scalar-ness.
Returns
-------
is_scalar : boolean
True if the input is a scalar, False otherwise.
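A robust scalar check has to handle 0-dimensional numpy arrays, which `np.isscalar` rejects (a sketch of the check described above; the function name is made up):

```python
import numpy as np

def is_scalar_like(x):
    """True for Python scalars and 0-dim numpy arrays, False for real arrays."""
    return np.isscalar(x) or (isinstance(x, np.ndarray) and x.ndim == 0)

checks = (is_scalar_like(3.0),
          is_scalar_like(np.array(3.0)),    # 0-dim array: np.isscalar alone says False
          is_scalar_like(np.array([3.0])))  # 1-dim array: not a scalar
```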
Returns the choice probabilities given the choice specific value functions
`Vals`. Probabilities are degenerate if sigma == 0.0.
Parameters
----------
Vals : [numpy.array]
A numpy.array that holds choice specific values at common grid points.
sigma : float
A number that controls the variance of the taste shocks
Returns
-------
Probs : [numpy.array]
A numpy.array that holds the discrete choice probabilities
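With extreme-value taste shocks, the choice probabilities take a softmax form in Vals/sigma and degenerate to an argmax indicator as sigma approaches 0 (a numerically stable sketch of that formula, not HARK's exact implementation):

```python
import numpy as np

def choice_probs(vals, sigma):
    """Multinomial-logit choice probabilities over axis 0 of vals."""
    if sigma == 0.0:
        # Degenerate case: all probability mass on the best choice.
        best = np.argmax(vals, axis=0)
        probs = np.zeros_like(vals, dtype=float)
        np.put_along_axis(probs, best[None, ...], 1.0, axis=0)
        return probs
    # Shift by the max before exponentiating for numerical stability.
    z = (vals - vals.max(axis=0, keepdims=True)) / sigma
    ez = np.exp(z)
    return ez / ez.sum(axis=0, keepdims=True)

vals = np.array([[1.0, 3.0], [2.0, 0.5]])   # 2 choices at 2 grid points
p_smooth = choice_probs(vals, 0.5)
p_degenerate = choice_probs(vals, 0.0)
```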
Returns the optimal value given the choice specific value functions Vals.
Parameters
----------
Vals : [numpy.array]
A numpy.array that holds choice specific values at common grid points.
sigma : float
A number that controls the variance of the taste shocks
Returns
-------
V : [numpy.array]
A numpy.array that holds the integrated value function.
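The integrated value is the familiar log-sum-exp with scale sigma, computed with a max-shift for stability; at sigma == 0 it reduces to a plain maximum (a sketch of the formula described above, not HARK's exact implementation):

```python
import numpy as np

def log_sum(vals, sigma):
    """V = sigma * log(sum_j exp(vals_j / sigma)) over axis 0; max if sigma == 0."""
    if sigma == 0.0:
        return np.max(vals, axis=0)
    vmax = vals.max(axis=0)
    # Factor out the max so the exponentials cannot overflow.
    return vmax + sigma * np.log(np.exp((vals - vmax) / sigma).sum(axis=0))

vals = np.array([[1.0, 3.0], [2.0, 0.5]])
v0 = log_sum(vals, 0.0)     # plain maximum over choices: [2.0, 3.0]
v1 = log_sum(vals, 0.5)     # strictly above the max, by the option value of the shocks
```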
Returns the final optimal value and choice probabilities given the choice
specific value functions `Vals`. Probabilities are degenerate if sigma == 0.0.
Parameters
----------
Vals : [numpy.array]
A numpy.array that holds choice specific values at common grid points.
sigma : float
A number that controls the variance of the taste shocks
Returns
-------
V : [numpy.array]
A numpy.array that holds the integrated value function.
P : [numpy.array]
A numpy.array that holds the discrete choice probabilities
Evaluates the derivative of the interpolated function at the given input.
Parameters
----------
x : np.array or float
Real values to be evaluated in the interpolated function.
Returns
-------
dydx : np.array or float
The interpolated function's first derivative evaluated at x:
dydx = f'(x), with the same shape as x.
Returns the derivative of the function with respect to the first dimension.
Evaluates the partial derivative with respect to w (the first argument)
of the interpolated function at the given input.
Parameters
----------
w : np.array or float
Real values to be evaluated in the interpolated function.
x : np.array or float
Real values to be evaluated in the interpolated function; must be
the same size as w.
y : np.array or float
Real values to be evaluated in the interpolated function; must be
the same size as w.
z : np.array or float
Real values to be evaluated in the interpolated function; must be
the same size as w.
Returns
-------
dfdw : np.array or float
The derivative with respect to w of the interpolated function
evaluated at w,x,y,z: dfdw = f_w(w,x,y,z), with the same shape as inputs.
Returns the derivative of the function with respect to the W dimension.
This should only exist when n_dims >= 4.
Evaluates the partial derivative of the interpolated function with respect
to x (the first argument) at the given input.
Parameters
----------
x : np.array or float
Real values to be evaluated in the interpolated function.
y : np.array or float
Real values to be evaluated in the interpolated function; must be
the same size as x.
Returns
-------
dfdx : np.array or float
The derivative of the interpolated function with respect to x,
evaluated at x,y: dfdx = f_x(x,y), with the same shape as x and y.
Evaluates the partial derivative of the interpolated function with respect
to x (the first argument) at the given input.
Parameters
----------
x : np.array or float
Real values to be evaluated in the interpolated function.
y : np.array or float
Real values to be evaluated in the interpolated function; must be
the same size as x.
z : np.array or float
Real values to be evaluated in the interpolated function; must be
the same size as x.
Returns
-------
dfdx : np.array or float
The derivative with respect to x of the interpolated function evaluated
at x,y,z: dfdx = f_x(x,y,z), with the same shape as x, y, and z.
Evaluates the partial derivative with respect to x (the second argument)
of the interpolated function at the given input.
Parameters
----------
w : np.array or float
Real values to be evaluated in the interpolated function.
x : np.array or float
Real values to be evaluated in the interpolated function; must be
the same size as w.
y : np.array or float
Real values to be evaluated in the interpolated function; must be
the same size as w.
z : np.array or float
Real values to be evaluated in the interpolated function; must be
the same size as w.
Returns
-------
dfdx : np.array or float
The derivative with respect to x of the interpolated function
evaluated at w,x,y,z: dfdx = f_x(w,x,y,z), with the same shape as inputs.
Returns the derivative of the function with respect to the X dimension.
This is the first input whenever n_dims < 4 and the second input otherwise.
Evaluate the first derivative with respect to x of the function at given
state space points.
Parameters
----------
x : np.array
First input values.
y : np.array
Second input values; should be of same shape as x.
Returns
-------
dfdx_out : np.array
First derivative of function with respect to the first input,
evaluated at (x,y), of same shape as inputs.
Evaluate the first derivative with respect to x of the function at given
state space points.
Parameters
----------
x : np.array
First input values.
y : np.array
Second input values; should be of same shape as x.
z : np.array
Third input values; should be of same shape as x.
Returns
-------
dfdx_out : np.array
First derivative of function with respect to the first input,
evaluated at (x,y,z), of same shape as inputs.
Evaluates the partial derivative of the interpolated function with respect
to y (the second argument) at the given input.
Parameters
----------
x : np.array or float
Real values to be evaluated in the interpolated function.
y : np.array or float
Real values to be evaluated in the interpolated function; must be
the same size as x.
Returns
-------
dfdy : np.array or float
The derivative of the interpolated function with respect to y,
evaluated at x,y: dfdy = f_y(x,y), with the same shape as x and y.
Evaluates the partial derivative of the interpolated function with respect
to y (the second argument) at the given input.
Parameters
----------
x : np.array or float
Real values to be evaluated in the interpolated function.
y : np.array or float
Real values to be evaluated in the interpolated function; must be
the same size as x.
z : np.array or float
Real values to be evaluated in the interpolated function; must be
the same size as x.
Returns
-------
dfdy : np.array or float
The derivative with respect to y of the interpolated function evaluated
at x,y,z: dfdy = f_y(x,y,z), with the same shape as x, y, and z.
Evaluates the partial derivative with respect to y (the third argument)
of the interpolated function at the given input.
Parameters
----------
w : np.array or float
Real values to be evaluated in the interpolated function.
x : np.array or float
Real values to be evaluated in the interpolated function; must be
the same size as w.
y : np.array or float
Real values to be evaluated in the interpolated function; must be
the same size as w.
z : np.array or float
Real values to be evaluated in the interpolated function; must be
the same size as w.
Returns
-------
dfdy : np.array or float
The derivative with respect to y of the interpolated function
evaluated at w,x,y,z: dfdy = f_y(w,x,y,z), with the same shape as inputs.
Returns the derivative of the function with respect to the Y dimension.
This is the second input whenever n_dims < 4 and the third input otherwise.
Evaluate the first derivative with respect to y of the function at given
state space points.
Parameters
----------
x : np.array
First input values.
y : np.array
Second input values; should be of same shape as x.
Returns
-------
dfdy_out : np.array
First derivative of function with respect to the second input,
evaluated at (x,y), of same shape as inputs.
Evaluate the first derivative with respect to y of the function at given
state space points.
Parameters
----------
x : np.array
First input values.
y : np.array
Second input values; should be of same shape as x.
z : np.array
Third input values; should be of same shape as x.
Returns
-------
dfdy_out : np.array
First derivative of function with respect to the second input,
evaluated at (x,y,z), of same shape as inputs.
Evaluates the partial derivative of the interpolated function with respect
to z (the third argument) at the given input.
Parameters
----------
x : np.array or float
Real values to be evaluated in the interpolated function.
y : np.array or float
Real values to be evaluated in the interpolated function; must be
the same size as x.
z : np.array or float
Real values to be evaluated in the interpolated function; must be
the same size as x.
Returns
-------
dfdz : np.array or float
The derivative with respect to z of the interpolated function evaluated
at x,y,z: dfdz = f_z(x,y,z), with the same shape as x, y, and z.
Evaluates the partial derivative with respect to z (the fourth argument)
of the interpolated function at the given input.
Parameters
----------
w : np.array or float
Real values to be evaluated in the interpolated function.
x : np.array or float
Real values to be evaluated in the interpolated function; must be
the same size as w.
y : np.array or float
Real values to be evaluated in the interpolated function; must be
the same size as w.
z : np.array or float
Real values to be evaluated in the interpolated function; must be
the same size as w.
Returns
-------
dfdz : np.array or float
The derivative with respect to z of the interpolated function
evaluated at w,x,y,z: dfdz = f_z(w,x,y,z), with the same shape as inputs.
Returns the derivative of the function with respect to the Z dimension.
This is the third input whenever n_dims < 4 and the fourth input otherwise.
Evaluate the first derivative with respect to z of the function at given
state space points.
Parameters
----------
x : np.array
First input values.
y : np.array
Second input values; should be of same shape as x.
z : np.array
Third input values; should be of same shape as x.
Returns
-------
dfdz_out : np.array
First derivative of function with respect to the third input,
evaluated at (x,y,z), of same shape as inputs.
Evaluates the interpolated function and its derivative at the given input.
Parameters
----------
x : np.array or float
Real values to be evaluated in the interpolated function.
Returns
-------
y : np.array or float
The interpolated function evaluated at x: y = f(x), with the same
shape as x.
dydx : np.array or float
The interpolated function's first derivative evaluated at x:
dydx = f'(x), with the same shape as x.
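For a piecewise-linear interpolant, the value-and-derivative evaluation described above reduces to a short computation. A simplified sketch (the helper name and signature are assumptions, and points outside the grid simply extend the boundary segments, ignoring HARK's richer extrapolation options):

```python
import numpy as np

def eval_with_derivative(x_grid, y_grid, x):
    """Return y = f(x) and dydx = f'(x) for the piecewise-linear
    interpolant through (x_grid, y_grid); extrapolates linearly."""
    x_grid = np.asarray(x_grid, dtype=float)
    y_grid = np.asarray(y_grid, dtype=float)
    x = np.asarray(x, dtype=float)
    # Index of the segment containing each query point, clipped so
    # points outside the grid reuse the first or last segment.
    i = np.clip(np.searchsorted(x_grid, x) - 1, 0, len(x_grid) - 2)
    slope = (y_grid[i + 1] - y_grid[i]) / (x_grid[i + 1] - x_grid[i])
    y = y_grid[i] + slope * (x - x_grid[i])
    return y, slope
```

With x_grid = [0, 1, 2] and y_grid = [0, 1, 4], querying x = 1.5 gives y = 2.5 and dydx = 3.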
Calculates the relative coordinates (alpha,beta) for each point (x,y),
given the sectors (x_pos,y_pos) in which they reside. Only called as
a subroutine of __call__().
Parameters
----------
x : np.array
Values whose sector should be found.
y : np.array
Values whose sector should be found. Should be same size as x.
x_pos : np.array
Sector x-coordinates for each point in (x,y), of the same size.
y_pos : np.array
Sector y-coordinates for each point in (x,y), of the same size.
Returns
-------
alpha : np.array
Relative "horizontal" position of the input in their respective sectors.
beta : np.array
Relative "vertical" position of the input in their respective sectors.
Finds the quadrilateral "sector" for each (x,y) point in the input.
Only called as a subroutine of _evaluate().
Parameters
----------
x : np.array
Values whose sector should be found.
y : np.array
Values whose sector should be found. Should be same size as x.
Returns
-------
x_pos : np.array
Sector x-coordinates for each point of the input, of the same size.
y_pos : np.array
Sector y-coordinates for each point of the input, of the same size.
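On a plain rectilinear grid, the sector search above collapses to a direct lookup, and the relative coordinates (alpha, beta) follow immediately. A sketch of that special case (the function name is an assumption; the curvilinear class needs the iterative search precisely because its sectors are general quadrilaterals):

```python
import numpy as np

def find_sector_and_coords(x_grid, y_grid, x, y):
    """Sector indices (x_pos, y_pos) and relative coordinates
    (alpha, beta) in [0, 1] for points on a rectilinear grid."""
    x_grid = np.asarray(x_grid, dtype=float)
    y_grid = np.asarray(y_grid, dtype=float)
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    # Clip so out-of-bounds points fall in the nearest boundary sector.
    x_pos = np.clip(np.searchsorted(x_grid, x) - 1, 0, len(x_grid) - 2)
    y_pos = np.clip(np.searchsorted(y_grid, y) - 1, 0, len(y_grid) - 2)
    alpha = (x - x_grid[x_pos]) / (x_grid[x_pos + 1] - x_grid[x_pos])
    beta = (y - y_grid[y_pos]) / (y_grid[y_pos + 1] - y_grid[y_pos])
    return x_pos, y_pos, alpha, beta
```

Bilinear interpolation then weights the four corner values by (1-alpha)(1-beta), alpha(1-beta), (1-alpha)beta, and alpha*beta.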
Fills in the polarity attribute of the interpolation, determining whether
the "plus" (True) or "minus" (False) solution of the system of equations
should be used for each sector. Needs to be called in __init__.
Parameters
----------
none
Returns
-------
none
Custom interpolation methods for representing approximations to functions.
It also includes wrapper classes to enforce standard methods across classes.
Each interpolation class must have a distance() method that compares itself to
another instance; this is used in HARK.core's solve() method to check for solution
convergence. The interpolator classes currently in this module inherit their
distance method from HARKobject.
Implementation notes recovered from the module's inline comments:
- Constant function: if there is at least one argument, return an appropriately sized array; otherwise return a single instance of the constant value. All other derivatives are also zero everywhere, so those methods simply point to derivative().
- Decay extrapolation: make the basic linear spline interpolation, then define the lower extrapolation as a linear function (or just NaN); calculate interpolation coefficients on segments mapped to [0,1], and extrapolation coefficients as a decay toward the limiting function y = mx + b (with a fix for a problem when the slope is positive). Evaluation handles the "in bounds" and "out of bounds" points separately.
- 4D interpolations begin with w rather than x, so calling derivativeX() on an element of wxInterpolators yields the w-derivative of the main function, and derivativeY() yields its x-derivative.
- Polarity setup (Curvilinear2DInterp): grab a point known to be inside each sector (the midpoint between its lower-left and upper-right vertices), set the polarity of all sectors to "plus", then test each sector; if (alpha, beta) is not in the unit square, that sector must use the "minus" solution instead.
- Sector search: initialize a sector guess, then define a function that checks whether a set of points violates a linear boundary defined by (x_bound_1, y_bound_1) and (x_bound_2, y_bound_2), where the latter is *counter clockwise* from the former, returning 1 if the point is outside the boundary and 0 otherwise. On each iteration: get coordinates for the four vertices (xA,yA), ..., (xD,yD); check the sector's "bounding box" to see whether the guess is plausible; check which boundaries are violated (and thus where to look next); update the sector guess based on the violations; mark sectors that have not changed as complete; and move to the next iteration.
- Relative coordinates: an alternate method handles sectors that are "too regular" (iso-beta lines have equal slope), where the default calculation cannot identify coordinates.
- Bilinear evaluation and derivatives: get the four-corner data for each point, calculate the components of the (alpha, beta) to (x, y) delta translation matrix, invert it into the (x, y) to (alpha, beta) map, calculate the derivative of f with respect to alpha and beta, and from those the derivative with respect to x (or y).
- Discrete choice with T1EV taste shocks: assumes NaNs have been replaced by -numpy.inf or similar; when sigma == 0 take the maximum (a linear index with unravel_index could be used), otherwise calculate maxV + sigma*log(sum_i=1^J exp((V[i]-maxV)/sigma)).
from django import forms
from .models import Reclamacao, Login, Comentario
from django.contrib.auth.forms import UserCreationForm
from django.contrib.auth.models import User
class CadastraReclamacaoForm(forms.ModelForm):
def __init__(self, *args, **kwargs):
super(CadastraReclamacaoForm,self).__init__(*args, **kwargs)
self.fields['titulo'].required = True
self.fields['bairro'].required = True
self.fields['rua'].required = True
self.fields['descricao'].required = True
self.fields['foto'].required = False
class Meta:
model = Reclamacao
        fields = ('titulo', 'bairro', 'rua', 'descricao', 'foto',)
class LoginUsuarioForm(forms.ModelForm):
class Meta:
model = Login
fields = ('username','password',)
widgets = {
'password': forms.PasswordInput(),
}
class SignUpForm(UserCreationForm):
cpf = forms.CharField(max_length=11, required=True)
bairro = forms.CharField(max_length=30, required=True)
    email = forms.EmailField(max_length=254, help_text='Required. Enter a valid email address.')
class Meta:
model = User
fields = ('username', 'cpf', 'bairro', 'email', 'password1', 'password2', )
#class CadastraForum(forms.ModelForm):
# class Meta:
# model = Forum
# fields = ('text',)
class RegistroDeComentarioForm(forms.ModelForm):
def __init__(self, *args, **kwargs):
super(RegistroDeComentarioForm,self).__init__(*args, **kwargs)
self.fields['text1'].required = True
class Meta:
model = Comentario
fields = ('text1',) | Application/ReclamaCaicoProject/ReclamaCaicoApp/forms.py | 1,619 | class CadastraForum(forms.ModelForm): class Meta: model = Forum fields = ('text',) | 99 | en | 0.343888 |
# coding: utf-8
from django.conf.urls import url
from api_v1 import views
urlpatterns = [
url(r'^register/$', views.register),
url(r'^login/$', views.login),
url(r'^images/$', views.images),
url(r'^reccomend/$', views.reccomend),
url(r'^user_post/$', views.get_user_post),
]
# API viewer (for debugging)
from rest_framework import routers
from .views import UserViewSet, TokenViewSet, ImageViewSet, PostViewSet, FavoriteViewSet
router = routers.DefaultRouter()
router.register(r'user', UserViewSet)
router.register(r'token', TokenViewSet)
router.register(r'image', ImageViewSet)
router.register(r'post', PostViewSet)
# router.register(r'favorite', FavoriteViewSet)
| server/project/api_v1/urls.py | 682 | coding: utf-8 api viewer (debug用) router.register(r'favorite', FavoriteViewSet) | 79 | en | 0.222626 |
# coding=utf-8
# --------------------------------------------------------------------------
# Code generated by Microsoft (R) AutoRest Code Generator (autorest: 3.0.6320, generator: {generator})
# Changes may cause incorrect behavior and will be lost if the code is regenerated.
# --------------------------------------------------------------------------
from azure.core.exceptions import HttpResponseError
import msrest.serialization
class ResponseBase(msrest.serialization.Model):
"""ResponseBase.
:param type:
:type type: str
"""
_attribute_map = {
'type': {'key': '_type', 'type': 'str'},
}
def __init__(
self,
**kwargs
):
super(ResponseBase, self).__init__(**kwargs)
self.type = kwargs.get('type', None)
class Identifiable(ResponseBase):
"""Defines the identity of a resource.
Variables are only populated by the server, and will be ignored when sending a request.
:param type:
:type type: str
:ivar id: A String identifier.
:vartype id: str
"""
_validation = {
'id': {'readonly': True},
}
_attribute_map = {
'type': {'key': '_type', 'type': 'str'},
'id': {'key': 'id', 'type': 'str'},
}
def __init__(
self,
**kwargs
):
super(Identifiable, self).__init__(**kwargs)
self.id = None
class Response(Identifiable):
"""Defines a response. All schemas that could be returned at the root of a response should inherit from this.
Variables are only populated by the server, and will be ignored when sending a request.
:param type:
:type type: str
:ivar id: A String identifier.
:vartype id: str
:ivar web_search_url: The URL To Bing's search result for this item.
:vartype web_search_url: str
"""
_validation = {
'id': {'readonly': True},
'web_search_url': {'readonly': True},
}
_attribute_map = {
'type': {'key': '_type', 'type': 'str'},
'id': {'key': 'id', 'type': 'str'},
'web_search_url': {'key': 'webSearchUrl', 'type': 'str'},
}
def __init__(
self,
**kwargs
):
super(Response, self).__init__(**kwargs)
self.web_search_url = None
class Answer(Response):
"""Answer.
Variables are only populated by the server, and will be ignored when sending a request.
:param type:
:type type: str
:ivar id: A String identifier.
:vartype id: str
:ivar web_search_url: The URL To Bing's search result for this item.
:vartype web_search_url: str
:ivar follow_up_queries:
:vartype follow_up_queries: list[~web_search_client.models.Query]
"""
_validation = {
'id': {'readonly': True},
'web_search_url': {'readonly': True},
'follow_up_queries': {'readonly': True},
}
_attribute_map = {
'type': {'key': '_type', 'type': 'str'},
'id': {'key': 'id', 'type': 'str'},
'web_search_url': {'key': 'webSearchUrl', 'type': 'str'},
'follow_up_queries': {'key': 'followUpQueries', 'type': '[Query]'},
}
def __init__(
self,
**kwargs
):
super(Answer, self).__init__(**kwargs)
self.follow_up_queries = None
class Thing(Response):
"""Thing.
Variables are only populated by the server, and will be ignored when sending a request.
:param type:
:type type: str
:ivar id: A String identifier.
:vartype id: str
:ivar web_search_url: The URL To Bing's search result for this item.
:vartype web_search_url: str
:ivar name: The name of the thing represented by this object.
:vartype name: str
:ivar url: The URL to get more information about the thing represented by this object.
:vartype url: str
:ivar image: Defines an image.
:vartype image: ~web_search_client.models.ImageObject
:ivar description: A short description of the item.
:vartype description: str
:ivar bing_id: An ID that uniquely identifies this item.
:vartype bing_id: str
"""
_validation = {
'id': {'readonly': True},
'web_search_url': {'readonly': True},
'name': {'readonly': True},
'url': {'readonly': True},
'image': {'readonly': True},
'description': {'readonly': True},
'bing_id': {'readonly': True},
}
_attribute_map = {
'type': {'key': '_type', 'type': 'str'},
'id': {'key': 'id', 'type': 'str'},
'web_search_url': {'key': 'webSearchUrl', 'type': 'str'},
'name': {'key': 'name', 'type': 'str'},
'url': {'key': 'url', 'type': 'str'},
'image': {'key': 'image', 'type': 'ImageObject'},
'description': {'key': 'description', 'type': 'str'},
'bing_id': {'key': 'bingId', 'type': 'str'},
}
def __init__(
self,
**kwargs
):
super(Thing, self).__init__(**kwargs)
self.name = None
self.url = None
self.image = None
self.description = None
self.bing_id = None
class CreativeWork(Thing):
"""CreativeWork.
Variables are only populated by the server, and will be ignored when sending a request.
:param type:
:type type: str
:ivar id: A String identifier.
:vartype id: str
:ivar web_search_url: The URL To Bing's search result for this item.
:vartype web_search_url: str
:ivar name: The name of the thing represented by this object.
:vartype name: str
:ivar url: The URL to get more information about the thing represented by this object.
:vartype url: str
:ivar image: Defines an image.
:vartype image: ~web_search_client.models.ImageObject
:ivar description: A short description of the item.
:vartype description: str
:ivar bing_id: An ID that uniquely identifies this item.
:vartype bing_id: str
:ivar thumbnail_url: The URL to a thumbnail of the item.
:vartype thumbnail_url: str
:ivar provider: The source of the creative work.
:vartype provider: list[~web_search_client.models.Thing]
:ivar text:
:vartype text: str
"""
_validation = {
'id': {'readonly': True},
'web_search_url': {'readonly': True},
'name': {'readonly': True},
'url': {'readonly': True},
'image': {'readonly': True},
'description': {'readonly': True},
'bing_id': {'readonly': True},
'thumbnail_url': {'readonly': True},
'provider': {'readonly': True},
'text': {'readonly': True},
}
_attribute_map = {
'type': {'key': '_type', 'type': 'str'},
'id': {'key': 'id', 'type': 'str'},
'web_search_url': {'key': 'webSearchUrl', 'type': 'str'},
'name': {'key': 'name', 'type': 'str'},
'url': {'key': 'url', 'type': 'str'},
'image': {'key': 'image', 'type': 'ImageObject'},
'description': {'key': 'description', 'type': 'str'},
'bing_id': {'key': 'bingId', 'type': 'str'},
'thumbnail_url': {'key': 'thumbnailUrl', 'type': 'str'},
'provider': {'key': 'provider', 'type': '[Thing]'},
'text': {'key': 'text', 'type': 'str'},
}
def __init__(
self,
**kwargs
):
super(CreativeWork, self).__init__(**kwargs)
self.thumbnail_url = None
self.provider = None
self.text = None
class Article(CreativeWork):
"""Article.
Variables are only populated by the server, and will be ignored when sending a request.
:param type:
:type type: str
:ivar id: A String identifier.
:vartype id: str
:ivar web_search_url: The URL To Bing's search result for this item.
:vartype web_search_url: str
:ivar name: The name of the thing represented by this object.
:vartype name: str
:ivar url: The URL to get more information about the thing represented by this object.
:vartype url: str
:ivar image: Defines an image.
:vartype image: ~web_search_client.models.ImageObject
:ivar description: A short description of the item.
:vartype description: str
:ivar bing_id: An ID that uniquely identifies this item.
:vartype bing_id: str
:ivar thumbnail_url: The URL to a thumbnail of the item.
:vartype thumbnail_url: str
:ivar provider: The source of the creative work.
:vartype provider: list[~web_search_client.models.Thing]
:ivar text:
:vartype text: str
:ivar word_count: The number of words in the text of the Article.
:vartype word_count: int
"""
_validation = {
'id': {'readonly': True},
'web_search_url': {'readonly': True},
'name': {'readonly': True},
'url': {'readonly': True},
'image': {'readonly': True},
'description': {'readonly': True},
'bing_id': {'readonly': True},
'thumbnail_url': {'readonly': True},
'provider': {'readonly': True},
'text': {'readonly': True},
'word_count': {'readonly': True},
}
_attribute_map = {
'type': {'key': '_type', 'type': 'str'},
'id': {'key': 'id', 'type': 'str'},
'web_search_url': {'key': 'webSearchUrl', 'type': 'str'},
'name': {'key': 'name', 'type': 'str'},
'url': {'key': 'url', 'type': 'str'},
'image': {'key': 'image', 'type': 'ImageObject'},
'description': {'key': 'description', 'type': 'str'},
'bing_id': {'key': 'bingId', 'type': 'str'},
'thumbnail_url': {'key': 'thumbnailUrl', 'type': 'str'},
'provider': {'key': 'provider', 'type': '[Thing]'},
'text': {'key': 'text', 'type': 'str'},
'word_count': {'key': 'wordCount', 'type': 'int'},
}
def __init__(
self,
**kwargs
):
super(Article, self).__init__(**kwargs)
self.word_count = None
class Computation(Answer):
"""Defines an expression and its answer.
Variables are only populated by the server, and will be ignored when sending a request.
All required parameters must be populated in order to send to Azure.
:param type:
:type type: str
:ivar id: A String identifier.
:vartype id: str
:ivar web_search_url: The URL To Bing's search result for this item.
:vartype web_search_url: str
:ivar follow_up_queries:
:vartype follow_up_queries: list[~web_search_client.models.Query]
:param expression: Required. The math or conversion expression. If the query contains a request
to convert units of measure (for example, meters to feet), this field contains the from units
and value contains the to units. If the query contains a mathematical expression such as 2+2,
this field contains the expression and value contains the answer. Note that mathematical
expressions may be normalized. For example, if the query was sqrt(4^2+8^2), the normalized
expression may be sqrt((4^2)+(8^2)). If the user's query is a math question and the
textDecorations query parameter is set to true, the expression string may include formatting
markers. For example, if the user's query is log(2), the normalized expression includes the
subscript markers. For more information, see Hit Highlighting.
:type expression: str
:param value: Required. The expression's answer.
:type value: str
"""
_validation = {
'id': {'readonly': True},
'web_search_url': {'readonly': True},
'follow_up_queries': {'readonly': True},
'expression': {'required': True},
'value': {'required': True},
}
_attribute_map = {
'type': {'key': '_type', 'type': 'str'},
'id': {'key': 'id', 'type': 'str'},
'web_search_url': {'key': 'webSearchUrl', 'type': 'str'},
'follow_up_queries': {'key': 'followUpQueries', 'type': '[Query]'},
'expression': {'key': 'expression', 'type': 'str'},
'value': {'key': 'value', 'type': 'str'},
}
def __init__(
self,
**kwargs
):
super(Computation, self).__init__(**kwargs)
self.expression = kwargs['expression']
self.value = kwargs['value']
class Error(msrest.serialization.Model):
"""Defines the error that occurred.
Variables are only populated by the server, and will be ignored when sending a request.
All required parameters must be populated in order to send to Azure.
:param code: Required. The error code that identifies the category of error. Possible values
include: "None", "ServerError", "InvalidRequest", "RateLimitExceeded", "InvalidAuthorization",
"InsufficientAuthorization". Default value: "None".
:type code: str or ~web_search_client.models.ErrorCode
:ivar sub_code: The error code that further helps to identify the error. Possible values
include: "UnexpectedError", "ResourceError", "NotImplemented", "ParameterMissing",
"ParameterInvalidValue", "HttpNotAllowed", "Blocked", "AuthorizationMissing",
"AuthorizationRedundancy", "AuthorizationDisabled", "AuthorizationExpired".
:vartype sub_code: str or ~web_search_client.models.ErrorSubCode
:param message: Required. A description of the error.
:type message: str
:ivar more_details: A description that provides additional information about the error.
:vartype more_details: str
:ivar parameter: The parameter in the request that caused the error.
:vartype parameter: str
:ivar value: The parameter's value in the request that was not valid.
:vartype value: str
"""
_validation = {
'code': {'required': True},
'sub_code': {'readonly': True},
'message': {'required': True},
'more_details': {'readonly': True},
'parameter': {'readonly': True},
'value': {'readonly': True},
}
_attribute_map = {
'code': {'key': 'code', 'type': 'str'},
'sub_code': {'key': 'subCode', 'type': 'str'},
'message': {'key': 'message', 'type': 'str'},
'more_details': {'key': 'moreDetails', 'type': 'str'},
'parameter': {'key': 'parameter', 'type': 'str'},
'value': {'key': 'value', 'type': 'str'},
}
def __init__(
self,
**kwargs
):
super(Error, self).__init__(**kwargs)
self.code = kwargs.get('code', "None")
self.sub_code = None
self.message = kwargs['message']
self.more_details = None
self.parameter = None
self.value = None
class ErrorResponse(Response):
"""The top-level response that represents a failed request.
Variables are only populated by the server, and will be ignored when sending a request.
All required parameters must be populated in order to send to Azure.
:param type:
:type type: str
:ivar id: A String identifier.
:vartype id: str
:ivar web_search_url: The URL To Bing's search result for this item.
:vartype web_search_url: str
:param errors: Required. A list of errors that describe the reasons why the request failed.
:type errors: list[~web_search_client.models.Error]
"""
_validation = {
'id': {'readonly': True},
'web_search_url': {'readonly': True},
'errors': {'required': True},
}
_attribute_map = {
'type': {'key': '_type', 'type': 'str'},
'id': {'key': 'id', 'type': 'str'},
'web_search_url': {'key': 'webSearchUrl', 'type': 'str'},
'errors': {'key': 'errors', 'type': '[Error]'},
}
def __init__(
self,
**kwargs
):
super(ErrorResponse, self).__init__(**kwargs)
self.errors = kwargs['errors']
class MediaObject(CreativeWork):
"""MediaObject.
Variables are only populated by the server, and will be ignored when sending a request.
:param type:
:type type: str
:ivar id: A String identifier.
:vartype id: str
:ivar web_search_url: The URL To Bing's search result for this item.
:vartype web_search_url: str
:ivar name: The name of the thing represented by this object.
:vartype name: str
:ivar url: The URL to get more information about the thing represented by this object.
:vartype url: str
:ivar image: Defines an image.
:vartype image: ~web_search_client.models.ImageObject
:ivar description: A short description of the item.
:vartype description: str
:ivar bing_id: An ID that uniquely identifies this item.
:vartype bing_id: str
:ivar thumbnail_url: The URL to a thumbnail of the item.
:vartype thumbnail_url: str
:ivar provider: The source of the creative work.
:vartype provider: list[~web_search_client.models.Thing]
:ivar text:
:vartype text: str
:ivar content_url: Original URL to retrieve the source (file) for the media object (e.g. the
source URL for the image).
:vartype content_url: str
:ivar host_page_url: URL of the page that hosts the media object.
:vartype host_page_url: str
:ivar width: The width of the source media object, in pixels.
:vartype width: int
:ivar height: The height of the source media object, in pixels.
:vartype height: int
"""
_validation = {
'id': {'readonly': True},
'web_search_url': {'readonly': True},
'name': {'readonly': True},
'url': {'readonly': True},
'image': {'readonly': True},
'description': {'readonly': True},
'bing_id': {'readonly': True},
'thumbnail_url': {'readonly': True},
'provider': {'readonly': True},
'text': {'readonly': True},
'content_url': {'readonly': True},
'host_page_url': {'readonly': True},
'width': {'readonly': True},
'height': {'readonly': True},
}
_attribute_map = {
'type': {'key': '_type', 'type': 'str'},
'id': {'key': 'id', 'type': 'str'},
'web_search_url': {'key': 'webSearchUrl', 'type': 'str'},
'name': {'key': 'name', 'type': 'str'},
'url': {'key': 'url', 'type': 'str'},
'image': {'key': 'image', 'type': 'ImageObject'},
'description': {'key': 'description', 'type': 'str'},
'bing_id': {'key': 'bingId', 'type': 'str'},
'thumbnail_url': {'key': 'thumbnailUrl', 'type': 'str'},
'provider': {'key': 'provider', 'type': '[Thing]'},
'text': {'key': 'text', 'type': 'str'},
'content_url': {'key': 'contentUrl', 'type': 'str'},
'host_page_url': {'key': 'hostPageUrl', 'type': 'str'},
'width': {'key': 'width', 'type': 'int'},
'height': {'key': 'height', 'type': 'int'},
}
def __init__(
self,
**kwargs
):
super(MediaObject, self).__init__(**kwargs)
self.content_url = None
self.host_page_url = None
self.width = None
self.height = None
class ImageObject(MediaObject):
"""Defines an image.
Variables are only populated by the server, and will be ignored when sending a request.
:param type:
:type type: str
:ivar id: A String identifier.
:vartype id: str
:ivar web_search_url: The URL to Bing's search result for this item.
:vartype web_search_url: str
:ivar name: The name of the thing represented by this object.
:vartype name: str
:ivar url: The URL to get more information about the thing represented by this object.
:vartype url: str
:ivar image: Defines an image.
:vartype image: ~web_search_client.models.ImageObject
:ivar description: A short description of the item.
:vartype description: str
:ivar bing_id: An ID that uniquely identifies this item.
:vartype bing_id: str
:ivar thumbnail_url: The URL to a thumbnail of the item.
:vartype thumbnail_url: str
:ivar provider: The source of the creative work.
:vartype provider: list[~web_search_client.models.Thing]
:ivar text:
:vartype text: str
:ivar content_url: Original URL to retrieve the source (file) for the media object (e.g. the
source URL for the image).
:vartype content_url: str
:ivar host_page_url: URL of the page that hosts the media object.
:vartype host_page_url: str
:ivar width: The width of the source media object, in pixels.
:vartype width: int
:ivar height: The height of the source media object, in pixels.
:vartype height: int
:ivar thumbnail: The URL to a thumbnail of the image.
:vartype thumbnail: ~web_search_client.models.ImageObject
"""
_validation = {
'id': {'readonly': True},
'web_search_url': {'readonly': True},
'name': {'readonly': True},
'url': {'readonly': True},
'image': {'readonly': True},
'description': {'readonly': True},
'bing_id': {'readonly': True},
'thumbnail_url': {'readonly': True},
'provider': {'readonly': True},
'text': {'readonly': True},
'content_url': {'readonly': True},
'host_page_url': {'readonly': True},
'width': {'readonly': True},
'height': {'readonly': True},
'thumbnail': {'readonly': True},
}
_attribute_map = {
'type': {'key': '_type', 'type': 'str'},
'id': {'key': 'id', 'type': 'str'},
'web_search_url': {'key': 'webSearchUrl', 'type': 'str'},
'name': {'key': 'name', 'type': 'str'},
'url': {'key': 'url', 'type': 'str'},
'image': {'key': 'image', 'type': 'ImageObject'},
'description': {'key': 'description', 'type': 'str'},
'bing_id': {'key': 'bingId', 'type': 'str'},
'thumbnail_url': {'key': 'thumbnailUrl', 'type': 'str'},
'provider': {'key': 'provider', 'type': '[Thing]'},
'text': {'key': 'text', 'type': 'str'},
'content_url': {'key': 'contentUrl', 'type': 'str'},
'host_page_url': {'key': 'hostPageUrl', 'type': 'str'},
'width': {'key': 'width', 'type': 'int'},
'height': {'key': 'height', 'type': 'int'},
'thumbnail': {'key': 'thumbnail', 'type': 'ImageObject'},
}
def __init__(
self,
**kwargs
):
super(ImageObject, self).__init__(**kwargs)
self.thumbnail = None
class SearchResultsAnswer(Answer):
"""SearchResultsAnswer.
Variables are only populated by the server, and will be ignored when sending a request.
:param type:
:type type: str
:ivar id: A String identifier.
:vartype id: str
:ivar web_search_url: The URL to Bing's search result for this item.
:vartype web_search_url: str
:ivar follow_up_queries:
:vartype follow_up_queries: list[~web_search_client.models.Query]
:ivar query_context: Defines the query context that Bing used for the request.
:vartype query_context: ~web_search_client.models.QueryContext
:ivar total_estimated_matches: The estimated number of webpages that are relevant to the query.
Use this number along with the count and offset query parameters to page the results.
:vartype total_estimated_matches: long
:ivar is_family_friendly:
:vartype is_family_friendly: bool
"""
_validation = {
'id': {'readonly': True},
'web_search_url': {'readonly': True},
'follow_up_queries': {'readonly': True},
'query_context': {'readonly': True},
'total_estimated_matches': {'readonly': True},
'is_family_friendly': {'readonly': True},
}
_attribute_map = {
'type': {'key': '_type', 'type': 'str'},
'id': {'key': 'id', 'type': 'str'},
'web_search_url': {'key': 'webSearchUrl', 'type': 'str'},
'follow_up_queries': {'key': 'followUpQueries', 'type': '[Query]'},
'query_context': {'key': 'queryContext', 'type': 'QueryContext'},
'total_estimated_matches': {'key': 'totalEstimatedMatches', 'type': 'long'},
'is_family_friendly': {'key': 'isFamilyFriendly', 'type': 'bool'},
}
def __init__(
self,
**kwargs
):
super(SearchResultsAnswer, self).__init__(**kwargs)
self.query_context = None
self.total_estimated_matches = None
self.is_family_friendly = None
class Images(SearchResultsAnswer):
"""Defines an image answer.
Variables are only populated by the server, and will be ignored when sending a request.
All required parameters must be populated in order to send to Azure.
:param type:
:type type: str
:ivar id: A String identifier.
:vartype id: str
:ivar web_search_url: The URL to Bing's search result for this item.
:vartype web_search_url: str
:ivar follow_up_queries:
:vartype follow_up_queries: list[~web_search_client.models.Query]
:ivar query_context: Defines the query context that Bing used for the request.
:vartype query_context: ~web_search_client.models.QueryContext
:ivar total_estimated_matches: The estimated number of webpages that are relevant to the query.
Use this number along with the count and offset query parameters to page the results.
:vartype total_estimated_matches: long
:ivar is_family_friendly:
:vartype is_family_friendly: bool
:ivar next_offset:
:vartype next_offset: int
:param value: Required. A list of image objects that are relevant to the query. If there are no
results, the List is empty.
:type value: list[~web_search_client.models.ImageObject]
:ivar query_expansions:
:vartype query_expansions: list[~web_search_client.models.Query]
:ivar similar_terms:
:vartype similar_terms: list[~web_search_client.models.Query]
:ivar related_searches:
:vartype related_searches: list[~web_search_client.models.Query]
"""
_validation = {
'id': {'readonly': True},
'web_search_url': {'readonly': True},
'follow_up_queries': {'readonly': True},
'query_context': {'readonly': True},
'total_estimated_matches': {'readonly': True},
'is_family_friendly': {'readonly': True},
'next_offset': {'readonly': True},
'value': {'required': True},
'query_expansions': {'readonly': True},
'similar_terms': {'readonly': True},
'related_searches': {'readonly': True},
}
_attribute_map = {
'type': {'key': '_type', 'type': 'str'},
'id': {'key': 'id', 'type': 'str'},
'web_search_url': {'key': 'webSearchUrl', 'type': 'str'},
'follow_up_queries': {'key': 'followUpQueries', 'type': '[Query]'},
'query_context': {'key': 'queryContext', 'type': 'QueryContext'},
'total_estimated_matches': {'key': 'totalEstimatedMatches', 'type': 'long'},
'is_family_friendly': {'key': 'isFamilyFriendly', 'type': 'bool'},
'next_offset': {'key': 'nextOffset', 'type': 'int'},
'value': {'key': 'value', 'type': '[ImageObject]'},
'query_expansions': {'key': 'queryExpansions', 'type': '[Query]'},
'similar_terms': {'key': 'similarTerms', 'type': '[Query]'},
'related_searches': {'key': 'relatedSearches', 'type': '[Query]'},
}
def __init__(
self,
**kwargs
):
super(Images, self).__init__(**kwargs)
self.next_offset = None
self.value = kwargs['value']
self.query_expansions = None
self.similar_terms = None
self.related_searches = None
class Intangible(Thing):
"""Intangible.
Variables are only populated by the server, and will be ignored when sending a request.
:param type:
:type type: str
:ivar id: A String identifier.
:vartype id: str
:ivar web_search_url: The URL to Bing's search result for this item.
:vartype web_search_url: str
:ivar name: The name of the thing represented by this object.
:vartype name: str
:ivar url: The URL to get more information about the thing represented by this object.
:vartype url: str
:ivar image: Defines an image.
:vartype image: ~web_search_client.models.ImageObject
:ivar description: A short description of the item.
:vartype description: str
:ivar bing_id: An ID that uniquely identifies this item.
:vartype bing_id: str
"""
_validation = {
'id': {'readonly': True},
'web_search_url': {'readonly': True},
'name': {'readonly': True},
'url': {'readonly': True},
'image': {'readonly': True},
'description': {'readonly': True},
'bing_id': {'readonly': True},
}
_attribute_map = {
'type': {'key': '_type', 'type': 'str'},
'id': {'key': 'id', 'type': 'str'},
'web_search_url': {'key': 'webSearchUrl', 'type': 'str'},
'name': {'key': 'name', 'type': 'str'},
'url': {'key': 'url', 'type': 'str'},
'image': {'key': 'image', 'type': 'ImageObject'},
'description': {'key': 'description', 'type': 'str'},
'bing_id': {'key': 'bingId', 'type': 'str'},
}
def __init__(
self,
**kwargs
):
super(Intangible, self).__init__(**kwargs)
class News(SearchResultsAnswer):
"""Defines a news answer.
Variables are only populated by the server, and will be ignored when sending a request.
All required parameters must be populated in order to send to Azure.
:param type:
:type type: str
:ivar id: A String identifier.
:vartype id: str
:ivar web_search_url: The URL to Bing's search result for this item.
:vartype web_search_url: str
:ivar follow_up_queries:
:vartype follow_up_queries: list[~web_search_client.models.Query]
:ivar query_context: Defines the query context that Bing used for the request.
:vartype query_context: ~web_search_client.models.QueryContext
:ivar total_estimated_matches: The estimated number of webpages that are relevant to the query.
Use this number along with the count and offset query parameters to page the results.
:vartype total_estimated_matches: long
:ivar is_family_friendly:
:vartype is_family_friendly: bool
:param value: Required. An array of NewsArticle objects that contain information about news
articles that are relevant to the query. If there are no results to return for the request, the
array is empty.
:type value: list[~web_search_client.models.Article]
:ivar location:
:vartype location: str
"""
_validation = {
'id': {'readonly': True},
'web_search_url': {'readonly': True},
'follow_up_queries': {'readonly': True},
'query_context': {'readonly': True},
'total_estimated_matches': {'readonly': True},
'is_family_friendly': {'readonly': True},
'value': {'required': True},
'location': {'readonly': True},
}
_attribute_map = {
'type': {'key': '_type', 'type': 'str'},
'id': {'key': 'id', 'type': 'str'},
'web_search_url': {'key': 'webSearchUrl', 'type': 'str'},
'follow_up_queries': {'key': 'followUpQueries', 'type': '[Query]'},
'query_context': {'key': 'queryContext', 'type': 'QueryContext'},
'total_estimated_matches': {'key': 'totalEstimatedMatches', 'type': 'long'},
'is_family_friendly': {'key': 'isFamilyFriendly', 'type': 'bool'},
'value': {'key': 'value', 'type': '[Article]'},
'location': {'key': 'location', 'type': 'str'},
}
def __init__(
self,
**kwargs
):
super(News, self).__init__(**kwargs)
self.value = kwargs['value']
self.location = None
class NewsArticle(Article):
"""Defines a news article.
Variables are only populated by the server, and will be ignored when sending a request.
:param type:
:type type: str
:ivar id: A String identifier.
:vartype id: str
:ivar web_search_url: The URL to Bing's search result for this item.
:vartype web_search_url: str
:ivar name: The name of the thing represented by this object.
:vartype name: str
:ivar url: The URL to get more information about the thing represented by this object.
:vartype url: str
:ivar image: Defines an image.
:vartype image: ~web_search_client.models.ImageObject
:ivar description: A short description of the item.
:vartype description: str
:ivar bing_id: An ID that uniquely identifies this item.
:vartype bing_id: str
:ivar thumbnail_url: The URL to a thumbnail of the item.
:vartype thumbnail_url: str
:ivar provider: The source of the creative work.
:vartype provider: list[~web_search_client.models.Thing]
:ivar text:
:vartype text: str
:ivar word_count: The number of words in the text of the Article.
:vartype word_count: int
"""
_validation = {
'id': {'readonly': True},
'web_search_url': {'readonly': True},
'name': {'readonly': True},
'url': {'readonly': True},
'image': {'readonly': True},
'description': {'readonly': True},
'bing_id': {'readonly': True},
'thumbnail_url': {'readonly': True},
'provider': {'readonly': True},
'text': {'readonly': True},
'word_count': {'readonly': True},
}
_attribute_map = {
'type': {'key': '_type', 'type': 'str'},
'id': {'key': 'id', 'type': 'str'},
'web_search_url': {'key': 'webSearchUrl', 'type': 'str'},
'name': {'key': 'name', 'type': 'str'},
'url': {'key': 'url', 'type': 'str'},
'image': {'key': 'image', 'type': 'ImageObject'},
'description': {'key': 'description', 'type': 'str'},
'bing_id': {'key': 'bingId', 'type': 'str'},
'thumbnail_url': {'key': 'thumbnailUrl', 'type': 'str'},
'provider': {'key': 'provider', 'type': '[Thing]'},
'text': {'key': 'text', 'type': 'str'},
'word_count': {'key': 'wordCount', 'type': 'int'},
}
def __init__(
self,
**kwargs
):
super(NewsArticle, self).__init__(**kwargs)
class Places(SearchResultsAnswer):
"""Defines a local entity answer.
Variables are only populated by the server, and will be ignored when sending a request.
All required parameters must be populated in order to send to Azure.
:param type:
:type type: str
:ivar id: A String identifier.
:vartype id: str
:ivar web_search_url: The URL to Bing's search result for this item.
:vartype web_search_url: str
:ivar follow_up_queries:
:vartype follow_up_queries: list[~web_search_client.models.Query]
:ivar query_context: Defines the query context that Bing used for the request.
:vartype query_context: ~web_search_client.models.QueryContext
:ivar total_estimated_matches: The estimated number of webpages that are relevant to the query.
Use this number along with the count and offset query parameters to page the results.
:vartype total_estimated_matches: long
:ivar is_family_friendly:
:vartype is_family_friendly: bool
:param value: Required. A list of local entities, such as restaurants or hotels.
:type value: list[~web_search_client.models.Thing]
"""
_validation = {
'id': {'readonly': True},
'web_search_url': {'readonly': True},
'follow_up_queries': {'readonly': True},
'query_context': {'readonly': True},
'total_estimated_matches': {'readonly': True},
'is_family_friendly': {'readonly': True},
'value': {'required': True},
}
_attribute_map = {
'type': {'key': '_type', 'type': 'str'},
'id': {'key': 'id', 'type': 'str'},
'web_search_url': {'key': 'webSearchUrl', 'type': 'str'},
'follow_up_queries': {'key': 'followUpQueries', 'type': '[Query]'},
'query_context': {'key': 'queryContext', 'type': 'QueryContext'},
'total_estimated_matches': {'key': 'totalEstimatedMatches', 'type': 'long'},
'is_family_friendly': {'key': 'isFamilyFriendly', 'type': 'bool'},
'value': {'key': 'value', 'type': '[Thing]'},
}
def __init__(
self,
**kwargs
):
super(Places, self).__init__(**kwargs)
self.value = kwargs['value']
class Query(msrest.serialization.Model):
"""Defines a search query.
Variables are only populated by the server, and will be ignored when sending a request.
All required parameters must be populated in order to send to Azure.
:param text: Required. The query string. Use this string as the query term in a new search
request.
:type text: str
:ivar display_text: The display version of the query term. This version of the query term may
contain special characters that highlight the search term found in the query string. The string
contains the highlighting characters only if the query enabled hit highlighting.
:vartype display_text: str
:ivar web_search_url: The URL that takes the user to the Bing search results page for the
query. Only related search results include this field.
:vartype web_search_url: str
:ivar search_link:
:vartype search_link: str
:ivar thumbnail: Defines an image.
:vartype thumbnail: ~web_search_client.models.ImageObject
"""
_validation = {
'text': {'required': True},
'display_text': {'readonly': True},
'web_search_url': {'readonly': True},
'search_link': {'readonly': True},
'thumbnail': {'readonly': True},
}
_attribute_map = {
'text': {'key': 'text', 'type': 'str'},
'display_text': {'key': 'displayText', 'type': 'str'},
'web_search_url': {'key': 'webSearchUrl', 'type': 'str'},
'search_link': {'key': 'searchLink', 'type': 'str'},
'thumbnail': {'key': 'thumbnail', 'type': 'ImageObject'},
}
def __init__(
self,
**kwargs
):
super(Query, self).__init__(**kwargs)
self.text = kwargs['text']
self.display_text = None
self.web_search_url = None
self.search_link = None
self.thumbnail = None
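# A minimal sketch (an illustration, not part of the generated client) of
# the `_attribute_map` convention used throughout this module: each
# snake_case attribute is paired with the camelCase key the service sends
# on the wire. msrest's serializer applies these maps for real; this
# stand-in only shows the key translation.

```python
# Mirror of the mapping style used by Query above (illustrative subset).
ATTRIBUTE_MAP = {
    'text': {'key': 'text', 'type': 'str'},
    'display_text': {'key': 'displayText', 'type': 'str'},
    'web_search_url': {'key': 'webSearchUrl', 'type': 'str'},
}

def from_wire(payload):
    """Map raw JSON keys back to snake_case attribute names."""
    return {attr: payload.get(meta['key'])
            for attr, meta in ATTRIBUTE_MAP.items()}

query = from_wire({'text': 'sailing', 'displayText': '<b>sailing</b>'})
# query['display_text'] == '<b>sailing</b>'; absent wire keys become None
```

# In the real client, msrest.serialization.Model walks the class
# hierarchy and performs this translation (plus type coercion) for every
# model defined in this module.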
class QueryContext(msrest.serialization.Model):
"""Defines the query context that Bing used for the request.
Variables are only populated by the server, and will be ignored when sending a request.
All required parameters must be populated in order to send to Azure.
:param original_query: Required. The query string as specified in the request.
:type original_query: str
:ivar altered_query: The query string used by Bing to perform the query. Bing uses the altered
query string if the original query string contained spelling mistakes. For example, if the
query string is "saling downwind", the altered query string will be "sailing downwind". This
field is included only if the original query string contains a spelling mistake.
:vartype altered_query: str
:ivar alteration_override_query: The query string to use to force Bing to use the original
string. For example, if the query string is "saling downwind", the override query string will
be "+saling downwind". Remember to encode the query string which results in
"%2Bsaling+downwind". This field is included only if the original query string contains a
spelling mistake.
:vartype alteration_override_query: str
:ivar adult_intent: A Boolean value that indicates whether the specified query has adult
intent. The value is true if the query has adult intent; otherwise, false.
:vartype adult_intent: bool
:ivar ask_user_for_location: A Boolean value that indicates whether Bing requires the user's
location to provide accurate results. If you specified the user's location by using the
X-MSEdge-ClientIP and X-Search-Location headers, you can ignore this field. For location aware
queries, such as "today's weather" or "restaurants near me" that need the user's location to
provide accurate results, this field is set to true. For location aware queries that include
the location (for example, "Seattle weather"), this field is set to false. This field is also
set to false for queries that are not location aware, such as "best sellers".
:vartype ask_user_for_location: bool
:ivar is_transactional:
:vartype is_transactional: bool
"""
_validation = {
'original_query': {'required': True},
'altered_query': {'readonly': True},
'alteration_override_query': {'readonly': True},
'adult_intent': {'readonly': True},
'ask_user_for_location': {'readonly': True},
'is_transactional': {'readonly': True},
}
_attribute_map = {
'original_query': {'key': 'originalQuery', 'type': 'str'},
'altered_query': {'key': 'alteredQuery', 'type': 'str'},
'alteration_override_query': {'key': 'alterationOverrideQuery', 'type': 'str'},
'adult_intent': {'key': 'adultIntent', 'type': 'bool'},
'ask_user_for_location': {'key': 'askUserForLocation', 'type': 'bool'},
'is_transactional': {'key': 'isTransactional', 'type': 'bool'},
}
def __init__(
self,
**kwargs
):
super(QueryContext, self).__init__(**kwargs)
self.original_query = kwargs['original_query']
self.altered_query = None
self.alteration_override_query = None
self.adult_intent = None
self.ask_user_for_location = None
self.is_transactional = None
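# The QueryContext docstring above describes the alteration-override
# string: prefix the original term with '+' to force Bing to keep the
# spelling, then URL-encode it (so "+saling downwind" becomes
# "%2Bsaling+downwind"). A small sketch of producing that value, not part
# of the generated client:

```python
from urllib.parse import quote_plus

# Build the override query the docstring describes: '+' keeps the
# original spelling, and the whole string must be URL-encoded.
original = 'saling downwind'
override = '+' + original        # '+saling downwind'
encoded = quote_plus(override)   # '%2Bsaling+downwind'
```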
class RankingGroup(msrest.serialization.Model):
"""Defines a search results group, such as mainline.
All required parameters must be populated in order to send to Azure.
:param items: Required. A list of search result items to display in the group.
:type items: list[~web_search_client.models.RankingItem]
"""
_validation = {
'items': {'required': True},
}
_attribute_map = {
'items': {'key': 'items', 'type': '[RankingItem]'},
}
def __init__(
self,
**kwargs
):
super(RankingGroup, self).__init__(**kwargs)
self.items = kwargs['items']
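# A sketch (an assumption about intent, not generated code) of the
# required-parameter pattern these constructors share: required values
# are read with kwargs['name'], so omitting one raises KeyError
# immediately instead of yielding a half-initialized model.

```python
class MiniRankingGroup:
    """Stand-in for a model with one required parameter."""
    def __init__(self, **kwargs):
        self.items = kwargs['items']  # required; KeyError if absent

# Omitting the required parameter fails fast.
try:
    MiniRankingGroup()
    missing = None
except KeyError as exc:
    missing = exc.args[0]  # 'items'

group = MiniRankingGroup(items=['item-a', 'item-b'])
```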
class RankingItem(msrest.serialization.Model):
"""Defines a search result item to display.
Variables are only populated by the server, and will be ignored when sending a request.
All required parameters must be populated in order to send to Azure.
:param answer_type: Required. The answer that contains the item to display. Use the type to
find the answer in the SearchResponse object. The type is the name of a SearchResponse field.
Possible values include: "WebPages", "Images", "SpellSuggestions", "News", "RelatedSearches",
"Videos", "Computation", "TimeZone".
:type answer_type: str or ~web_search_client.models.AnswerType
:ivar result_index: A zero-based index of the item in the answer. If the item does not include
this field, display all items in the answer. For example, display all news articles in the News
answer.
:vartype result_index: int
:ivar value: The ID that identifies either an answer to display or an item of an answer to
display. If the ID identifies an answer, display all items of the answer.
:vartype value: ~web_search_client.models.Identifiable
:ivar html_index:
:vartype html_index: int
:ivar textual_index:
:vartype textual_index: int
:ivar screenshot_index:
:vartype screenshot_index: int
"""
_validation = {
'answer_type': {'required': True},
'result_index': {'readonly': True},
'value': {'readonly': True},
'html_index': {'readonly': True},
'textual_index': {'readonly': True},
'screenshot_index': {'readonly': True},
}
_attribute_map = {
'answer_type': {'key': 'answerType', 'type': 'str'},
'result_index': {'key': 'resultIndex', 'type': 'int'},
'value': {'key': 'value', 'type': 'Identifiable'},
'html_index': {'key': 'htmlIndex', 'type': 'int'},
'textual_index': {'key': 'textualIndex', 'type': 'int'},
'screenshot_index': {'key': 'screenshotIndex', 'type': 'int'},
}
def __init__(
self,
**kwargs
):
super(RankingItem, self).__init__(**kwargs)
self.answer_type = kwargs['answer_type']
self.result_index = None
self.value = None
self.html_index = None
self.textual_index = None
self.screenshot_index = None
class RankingResponse(msrest.serialization.Model):
"""Defines where on the search results page content should be placed and in what order.
Variables are only populated by the server, and will be ignored when sending a request.
:ivar pole: The search results that should be afforded the most visible treatment (for example,
displayed above the mainline and sidebar).
:vartype pole: ~web_search_client.models.RankingGroup
:ivar mainline: The search results to display in the mainline.
:vartype mainline: ~web_search_client.models.RankingGroup
:ivar sidebar: The search results to display in the sidebar.
:vartype sidebar: ~web_search_client.models.RankingGroup
"""
_validation = {
'pole': {'readonly': True},
'mainline': {'readonly': True},
'sidebar': {'readonly': True},
}
_attribute_map = {
'pole': {'key': 'pole', 'type': 'RankingGroup'},
'mainline': {'key': 'mainline', 'type': 'RankingGroup'},
'sidebar': {'key': 'sidebar', 'type': 'RankingGroup'},
}
def __init__(
self,
**kwargs
):
super(RankingResponse, self).__init__(**kwargs)
self.pole = None
self.mainline = None
self.sidebar = None
class RelatedSearchesRelatedSearchAnswer(SearchResultsAnswer):
"""Defines a list of related queries made by others.
Variables are only populated by the server, and will be ignored when sending a request.
All required parameters must be populated in order to send to Azure.
:param type:
:type type: str
:ivar id: A String identifier.
:vartype id: str
:ivar web_search_url: The URL to Bing's search result for this item.
:vartype web_search_url: str
:ivar follow_up_queries:
:vartype follow_up_queries: list[~web_search_client.models.Query]
:ivar query_context: Defines the query context that Bing used for the request.
:vartype query_context: ~web_search_client.models.QueryContext
:ivar total_estimated_matches: The estimated number of webpages that are relevant to the query.
Use this number along with the count and offset query parameters to page the results.
:vartype total_estimated_matches: long
:ivar is_family_friendly:
:vartype is_family_friendly: bool
:param value: Required. A list of related queries that were made by others.
:type value: list[~web_search_client.models.Query]
"""
_validation = {
'id': {'readonly': True},
'web_search_url': {'readonly': True},
'follow_up_queries': {'readonly': True},
'query_context': {'readonly': True},
'total_estimated_matches': {'readonly': True},
'is_family_friendly': {'readonly': True},
'value': {'required': True},
}
_attribute_map = {
'type': {'key': '_type', 'type': 'str'},
'id': {'key': 'id', 'type': 'str'},
'web_search_url': {'key': 'webSearchUrl', 'type': 'str'},
'follow_up_queries': {'key': 'followUpQueries', 'type': '[Query]'},
'query_context': {'key': 'queryContext', 'type': 'QueryContext'},
'total_estimated_matches': {'key': 'totalEstimatedMatches', 'type': 'long'},
'is_family_friendly': {'key': 'isFamilyFriendly', 'type': 'bool'},
'value': {'key': 'value', 'type': '[Query]'},
}
def __init__(
self,
**kwargs
):
super(RelatedSearchesRelatedSearchAnswer, self).__init__(**kwargs)
self.value = kwargs['value']
class SearchResponse(Response):
"""Defines the top-level object that the response includes when the request succeeds.
Variables are only populated by the server, and will be ignored when sending a request.
:param type:
:type type: str
:ivar id: A String identifier.
:vartype id: str
:ivar web_search_url: The URL to Bing's search result for this item.
:vartype web_search_url: str
:ivar query_context: An object that contains the query string that Bing used for the request.
This object contains the query string as entered by the user. It may also contain an altered
query string that Bing used for the query if the query string contained a spelling mistake.
:vartype query_context: ~web_search_client.models.QueryContext
:ivar web_pages: A list of webpages that are relevant to the search query.
:vartype web_pages: ~web_search_client.models.WebAnswer
:ivar images: A list of images that are relevant to the search query.
:vartype images: ~web_search_client.models.Images
:ivar news: A list of news articles that are relevant to the search query.
:vartype news: ~web_search_client.models.News
:ivar related_searches: A list of related queries made by others.
:vartype related_searches: ~web_search_client.models.RelatedSearchesRelatedSearchAnswer
:ivar spell_suggestions: The query string that likely represents the user's intent.
:vartype spell_suggestions: ~web_search_client.models.SpellSuggestions
:ivar time_zone: The date and time of one or more geographic locations.
:vartype time_zone: ~web_search_client.models.TimeZone
:ivar videos: A list of videos that are relevant to the search query.
:vartype videos: ~web_search_client.models.Videos
:ivar computation: The answer to a math expression or units conversion expression.
:vartype computation: ~web_search_client.models.Computation
:ivar ranking_response: The order that Bing suggests that you display the search results in.
:vartype ranking_response: ~web_search_client.models.RankingResponse
"""
_validation = {
'id': {'readonly': True},
'web_search_url': {'readonly': True},
'query_context': {'readonly': True},
'web_pages': {'readonly': True},
'images': {'readonly': True},
'news': {'readonly': True},
'related_searches': {'readonly': True},
'spell_suggestions': {'readonly': True},
'time_zone': {'readonly': True},
'videos': {'readonly': True},
'computation': {'readonly': True},
'ranking_response': {'readonly': True},
}
_attribute_map = {
'type': {'key': '_type', 'type': 'str'},
'id': {'key': 'id', 'type': 'str'},
'web_search_url': {'key': 'webSearchUrl', 'type': 'str'},
'query_context': {'key': 'queryContext', 'type': 'QueryContext'},
'web_pages': {'key': 'webPages', 'type': 'WebAnswer'},
'images': {'key': 'images', 'type': 'Images'},
'news': {'key': 'news', 'type': 'News'},
'related_searches': {'key': 'relatedSearches', 'type': 'RelatedSearchesRelatedSearchAnswer'},
'spell_suggestions': {'key': 'spellSuggestions', 'type': 'SpellSuggestions'},
'time_zone': {'key': 'timeZone', 'type': 'TimeZone'},
'videos': {'key': 'videos', 'type': 'Videos'},
'computation': {'key': 'computation', 'type': 'Computation'},
'ranking_response': {'key': 'rankingResponse', 'type': 'RankingResponse'},
}
def __init__(
self,
**kwargs
):
super(SearchResponse, self).__init__(**kwargs)
self.query_context = None
self.web_pages = None
self.images = None
self.news = None
self.related_searches = None
self.spell_suggestions = None
self.time_zone = None
self.videos = None
self.computation = None
self.ranking_response = None
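# A sketch (an assumption about caller-side usage, not part of the
# generated client) of honoring RankingResponse when rendering a
# SearchResponse: walk the mainline ranking items and look up each answer
# by its answerType. Plain dicts stand in for deserialized models.

```python
# answerType values name SearchResponse fields (illustrative subset).
ANSWER_FIELDS = {'WebPages': 'web_pages', 'Images': 'images', 'News': 'news'}

def mainline_order(response):
    """Return the mainline answers in the order Bing suggests."""
    items = response['ranking_response']['mainline']['items']
    return [response[ANSWER_FIELDS[item['answer_type']]] for item in items]

demo = {
    'web_pages': {'value': ['page-1', 'page-2']},
    'news': {'value': ['article-1']},
    'ranking_response': {'mainline': {'items': [
        {'answer_type': 'WebPages'},
        {'answer_type': 'News'},
    ]}},
}
ordered = mainline_order(demo)  # web pages answer first, then news
```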
class SpellSuggestions(SearchResultsAnswer):
"""Defines a suggested query string that likely represents the user's intent. The search results include this response if Bing determines that the user may have intended to search for something different. For example, if the user searches for alon brown, Bing may determine that the user likely intended to search for Alton Brown instead (based on past searches by others of Alon Brown).
Variables are only populated by the server, and will be ignored when sending a request.
All required parameters must be populated in order to send to Azure.
:param type:
:type type: str
:ivar id: A String identifier.
:vartype id: str
:ivar web_search_url: The URL to Bing's search result for this item.
:vartype web_search_url: str
:ivar follow_up_queries:
:vartype follow_up_queries: list[~web_search_client.models.Query]
:ivar query_context: Defines the query context that Bing used for the request.
:vartype query_context: ~web_search_client.models.QueryContext
:ivar total_estimated_matches: The estimated number of webpages that are relevant to the query.
Use this number along with the count and offset query parameters to page the results.
:vartype total_estimated_matches: long
:ivar is_family_friendly:
:vartype is_family_friendly: bool
:param value: Required. A list of suggested query strings that may represent the user's
intention. The list contains only one Query object.
:type value: list[~web_search_client.models.Query]
"""
_validation = {
'id': {'readonly': True},
'web_search_url': {'readonly': True},
'follow_up_queries': {'readonly': True},
'query_context': {'readonly': True},
'total_estimated_matches': {'readonly': True},
'is_family_friendly': {'readonly': True},
'value': {'required': True},
}
_attribute_map = {
'type': {'key': '_type', 'type': 'str'},
'id': {'key': 'id', 'type': 'str'},
'web_search_url': {'key': 'webSearchUrl', 'type': 'str'},
'follow_up_queries': {'key': 'followUpQueries', 'type': '[Query]'},
'query_context': {'key': 'queryContext', 'type': 'QueryContext'},
'total_estimated_matches': {'key': 'totalEstimatedMatches', 'type': 'long'},
'is_family_friendly': {'key': 'isFamilyFriendly', 'type': 'bool'},
'value': {'key': 'value', 'type': '[Query]'},
}
def __init__(
self,
**kwargs
):
super(SpellSuggestions, self).__init__(**kwargs)
self.value = kwargs['value']
class StructuredValue(Thing):
"""StructuredValue.
Variables are only populated by the server, and will be ignored when sending a request.
:param type:
:type type: str
:ivar id: A String identifier.
:vartype id: str
:ivar web_search_url: The URL To Bing's search result for this item.
:vartype web_search_url: str
:ivar name: The name of the thing represented by this object.
:vartype name: str
:ivar url: The URL to get more information about the thing represented by this object.
:vartype url: str
:ivar image: Defines an image.
:vartype image: ~web_search_client.models.ImageObject
:ivar description: A short description of the item.
:vartype description: str
:ivar bing_id: An ID that uniquely identifies this item.
:vartype bing_id: str
"""
_validation = {
'id': {'readonly': True},
'web_search_url': {'readonly': True},
'name': {'readonly': True},
'url': {'readonly': True},
'image': {'readonly': True},
'description': {'readonly': True},
'bing_id': {'readonly': True},
}
_attribute_map = {
'type': {'key': '_type', 'type': 'str'},
'id': {'key': 'id', 'type': 'str'},
'web_search_url': {'key': 'webSearchUrl', 'type': 'str'},
'name': {'key': 'name', 'type': 'str'},
'url': {'key': 'url', 'type': 'str'},
'image': {'key': 'image', 'type': 'ImageObject'},
'description': {'key': 'description', 'type': 'str'},
'bing_id': {'key': 'bingId', 'type': 'str'},
}
def __init__(
self,
**kwargs
):
super(StructuredValue, self).__init__(**kwargs)
class TimeZone(SearchResultsAnswer):
"""Defines the data and time of one or more geographic locations.
Variables are only populated by the server, and will be ignored when sending a request.
All required parameters must be populated in order to send to Azure.
:param type:
:type type: str
:ivar id: A String identifier.
:vartype id: str
:ivar web_search_url: The URL To Bing's search result for this item.
:vartype web_search_url: str
:ivar follow_up_queries:
:vartype follow_up_queries: list[~web_search_client.models.Query]
:ivar query_context: Defines the query context that Bing used for the request.
:vartype query_context: ~web_search_client.models.QueryContext
:ivar total_estimated_matches: The estimated number of webpages that are relevant to the query.
Use this number along with the count and offset query parameters to page the results.
:vartype total_estimated_matches: long
:ivar is_family_friendly:
:vartype is_family_friendly: bool
:param primary_city_time: Required. The date and time, in UTC, of the geographic location
specified in the query. If the query specified a specific geographic location (for example, a
city), this object contains the name of the geographic location and the current date and time
of the location, in UTC. If the query specified a general geographic location, such as a state
or country, this object contains the date and time of the primary city or state found in the
specified state or country. If the location contains additional time zones, the otherCityTimes
field contains the date and time of cities or states located in the other time zones.
:type primary_city_time: ~web_search_client.models.TimeZoneInformation
:ivar other_city_times: A list of dates and times of nearby time zones.
:vartype other_city_times: list[~web_search_client.models.TimeZoneInformation]
"""
_validation = {
'id': {'readonly': True},
'web_search_url': {'readonly': True},
'follow_up_queries': {'readonly': True},
'query_context': {'readonly': True},
'total_estimated_matches': {'readonly': True},
'is_family_friendly': {'readonly': True},
'primary_city_time': {'required': True},
'other_city_times': {'readonly': True},
}
_attribute_map = {
'type': {'key': '_type', 'type': 'str'},
'id': {'key': 'id', 'type': 'str'},
'web_search_url': {'key': 'webSearchUrl', 'type': 'str'},
'follow_up_queries': {'key': 'followUpQueries', 'type': '[Query]'},
'query_context': {'key': 'queryContext', 'type': 'QueryContext'},
'total_estimated_matches': {'key': 'totalEstimatedMatches', 'type': 'long'},
'is_family_friendly': {'key': 'isFamilyFriendly', 'type': 'bool'},
'primary_city_time': {'key': 'primaryCityTime', 'type': 'TimeZoneInformation'},
'other_city_times': {'key': 'otherCityTimes', 'type': '[TimeZoneInformation]'},
}
def __init__(
self,
**kwargs
):
super(TimeZone, self).__init__(**kwargs)
self.primary_city_time = kwargs['primary_city_time']
self.other_city_times = None
class TimeZoneInformation(msrest.serialization.Model):
"""Defines a date and time for a geographical location.
All required parameters must be populated in order to send to Azure.
:param location: Required. The name of the geographical location. For example, County; City;
City, State; City, State, Country; or Time Zone.
:type location: str
:param time: Required. The date and time, specified in the form YYYY-MM-DDThh:mm:ss.ssssssZ.
:type time: str
:param utc_offset: Required. The offset from UTC. For example, UTC-7.
:type utc_offset: str
"""
_validation = {
'location': {'required': True},
'time': {'required': True},
'utc_offset': {'required': True},
}
_attribute_map = {
'location': {'key': 'location', 'type': 'str'},
'time': {'key': 'time', 'type': 'str'},
'utc_offset': {'key': 'utcOffset', 'type': 'str'},
}
def __init__(
self,
**kwargs
):
super(TimeZoneInformation, self).__init__(**kwargs)
self.location = kwargs['location']
self.time = kwargs['time']
self.utc_offset = kwargs['utc_offset']
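A standalone sketch (stdlib only, not the SDK itself) of what the models above delegate to ``msrest.serialization.Model.serialize()``: the ``_attribute_map`` tables rename snake_case Python attributes to the camelCase JSON keys Bing expects on the wire. The ``serialize_with_map`` helper and the Seattle example values are illustrative, not part of the generated code.

```python
def serialize_with_map(attrs, attribute_map):
    """Rename keys per attribute_map, skipping unset (None) values."""
    wire = {}
    for attr, meta in attribute_map.items():
        value = attrs.get(attr)
        if value is not None:
            wire[meta['key']] = value
    return wire

# Mirrors TimeZoneInformation._attribute_map above.
tz_map = {
    'location': {'key': 'location', 'type': 'str'},
    'time': {'key': 'time', 'type': 'str'},
    'utc_offset': {'key': 'utcOffset', 'type': 'str'},
}
payload = serialize_with_map(
    {'location': 'Seattle, WA', 'time': '2020-01-01T12:00:00Z', 'utc_offset': 'UTC-8'},
    tz_map,
)
# payload == {'location': 'Seattle, WA', 'time': '2020-01-01T12:00:00Z', 'utcOffset': 'UTC-8'}
```

Note how only ``utc_offset`` changes shape on the wire; the other two keys happen to be identical in both casings.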
class VideoObject(MediaObject):
"""Defines a video object that is relevant to the query.
Variables are only populated by the server, and will be ignored when sending a request.
:param type:
:type type: str
:ivar id: A String identifier.
:vartype id: str
:ivar web_search_url: The URL To Bing's search result for this item.
:vartype web_search_url: str
:ivar name: The name of the thing represented by this object.
:vartype name: str
:ivar url: The URL to get more information about the thing represented by this object.
:vartype url: str
:ivar image: Defines an image.
:vartype image: ~web_search_client.models.ImageObject
:ivar description: A short description of the item.
:vartype description: str
:ivar bing_id: An ID that uniquely identifies this item.
:vartype bing_id: str
:ivar thumbnail_url: The URL to a thumbnail of the item.
:vartype thumbnail_url: str
:ivar provider: The source of the creative work.
:vartype provider: list[~web_search_client.models.Thing]
:ivar text:
:vartype text: str
:ivar content_url: Original URL to retrieve the source (file) for the media object (e.g. the
source URL for the image).
:vartype content_url: str
:ivar host_page_url: URL of the page that hosts the media object.
:vartype host_page_url: str
:ivar width: The width of the source media object, in pixels.
:vartype width: int
:ivar height: The height of the source media object, in pixels.
:vartype height: int
:ivar motion_thumbnail_url:
:vartype motion_thumbnail_url: str
:ivar motion_thumbnail_id:
:vartype motion_thumbnail_id: str
:ivar embed_html:
:vartype embed_html: str
:ivar allow_https_embed:
:vartype allow_https_embed: bool
:ivar view_count:
:vartype view_count: int
:ivar thumbnail: Defines an image.
:vartype thumbnail: ~web_search_client.models.ImageObject
:ivar video_id:
:vartype video_id: str
:ivar allow_mobile_embed:
:vartype allow_mobile_embed: bool
:ivar is_superfresh:
:vartype is_superfresh: bool
"""
_validation = {
'id': {'readonly': True},
'web_search_url': {'readonly': True},
'name': {'readonly': True},
'url': {'readonly': True},
'image': {'readonly': True},
'description': {'readonly': True},
'bing_id': {'readonly': True},
'thumbnail_url': {'readonly': True},
'provider': {'readonly': True},
'text': {'readonly': True},
'content_url': {'readonly': True},
'host_page_url': {'readonly': True},
'width': {'readonly': True},
'height': {'readonly': True},
'motion_thumbnail_url': {'readonly': True},
'motion_thumbnail_id': {'readonly': True},
'embed_html': {'readonly': True},
'allow_https_embed': {'readonly': True},
'view_count': {'readonly': True},
'thumbnail': {'readonly': True},
'video_id': {'readonly': True},
'allow_mobile_embed': {'readonly': True},
'is_superfresh': {'readonly': True},
}
_attribute_map = {
'type': {'key': '_type', 'type': 'str'},
'id': {'key': 'id', 'type': 'str'},
'web_search_url': {'key': 'webSearchUrl', 'type': 'str'},
'name': {'key': 'name', 'type': 'str'},
'url': {'key': 'url', 'type': 'str'},
'image': {'key': 'image', 'type': 'ImageObject'},
'description': {'key': 'description', 'type': 'str'},
'bing_id': {'key': 'bingId', 'type': 'str'},
'thumbnail_url': {'key': 'thumbnailUrl', 'type': 'str'},
'provider': {'key': 'provider', 'type': '[Thing]'},
'text': {'key': 'text', 'type': 'str'},
'content_url': {'key': 'contentUrl', 'type': 'str'},
'host_page_url': {'key': 'hostPageUrl', 'type': 'str'},
'width': {'key': 'width', 'type': 'int'},
'height': {'key': 'height', 'type': 'int'},
'motion_thumbnail_url': {'key': 'motionThumbnailUrl', 'type': 'str'},
'motion_thumbnail_id': {'key': 'motionThumbnailId', 'type': 'str'},
'embed_html': {'key': 'embedHtml', 'type': 'str'},
'allow_https_embed': {'key': 'allowHttpsEmbed', 'type': 'bool'},
'view_count': {'key': 'viewCount', 'type': 'int'},
'thumbnail': {'key': 'thumbnail', 'type': 'ImageObject'},
'video_id': {'key': 'videoId', 'type': 'str'},
'allow_mobile_embed': {'key': 'allowMobileEmbed', 'type': 'bool'},
'is_superfresh': {'key': 'isSuperfresh', 'type': 'bool'},
}
def __init__(
self,
**kwargs
):
super(VideoObject, self).__init__(**kwargs)
self.motion_thumbnail_url = None
self.motion_thumbnail_id = None
self.embed_html = None
self.allow_https_embed = None
self.view_count = None
self.thumbnail = None
self.video_id = None
self.allow_mobile_embed = None
self.is_superfresh = None
class Videos(SearchResultsAnswer):
"""Defines a video answer.
Variables are only populated by the server, and will be ignored when sending a request.
All required parameters must be populated in order to send to Azure.
:param type:
:type type: str
:ivar id: A String identifier.
:vartype id: str
:ivar web_search_url: The URL To Bing's search result for this item.
:vartype web_search_url: str
:ivar follow_up_queries:
:vartype follow_up_queries: list[~web_search_client.models.Query]
:ivar query_context: Defines the query context that Bing used for the request.
:vartype query_context: ~web_search_client.models.QueryContext
:ivar total_estimated_matches: The estimated number of webpages that are relevant to the query.
Use this number along with the count and offset query parameters to page the results.
:vartype total_estimated_matches: long
:ivar is_family_friendly:
:vartype is_family_friendly: bool
:param value: Required. A list of video objects that are relevant to the query.
:type value: list[~web_search_client.models.VideoObject]
:ivar next_offset:
:vartype next_offset: int
:ivar query_expansions:
:vartype query_expansions: list[~web_search_client.models.Query]
:ivar related_searches:
:vartype related_searches: list[~web_search_client.models.Query]
"""
_validation = {
'id': {'readonly': True},
'web_search_url': {'readonly': True},
'follow_up_queries': {'readonly': True},
'query_context': {'readonly': True},
'total_estimated_matches': {'readonly': True},
'is_family_friendly': {'readonly': True},
'value': {'required': True},
'next_offset': {'readonly': True},
'query_expansions': {'readonly': True},
'related_searches': {'readonly': True},
}
_attribute_map = {
'type': {'key': '_type', 'type': 'str'},
'id': {'key': 'id', 'type': 'str'},
'web_search_url': {'key': 'webSearchUrl', 'type': 'str'},
'follow_up_queries': {'key': 'followUpQueries', 'type': '[Query]'},
'query_context': {'key': 'queryContext', 'type': 'QueryContext'},
'total_estimated_matches': {'key': 'totalEstimatedMatches', 'type': 'long'},
'is_family_friendly': {'key': 'isFamilyFriendly', 'type': 'bool'},
'value': {'key': 'value', 'type': '[VideoObject]'},
'next_offset': {'key': 'nextOffset', 'type': 'int'},
'query_expansions': {'key': 'queryExpansions', 'type': '[Query]'},
'related_searches': {'key': 'relatedSearches', 'type': '[Query]'},
}
def __init__(
self,
**kwargs
):
super(Videos, self).__init__(**kwargs)
self.value = kwargs['value']
self.next_offset = None
self.query_expansions = None
self.related_searches = None
class WebAnswer(SearchResultsAnswer):
"""Defines a list of relevant webpage links.
Variables are only populated by the server, and will be ignored when sending a request.
All required parameters must be populated in order to send to Azure.
:param type:
:type type: str
:ivar id: A String identifier.
:vartype id: str
:ivar web_search_url: The URL To Bing's search result for this item.
:vartype web_search_url: str
:ivar follow_up_queries:
:vartype follow_up_queries: list[~web_search_client.models.Query]
:ivar query_context: Defines the query context that Bing used for the request.
:vartype query_context: ~web_search_client.models.QueryContext
:ivar total_estimated_matches: The estimated number of webpages that are relevant to the query.
Use this number along with the count and offset query parameters to page the results.
:vartype total_estimated_matches: long
:ivar is_family_friendly:
:vartype is_family_friendly: bool
:param value: Required. A list of webpages that are relevant to the query.
:type value: list[~web_search_client.models.WebPage]
:ivar some_results_removed: A Boolean value that indicates whether the response excluded some
results from the answer. If Bing excluded some results, the value is true.
:vartype some_results_removed: bool
"""
_validation = {
'id': {'readonly': True},
'web_search_url': {'readonly': True},
'follow_up_queries': {'readonly': True},
'query_context': {'readonly': True},
'total_estimated_matches': {'readonly': True},
'is_family_friendly': {'readonly': True},
'value': {'required': True},
'some_results_removed': {'readonly': True},
}
_attribute_map = {
'type': {'key': '_type', 'type': 'str'},
'id': {'key': 'id', 'type': 'str'},
'web_search_url': {'key': 'webSearchUrl', 'type': 'str'},
'follow_up_queries': {'key': 'followUpQueries', 'type': '[Query]'},
'query_context': {'key': 'queryContext', 'type': 'QueryContext'},
'total_estimated_matches': {'key': 'totalEstimatedMatches', 'type': 'long'},
'is_family_friendly': {'key': 'isFamilyFriendly', 'type': 'bool'},
'value': {'key': 'value', 'type': '[WebPage]'},
'some_results_removed': {'key': 'someResultsRemoved', 'type': 'bool'},
}
def __init__(
self,
**kwargs
):
super(WebAnswer, self).__init__(**kwargs)
self.value = kwargs['value']
self.some_results_removed = None
class WebGrouping(msrest.serialization.Model):
"""WebGrouping.
You probably want to use the sub-classes and not this class directly. No known
sub-classes are currently registered.
All required parameters must be populated in order to send to Azure.
:param web_pages: Required.
:type web_pages: list[~web_search_client.models.WebPage]
:param type: Required. Constant filled by server.
:type type: str
"""
_validation = {
'web_pages': {'required': True},
'type': {'required': True},
}
_attribute_map = {
'web_pages': {'key': 'webPages', 'type': '[WebPage]'},
'type': {'key': '_type', 'type': 'str'},
}
_subtype_map = {
'type': {}
}
def __init__(
self,
**kwargs
):
super(WebGrouping, self).__init__(**kwargs)
self.web_pages = kwargs['web_pages']
self.type = None # type: Optional[str]
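A standalone sketch (stdlib only, not msrest's implementation) of how a ``_subtype_map`` discriminator steers polymorphic deserialization: msrest reads the ``_type`` key from the payload and looks its value up in the base class's map. ``WebGrouping``'s map above is empty, so it always deserializes as ``WebGrouping`` itself; the ``pick_model`` helper and the demo discriminator values below are hypothetical.

```python
def pick_model(payload, base_name, subtype_map):
    """Return the class name to deserialize into, given the wire payload."""
    discriminator = payload.get('_type')
    return subtype_map.get('type', {}).get(discriminator, base_name)

# With an empty map (as for WebGrouping), the base class always wins:
assert pick_model({'_type': 'WebGrouping/Custom'}, 'WebGrouping', {'type': {}}) == 'WebGrouping'

# With a populated map, the discriminator selects the sub-class:
demo_map = {'type': {'News/NewsArticle': 'NewsArticle'}}
assert pick_model({'_type': 'News/NewsArticle'}, 'Answer', demo_map) == 'NewsArticle'
```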
class WebMetaTag(msrest.serialization.Model):
"""Defines a webpage's metadata.
Variables are only populated by the server, and will be ignored when sending a request.
:ivar name: The name of the metadata.
:vartype name: str
:ivar content: The value of the metadata.
:vartype content: str
"""
_validation = {
'name': {'readonly': True},
'content': {'readonly': True},
}
_attribute_map = {
'name': {'key': 'name', 'type': 'str'},
'content': {'key': 'content', 'type': 'str'},
}
def __init__(
self,
**kwargs
):
super(WebMetaTag, self).__init__(**kwargs)
self.name = None
self.content = None
class WebPage(CreativeWork):
"""Defines a webpage that is relevant to the query.
Variables are only populated by the server, and will be ignored when sending a request.
:param type:
:type type: str
:ivar id: A String identifier.
:vartype id: str
:ivar web_search_url: The URL To Bing's search result for this item.
:vartype web_search_url: str
:ivar name: The name of the thing represented by this object.
:vartype name: str
:ivar url: The URL to get more information about the thing represented by this object.
:vartype url: str
:ivar image: Defines an image.
:vartype image: ~web_search_client.models.ImageObject
:ivar description: A short description of the item.
:vartype description: str
:ivar bing_id: An ID that uniquely identifies this item.
:vartype bing_id: str
:ivar thumbnail_url: The URL to a thumbnail of the item.
:vartype thumbnail_url: str
:ivar provider: The source of the creative work.
:vartype provider: list[~web_search_client.models.Thing]
:ivar text:
:vartype text: str
:ivar display_url: The display URL of the webpage. The URL is meant for display purposes only
and is not well formed.
:vartype display_url: str
:ivar snippet: A snippet of text from the webpage that describes its contents.
:vartype snippet: str
:ivar deep_links: A list of links to related content that Bing found in the website that
contains this webpage. The Webpage object in this context includes only the name, url,
urlPingSuffix, and snippet fields.
:vartype deep_links: list[~web_search_client.models.WebPage]
:ivar date_last_crawled: The last time that Bing crawled the webpage. The date is in the form,
YYYY-MM-DDTHH:MM:SS. For example, 2015-04-13T05:23:39.
:vartype date_last_crawled: str
:ivar search_tags: A list of search tags that the webpage owner specified on the webpage. The
API returns only indexed search tags. The name field of the MetaTag object contains the indexed
search tag. Search tags begin with search.* (for example, search.assetId). The content field
contains the tag's value.
:vartype search_tags: list[~web_search_client.models.WebMetaTag]
:ivar primary_image_of_page: Defines an image.
:vartype primary_image_of_page: ~web_search_client.models.ImageObject
"""
_validation = {
'id': {'readonly': True},
'web_search_url': {'readonly': True},
'name': {'readonly': True},
'url': {'readonly': True},
'image': {'readonly': True},
'description': {'readonly': True},
'bing_id': {'readonly': True},
'thumbnail_url': {'readonly': True},
'provider': {'readonly': True},
'text': {'readonly': True},
'display_url': {'readonly': True},
'snippet': {'readonly': True},
'deep_links': {'readonly': True},
'date_last_crawled': {'readonly': True},
'search_tags': {'readonly': True},
'primary_image_of_page': {'readonly': True},
}
_attribute_map = {
'type': {'key': '_type', 'type': 'str'},
'id': {'key': 'id', 'type': 'str'},
'web_search_url': {'key': 'webSearchUrl', 'type': 'str'},
'name': {'key': 'name', 'type': 'str'},
'url': {'key': 'url', 'type': 'str'},
'image': {'key': 'image', 'type': 'ImageObject'},
'description': {'key': 'description', 'type': 'str'},
'bing_id': {'key': 'bingId', 'type': 'str'},
'thumbnail_url': {'key': 'thumbnailUrl', 'type': 'str'},
'provider': {'key': 'provider', 'type': '[Thing]'},
'text': {'key': 'text', 'type': 'str'},
'display_url': {'key': 'displayUrl', 'type': 'str'},
'snippet': {'key': 'snippet', 'type': 'str'},
'deep_links': {'key': 'deepLinks', 'type': '[WebPage]'},
'date_last_crawled': {'key': 'dateLastCrawled', 'type': 'str'},
'search_tags': {'key': 'searchTags', 'type': '[WebMetaTag]'},
'primary_image_of_page': {'key': 'primaryImageOfPage', 'type': 'ImageObject'},
}
def __init__(
self,
**kwargs
):
super(WebPage, self).__init__(**kwargs)
self.display_url = None
self.snippet = None
self.deep_links = None
self.date_last_crawled = None
self.search_tags = None
self.primary_image_of_page = None
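A standalone sketch (stdlib only, not msrest) of how the ``_validation`` tables in the models above can be checked before a request is sent. The real library raises ``msrest.exceptions.ValidationError``; this illustrative ``find_missing_required`` helper just reports the missing required fields, and the demo table mirrors the pattern used by ``WebAnswer._validation``, where readonly fields are server-populated and only ``value`` must be supplied by the caller.

```python
def find_missing_required(data, validation):
    """Return the required field names that are absent or None in data."""
    return [
        field
        for field, rules in validation.items()
        if rules.get('required') and data.get(field) is None
    ]

# Mirrors WebAnswer._validation above: only 'value' is caller-supplied.
demo_validation = {
    'id': {'readonly': True},
    'some_results_removed': {'readonly': True},
    'value': {'required': True},
}
assert find_missing_required({}, demo_validation) == ['value']
assert find_missing_required({'value': []}, demo_validation) == []
```

An empty list still satisfies the requirement: the check distinguishes "not provided" (``None``) from "provided but empty", matching how the constructors above raise ``KeyError`` only when the kwarg is absent.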
| sdk/WebSearch/web_search_client/models/_models.py | 77,975 | Answer.
Variables are only populated by the server, and will be ignored when sending a request.
:param type:
:type type: str
:ivar id: A String identifier.
:vartype id: str
:ivar web_search_url: The URL To Bing's search result for this item.
:vartype web_search_url: str
:ivar follow_up_queries:
:vartype follow_up_queries: list[~web_search_client.models.Query]
Article.
Variables are only populated by the server, and will be ignored when sending a request.
:param type:
:type type: str
:ivar id: A String identifier.
:vartype id: str
:param type:
:type type: str
:ivar id: A String identifier.
:vartype id: str
:ivar web_search_url: The URL To Bing's search result for this item.
:vartype web_search_url: str
:ivar name: The name of the thing represented by this object.
:vartype name: str
:ivar url: The URL to get more information about the thing represented by this object.
:vartype url: str
:ivar image: Defines an image.
:vartype image: ~web_search_client.models.ImageObject
:ivar description: A short description of the item.
:vartype description: str
:ivar bing_id: An ID that uniquely identifies this item.
:vartype bing_id: str
:ivar thumbnail_url: The URL to a thumbnail of the item.
:vartype thumbnail_url: str
:ivar provider: The source of the creative work.
:vartype provider: list[~web_search_client.models.Thing]
:ivar text:
:vartype text: str
:ivar word_count: The number of words in the text of the Article.
:vartype word_count: int
Defines an expression and its answer.
Variables are only populated by the server, and will be ignored when sending a request.
All required parameters must be populated in order to send to Azure.
:param type:
:type type: str
:ivar id: A String identifier.
:vartype id: str
:ivar web_search_url: The URL To Bing's search result for this item.
:vartype web_search_url: str
:ivar follow_up_queries:
:vartype follow_up_queries: list[~web_search_client.models.Query]
:param expression: Required. The math or conversion expression. If the query contains a request
to convert units of measure (for example, meters to feet), this field contains the from units
and value contains the to units. If the query contains a mathematical expression such as 2+2,
this field contains the expression and value contains the answer. Note that mathematical
expressions may be normalized. For example, if the query was sqrt(4^2+8^2), the normalized
expression may be sqrt((4^2)+(8^2)). If the user's query is a math question and the
textDecorations query parameter is set to true, the expression string may include formatting
markers. For example, if the user's query is log(2), the normalized expression includes the
subscript markers. For more information, see Hit Highlighting.
:type expression: str
:param value: Required. The expression's answer.
:type value: str
CreativeWork.
Variables are only populated by the server, and will be ignored when sending a request.
:param type:
:type type: str
:ivar id: A String identifier.
:vartype id: str
:param type:
:type type: str
:ivar id: A String identifier.
:vartype id: str
:ivar web_search_url: The URL To Bing's search result for this item.
:vartype web_search_url: str
:ivar name: The name of the thing represented by this object.
:vartype name: str
:ivar url: The URL to get more information about the thing represented by this object.
:vartype url: str
:ivar image: Defines an image.
:vartype image: ~web_search_client.models.ImageObject
:ivar description: A short description of the item.
:vartype description: str
:ivar bing_id: An ID that uniquely identifies this item.
:vartype bing_id: str
:ivar thumbnail_url: The URL to a thumbnail of the item.
:vartype thumbnail_url: str
:ivar provider: The source of the creative work.
:vartype provider: list[~web_search_client.models.Thing]
:ivar text:
:vartype text: str
Defines the error that occurred.
Variables are only populated by the server, and will be ignored when sending a request.
All required parameters must be populated in order to send to Azure.
:param code: Required. The error code that identifies the category of error. Possible values
include: "None", "ServerError", "InvalidRequest", "RateLimitExceeded", "InvalidAuthorization",
"InsufficientAuthorization". Default value: "None".
:type code: str or ~web_search_client.models.ErrorCode
:ivar sub_code: The error code that further helps to identify the error. Possible values
include: "UnexpectedError", "ResourceError", "NotImplemented", "ParameterMissing",
"ParameterInvalidValue", "HttpNotAllowed", "Blocked", "AuthorizationMissing",
"AuthorizationRedundancy", "AuthorizationDisabled", "AuthorizationExpired".
:vartype sub_code: str or ~web_search_client.models.ErrorSubCode
:param message: Required. A description of the error.
:type message: str
:ivar more_details: A description that provides additional information about the error.
:vartype more_details: str
:ivar parameter: The parameter in the request that caused the error.
:vartype parameter: str
:ivar value: The parameter's value in the request that was not valid.
:vartype value: str
The top-level response that represents a failed request.
Variables are only populated by the server, and will be ignored when sending a request.
All required parameters must be populated in order to send to Azure.
:param type:
:type type: str
:ivar id: A String identifier.
:vartype id: str
:ivar web_search_url: The URL To Bing's search result for this item.
:vartype web_search_url: str
:param errors: Required. A list of errors that describe the reasons why the request failed.
:type errors: list[~web_search_client.models.Error]
Defines the identity of a resource.
Variables are only populated by the server, and will be ignored when sending a request.
:param type:
:type type: str
:ivar id: A String identifier.
:vartype id: str
Defines an image.
Variables are only populated by the server, and will be ignored when sending a request.
:param type:
:type type: str
:ivar id: A String identifier.
:vartype id: str
:ivar web_search_url: The URL To Bing's search result for this item.
:vartype web_search_url: str
:ivar name: The name of the thing represented by this object.
:vartype name: str
:ivar url: The URL to get more information about the thing represented by this object.
:vartype url: str
:ivar image: Defines an image.
:vartype image: ~web_search_client.models.ImageObject
:ivar description: A short description of the item.
:vartype description: str
:ivar bing_id: An ID that uniquely identifies this item.
:vartype bing_id: str
:ivar thumbnail_url: The URL to a thumbnail of the item.
:vartype thumbnail_url: str
:ivar provider: The source of the creative work.
:vartype provider: list[~web_search_client.models.Thing]
:ivar text:
:vartype text: str
:ivar content_url: Original URL to retrieve the source (file) for the media object (e.g the
source URL for the image).
:vartype content_url: str
:ivar host_page_url: URL of the page that hosts the media object.
:vartype host_page_url: str
:ivar width: The width of the source media object, in pixels.
:vartype width: int
:ivar height: The height of the source media object, in pixels.
:vartype height: int
:ivar thumbnail: The URL to a thumbnail of the image.
:vartype thumbnail: ~web_search_client.models.ImageObject
Defines an image answer.
Variables are only populated by the server, and will be ignored when sending a request.
All required parameters must be populated in order to send to Azure.
:param type:
:type type: str
:ivar id: A String identifier.
:vartype id: str
:ivar web_search_url: The URL To Bing's search result for this item.
:vartype web_search_url: str
:ivar follow_up_queries:
:vartype follow_up_queries: list[~web_search_client.models.Query]
:ivar query_context: Defines the query context that Bing used for the request.
:vartype query_context: ~web_search_client.models.QueryContext
:ivar total_estimated_matches: The estimated number of webpages that are relevant to the query.
Use this number along with the count and offset query parameters to page the results.
:vartype total_estimated_matches: long
:ivar is_family_friendly:
:vartype is_family_friendly: bool
:ivar next_offset:
:vartype next_offset: int
:param value: Required. A list of image objects that are relevant to the query. If there are no
results, the List is empty.
:type value: list[~web_search_client.models.ImageObject]
:ivar query_expansions:
:vartype query_expansions: list[~web_search_client.models.Query]
:ivar similar_terms:
:vartype similar_terms: list[~web_search_client.models.Query]
:ivar related_searches:
:vartype related_searches: list[~web_search_client.models.Query]
Intangible.
Variables are only populated by the server, and will be ignored when sending a request.
:param type:
:type type: str
:ivar id: A String identifier.
:vartype id: str
:ivar web_search_url: The URL To Bing's search result for this item.
:vartype web_search_url: str
:ivar name: The name of the thing represented by this object.
:vartype name: str
:ivar url: The URL to get more information about the thing represented by this object.
:vartype url: str
:ivar image: Defines an image.
:vartype image: ~web_search_client.models.ImageObject
:ivar description: A short description of the item.
:vartype description: str
:ivar bing_id: An ID that uniquely identifies this item.
:vartype bing_id: str
MediaObject.
Variables are only populated by the server, and will be ignored when sending a request.
:param type:
:type type: str
:ivar id: A String identifier.
:vartype id: str
:param type:
:type type: str
:ivar id: A String identifier.
:vartype id: str
:ivar web_search_url: The URL To Bing's search result for this item.
:vartype web_search_url: str
:ivar name: The name of the thing represented by this object.
:vartype name: str
:ivar url: The URL to get more information about the thing represented by this object.
:vartype url: str
:ivar image: Defines an image.
:vartype image: ~web_search_client.models.ImageObject
:ivar description: A short description of the item.
:vartype description: str
:ivar bing_id: An ID that uniquely identifies this item.
:vartype bing_id: str
:ivar thumbnail_url: The URL to a thumbnail of the item.
:vartype thumbnail_url: str
:ivar provider: The source of the creative work.
:vartype provider: list[~web_search_client.models.Thing]
:ivar text:
:vartype text: str
:ivar content_url: Original URL to retrieve the source (file) for the media object (e.g the
source URL for the image).
:vartype content_url: str
:ivar host_page_url: URL of the page that hosts the media object.
:vartype host_page_url: str
:ivar width: The width of the source media object, in pixels.
:vartype width: int
:ivar height: The height of the source media object, in pixels.
:vartype height: int
Defines a news answer.
Variables are only populated by the server, and will be ignored when sending a request.
All required parameters must be populated in order to send to Azure.
:param type:
:type type: str
:ivar id: A String identifier.
:vartype id: str
:ivar web_search_url: The URL To Bing's search result for this item.
:vartype web_search_url: str
:ivar follow_up_queries:
:vartype follow_up_queries: list[~web_search_client.models.Query]
:ivar query_context: Defines the query context that Bing used for the request.
:vartype query_context: ~web_search_client.models.QueryContext
:ivar total_estimated_matches: The estimated number of webpages that are relevant to the query.
Use this number along with the count and offset query parameters to page the results.
:vartype total_estimated_matches: long
:ivar is_family_friendly:
:vartype is_family_friendly: bool
:param value: Required. An array of NewsArticle objects that contain information about news
articles that are relevant to the query. If there are no results to return for the request, the
array is empty.
:type value: list[~web_search_client.models.Article]
:ivar location:
:vartype location: str
Defines a news article.
Variables are only populated by the server, and will be ignored when sending a request.
:param type:
:type type: str
:ivar id: A String identifier.
:vartype id: str
:param type:
:type type: str
:ivar id: A String identifier.
:vartype id: str
:ivar web_search_url: The URL To Bing's search result for this item.
:vartype web_search_url: str
:ivar name: The name of the thing represented by this object.
:vartype name: str
:ivar url: The URL to get more information about the thing represented by this object.
:vartype url: str
:ivar image: Defines an image.
:vartype image: ~web_search_client.models.ImageObject
:ivar description: A short description of the item.
:vartype description: str
:ivar bing_id: An ID that uniquely identifies this item.
:vartype bing_id: str
:ivar thumbnail_url: The URL to a thumbnail of the item.
:vartype thumbnail_url: str
:ivar provider: The source of the creative work.
:vartype provider: list[~web_search_client.models.Thing]
:ivar text:
:vartype text: str
:ivar word_count: The number of words in the text of the Article.
:vartype word_count: int
Defines a local entity answer.
Variables are only populated by the server, and will be ignored when sending a request.
All required parameters must be populated in order to send to Azure.
:param type:
:type type: str
:ivar id: A String identifier.
:vartype id: str
:ivar web_search_url: The URL To Bing's search result for this item.
:vartype web_search_url: str
:ivar follow_up_queries:
:vartype follow_up_queries: list[~web_search_client.models.Query]
:ivar query_context: Defines the query context that Bing used for the request.
:vartype query_context: ~web_search_client.models.QueryContext
:ivar total_estimated_matches: The estimated number of webpages that are relevant to the query.
Use this number along with the count and offset query parameters to page the results.
:vartype total_estimated_matches: long
:ivar is_family_friendly:
:vartype is_family_friendly: bool
:param value: Required. A list of local entities, such as restaurants or hotels.
:type value: list[~web_search_client.models.Thing]
Defines a search query.
Variables are only populated by the server, and will be ignored when sending a request.
All required parameters must be populated in order to send to Azure.
:param text: Required. The query string. Use this string as the query term in a new search
request.
:type text: str
:ivar display_text: The display version of the query term. This version of the query term may
contain special characters that highlight the search term found in the query string. The string
contains the highlighting characters only if the query enabled hit highlighting.
:vartype display_text: str
:ivar web_search_url: The URL that takes the user to the Bing search results page for the
query. Only related search results include this field.
:vartype web_search_url: str
:ivar search_link:
:vartype search_link: str
:ivar thumbnail: Defines an image.
:vartype thumbnail: ~web_search_client.models.ImageObject
Defines the query context that Bing used for the request.
Variables are only populated by the server, and will be ignored when sending a request.
All required parameters must be populated in order to send to Azure.
:param original_query: Required. The query string as specified in the request.
:type original_query: str
:ivar altered_query: The query string used by Bing to perform the query. Bing uses the altered
query string if the original query string contained spelling mistakes. For example, if the
query string is "saling downwind", the altered query string will be "sailing downwind". This
field is included only if the original query string contains a spelling mistake.
:vartype altered_query: str
:ivar alteration_override_query: The query string to use to force Bing to use the original
string. For example, if the query string is "saling downwind", the override query string will
be "+saling downwind". Remember to encode the query string which results in
"%2Bsaling+downwind". This field is included only if the original query string contains a
spelling mistake.
:vartype alteration_override_query: str
:ivar adult_intent: A Boolean value that indicates whether the specified query has adult
intent. The value is true if the query has adult intent; otherwise, false.
:vartype adult_intent: bool
:ivar ask_user_for_location: A Boolean value that indicates whether Bing requires the user's
location to provide accurate results. If you specified the user's location by using the
X-MSEdge-ClientIP and X-Search-Location headers, you can ignore this field. For location aware
queries, such as "today's weather" or "restaurants near me" that need the user's location to
provide accurate results, this field is set to true. For location aware queries that include
the location (for example, "Seattle weather"), this field is set to false. This field is also
set to false for queries that are not location aware, such as "best sellers".
:vartype ask_user_for_location: bool
:ivar is_transactional:
:vartype is_transactional: bool
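The spell-correction fields above (``altered_query``, ``alteration_override_query``) are populated only when Bing corrected the original query string. A minimal sketch of how a client might surface them to a user; the dataclass here is a hypothetical stand-in for the generated ``QueryContext`` model, not the client's actual type:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class QueryContext:
    """Minimal stand-in for web_search_client.models.QueryContext."""
    original_query: str
    altered_query: Optional[str] = None
    alteration_override_query: Optional[str] = None

def describe_alteration(ctx: QueryContext) -> str:
    """Return a user-facing note when Bing spell-corrected the query."""
    if ctx.altered_query is None:
        return f'Results for "{ctx.original_query}"'
    # altered_query is present only when the original query contained a
    # spelling mistake; alteration_override_query can be resubmitted to
    # force Bing to use the original string.
    return (f'Showing results for "{ctx.altered_query}". '
            f'Search instead for "{ctx.original_query}" '
            f'using the override query "{ctx.alteration_override_query}".')

ctx = QueryContext(
    original_query="saling downwind",
    altered_query="sailing downwind",
    alteration_override_query="+saling downwind",
)
print(describe_alteration(ctx))
```

The override string must be URL-encoded when resubmitted (``%2Bsaling+downwind``), as the field description notes.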
Defines a search results group, such as mainline.
All required parameters must be populated in order to send to Azure.
:param items: Required. A list of search result items to display in the group.
:type items: list[~web_search_client.models.RankingItem]
Defines a search result item to display.
Variables are only populated by the server, and will be ignored when sending a request.
All required parameters must be populated in order to send to Azure.
:param answer_type: Required. The answer that contains the item to display. Use the type to
find the answer in the SearchResponse object. The type is the name of a SearchResponse field.
Possible values include: "WebPages", "Images", "SpellSuggestions", "News", "RelatedSearches",
"Videos", "Computation", "TimeZone".
:type answer_type: str or ~web_search_client.models.AnswerType
:ivar result_index: A zero-based index of the item in the answer. If the item does not include
this field, display all items in the answer. For example, display all news articles in the News
answer.
:vartype result_index: int
:ivar value: The ID that identifies either an answer to display or an item of an answer to
display. If the ID identifies an answer, display all items of the answer.
:vartype value: ~web_search_client.models.Identifiable
:ivar html_index:
:vartype html_index: int
:ivar textual_index:
:vartype textual_index: int
:ivar screenshot_index:
:vartype screenshot_index: int
Defines where on the search results page content should be placed and in what order.
Variables are only populated by the server, and will be ignored when sending a request.
:ivar pole: The search results that should be afforded the most visible treatment (for example,
displayed above the mainline and sidebar).
:vartype pole: ~web_search_client.models.RankingGroup
:ivar mainline: The search results to display in the mainline.
:vartype mainline: ~web_search_client.models.RankingGroup
:ivar sidebar: The search results to display in the sidebar.
:vartype sidebar: ~web_search_client.models.RankingGroup
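The ``RankingItem`` semantics above (``answer_type`` names a field of the search response; a missing ``result_index`` means "display every item of that answer") can be sketched as a small renderer. The dict shapes and key names below are illustrative assumptions, not the generated client's types:

```python
def ordered_results(response: dict, ranking_items: list) -> list:
    """Flatten answers into display order following ranking items.

    `response` maps answer names (e.g. "WebPages") to lists of result
    dicts; each ranking item carries "answerType" and an optional
    "resultIndex".  A missing resultIndex means "display every item of
    the answer", per the RankingItem description above.
    """
    out = []
    for item in ranking_items:
        answer = response.get(item["answerType"], [])
        if "resultIndex" in item:
            out.append(answer[item["resultIndex"]])
        else:
            out.extend(answer)
    return out

response = {
    "WebPages": [{"name": "page A"}, {"name": "page B"}],
    "News": [{"name": "article 1"}],
}
mainline = [
    {"answerType": "News"},                       # all news articles first
    {"answerType": "WebPages", "resultIndex": 1}, # then one specific webpage
]
print([r["name"] for r in ordered_results(response, mainline)])
# → ['article 1', 'page B']
```

A real renderer would walk ``pole``, ``mainline``, and ``sidebar`` from the ``RankingResponse`` in that order of visual prominence.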
Defines a list of related queries made by others.
Variables are only populated by the server, and will be ignored when sending a request.
All required parameters must be populated in order to send to Azure.
:param type:
:type type: str
:ivar id: A String identifier.
:vartype id: str
:ivar web_search_url: The URL To Bing's search result for this item.
:vartype web_search_url: str
:ivar follow_up_queries:
:vartype follow_up_queries: list[~web_search_client.models.Query]
:ivar query_context: Defines the query context that Bing used for the request.
:vartype query_context: ~web_search_client.models.QueryContext
:ivar total_estimated_matches: The estimated number of webpages that are relevant to the query.
Use this number along with the count and offset query parameters to page the results.
:vartype total_estimated_matches: long
:ivar is_family_friendly:
:vartype is_family_friendly: bool
:param value: Required. A list of related queries that were made by others.
:type value: list[~web_search_client.models.Query]
Defines a response. All schemas that could be returned at the root of a response should inherit from this.
Variables are only populated by the server, and will be ignored when sending a request.
:param type:
:type type: str
:ivar id: A String identifier.
:vartype id: str
:ivar web_search_url: The URL To Bing's search result for this item.
:vartype web_search_url: str
ResponseBase.
:param type:
:type type: str
Defines the top-level object that the response includes when the request succeeds.
Variables are only populated by the server, and will be ignored when sending a request.
:param type:
:type type: str
:ivar id: A String identifier.
:vartype id: str
:ivar web_search_url: The URL To Bing's search result for this item.
:vartype web_search_url: str
:ivar query_context: An object that contains the query string that Bing used for the request.
This object contains the query string as entered by the user. It may also contain an altered
query string that Bing used for the query if the query string contained a spelling mistake.
:vartype query_context: ~web_search_client.models.QueryContext
:ivar web_pages: A list of webpages that are relevant to the search query.
:vartype web_pages: ~web_search_client.models.WebAnswer
:ivar images: A list of images that are relevant to the search query.
:vartype images: ~web_search_client.models.Images
:ivar news: A list of news articles that are relevant to the search query.
:vartype news: ~web_search_client.models.News
:ivar related_searches: A list of related queries made by others.
:vartype related_searches: ~web_search_client.models.RelatedSearchesRelatedSearchAnswer
:ivar spell_suggestions: The query string that likely represents the user's intent.
:vartype spell_suggestions: ~web_search_client.models.SpellSuggestions
:ivar time_zone: The date and time of one or more geographic locations.
:vartype time_zone: ~web_search_client.models.TimeZone
:ivar videos: A list of videos that are relevant to the search query.
:vartype videos: ~web_search_client.models.Videos
:ivar computation: The answer to a math expression or units conversion expression.
:vartype computation: ~web_search_client.models.Computation
:ivar ranking_response: The order that Bing suggests that you display the search results in.
:vartype ranking_response: ~web_search_client.models.RankingResponse
SearchResultsAnswer.
Variables are only populated by the server, and will be ignored when sending a request.
:param type:
:type type: str
:ivar id: A String identifier.
:vartype id: str
:ivar web_search_url: The URL To Bing's search result for this item.
:vartype web_search_url: str
:ivar follow_up_queries:
:vartype follow_up_queries: list[~web_search_client.models.Query]
:ivar query_context: Defines the query context that Bing used for the request.
:vartype query_context: ~web_search_client.models.QueryContext
:ivar total_estimated_matches: The estimated number of webpages that are relevant to the query.
Use this number along with the count and offset query parameters to page the results.
:vartype total_estimated_matches: long
:ivar is_family_friendly:
:vartype is_family_friendly: bool
Defines a suggested query string that likely represents the user's intent. The search results include this response if Bing determines that the user may have intended to search for something different. For example, if the user searches for alon brown, Bing may determine that the user likely intended to search for Alton Brown instead (based on past searches by others of Alon Brown).
Variables are only populated by the server, and will be ignored when sending a request.
All required parameters must be populated in order to send to Azure.
:param type:
:type type: str
:ivar id: A String identifier.
:vartype id: str
:ivar web_search_url: The URL To Bing's search result for this item.
:vartype web_search_url: str
:ivar follow_up_queries:
:vartype follow_up_queries: list[~web_search_client.models.Query]
:ivar query_context: Defines the query context that Bing used for the request.
:vartype query_context: ~web_search_client.models.QueryContext
:ivar total_estimated_matches: The estimated number of webpages that are relevant to the query.
Use this number along with the count and offset query parameters to page the results.
:vartype total_estimated_matches: long
:ivar is_family_friendly:
:vartype is_family_friendly: bool
:param value: Required. A list of suggested query strings that may represent the user's
intention. The list contains only one Query object.
:type value: list[~web_search_client.models.Query]
StructuredValue.
Variables are only populated by the server, and will be ignored when sending a request.
:param type:
:type type: str
:ivar id: A String identifier.
:vartype id: str
:ivar web_search_url: The URL To Bing's search result for this item.
:vartype web_search_url: str
:ivar name: The name of the thing represented by this object.
:vartype name: str
:ivar url: The URL to get more information about the thing represented by this object.
:vartype url: str
:ivar image: Defines an image.
:vartype image: ~web_search_client.models.ImageObject
:ivar description: A short description of the item.
:vartype description: str
:ivar bing_id: An ID that uniquely identifies this item.
:vartype bing_id: str
Thing.
Variables are only populated by the server, and will be ignored when sending a request.
:param type:
:type type: str
:ivar id: A String identifier.
:vartype id: str
:ivar web_search_url: The URL To Bing's search result for this item.
:vartype web_search_url: str
:ivar name: The name of the thing represented by this object.
:vartype name: str
:ivar url: The URL to get more information about the thing represented by this object.
:vartype url: str
:ivar image: Defines an image.
:vartype image: ~web_search_client.models.ImageObject
:ivar description: A short description of the item.
:vartype description: str
:ivar bing_id: An ID that uniquely identifies this item.
:vartype bing_id: str
Defines the date and time of one or more geographic locations.
Variables are only populated by the server, and will be ignored when sending a request.
All required parameters must be populated in order to send to Azure.
:param type:
:type type: str
:ivar id: A String identifier.
:vartype id: str
:ivar web_search_url: The URL To Bing's search result for this item.
:vartype web_search_url: str
:ivar follow_up_queries:
:vartype follow_up_queries: list[~web_search_client.models.Query]
:ivar query_context: Defines the query context that Bing used for the request.
:vartype query_context: ~web_search_client.models.QueryContext
:ivar total_estimated_matches: The estimated number of webpages that are relevant to the query.
Use this number along with the count and offset query parameters to page the results.
:vartype total_estimated_matches: long
:ivar is_family_friendly:
:vartype is_family_friendly: bool
:param primary_city_time: Required. The date and time, in UTC, of the geographic location
specified in the query. If the query specified a specific geographic location (for example, a
city), this object contains the name of the geographic location and the current date and time
of the location, in UTC. If the query specified a general geographic location, such as a state
or country, this object contains the date and time of the primary city or state found in the
specified state or country. If the location contains additional time zones, the otherCityTimes
field contains the date and time of cities or states located in the other time zones.
:type primary_city_time: ~web_search_client.models.TimeZoneInformation
:ivar other_city_times: A list of dates and times of nearby time zones.
:vartype other_city_times: list[~web_search_client.models.TimeZoneInformation]
Defines a date and time for a geographical location.
All required parameters must be populated in order to send to Azure.
:param location: Required. The name of the geographical location. For example, County; City;
City, State; City, State, Country; or Time Zone.
:type location: str
:param time: Required. The date and time specified in the form YYYY-MM-DDThh:mm:ss.ssssssZ.
:type time: str
:param utc_offset: Required. The offset from UTC. For example, UTC-7.
:type utc_offset: str
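Assuming the ``time`` field follows the stated ``YYYY-MM-DDThh:mm:ss.ssssssZ`` form (fractional-second precision is an assumption here, not guaranteed by the description), it can be parsed into a timezone-aware datetime:

```python
from datetime import datetime, timezone

def parse_timezone_time(time_str: str) -> datetime:
    """Parse a TimeZoneInformation.time value such as
    '2020-05-01T12:34:56.123456Z' into an aware UTC datetime."""
    # Strip the trailing 'Z' and attach UTC explicitly so the result is
    # timezone-aware on any Python 3 version.
    naive = datetime.strptime(time_str.rstrip("Z"), "%Y-%m-%dT%H:%M:%S.%f")
    return naive.replace(tzinfo=timezone.utc)

dt = parse_timezone_time("2020-05-01T12:34:56.123456Z")
print(dt.isoformat())
# → 2020-05-01T12:34:56.123456+00:00
```

The ``utc_offset`` string (e.g. ``UTC-7``) would still need separate handling to localize the UTC value for display.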
Defines a video object that is relevant to the query.
Variables are only populated by the server, and will be ignored when sending a request.
:param type:
:type type: str
:ivar id: A String identifier.
:vartype id: str
:param type:
:type type: str
:ivar id: A String identifier.
:vartype id: str
:ivar web_search_url: The URL To Bing's search result for this item.
:vartype web_search_url: str
:ivar name: The name of the thing represented by this object.
:vartype name: str
:ivar url: The URL to get more information about the thing represented by this object.
:vartype url: str
:ivar image: Defines an image.
:vartype image: ~web_search_client.models.ImageObject
:ivar description: A short description of the item.
:vartype description: str
:ivar bing_id: An ID that uniquely identifies this item.
:vartype bing_id: str
:ivar thumbnail_url: The URL to a thumbnail of the item.
:vartype thumbnail_url: str
:ivar provider: The source of the creative work.
:vartype provider: list[~web_search_client.models.Thing]
:ivar text:
:vartype text: str
:ivar content_url: Original URL to retrieve the source (file) for the media object (e.g. the
 source URL for the image).
:vartype content_url: str
:ivar host_page_url: URL of the page that hosts the media object.
:vartype host_page_url: str
:ivar width: The width of the source media object, in pixels.
:vartype width: int
:ivar height: The height of the source media object, in pixels.
:vartype height: int
:ivar motion_thumbnail_url:
:vartype motion_thumbnail_url: str
:ivar motion_thumbnail_id:
:vartype motion_thumbnail_id: str
:ivar embed_html:
:vartype embed_html: str
:ivar allow_https_embed:
:vartype allow_https_embed: bool
:ivar view_count:
:vartype view_count: int
:ivar thumbnail: Defines an image.
:vartype thumbnail: ~web_search_client.models.ImageObject
:ivar video_id:
:vartype video_id: str
:ivar allow_mobile_embed:
:vartype allow_mobile_embed: bool
:ivar is_superfresh:
:vartype is_superfresh: bool
Defines a video answer.
Variables are only populated by the server, and will be ignored when sending a request.
All required parameters must be populated in order to send to Azure.
:param type:
:type type: str
:ivar id: A String identifier.
:vartype id: str
:ivar web_search_url: The URL To Bing's search result for this item.
:vartype web_search_url: str
:ivar follow_up_queries:
:vartype follow_up_queries: list[~web_search_client.models.Query]
:ivar query_context: Defines the query context that Bing used for the request.
:vartype query_context: ~web_search_client.models.QueryContext
:ivar total_estimated_matches: The estimated number of webpages that are relevant to the query.
Use this number along with the count and offset query parameters to page the results.
:vartype total_estimated_matches: long
:ivar is_family_friendly:
:vartype is_family_friendly: bool
:param value: Required. A list of video objects that are relevant to the query.
:type value: list[~web_search_client.models.VideoObject]
:ivar next_offset:
:vartype next_offset: int
:ivar query_expansions:
:vartype query_expansions: list[~web_search_client.models.Query]
:ivar related_searches:
:vartype related_searches: list[~web_search_client.models.Query]
Defines a list of relevant webpage links.
Variables are only populated by the server, and will be ignored when sending a request.
All required parameters must be populated in order to send to Azure.
:param type:
:type type: str
:ivar id: A String identifier.
:vartype id: str
:ivar web_search_url: The URL To Bing's search result for this item.
:vartype web_search_url: str
:ivar follow_up_queries:
:vartype follow_up_queries: list[~web_search_client.models.Query]
:ivar query_context: Defines the query context that Bing used for the request.
:vartype query_context: ~web_search_client.models.QueryContext
:ivar total_estimated_matches: The estimated number of webpages that are relevant to the query.
Use this number along with the count and offset query parameters to page the results.
:vartype total_estimated_matches: long
:ivar is_family_friendly:
:vartype is_family_friendly: bool
:param value: Required. A list of webpages that are relevant to the query.
:type value: list[~web_search_client.models.WebPage]
:ivar some_results_removed: A Boolean value that indicates whether the response excluded some
results from the answer. If Bing excluded some results, the value is true.
:vartype some_results_removed: bool
WebGrouping.
You probably want to use the sub-classes and not this class directly. Known
sub-classes are: .
All required parameters must be populated in order to send to Azure.
:param web_pages: Required.
:type web_pages: list[~web_search_client.models.WebPage]
:param type: Required. Constant filled by server.
:type type: str
Defines a webpage's metadata.
Variables are only populated by the server, and will be ignored when sending a request.
:ivar name: The name of the metadata.
:vartype name: str
:ivar content: The metadata value.
:vartype content: str
Defines a webpage that is relevant to the query.
Variables are only populated by the server, and will be ignored when sending a request.
:param type:
:type type: str
:ivar id: A String identifier.
:vartype id: str
:ivar web_search_url: The URL To Bing's search result for this item.
:vartype web_search_url: str
:ivar name: The name of the thing represented by this object.
:vartype name: str
:ivar url: The URL to get more information about the thing represented by this object.
:vartype url: str
:ivar image: Defines an image.
:vartype image: ~web_search_client.models.ImageObject
:ivar description: A short description of the item.
:vartype description: str
:ivar bing_id: An ID that uniquely identifies this item.
:vartype bing_id: str
:ivar thumbnail_url: The URL to a thumbnail of the item.
:vartype thumbnail_url: str
:ivar provider: The source of the creative work.
:vartype provider: list[~web_search_client.models.Thing]
:ivar text:
:vartype text: str
:ivar display_url: The display URL of the webpage. The URL is meant for display purposes only
and is not well formed.
:vartype display_url: str
:ivar snippet: A snippet of text from the webpage that describes its contents.
:vartype snippet: str
:ivar deep_links: A list of links to related content that Bing found in the website that
contains this webpage. The Webpage object in this context includes only the name, url,
urlPingSuffix, and snippet fields.
:vartype deep_links: list[~web_search_client.models.WebPage]
:ivar date_last_crawled: The last time that Bing crawled the webpage. The date is in the form,
YYYY-MM-DDTHH:MM:SS. For example, 2015-04-13T05:23:39.
:vartype date_last_crawled: str
:ivar search_tags: A list of search tags that the webpage owner specified on the webpage. The
API returns only indexed search tags. The name field of the MetaTag object contains the indexed
search tag. Search tags begin with search.* (for example, search.assetId). The content field
contains the tag's value.
:vartype search_tags: list[~web_search_client.models.WebMetaTag]
:ivar primary_image_of_page: Defines an image.
:vartype primary_image_of_page: ~web_search_client.models.ImageObject
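The ``search_tags`` contract above (only indexed ``search.*`` tags are returned, with the tag name in ``name`` and its value in ``content``) can be sketched as a simple filter. The dict-based tag shape is an assumption for illustration, standing in for the generated ``WebMetaTag`` model:

```python
def indexed_search_tags(search_tags: list) -> dict:
    """Collect the search.* meta tags from a WebPage.search_tags list.

    Each entry is assumed to be a dict with "name" (e.g. "search.assetId")
    and "content" (the tag's value), mirroring the WebMetaTag fields above.
    """
    return {
        tag["name"]: tag["content"]
        for tag in search_tags
        if tag.get("name", "").startswith("search.")
    }

tags = [
    {"name": "search.assetId", "content": "12345"},
    {"name": "description", "content": "not an indexed search tag"},
]
print(indexed_search_tags(tags))
# → {'search.assetId': '12345'}
```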
coding=utf-8 -------------------------------------------------------------------------- Code generated by Microsoft (R) AutoRest Code Generator (autorest: 3.0.6320, generator: {generator}) Changes may cause incorrect behavior and will be lost if the code is regenerated. -------------------------------------------------------------------------- type: Optional[str] | 35,836 | en | 0.642944 |
# Copyright 2013-2020 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
from spack import *
class PyPytestRunner(PythonPackage):
"""Invoke py.test as distutils command with dependency resolution."""
homepage = "https://github.com/pytest-dev/pytest-runner"
url = "https://pypi.io/packages/source/p/pytest-runner/pytest-runner-5.1.tar.gz"
version('5.1', sha256='25a013c8d84f0ca60bb01bd11913a3bcab420f601f0f236de4423074af656e7a')
version('2.11.1', sha256='983a31eab45e375240e250161a556163bc8d250edaba97960909338c273a89b3')
depends_on('py-setuptools', type='build')
depends_on('py-setuptools-scm@1.15:', type='build')
| var/spack/repos/builtin/packages/py-pytest-runner/package.py | 781 | Invoke py.test as distutils command with dependency resolution.
Copyright 2013-2020 Lawrence Livermore National Security, LLC and other Spack Project Developers. See the top-level COPYRIGHT file for details. SPDX-License-Identifier: (Apache-2.0 OR MIT) | 254 | en | 0.734105 |
"""
OpenSpace
Copyright (c) 2014-2018
Permission is hereby granted, free of charge, to any person obtaining a copy of this
software and associated documentation files (the "Software"), to deal in the Software
without restriction, including without limitation the rights to use, copy, modify,
merge, publish, distribute, sublicense, and/or sell copies of the Software, and to
permit persons to whom the Software is furnished to do so, subject to the following
conditions:
The above copyright notice and this permission notice shall be included in all copies
or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED,
INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A
PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT
HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF
CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE
OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
This script traverses the file tree of OpenSpace and will check all files' include
guards for correctness. At the moment this includes:
* Correctness (file has #ifndef, #define, and #endif lines)
* Equality (using the same name for the #ifdef and #define)
* Styling
* no empty line between #ifndef and #define lines
* Empty lines before and after #ifndef #define block
* Files end with an empty line
* Copyright header is correctly indented
* Include guard correctly uses the filename
* Include guard is all upper case
* Correct usage of the name in the final comment of the file
* Correct year of copyright notice
* Naming convention
* OpenSpace include guards start with OPENSPACE, Ghoul with GHOUL,
module includes have the module name in it
* The correct submodule is used
* Checking for duplicates between all files
* Checking that no file includes glm header directly
* Checking whether any files starts with the UTF-8 Byte-order mark
* Checking whether a file as empty-only lines
* Checking whether the default assert macros are used anywhere instead of the
ghoul_assert macro
* Checking whether there are TABs in the file
If this script is executed from the base directory of OpenSpace, no arguments need to
be passed, otherwise the first and only argument has to point to the base directory.
Thus, the default value of the first argument is '.'
"""
import fnmatch
import glob
import os
import re
import sys
current_year = '2018'
is_strict_mode = False
is_silent_mode = False
def get_ifndef_symbol(lines):
index = [i for i,s in enumerate(lines) if '#ifndef ' in s]
if len(index) == 0:
return '', -1
result = re.search('#ifndef (.*)\n', lines[index[0]])
return result.group(1), index[0]
def get_define_symbol(lines):
index = [i for i,s in enumerate(lines) if '#define ' in s]
if len(index) == 0:
return '', -1
result = re.search('#define (.*)\n', lines[index[0]])
return result.group(1), index[0]
def check_correctness(lines):
ifndef_symbol, line_number = get_ifndef_symbol(lines)
if line_number == -1:
return 'No #ifndef in file'
define_symbol, line_number = get_define_symbol(lines)
if line_number == -1:
return 'No #define in file'
index = [i for i,s in enumerate(lines) if '#endif' in s]
if len(index) == 0:
return 'No #endif in file'
return ''
def check_equality(lines):
ifndef, _ = get_ifndef_symbol(lines)
define, _ = get_define_symbol(lines)
if ifndef == define:
return ''
else:
return ifndef + ' ' + define
def check_styling(lines):
ifndef_symbol, ifndef_line = get_ifndef_symbol(lines)
_, define_line = get_define_symbol(lines)
if abs(ifndef_line - define_line) != 1:
return '#ifndef and #define lines are not subsequent'
if lines[ifndef_line - 1].strip() != '':
return 'Preceding line is not empty'
if lines[define_line + 1].strip() != '':
return 'Following line is not empty'
if not lines[-1][-1] in ['\n', '\r']:
return 'Last line must end with a newline'
for l in lines[2:23]:
if l[0] != ' ':
return 'Copyright header must be indented'
if ifndef_symbol != ifndef_symbol.upper():
return 'Include guard is not all upper case'
return ''
def check_styling_filename(lines, filename):
ifndef_symbol, _ = get_ifndef_symbol(lines)
file = os.path.splitext(os.path.basename(filename))[0].upper()
if not (file in ifndef_symbol or file in ifndef_symbol.replace('_', '')):
return 'Malformed include guard: ' + ifndef_symbol + ' || ' + file
return ''
def check_comment(lines):
ifndef_symbol, _ = get_ifndef_symbol(lines)
index = [i for i,s in enumerate(lines) if '#endif' in s]
endif_line = lines[index[-1]].strip()
if endif_line != '#endif // ' + ifndef_symbol:
print(ifndef_symbol)
print(endif_line)
return '#endif line is not correctly formatted'
else:
return ''
def check_copyright(lines):
index = [i for i,s in enumerate(lines[0:23]) if 'Copyright' in s]
if len(index) == 0:
return 'No copyright header found'
beginning_string = ' * Copyright (c) 2012-'
# * Copyright (c) 2014-
year = lines[index[0]][len(beginning_string) : len(beginning_string) + 4]
if lines[index[0] + 1][0] != ' ':
return 'Copyright header is not correctly indented'
if year != current_year:
return 'Out of date copyright notice ' + year + ' || ' + current_year
return ''
def check_byte_order_mark_character(lines):
c = lines[0][0]
if c == 'ï':
return 'File contains UTF-8 byte mark order character'
return ''
def check_naming_convention_component(lines, component):
ifndef_symbol, _ = get_ifndef_symbol(lines)
component_part = ifndef_symbol[2:2 + len(component)]
if component_part != component.upper():
return '#ifndef naming convention broken: ' + ifndef_symbol + ' || ' + component.upper()
else:
return ''
def check_naming_convention_subcomponent(lines, component, file):
ifndef_symbol, _ = get_ifndef_symbol(lines)
if component == "ghoul" or component == "openspace_core":
return ''
subcomponent_part = ifndef_symbol[2 + len(component) + 1 :]
subcomponent_part = subcomponent_part[: subcomponent_part.find('_')]
path_part = file.split('/')[1]
second_path_part = file.split('/')[2]
if (path_part.upper() != subcomponent_part) and (second_path_part.upper() != subcomponent_part):
return 'Subcomponent naming convention broken: ' + ifndef_symbol
else:
return ''
def check_duplicates(lines, previousSymbols):
ifndef_symbol, _ = get_ifndef_symbol(lines)
if ifndef_symbol in previousSymbols:
return False, ifndef_symbol
else:
return True, ifndef_symbol
def check_glm_header(lines, file):
Allowed_Files = [
'ghoul/glm.h'
]
for f in Allowed_Files:
if f in file.replace('\\', '/'):
return ''
index = [i for i,s in enumerate(lines)
if '#include <glm/glm.hpp>' in s or
'#include "glm/glm.hpp>"' in s]
if len(index) > 0:
return 'File used wrong glm include. Use "#include <ghoul/glm.h>" instead'
else:
return ''
def check_core_dependency(lines, component):
if component != "openspace_core":
return ''
index = [i for i,s in enumerate(lines) if 'OPENSPACE_MODULE_' in s]
if len(index) > 0:
return lines[index[0]][:-1]
else:
return ''
def check_using_namespace(lines):
index = [i for i,s in enumerate(lines) if "using namespace" in s.strip()]
if len(index) > 0:
return lines[index[0]]
else:
return ''
def check_end_of_line(lines):
if lines[-1][-1] != '\n':
return lines[-1][-1]
else:
return ''
def check_empty_only_line(lines):
# Disable this check in non-strict mode
if not is_strict_mode:
return ''
index = [i + 1 for i, s in enumerate(lines) if s.translate({ord(c): None for c in '\n\r'}).isspace()]
if len(index) > 0:
return index
else:
return ''
def check_assert_usage(lines):
# _assert checks for both ghoul_assert and static_assert, which are both reasonable
index = [i + 1 for i,s in enumerate(lines) if ('assert(' in s and not '_assert(' in s) and s.strip()[0:2] != '//']
if len(index) > 0:
return index
else:
        return ''
def check_line_length(lines):
# Disable this check in non-strict mode
if not is_strict_mode:
return ''
index = [i + 1 for i, s in enumerate(lines) if len(s) > (90 + 1)]
if len(index) > 0:
return index
else:
return ''
def check_empty_character_at_end(lines):
# Disable this check in non-strict mode
if not is_strict_mode:
return ''
index = [i + 1 for i, s in enumerate(lines) if len(s) > 1 and s[-2] == ' ' and not s.strip() == '']
if len(index) > 0:
return index
else:
return ''
def check_for_tab(lines):
index = [i + 1 for i, s in enumerate(lines) if '\t' in s]
if len(index) > 0:
return index
else:
return ''
previousSymbols = {}
def check_header_file(file, component):
with open(file, 'r+', encoding="utf8") as f:
lines = f.readlines()
correctness = check_correctness(lines)
if correctness:
print(file, '\t', 'Correctness check failed', '\t', correctness)
return
equality = check_equality(lines)
if equality:
print(file, '\t', 'Equality check failed', '\t', equality)
return
styling = check_styling(lines)
if styling:
print(file, '\t', 'Styling check failed', '\t', styling)
return
styling_filename = check_styling_filename(lines, file)
if styling_filename:
print(file, '\t', 'Filename styling check failed', '\t', styling_filename)
return
comment = check_comment(lines)
if comment:
print(file, '\t', 'Comment check failed', '\t', comment)
return
copyright = check_copyright(lines)
if copyright:
print(file, '\t', 'Copyright check failed', '\t', copyright)
return
naming_component = check_naming_convention_component(lines, component)
if naming_component:
print(file, '\t', 'Naming convention broken', '\t', naming_component)
return
naming_subcomponent = check_naming_convention_subcomponent(lines, component, file)
if naming_subcomponent:
print(file, '\t', 'Naming convention broken', '\t', naming_subcomponent)
return
end_of_line = check_end_of_line(lines)
if end_of_line:
print(file, '\t', 'Last line does not contain a newline character: ', end_of_line)
return
duplicates, symbol = check_duplicates(lines, previousSymbols)
if not duplicates:
print(file, '\t', 'Duplicate include guard', symbol, 'first in', previousSymbols[symbol])
return
else:
previousSymbols[symbol] = file
header = check_glm_header(lines, file)
if header:
print(file, '\t', 'Illegal glm header include', header)
return
core_dependency = check_core_dependency(lines, component)
if core_dependency:
print(file, '\t', 'Wrong dependency (core depends on module)', core_dependency)
if (not 'ghoul_gl.h' in file):
# ghoul_gl.h is allowed to use 'using namespace' to pull the gl namespace in
using_namespaces = check_using_namespace(lines)
if using_namespaces:
print(file, '\t', 'Using namespace found in header file')
bom = check_byte_order_mark_character(lines)
if bom:
print(file, '\t', 'Byte order mark failed:', bom)
empty_only_lines = check_empty_only_line(lines)
if empty_only_lines:
print(file, '\t', 'Empty only line: ', empty_only_lines)
line_length = check_line_length(lines)
if line_length:
print(file, '\t', 'Line length exceeded: ', line_length)
empty_character_at_end = check_empty_character_at_end(lines)
if empty_character_at_end:
print(file, '\t', 'Empty character at end: ', empty_character_at_end)
assert_usage = check_assert_usage(lines)
if assert_usage:
print(file, '\t', 'Wrong assert usage: ', assert_usage)
tabs = check_for_tab(lines)
if tabs:
print(file, '\t', 'TABs found: ', tabs)
def check_inline_file(file, component):
with open(file, 'r+', encoding="utf8") as f:
lines = f.readlines()
copyright = check_copyright(lines)
if copyright:
print(file, '\t', 'Copyright check failed', '\t', copyright)
header = check_glm_header(lines, file)
if header:
print(file, '\t', 'Illegal glm header include', header)
core_dependency = check_core_dependency(lines, component)
if core_dependency:
print(file, '\t', 'Wrong dependency (core depends on module)', core_dependency)
end_of_line = check_end_of_line(lines)
if end_of_line:
print(file, '\t', 'Last line does not contain a newline character: ', end_of_line)
return
bom = check_byte_order_mark_character(lines)
if bom:
print(file, '\t', 'Byte order mark failed:', bom)
empty_only_lines = check_empty_only_line(lines)
if empty_only_lines:
print(file, '\t', 'Empty only line: ', empty_only_lines)
line_length = check_line_length(lines)
if line_length:
print(file, '\t', 'Line length exceeded: ', line_length)
if (not '_doc.inl' in file):
        # The _doc.inl files are allowed to use using namespace as they are included
# from the cpp files and thus don't leak it
using_namespaces = check_using_namespace(lines)
if using_namespaces:
print(file, '\t', 'Using namespace found in inline file')
empty_character_at_end = check_empty_character_at_end(lines)
if empty_character_at_end:
print(file, '\t', 'Empty character at end: ', empty_character_at_end)
assert_usage = check_assert_usage(lines)
if assert_usage:
print(file, '\t', 'Wrong assert usage: ', assert_usage)
tabs = check_for_tab(lines)
if tabs:
print(file, '\t', 'TABs found: ', tabs)
def check_source_file(file, component):
with open(file, 'r+', encoding="utf8") as f:
lines = f.readlines()
header = check_glm_header(lines, file)
if header:
print(file, '\t', 'Illegal glm header include', header)
core_dependency = check_core_dependency(lines, component)
if core_dependency:
        print(file, '\t', 'Wrong core dependency', core_dependency)
end_of_line = check_end_of_line(lines)
if end_of_line:
print(file, '\t', 'Last line does not contain a newline character: ', end_of_line)
return
copyright = check_copyright(lines)
if copyright:
print(file, '\t', 'Copyright check failed', '\t', copyright)
bom = check_byte_order_mark_character(lines)
if bom:
print(file, '\t', 'Byte order mark failed:', bom)
empty_only_lines = check_empty_only_line(lines)
if empty_only_lines:
print(file, '\t', 'Empty only line: ', empty_only_lines)
line_length = check_line_length(lines)
if line_length:
print(file, '\t', 'Line length exceeded: ', line_length)
empty_character_at_end = check_empty_character_at_end(lines)
if empty_character_at_end:
print(file, '\t', 'Empty character at end: ', empty_character_at_end)
assert_usage = check_assert_usage(lines)
if assert_usage:
print(file, '\t', 'Wrong assert usage: ', assert_usage)
tabs = check_for_tab(lines)
if tabs:
print(file, '\t', 'TABs found: ', tabs)
def check_files(positiveList, negativeList, component, check_function):
files = []
for p in positiveList:
f = glob.glob(p, recursive=True)
f = [fi.replace('\\', '/') for fi in f]
files.extend(f)
negativeFiles = []
for n in negativeList:
f = glob.glob(n, recursive=True)
f = [fi.replace('\\', '/') for fi in f]
negativeFiles.extend(f)
filtered_files = [f for f in files if f not in negativeFiles]
for file in filtered_files:
check_function(file, component)
basePath = './'
# Default to non-strict, non-silent unless the flags are passed on the command line
is_strict_mode = False
is_silent_mode = False
if len(sys.argv) > 1:
if sys.argv[1] != "strict":
basePath = sys.argv[1] + '/'
for a in sys.argv:
if a == "strict":
is_strict_mode = True
if a == "silent":
is_silent_mode = True
# Check header files
if not is_silent_mode:
print("Checking header files")
print("=====================")
check_files(
[basePath + 'include/**/*.h'],
[],
'openspace_core',
check_header_file
)
check_files(
[basePath + 'apps/**/*.h'],
[basePath + 'apps/**/ext/**/*.h'],
'openspace_app',
check_header_file
)
check_files(
[basePath + 'modules/**/*.h'],
[
basePath + 'modules/**/ext/**/*.h',
basePath + 'modules/**/node_modules/**/*.h',
basePath + 'modules/webbrowser/resource.h'
],
'openspace_module',
check_header_file
)
check_files(
[basePath + 'ext/ghoul/include/**/*.h'],
[],
'ghoul',
check_header_file
)
if not is_silent_mode:
print("")
print("Checking inline files")
print("=====================")
check_files(
[basePath + 'include/**/*.inl'],
[],
'openspace_core',
check_inline_file
)
check_files(
[basePath + 'src/**/*.inl'],
[],
'openspace_core',
check_inline_file
)
check_files(
[basePath + 'apps/**/*.inl'],
[basePath + 'apps/**/ext/**/*.inl'],
'openspace_app',
check_inline_file
)
check_files(
[basePath + 'modules/**/*.inl'],
[basePath + 'modules/**/ext/**/*.h'],
'openspace_module',
check_inline_file
)
check_files(
[basePath + 'ext/ghoul/include/**/*.inl'],
[],
'ghoul',
check_inline_file
)
if not is_silent_mode:
print("")
print("Checking source files")
print("=====================")
check_files(
[basePath + 'src/**/*.cpp'],
[],
'openspace_core',
check_source_file
)
check_files(
[basePath + 'apps/**/*.cpp'],
[basePath + 'apps/**/ext/**/*.cpp'],
'openspace_app',
check_source_file
)
check_files(
[basePath + 'modules/**/*.cpp'],
[basePath + 'modules/**/ext/**/*.cpp', basePath + 'modules/**/node_modules/**/*.cpp'],
'openspace_module',
check_source_file
)
check_files(
[basePath + 'ext/ghoul/src/**/*.cpp'],
[],
'ghoul',
check_source_file
)
| support/coding/check_style_guide.py | 19,467 | OpenSpace
Copyright (c) 2014-2018
Permission is hereby granted, free of charge, to any person obtaining a copy of this
software and associated documentation files (the "Software"), to deal in the Software
without restriction, including without limitation the rights to use, copy, modify,
merge, publish, distribute, sublicense, and/or sell copies of the Software, and to
permit persons to whom the Software is furnished to do so, subject to the following
conditions:
The above copyright notice and this permission notice shall be included in all copies
or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED,
INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A
PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT
HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF
CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE
OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
This script traverses the file tree of OpenSpace and will check all files' include
guards for correctness. At the moment this includes:
* Correctness (file has a #ifndef. #define, and #endif lines)
* Equality (using the same name for the #ifdef and #define)
* Styling
* no empty line between #ifndef and #define lines
* Empty lines before and after #ifndef #define block
* Files end with an empty line
* Copyright header is correctly indented
* Include guard correctly uses the filename
* Include guard is all upper case
* Correct usage of the name in the final comment of the file
* Correct year of copyright notice
* Naming convention
* OpenSpace include guards start with OPENSPACE, Ghoul with GHOUL,
module includes have the module name in it
* The correct submodule is used
* Checking for duplicates between all files
* Checking that no file includes glm header directly
* Checking whether any files starts with the UTF-8 Byte-order mark
* Checking whether a file as empty-only lines
* Checking whether the default assert macros are used anywhere instead of the
ghoul_assert macro
* Checking whether there are TABs in the file
If this script is executed from the base directory of OpenSpace, no arguments need to
be passed, otherwise the first and only argument has to point to the base directory.
Thus, the default value of the first argument is '.'
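The Correctness and Equality checks described above reduce to a few lines; a minimal stand-alone sketch (`check_include_guard` and the sample header are illustrative, not the script's actual helpers) might look like:

```python
def check_include_guard(lines):
    """Return '' when the include guard is well formed, else an error message."""
    # Correctness: the file needs #ifndef, #define, and #endif lines
    ifndef = [s.split()[1] for s in lines if s.startswith('#ifndef')]
    define = [s.split()[1] for s in lines if s.startswith('#define')]
    endif = [s for s in lines if s.startswith('#endif')]
    if not (ifndef and define and endif):
        return 'Missing #ifndef, #define, or #endif line'
    # Equality: #ifndef and #define must name the same symbol
    if ifndef[0] != define[0]:
        return 'Guard mismatch: {} != {}'.format(ifndef[0], define[0])
    return ''

header = [
    '#ifndef __OPENSPACE_CORE___UTIL___H__',
    '#define __OPENSPACE_CORE___UTIL___H__',
    '#endif // __OPENSPACE_CORE___UTIL___H__',
]
print(check_include_guard(header) == '')  # True: guard is well formed
```

The real script layers the styling, naming-convention, and duplicate checks on top of this same line-scanning approach.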
* Copyright (c) 2014- Disable this check in non-strict mode _assert checks for both ghoul_assert and static_assert, which are both reasonable Disable this check in non-strict mode Disable this check in non-strict mode ghoul_gl.h is allowed to use 'using namespace' to pull the gl namespace in The _doc.inl files are allowed to use using namespace as they are inclued from the cpp files and thus don't leak it Check header files | 2,893 | en | 0.84651 |
"""
Training data and validation accuracy.
"""
# Author: Changyu Liu <Shiyipaisizuo@gmail.com>
# Last modified: 2018-07-06
# LICENSE: MIT
import os
import numpy as np
import tensorflow as tf
from PIL import Image
import train_test_split
import cnn
N_CLASSES = 2 # dogs and cats
IMG_W = 208 # resize the image, if the input image is too large, training will be very slow
IMG_H = 208
BATCH_SIZE = 16
CAPACITY = 2000
MAX_STEP = 15000
# with current parameters, it is suggested to use learning rate<0.0001
learning_rate = 0.0001
def run_training():
    # Set the directories.
train_dir = './data/train/'
logs_train_dir = './logs/train/'
train, train_label = train_test_split.get_files(train_dir)
train_batch, train_label_batch = train_test_split.get_batch(train,
train_label,
IMG_W,
IMG_H,
BATCH_SIZE,
CAPACITY)
train_logits = cnn.inference(train_batch, BATCH_SIZE, N_CLASSES)
train_loss = cnn.losses(train_logits, train_label_batch)
train_op = cnn.training(train_loss, learning_rate)
train__acc = cnn.evaluation(train_logits, train_label_batch)
summary_op = tf.summary.merge_all()
sess = tf.Session()
train_writer = tf.summary.FileWriter(logs_train_dir, sess.graph)
saver = tf.train.Saver()
sess.run(tf.global_variables_initializer())
coord = tf.train.Coordinator()
threads = tf.train.start_queue_runners(sess=sess, coord=coord)
try:
for step in np.arange(MAX_STEP):
if coord.should_stop():
break
_, tra_loss, tra_acc = sess.run([train_op, train_loss, train__acc])
if step % 50 == 0:
print(
"Step {}, ".format(step),
"train loss = {:.2f}, ".format(tra_loss),
"train accuracy = {:.2f}%".format(tra_acc * 100.0))
summary_str = sess.run(summary_op)
train_writer.add_summary(summary_str, step)
if step % 2000 == 0 or (step + 1) == MAX_STEP:
checkpoint_path = os.path.join(logs_train_dir, "model.ckpt")
saver.save(sess, checkpoint_path, global_step=step)
except tf.errors.OutOfRangeError:
print("Done training -- epoch limit reached")
finally:
coord.request_stop()
coord.join(threads)
sess.close()
def get_image(train):
"""
Randomly pick one image from training data
====================
Args:
train: train data
====================
Return:
image
"""
n = len(train)
ind = np.random.randint(0, n)
img_dir = train[ind]
image = Image.open(img_dir)
image = image.resize([208, 208])
image = np.array(image)
return image
def evaluate():
"""
Test one image against the saved models and parameters
"""
# you need to change the directories to yours.
train_dir = './data/train/'
train, train_label = train_test_split.get_files(train_dir)
image_array = get_image(train)
with tf.Graph().as_default():
batch_size = 1
n_classes = 2
image = tf.cast(image_array, tf.float32)
image = tf.image.per_image_standardization(image)
image = tf.reshape(image, [1, 208, 208, 3])
logits = cnn.inference(image, batch_size, n_classes)
logits = tf.nn.softmax(logits)
X = tf.placeholder(tf.float32, shape=[208, 208, 3])
# you need to change the directories to yours.
logs_train_dir = './logs/train/'
saver = tf.train.Saver()
with tf.Session() as sess:
print("Reading checkpoints...")
ckpt = tf.train.get_checkpoint_state(logs_train_dir)
if ckpt and ckpt.model_checkpoint_path:
global_step = ckpt.model_checkpoint_path.split(
'/')[-1].split('-')[-1]
saver.restore(sess, ckpt.model_checkpoint_path)
print("Loading success, global_step is %s".format(global_step))
else:
print("No checkpoint file found")
prediction = sess.run(logits, feed_dict={X: image_array})
max_index = np.argmax(prediction)
            if max_index == 0:
                print("This is a cat with possibility {:.6f}".format(
                    prediction[0, 0]))
            else:
                print("This is a dog with possibility {:.6f}".format(
                    prediction[0, 1]))
| cats_dogs/base.py | 4,753 | Test one image against the saved models and parameters
Randomly pick one image from training data
====================
Args:
train: train data
====================
Return:
image
Training data and validation accuracy.
Author: Changyu Liu <Shiyipaisizuo@gmail.com> Last modified: 2018-07-06 LICENSE: MIT dogs and cats resize the image, if the input image is too large, training will be very slow with current parameters, it is suggested to use learning rate<0.0001 Set there directories . you need to change the directories to yours. you need to change the directories to yours. | 590 | en | 0.820136 |
import json
#Try with python3
try:
    from urllib.request import urlopen, urlretrieve
#Else try python2
except ImportError:
    from urllib2 import urlopen
    from urllib import urlretrieve
from os import path
#User home folder
homeFolder = path.expanduser("~")
#Save pictures to a folder
pictureLocation = homeFolder + "/Downloads/"
def main():
########Defining variables#######
#URL in json format for latest wallpaper
url = "http://www.bing.com/HPImageArchive.aspx?format=js&idx=0&n=1&mkt=en-US"
getHighRes = 1 #Manually change the resolution in the url to 1920x1200. Change to 0 if url breaks.
#Get json response from bing.com
response = urlopen(url)
    #response.read() works on both Python 2 and 3
    output = response.read()
    #Decode the bytes returned on Python 3
    if isinstance(output, bytes):
        output = output.decode('utf-8')
#Get json output
data = json.loads(output)
#Form image url from json
output_url = "http://www.bing.com/" + data["images"][0]["url"]
#Form 1920x1200 image from above url
output_url_highres = output_url.replace("1080", "1200")
#If higher resolution is preferred(default)
if getHighRes == 1:
#Use try block to catch any failure in getting the high res image
try:
process_url(output_url_highres)
except:
process_url(output_url)
else:
process_url(output_url)
def process_url(image_url):
if not check_url(image_url) == 1:
#Get the filename of the new file from the url
filename = pictureLocation + image_url.split('/')[-1]
#Retrieve the image from the web and save it to desired location
req = urlretrieve(image_url, filename)
#Save the file path + filename to the output variable
bingImage = path.abspath(filename)
print(bingImage)
else:
raise Exception('bad url')
def check_url(image_url):
conn = urlopen(image_url)
if not conn.getcode() == 200:
return 1
main()
| Bing Wallpaper/GetWallpaper.py | 2,139 | Try with python3Else try python2User home folderSave pictures to a folderDefining variablesURL in json format for latest wallpaperManually change the resolution in the url to 1920x1200. Change to 0 if url breaks.Get json response from bing.comTrying python 3Else trying python2Get json outputForm image url from jsonForm 1920x1200 image from above urlIf higher resolution is preferred(default)Use try block to catch any failure in getting the high res imageGet the filename of the new file from the urlRetrieve the image from the web and save it to desired locationSave the file path + filename to the output variable | 617 | en | 0.768031 |
import asyncio
from collections import defaultdict
from dataclasses import dataclass
import json
import logging
import os
import time
from typing import Dict, Set
from ray._private.utils import import_attr
from ray.core.generated import runtime_env_agent_pb2
from ray.core.generated import runtime_env_agent_pb2_grpc
from ray.core.generated import agent_manager_pb2
import ray.dashboard.utils as dashboard_utils
import ray.dashboard.modules.runtime_env.runtime_env_consts \
as runtime_env_consts
from ray.experimental.internal_kv import _internal_kv_initialized, \
_initialize_internal_kv
from ray._private.ray_logging import setup_component_logger
from ray._private.runtime_env.conda import CondaManager
from ray._private.runtime_env.context import RuntimeEnvContext
from ray._private.runtime_env.py_modules import PyModulesManager
from ray._private.runtime_env.working_dir import WorkingDirManager
from ray._private.runtime_env.container import ContainerManager
from ray._private.runtime_env.plugin import decode_plugin_uri
from ray._private.runtime_env.utils import RuntimeEnv
logger = logging.getLogger(__name__)
# TODO(edoakes): this is used for unit tests. We should replace it with a
# better pluggability mechanism once available.
SLEEP_FOR_TESTING_S = os.environ.get("RAY_RUNTIME_ENV_SLEEP_FOR_TESTING_S")
@dataclass
class CreatedEnvResult:
# Whether or not the env was installed correctly.
success: bool
# If success is True, will be a serialized RuntimeEnvContext
# If success is False, will be an error message.
result: str
class RuntimeEnvAgent(dashboard_utils.DashboardAgentModule,
runtime_env_agent_pb2_grpc.RuntimeEnvServiceServicer):
"""An RPC server to create and delete runtime envs.
Attributes:
dashboard_agent: The DashboardAgent object contains global config.
"""
def __init__(self, dashboard_agent):
super().__init__(dashboard_agent)
self._runtime_env_dir = dashboard_agent.runtime_env_dir
self._logging_params = dashboard_agent.logging_params
self._per_job_logger_cache = dict()
# Cache the results of creating envs to avoid repeatedly calling into
# conda and other slow calls.
self._env_cache: Dict[str, CreatedEnvResult] = dict()
# Maps a serialized runtime env to a lock that is used
# to prevent multiple concurrent installs of the same env.
self._env_locks: Dict[str, asyncio.Lock] = dict()
# Keeps track of the URIs contained within each env so we can
# invalidate the env cache when a URI is deleted.
# This is a temporary mechanism until we have per-URI caching.
self._uris_to_envs: Dict[str, Set[str]] = defaultdict(set)
# Initialize internal KV to be used by the working_dir setup code.
_initialize_internal_kv(self._dashboard_agent.gcs_client)
assert _internal_kv_initialized()
self._conda_manager = CondaManager(self._runtime_env_dir)
self._py_modules_manager = PyModulesManager(self._runtime_env_dir)
self._working_dir_manager = WorkingDirManager(self._runtime_env_dir)
self._container_manager = ContainerManager(dashboard_agent.temp_dir)
def get_or_create_logger(self, job_id: bytes):
job_id = job_id.decode()
if job_id not in self._per_job_logger_cache:
params = self._logging_params.copy()
params["filename"] = f"runtime_env_setup-{job_id}.log"
params["logger_name"] = f"runtime_env_{job_id}"
per_job_logger = setup_component_logger(**params)
self._per_job_logger_cache[job_id] = per_job_logger
return self._per_job_logger_cache[job_id]
async def CreateRuntimeEnv(self, request, context):
async def _setup_runtime_env(serialized_runtime_env,
serialized_allocated_resource_instances):
# This function will be ran inside a thread
def run_setup_with_logger():
runtime_env = RuntimeEnv(
serialized_runtime_env=serialized_runtime_env)
allocated_resource: dict = json.loads(
serialized_allocated_resource_instances or "{}")
# Use a separate logger for each job.
per_job_logger = self.get_or_create_logger(request.job_id)
# TODO(chenk008): Add log about allocated_resource to
# avoid lint error. That will be moved to cgroup plugin.
per_job_logger.debug(f"Worker has resource :"
f"{allocated_resource}")
context = RuntimeEnvContext(env_vars=runtime_env.env_vars())
self._conda_manager.setup(
runtime_env, context, logger=per_job_logger)
self._py_modules_manager.setup(
runtime_env, context, logger=per_job_logger)
self._working_dir_manager.setup(
runtime_env, context, logger=per_job_logger)
self._container_manager.setup(
runtime_env, context, logger=per_job_logger)
# Add the mapping of URIs -> the serialized environment to be
# used for cache invalidation.
if runtime_env.working_dir_uri():
uri = runtime_env.working_dir_uri()
self._uris_to_envs[uri].add(serialized_runtime_env)
if runtime_env.py_modules_uris():
for uri in runtime_env.py_modules_uris():
self._uris_to_envs[uri].add(serialized_runtime_env)
if runtime_env.conda_uri():
uri = runtime_env.conda_uri()
self._uris_to_envs[uri].add(serialized_runtime_env)
if runtime_env.plugin_uris():
for uri in runtime_env.plugin_uris():
self._uris_to_envs[uri].add(serialized_runtime_env)
# Run setup function from all the plugins
for plugin_class_path, config in runtime_env.plugins():
logger.debug(
f"Setting up runtime env plugin {plugin_class_path}")
plugin_class = import_attr(plugin_class_path)
# TODO(simon): implement uri support
plugin_class.create("uri not implemented",
json.loads(config), context)
plugin_class.modify_context("uri not implemented",
json.loads(config), context)
return context
loop = asyncio.get_event_loop()
return await loop.run_in_executor(None, run_setup_with_logger)
serialized_env = request.serialized_runtime_env
if serialized_env not in self._env_locks:
# async lock to prevent the same env being concurrently installed
self._env_locks[serialized_env] = asyncio.Lock()
async with self._env_locks[serialized_env]:
if serialized_env in self._env_cache:
result = self._env_cache[serialized_env]
if result.success:
context = result.result
logger.info("Runtime env already created successfully. "
f"Env: {serialized_env}, context: {context}")
return runtime_env_agent_pb2.CreateRuntimeEnvReply(
status=agent_manager_pb2.AGENT_RPC_STATUS_OK,
serialized_runtime_env_context=context)
else:
error_message = result.result
logger.info("Runtime env already failed. "
f"Env: {serialized_env}, err: {error_message}")
return runtime_env_agent_pb2.CreateRuntimeEnvReply(
status=agent_manager_pb2.AGENT_RPC_STATUS_FAILED,
error_message=error_message)
if SLEEP_FOR_TESTING_S:
logger.info(f"Sleeping for {SLEEP_FOR_TESTING_S}s.")
time.sleep(int(SLEEP_FOR_TESTING_S))
logger.info(f"Creating runtime env: {serialized_env}")
runtime_env_context: RuntimeEnvContext = None
error_message = None
for _ in range(runtime_env_consts.RUNTIME_ENV_RETRY_TIMES):
try:
runtime_env_context = await _setup_runtime_env(
serialized_env,
request.serialized_allocated_resource_instances)
break
except Exception as ex:
logger.exception("Runtime env creation failed.")
error_message = str(ex)
await asyncio.sleep(
runtime_env_consts.RUNTIME_ENV_RETRY_INTERVAL_MS / 1000
)
if error_message:
logger.error(
"Runtime env creation failed for %d times, "
"don't retry any more.",
runtime_env_consts.RUNTIME_ENV_RETRY_TIMES)
self._env_cache[serialized_env] = CreatedEnvResult(
False, error_message)
return runtime_env_agent_pb2.CreateRuntimeEnvReply(
status=agent_manager_pb2.AGENT_RPC_STATUS_FAILED,
error_message=error_message)
serialized_context = runtime_env_context.serialize()
self._env_cache[serialized_env] = CreatedEnvResult(
True, serialized_context)
logger.info(
"Successfully created runtime env: %s, the context: %s",
serialized_env, serialized_context)
return runtime_env_agent_pb2.CreateRuntimeEnvReply(
status=agent_manager_pb2.AGENT_RPC_STATUS_OK,
serialized_runtime_env_context=serialized_context)
async def DeleteURIs(self, request, context):
logger.info(f"Got request to delete URIs: {request.uris}.")
failed_uris = [] # URIs that we failed to delete.
for plugin_uri in request.uris:
plugin, uri = decode_plugin_uri(plugin_uri)
# Invalidate the env cache for any envs that contain this URI.
for env in self._uris_to_envs.get(uri, []):
if env in self._env_cache:
del self._env_cache[env]
if plugin == "working_dir":
if not self._working_dir_manager.delete_uri(uri):
failed_uris.append(uri)
elif plugin == "py_modules":
if not self._py_modules_manager.delete_uri(uri):
failed_uris.append(uri)
elif plugin == "conda":
if not self._conda_manager.delete_uri(uri):
failed_uris.append(uri)
else:
raise ValueError(
"RuntimeEnvAgent received DeleteURI request "
f"for unsupported plugin {plugin}. URI: {uri}")
if failed_uris:
return runtime_env_agent_pb2.DeleteURIsReply(
status=agent_manager_pb2.AGENT_RPC_STATUS_FAILED,
error_message="Local files for URI(s) "
f"{failed_uris} not found.")
else:
return runtime_env_agent_pb2.DeleteURIsReply(
status=agent_manager_pb2.AGENT_RPC_STATUS_OK)
async def run(self, server):
runtime_env_agent_pb2_grpc.add_RuntimeEnvServiceServicer_to_server(
self, server)
| dashboard/modules/runtime_env/runtime_env_agent.py | 11,735 | An RPC server to create and delete runtime envs.
Attributes:
dashboard_agent: The DashboardAgent object contains global config.
TODO(edoakes): this is used for unit tests. We should replace it with a better pluggability mechanism once available. Whether or not the env was installed correctly. If success is True, will be a serialized RuntimeEnvContext If success is False, will be an error message. Cache the results of creating envs to avoid repeatedly calling into conda and other slow calls. Maps a serialized runtime env to a lock that is used to prevent multiple concurrent installs of the same env. Keeps track of the URIs contained within each env so we can invalidate the env cache when a URI is deleted. This is a temporary mechanism until we have per-URI caching. Initialize internal KV to be used by the working_dir setup code. This function will be ran inside a thread Use a separate logger for each job. TODO(chenk008): Add log about allocated_resource to avoid lint error. That will be moved to cgroup plugin. Add the mapping of URIs -> the serialized environment to be used for cache invalidation. Run setup function from all the plugins TODO(simon): implement uri support async lock to prevent the same env being concurrently installed URIs that we failed to delete. Invalidate the env cache for any envs that contain this URI. | 1,351 | en | 0.838116 |
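`CreateRuntimeEnv` guards each serialized env with its own `asyncio.Lock` so the same environment is never installed twice concurrently, and caches the result so repeat requests return immediately. A minimal, self-contained sketch of that pattern (the `EnvInstaller` class and its names are illustrative, not part of Ray):

```python
import asyncio

class EnvInstaller:
    """Per-key asyncio.Lock plus a result cache: many concurrent
    requests for the same env trigger exactly one install."""

    def __init__(self):
        self._cache = {}
        self._locks = {}
        self.install_count = 0

    async def create(self, serialized_env):
        # Safe without extra synchronization: no await between check and set
        if serialized_env not in self._locks:
            self._locks[serialized_env] = asyncio.Lock()
        async with self._locks[serialized_env]:
            if serialized_env in self._cache:
                return self._cache[serialized_env]
            # Simulate a slow install (conda setup, downloads, ...)
            await asyncio.sleep(0)
            self.install_count += 1
            self._cache[serialized_env] = 'context-for-' + serialized_env
            return self._cache[serialized_env]

async def main():
    installer = EnvInstaller()
    # Ten concurrent requests for the same env -> a single install.
    results = await asyncio.gather(
        *[installer.create('{"pip": ["requests"]}') for _ in range(10)])
    print(installer.install_count, len(set(results)))  # 1 1

asyncio.run(main())
```

The agent additionally retries failed installs and caches failures, so a known-bad env is rejected without re-running setup.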
# Copyright 2018-2022 Streamlit Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import streamlit as st
st.text("This text is awesome!")
| e2e/scripts/st_text.py | 640 | Copyright 2018-2022 Streamlit Inc. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. | 555 | en | 0.853357 |
import os
import logging
import argparse
from collections import Counter
import pandas as pd
import inflect
import sys
# reload/setdefaultencoding only exist on Python 2
if sys.version_info[0] < 3:
    reload(sys)
    sys.setdefaultencoding('utf-8')
_CATEGRORIES = [
'Mini Briefs',
'Advances & Business',
'Concerns & Hype',
'Analysis & Policy',
'Expert Opinions & Discussion within the field',
'Explainers'
]
if __name__ == "__main__":
logging.getLogger().setLevel(logging.INFO)
parser = argparse.ArgumentParser()
parser.add_argument('--template_file', '-tf', type=str, default='digest_template.md')
parser.add_argument('--digest_number', '-n', type=int, required=True)
parser.add_argument('--input_csv', '-i', type=str, required=True)
parser.add_argument('--output_md', '-o', type=str, required=True)
parser.add_argument('--force_overwrite', '-f', action='store_true')
args = parser.parse_args()
n = args.digest_number
p = inflect.engine()
n_english = p.number_to_words(p.ordinal(n))
logging.info('Parsing for the {} digest'.format(n_english))
logging.info('Will save result to {}'.format(args.output_md))
if os.path.isfile(args.output_md):
if not args.force_overwrite:
raise ValueError('Cannot overwrite existing output file!')
logging.info('Loading template from {}'.format(args.template_file))
with open(args.template_file, 'r') as f:
md_template = f.read()
logging.info('Reading {}'.format(args.input_csv))
articles_map = {c : [] for c in _CATEGRORIES}
csv = pd.read_csv(args.input_csv)
for row_num, row in csv.iterrows():
        if pd.isna(row['Type']) or not row['Type']:
print()
print('To which category does this article belong?')
print()
print(row['Name'])
print()
for i, c in enumerate(_CATEGRORIES):
print('{}) {}'.format(i, c))
while True:
try:
print()
c_idx = int(input('Category Number: '))
c = _CATEGRORIES[c_idx]
break
except:
print('Please enter a valid category!')
print()
else:
c = row['Type']
articles_map[c].append(row)
logging.info('Populating content...')
content = ''
for c in _CATEGRORIES:
items = articles_map[c]
if len(items) > 0:
content += '### {}\n'.format(c)
content += '\n'
for item in items:
if c == 'Mini Briefs':
content += '#### [{}]({})\n'.format(item['Name'], item['URL'])
content += '\n'
content += '<one-two paragraph brief>\n'
else:
content += '* [{}]({}) - {}\n'.format(item['Name'], item['URL'], item['Excerpt'])
content += '\n'
# remove the last two empty lines
content = content[:-2]
md = md_template.replace('$digest_number$', str(n)) \
.replace('$digest_number_english$', n_english) \
.replace('$content$', content)
logging.info('Saving digest markdown...')
with open(args.output_md, 'w') as f:
f.write(md)
logging.info('Done!')
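The script above fills its Markdown template with chained `str.replace` calls on `$...$` placeholders. That substitution step in isolation (the template text below is illustrative, not the real digest template):

```python
def fill_template(template, digest_number, digest_number_english, content):
    # Mirror the chained-replace approach used by the script.
    return (template
            .replace('$digest_number$', str(digest_number))
            .replace('$digest_number_english$', digest_number_english)
            .replace('$content$', content))

template = "# Digest $digest_number$ (the $digest_number_english$)\n\n$content$\n"
md = fill_template(template, 42, "forty-second", "### Mini Briefs\n* item")
```

`string.Template` with `$`-substitution would be the stdlib alternative, but plain `replace` keeps literal `$` elsewhere in the template untouched.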
| scripts/csv2md.py | 3,270 | remove the last two empty lines | 31 | en | 0.559367 |
s = "Hey there! what should this string be?"
# Length should be 20
print("Length of s = %d" % len(s[0:20]))
# Index
print("The first occurrence of '!' = %d" % s.index("!"))
# Count
print("t occurs %d times" % s.count("t"))
# Slicing the string into bits
s1 = "hello world"
print(s1[:1]) # slicing is exclusive
print("|",s1[:s1.index(" ")],"|", sep="") # slicing is exclusive
print("|",s1[s1.index(" "):s1.index(" ")],"|", sep="") # slicing is exclusive
print("|",s1[s1.index(" ") + 1:],"|", sep="") # slicing is exclusive
print("The first five characters are '%s'" % s[:5]) # Start to 5
print("The next five characters are '%s'" % s[5:10]) # 5 to 10
print("The thirteenth character is '%s'" % s[12]) # Just number 12
print("The characters with odd index are '%s'" %s[1::2]) #(0-based indexing)
print("The last five characters are '%s'" % s[-5:]) # 5th-from-last to end
print("Reverse the characteres are '%s'" % s[::-1]) # string reversed
print("Reverse the characteres are '%s'" % s[::-2]) # reversed with odd index
# uppercase
print("String in uppercase: %s" % s.upper())
# Convert everything to lowercase
print("String in lowercase: %s" % s.lower())
# Check how a string starts
print("String starts with 'Str':", s.startswith("Str"))
# Check how a string ends
print("String ends with 'ome!':", s.endswith("ome!"))
# Split
print("Split the words of the string: %s" % s.split(" "))
# Check ranges
x = 'b'
print('a' <= x <= 'z')
word_squares = ["ball", "area", "able", "lead", "lady"]
step = 1
prefix = ''.join([word[step] for word in word_squares])
print("prefix ", prefix)
| python/python/basics/strings.py | 1,601 | Length should be 20 Index Count Slicing the string into bits splicing is exclusive splicing is exclusive splicing is exclusive splicing is exclusive Start to 5 5 to 10 Just number 12(0-based indexing) 5th-from-last to end string reversed reversed with odd index uppercase Convert everything to lowercase Check how a string starts Check how a string ends Split Check ranges | 372 | en | 0.773022 |
import os
import pkgutil
from pathlib import Path
import pytest
from click.testing import CliRunner
from slotscheck.cli import root as cli
from .conftest import EXAMPLES_DIR
@pytest.fixture()
def runner():
return CliRunner()
@pytest.fixture(autouse=True)
def set_cwd(request):
os.chdir(EXAMPLES_DIR)
yield
os.chdir(request.config.invocation_dir)
def test_no_inputs(runner: CliRunner):
result = runner.invoke(cli, [])
assert result.exit_code == 0
assert result.output == "No files or modules given. Nothing to do!\n"
def test_module_doesnt_exist(runner: CliRunner):
result = runner.invoke(cli, ["-m", "foo"])
assert result.exit_code == 1
assert result.output == (
"ERROR: Module 'foo' not found.\n\n"
"See slotscheck.rtfd.io/en/latest/discovery.html\n"
"for help resolving common import problems.\n"
)
def test_path_doesnt_exist(runner: CliRunner):
result = runner.invoke(cli, ["doesnt_exist"])
assert result.exit_code == 2
assert (
result.output
== """\
Usage: slotscheck [OPTIONS] [FILES]...
Try 'slotscheck --help' for help.
Error: Invalid value for '[FILES]...': Path 'doesnt_exist' does not exist.
"""
)
def test_everything_ok(runner: CliRunner):
result = runner.invoke(cli, ["-m", "module_ok"])
assert result.exit_code == 0
assert result.output == "All OK!\nScanned 6 module(s), 64 class(es).\n"
def test_single_file_module(runner: CliRunner):
result = runner.invoke(
cli, ["-m", "module_singular"], catch_exceptions=False
)
assert result.exit_code == 0
assert result.output == "All OK!\nScanned 1 module(s), 5 class(es).\n"
def test_builtins(runner: CliRunner):
result = runner.invoke(cli, ["-m", "builtins"])
assert result.exit_code == 0
def test_extension(runner: CliRunner):
result = runner.invoke(cli, ["-m", "_pickle"])
assert result.exit_code == 0
assert result.output == ("All OK!\nScanned 1 module(s), 5 class(es).\n")
def test_success_verbose(runner: CliRunner):
result = runner.invoke(
cli, ["-m", "module_ok", "-v"], catch_exceptions=False
)
assert result.exit_code == 0
assert (
result.output
== """\
All OK!
stats:
modules: 7
checked: 6
excluded: 1
skipped: 0
classes: 64
has slots: 44
no slots: 20
n/a: 0
"""
)
def test_submodule(runner: CliRunner):
result = runner.invoke(
cli, ["-m", "module_ok.a.b"], catch_exceptions=False
)
assert result.exit_code == 0
assert result.output == "All OK!\nScanned 4 module(s), 32 class(es).\n"
def test_namespaced(runner: CliRunner):
result = runner.invoke(
cli, ["-m", "namespaced.module"], catch_exceptions=False
)
assert result.exit_code == 0
assert result.output == "All OK!\nScanned 4 module(s), 1 class(es).\n"
def test_multiple_modules(runner: CliRunner):
result = runner.invoke(
cli,
["-m", "module_singular", "-m", "module_ok", "-m", "namespaced"],
catch_exceptions=False,
)
assert result.exit_code == 0
assert result.output == "All OK!\nScanned 11 module(s), 70 class(es).\n"
def test_multiple_paths(runner: CliRunner):
result = runner.invoke(
cli,
[
str(EXAMPLES_DIR / "module_singular.py"),
str(EXAMPLES_DIR / "module_ok/a/b/../b"),
str(EXAMPLES_DIR / "namespaced/module/foo.py"),
],
catch_exceptions=False,
)
assert result.exit_code == 0
assert result.output == "All OK!\nScanned 8 module(s), 38 class(es).\n"
def test_path_is_module_directory(runner: CliRunner):
# let's define the path indirectly to ensure it works
path = str(EXAMPLES_DIR / "module_ok/a/../")
result = runner.invoke(cli, [path], catch_exceptions=False)
assert result.exit_code == 0
assert result.output == "All OK!\nScanned 6 module(s), 64 class(es).\n"
def test_cannot_pass_both_path_and_module(runner: CliRunner):
result = runner.invoke(cli, ["module_ok", "-m", "click"])
assert result.exit_code == 2
assert (
result.output
== "ERROR: Specify either FILES argument or `-m/--module` "
"option, not both.\n"
)
def test_errors_with_default_settings(runner: CliRunner):
result = runner.invoke(cli, ["-m", "module_not_ok"])
assert result.exit_code == 1
assert (
result.output
== """\
ERROR: 'module_not_ok.a.b:U' has slots but superclass does not.
ERROR: 'module_not_ok.foo:S' has slots but superclass does not.
ERROR: 'module_not_ok.foo:T' has slots but superclass does not.
ERROR: 'module_not_ok.foo:U' has slots but superclass does not.
ERROR: 'module_not_ok.foo:U.Ua' defines overlapping slots.
ERROR: 'module_not_ok.foo:U.Ub' defines overlapping slots.
ERROR: 'module_not_ok.foo:W' defines overlapping slots.
ERROR: 'module_not_ok.foo:Z' has duplicate slots.
ERROR: 'module_not_ok.foo:Za' defines overlapping slots.
Oh no, found some problems!
Scanned 4 module(s), 28 class(es).
"""
)
def test_errors_require_slots_subclass(runner: CliRunner):
result = runner.invoke(cli, ["-m", "module_not_ok", "--require-subclass"])
assert result.exit_code == 1
assert (
result.output
== """\
ERROR: 'module_not_ok.a.b:A' has no slots, but it could have.
ERROR: 'module_not_ok.a.b:U' has slots but superclass does not.
ERROR: 'module_not_ok.foo:A' has no slots, but it could have.
ERROR: 'module_not_ok.foo:C' has no slots, but it could have.
ERROR: 'module_not_ok.foo:R' has no slots, but it could have.
ERROR: 'module_not_ok.foo:S' has slots but superclass does not.
ERROR: 'module_not_ok.foo:T' has slots but superclass does not.
ERROR: 'module_not_ok.foo:U' has slots but superclass does not.
ERROR: 'module_not_ok.foo:U.Ua' defines overlapping slots.
ERROR: 'module_not_ok.foo:U.Ub' defines overlapping slots.
ERROR: 'module_not_ok.foo:W' defines overlapping slots.
ERROR: 'module_not_ok.foo:Z' has duplicate slots.
ERROR: 'module_not_ok.foo:Za' defines overlapping slots.
Oh no, found some problems!
Scanned 4 module(s), 28 class(es).
"""
)
def test_errors_disallow_nonslot_inherit(runner: CliRunner):
result = runner.invoke(
cli, ["-m", "module_not_ok", "--require-superclass"]
)
assert result.exit_code == 1
assert (
result.output
== """\
ERROR: 'module_not_ok.a.b:U' has slots but superclass does not.
ERROR: 'module_not_ok.foo:S' has slots but superclass does not.
ERROR: 'module_not_ok.foo:T' has slots but superclass does not.
ERROR: 'module_not_ok.foo:U' has slots but superclass does not.
ERROR: 'module_not_ok.foo:U.Ua' defines overlapping slots.
ERROR: 'module_not_ok.foo:U.Ub' defines overlapping slots.
ERROR: 'module_not_ok.foo:W' defines overlapping slots.
ERROR: 'module_not_ok.foo:Z' has duplicate slots.
ERROR: 'module_not_ok.foo:Za' defines overlapping slots.
Oh no, found some problems!
Scanned 4 module(s), 28 class(es).
"""
)
def test_errors_no_require_superclass(runner: CliRunner):
result = runner.invoke(
cli, ["-m", "module_not_ok", "--no-require-superclass"]
)
assert result.exit_code == 1
assert (
result.output
== """\
ERROR: 'module_not_ok.foo:U.Ua' defines overlapping slots.
ERROR: 'module_not_ok.foo:U.Ub' defines overlapping slots.
ERROR: 'module_not_ok.foo:W' defines overlapping slots.
ERROR: 'module_not_ok.foo:Z' has duplicate slots.
ERROR: 'module_not_ok.foo:Za' defines overlapping slots.
Oh no, found some problems!
Scanned 4 module(s), 28 class(es).
"""
)
def test_errors_with_exclude_classes(runner: CliRunner):
result = runner.invoke(
cli,
["-m", "module_not_ok", "--exclude-classes", "(foo:U$|:(W|S))"],
)
assert result.exit_code == 1
assert (
result.output
== """\
ERROR: 'module_not_ok.a.b:U' has slots but superclass does not.
ERROR: 'module_not_ok.foo:T' has slots but superclass does not.
ERROR: 'module_not_ok.foo:U.Ua' defines overlapping slots.
ERROR: 'module_not_ok.foo:U.Ub' defines overlapping slots.
ERROR: 'module_not_ok.foo:Z' has duplicate slots.
ERROR: 'module_not_ok.foo:Za' defines overlapping slots.
Oh no, found some problems!
Scanned 4 module(s), 28 class(es).
"""
)
def test_errors_with_include_classes(runner: CliRunner):
result = runner.invoke(
cli,
["-m", "module_not_ok", "--include-classes", "(foo:.*a|:(W|S))"],
)
assert result.exit_code == 1
assert (
result.output
== """\
ERROR: 'module_not_ok.foo:S' has slots but superclass does not.
ERROR: 'module_not_ok.foo:U.Ua' defines overlapping slots.
ERROR: 'module_not_ok.foo:W' defines overlapping slots.
ERROR: 'module_not_ok.foo:Za' defines overlapping slots.
Oh no, found some problems!
Scanned 4 module(s), 28 class(es).
"""
)
def test_errors_with_include_modules(runner: CliRunner):
result = runner.invoke(
cli,
[
"-m",
"module_not_ok",
"--include-modules",
"(module_not_ok$ | a)",
],
)
assert result.exit_code == 1
assert (
result.output
== """\
ERROR: 'module_not_ok.a.b:U' has slots but superclass does not.
Oh no, found some problems!
Scanned 3 module(s), 2 class(es).
"""
)
def test_ignores_given_module_completely(runner: CliRunner):
result = runner.invoke(
cli,
[
"-m",
"module_not_ok",
"--include-modules",
"nomatch",
],
)
assert result.exit_code == 0
assert (
result.output
== "Files or modules given, but filtered out by exclude/include. "
"Nothing to do!\n"
)
def test_module_not_ok_verbose(runner: CliRunner):
result = runner.invoke(cli, ["-m", "module_not_ok", "-v"])
assert result.exit_code == 1
assert (
result.output
== """\
ERROR: 'module_not_ok.a.b:U' has slots but superclass does not.
Superclasses without slots:
- 'module_not_ok.a.b:A'
ERROR: 'module_not_ok.foo:S' has slots but superclass does not.
Superclasses without slots:
- 'module_not_ok.foo:R'
ERROR: 'module_not_ok.foo:T' has slots but superclass does not.
Superclasses without slots:
- 'module_not_ok.foo:A'
ERROR: 'module_not_ok.foo:U' has slots but superclass does not.
Superclasses without slots:
- 'module_not_ok.foo:L'
- 'module_not_ok.foo:D'
- 'module_not_ok.foo:C'
ERROR: 'module_not_ok.foo:U.Ua' defines overlapping slots.
Slots already defined in superclass:
- 'w' (module_not_ok.foo:Q)
ERROR: 'module_not_ok.foo:U.Ub' defines overlapping slots.
Slots already defined in superclass:
- 'w' (module_not_ok.foo:U.Ua)
- 'w' (module_not_ok.foo:Q)
ERROR: 'module_not_ok.foo:W' defines overlapping slots.
Slots already defined in superclass:
- 'p' (module_not_ok.foo:U)
- 'v' (module_not_ok.foo:V)
ERROR: 'module_not_ok.foo:Z' has duplicate slots.
Duplicate slot names:
- 'b'
- 'c'
ERROR: 'module_not_ok.foo:Za' defines overlapping slots.
Slots already defined in superclass:
- 'b' (module_not_ok.foo:Z)
- 'c' (module_not_ok.foo:Z)
Oh no, found some problems!
stats:
modules: 4
checked: 4
excluded: 0
skipped: 0
classes: 28
has slots: 21
no slots: 7
n/a: 0
"""
)
def test_module_misc(runner: CliRunner):
result = runner.invoke(
cli,
["-m", "module_misc", "--no-strict-imports"],
catch_exceptions=False,
)
assert result.exit_code == 0
assert (
result.output
== """\
NOTE: Failed to import 'module_misc.a.evil'.
All OK!
Scanned 18 module(s), 8 class(es).
"""
)
def test_module_exclude(runner: CliRunner):
result = runner.invoke(
cli,
[
"-m",
"module_misc",
"--exclude-modules",
"evil",
"--no-strict-imports",
],
catch_exceptions=False,
)
assert result.exit_code == 0
assert (
result.output
== """\
NOTE: Failed to import 'module_misc.a.b.__main__'.
All OK!
Scanned 16 module(s), 9 class(es).
"""
)
from module_misc import a # type: ignore
assert not a.evil_was_imported
def test_module_disallow_import_failures(runner: CliRunner):
result = runner.invoke(cli, ["-m", "module_misc", "--strict-imports"])
assert result.exit_code == 1
assert (
result.output
== """\
ERROR: Failed to import 'module_misc.a.evil'.
Oh no, found some problems!
Scanned 18 module(s), 8 class(es).
"""
)
def test_module_allow_import_failures(runner: CliRunner):
result = runner.invoke(cli, ["-m", "module_misc", "--no-strict-imports"])
assert result.exit_code == 0
assert (
result.output
== """\
NOTE: Failed to import 'module_misc.a.evil'.
All OK!
Scanned 18 module(s), 8 class(es).
"""
)
def test_finds_config(runner: CliRunner, mocker, tmpdir):
(tmpdir / "myconf.toml").write_binary(
b"""
[tool.slotscheck]
require-superclass = false
"""
)
mocker.patch(
"slotscheck.config.find_config_file",
return_value=Path(tmpdir / "myconf.toml"),
)
result = runner.invoke(cli, ["-m", "module_not_ok"])
assert result.exit_code == 1
assert (
result.output
== """\
ERROR: 'module_not_ok.foo:U.Ua' defines overlapping slots.
ERROR: 'module_not_ok.foo:U.Ub' defines overlapping slots.
ERROR: 'module_not_ok.foo:W' defines overlapping slots.
ERROR: 'module_not_ok.foo:Z' has duplicate slots.
ERROR: 'module_not_ok.foo:Za' defines overlapping slots.
Oh no, found some problems!
Scanned 4 module(s), 28 class(es).
"""
)
def test_given_config(runner: CliRunner, tmpdir):
my_config = tmpdir / "myconf.toml"
my_config.write_binary(
b"""
[tool.slotscheck]
require-superclass = false
"""
)
result = runner.invoke(
cli,
["-m", "module_not_ok", "--settings", str(my_config)],
catch_exceptions=False,
)
assert result.exit_code == 1
assert (
result.output
== """\
ERROR: 'module_not_ok.foo:U.Ua' defines overlapping slots.
ERROR: 'module_not_ok.foo:U.Ub' defines overlapping slots.
ERROR: 'module_not_ok.foo:W' defines overlapping slots.
ERROR: 'module_not_ok.foo:Z' has duplicate slots.
ERROR: 'module_not_ok.foo:Za' defines overlapping slots.
Oh no, found some problems!
Scanned 4 module(s), 28 class(es).
"""
)
def test_ambiguous_import(runner: CliRunner):
result = runner.invoke(
cli,
[str(EXAMPLES_DIR / "other/module_misc/a/b/c.py")],
catch_exceptions=False,
)
assert result.exit_code == 1
assert (
result.output
== """\
Cannot check due to import ambiguity.
The given files do not correspond with what would be imported:
'import module_misc.a.b.c' would load from:
{}
instead of:
{}
You may need to define $PYTHONPATH or run as 'python -m slotscheck'
to ensure the correct files can be imported.
See slotscheck.rtfd.io/en/latest/discovery.html
for more information on why this happens and how to resolve it.
""".format(
pkgutil.get_loader(
"module_misc.a.b.c"
).path, # type: ignore[union-attr]
EXAMPLES_DIR / "other/module_misc/a/b/c.py",
)
)
def test_ambiguous_import_excluded(runner: CliRunner):
result = runner.invoke(
cli,
["other/module_misc/a/b/c.py", "--exclude-modules", "module_misc"],
catch_exceptions=False,
)
assert result.exit_code == 0
assert (
result.output
== """\
Files or modules given, but filtered out by exclude/include. Nothing to do!
"""
)
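The expected errors in these tests revolve around `__slots__` pitfalls: a class re-declaring a slot its superclass already defines, duplicate slot names, or slotted classes under slot-less bases. A minimal, illustrative version of the overlap check — not slotscheck's actual implementation, and assuming `__slots__` is a tuple or list:

```python
def overlapping_slots(cls):
    """Return slot names that `cls` re-declares although a base already has them."""
    own = set(cls.__dict__.get("__slots__", ()))
    inherited = set()
    for base in cls.__mro__[1:]:
        inherited.update(base.__dict__.get("__slots__", ()))
    return sorted(own & inherited)

class Q:
    __slots__ = ("w",)

class Ua(Q):
    __slots__ = ("w", "x")  # 'w' overlaps Q's slot; Python allows this silently
```

Python does not reject the overlap at class-creation time — the redundant descriptor just wastes memory, which is why a linter has to flag it.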
| tests/src/test_cli.py | 15,847 | let's define the path indirectly to ensure it works type: ignore type: ignore[union-attr] | 89 | en | 0.500687 |
import sys
import time

import numpy as np
import pygame
import tensorflow as tf
from pygame.locals import QUIT

# Network, AgentA and AgentB (the ball-trajectory generator and the two DQN
# agents) are assumed to be defined/imported elsewhere in the project.
class pytennis:
def __init__(self, fps = 50):
self.updateRewardA = 0
self.updateRewardB = 0
self.updateIter = 0
self.lossA = 0
self.lossB = 0
# Testing
self.net = Network(150, 450, 100, 600)
self.NetworkA = self.net.network(300, ysource=100, Ynew=600) # Network A
self.NetworkB = self.net.network(200, ysource=600, Ynew=100) # Network B
# NetworkA
# display test plot of network A
#sns.jointplot(NetworkA[0], NetworkA[1])
# display test plot of network B
#sns.jointplot(NetworkB[0], NetworkB[1])
self.out = self.net.DefaultToPosition(250)
self.lastxcoordinate = 350
pygame.init()
self.BLACK = ( 0,0,0)
self.myFontA = pygame.font.SysFont("Times New Roman", 25)
self.myFontB = pygame.font.SysFont("Times New Roman", 25)
self.myFontIter = pygame.font.SysFont('Times New Roman', 25)
self.FPS = fps
self.fpsClock = pygame.time.Clock()
def setWindow(self):
# set up the window
self.DISPLAYSURF = pygame.display.set_mode((600, 700), 0, 32)
pygame.display.set_caption('REINFORCEMENT LEARNING (Discrete Mathematics) - TABLE TENNIS')
# set up the colors
self.BLACK = ( 0,0,0)
self.WHITE = (255, 255, 255)
self.RED= (255,0,0)
self.GREEN = ( 0, 255,0)
self.BLUE = ( 0,0, 255)
return
def display(self):
self.setWindow()
self.DISPLAYSURF.fill(self.WHITE)
pygame.draw.rect(self.DISPLAYSURF, self.GREEN, (150, 100, 300, 500))
pygame.draw.rect(self.DISPLAYSURF, self.RED, (150, 340, 300, 20))
pygame.draw.rect(self.DISPLAYSURF, self.BLACK, (0, 20, 600, 20))
pygame.draw.rect(self.DISPLAYSURF, self.BLACK, (0, 660, 600, 20))
return
def reset(self):
return
def evaluate_state_from_last_coordinate(self, c):
"""
cmax: 450
cmin: 150
        c will definitely be between 150 and 450.
state0 - (150 - 179)
state1 - (180 - 209)
state2 - (210 - 239)
state3 - (240 - 269)
state4 - (270 - 299)
state5 - (300 - 329)
state6 - (330 - 359)
state7 - (360 - 389)
state8 - (390 - 419)
state9 - (420 - 450)
"""
if c >= 150 and c <=179:
return 0
elif c >= 180 and c <= 209:
return 1
elif c >=210 and c <= 239:
return 2
elif c >=240 and c <= 269:
return 3
elif c>= 270 and c<=299:
return 4
elif c >= 300 and c <= 329:
return 5
elif c >= 330 and c <= 359:
return 6
elif c >= 360 and c <= 389:
return 7
elif c >= 390 and c <= 419:
return 8
elif c >= 420 and c <= 450:
return 9
def evaluate_action(self, action, expectedState):
if action == expectedState:
return True
else:
return False
def randomVal(self, action):
"""
cmax: 450
cmin: 150
        c will definitely be between 150 and 450.
state0 - (150 - 179)
state1 - (180 - 209)
state2 - (210 - 239)
state3 - (240 - 269)
state4 - (270 - 299)
state5 - (300 - 329)
state6 - (330 - 359)
state7 - (360 - 389)
state8 - (390 - 419)
state9 - (420 - 450)
"""
if action == 0:
val = np.random.choice([i for i in range(150, 180)])
elif action == 1:
val = np.random.choice([i for i in range(180, 210)])
elif action == 2:
val = np.random.choice([i for i in range(210, 240)])
elif action == 3:
val = np.random.choice([i for i in range(240, 270)])
elif action == 4:
val = np.random.choice([i for i in range(270, 300)])
elif action == 5:
val = np.random.choice([i for i in range(300, 330)])
elif action == 6:
val = np.random.choice([i for i in range(330, 360)])
elif action == 7:
val = np.random.choice([i for i in range(360, 390)])
elif action == 8:
val = np.random.choice([i for i in range(390, 420)])
else:
val = np.random.choice([i for i in range(420, 450)])
return val
def stepA(self, action, count = 0):
#playerA should play
if count == 0:
#playerax = lastxcoordinate
self.NetworkA = self.net.network(self.lastxcoordinate, ysource = 100, Ynew = 600) #Network A
self.out = self.net.DefaultToPosition(self.lastxcoordinate)
#update lastxcoordinate
self.bally = self.NetworkA[1][count]
#here
#self.playerax = self.out[count]
self.playerbx = self.randomVal(action)
# soundObj = pygame.mixer.Sound('sound/sound.wav')
# soundObj.play()
# time.sleep(0.4)
# soundObj.stop()
elif count == 49:
self.ballx = self.NetworkA[0][count]
self.bally = self.NetworkA[1][count]
# move playerbx with respect to action
self.playerbx = self.randomVal(action)
else:
self.ballx = self.NetworkA[0][count]
self.bally = self.NetworkA[1][count]
# move playerbx with respect to action
# self.playerbx = self.randomVal(action)
obs = self.evaluate_state_from_last_coordinate(int(self.ballx)) # last state of the ball
reward = self.evaluate_action(action, obs)
done = True
info = ''
return obs, reward, done, info
def stepB(self, action, count):
#playerB can play
if count == 0:
#playerbx = lastxcoordinate
self.NetworkB = self.net.network(self.lastxcoordinate, ysource = 600, Ynew = 100) #Network B
self.out = self.net.DefaultToPosition(self.lastxcoordinate)
#update lastxcoordinate
self.bally = self.NetworkB[1][count]
#self.playerax = self.out[count]
self.playerax = self.randomVal(action)
# soundObj = pygame.mixer.Sound('sound/sound.wav')
# soundObj.play()
# time.sleep(0.4)
# soundObj.stop()
        elif count == 49:
            self.ballx = self.NetworkB[0][count]
            self.bally = self.NetworkB[1][count]
            # move playerax (the receiver in this rally) with respect to action
            self.playerax = self.randomVal(action)
else:
self.ballx = self.NetworkB[0][count]
self.bally = self.NetworkB[1][count]
# self.playerbx = self.randomVal(action)
obs = self.evaluate_state_from_last_coordinate(int(self.ballx)) # last state of the ball
reward = self.evaluate_action(action, obs)
done = True
info = ''
return obs, reward, done, info
def computeLossA(self, reward):
if reward == 0:
self.lossA += 1
else:
self.lossA += 0
return
def computeLossB(self, reward):
if reward == 0:
self.lossB += 1
else:
self.lossB += 0
return
def render(self):
# diplay team players
self.PLAYERA = pygame.image.load('images/cap.jpg')
self.PLAYERA = pygame.transform.scale(self.PLAYERA, (50, 50))
self.PLAYERB = pygame.image.load('images/cap.jpg')
self.PLAYERB = pygame.transform.scale(self.PLAYERB, (50, 50))
self.ball = pygame.image.load('images/ball.png')
self.ball = pygame.transform.scale(self.ball, (15, 15))
self.playerax = 150
self.playerbx = 250
self.ballx = 250
self.bally = 300
count = 0
nextplayer = 'A'
#player A starts by playing with state 0
obs, reward, done, info = self.stepA(0)
stateA = obs
stateB = obs
next_state = 0
iterations = 20000
iteration = 0
restart = False
while iteration < iterations:
self.display()
self.randNumLabelA = self.myFontA.render('A (Win): '+str(self.updateRewardA) + ', A(loss): '+str(self.lossA), 1, self.BLACK)
self.randNumLabelB = self.myFontB.render('B (Win): '+str(self.updateRewardB) + ', B(loss): '+ str(self.lossB), 1, self.BLACK)
self.randNumLabelIter = self.myFontIter.render('Iterations: '+str(self.updateIter), 1, self.BLACK)
if nextplayer == 'A':
if count == 0:
# Online DQN evaluates what to do
q_valueA = AgentA.model.predict([stateA])
actionA = AgentA.epsilon_greedy(q_valueA, iteration)
# Online DQN plays
obs, reward, done, info = self.stepA(action = actionA, count = count)
next_stateA = obs
# Let's memorize what just happened
AgentA.replay_memory.append((stateA, actionA, reward, next_stateA, 1.0 - done))
stateA = next_stateA
else:
# Online DQN evaluates what to do
q_valueA = AgentA.model.predict([stateA])
actionA = AgentA.epsilon_greedy(q_valueA, iteration)
# Online DQN plays
obs, reward, done, info = self.stepA(action = actionA, count = count)
next_stateA = obs
# Let's memorize what just happened
# AgentA.replay_memory.append((state, action, reward, next_state, 1.0 - done))
stateA = next_stateA
count += 1
if count == 50:
count = 0
self.updateRewardA += reward
self.computeLossA(reward)
#restart the game if player A fails to get the ball, and let B start the game
if reward == 0:
restart = True
time.sleep(0.5)
nextplayer = 'B'
self.playerbx = self.ballx
else:
restart = False
# Sample memories and use the target DQN to produce the target Q-Value
X_state_val, X_action_val, rewards, X_next_state_val, continues = (AgentA.sample_memories(AgentA.batch_size))
next_q_values = AgentA.model.predict([X_next_state_val])
max_next_q_values = np.max(next_q_values, axis=1, keepdims=True)
y_val = rewards + continues * AgentA.discount_rate * max_next_q_values
# Train the online DQN
AgentA.model.fit(X_state_val,tf.keras.utils.to_categorical(X_next_state_val, num_classes=10), verbose = 0)
nextplayer = 'B'
self.updateIter += 1
#evaluate A
else:
nextplayer = 'A'
else:
if count == 0:
# Online DQN evaluates what to do
q_valueB = AgentB.model.predict([stateB])
actionB = AgentB.epsilon_greedy(q_valueB, iteration)
# Online DQN plays
obs, reward, done, info = self.stepB(action = actionB, count = count)
next_stateB = obs
# Let's memorize what just happened
AgentB.replay_memory.append((stateB, actionB, reward, next_stateB, 1.0 - done))
stateB = next_stateB
else:
# Online DQN evaluates what to do
q_valueB = AgentB.model.predict([stateB])
actionB = AgentB.epsilon_greedy(q_valueB, iteration)
# Online DQN plays
obs, reward, done, info = self.stepB(action = actionB, count = count)
next_stateB = obs
# Let's memorize what just happened
# AgentB.replay_memory.append((state, action, reward, next_state, 1.0 - done))
stateB = next_stateB
count += 1
if count == 50:
count = 0
self.updateRewardB += reward
self.computeLossB(reward)
#restart the game if player A fails to get the ball, and let B start the game
if reward == 0:
restart = True
time.sleep(0.5)
nextplayer = 'A'
self.playerax = self.ballx
else:
restart = False
# Sample memories and use the target DQN to produce the target Q-Value
X_state_val, X_action_val, rewards, X_next_state_val, continues = (AgentB.sample_memories(AgentB.batch_size))
next_q_values = AgentB.model.predict([X_next_state_val])
max_next_q_values = np.max(next_q_values, axis=1, keepdims=True)
y_val = rewards + continues * AgentB.discount_rate * max_next_q_values
# Train the online DQN
AgentB.model.fit(X_state_val,tf.keras.utils.to_categorical(X_next_state_val, num_classes=10), verbose = 0)
nextplayer = 'A'
self.updateIter += 1
#evaluate B
else:
nextplayer = 'B'
            iteration += 1  # advance the training iteration used by epsilon-greedy; count is managed per-rally above
#CHECK BALL MOVEMENT
self.DISPLAYSURF.blit(self.PLAYERA, (self.playerax, 50))
self.DISPLAYSURF.blit(self.PLAYERB, (self.playerbx, 600))
self.DISPLAYSURF.blit(self.ball, (self.ballx, self.bally))
self.DISPLAYSURF.blit(self.randNumLabelA, (300, 630))
self.DISPLAYSURF.blit(self.randNumLabelB, (300, 40))
self.DISPLAYSURF.blit(self.randNumLabelIter, (50, 40))
#update last coordinate
self.lastxcoordinate = self.ballx
pygame.display.update()
self.fpsClock.tick(self.FPS)
for event in pygame.event.get():
if event.type == QUIT:
AgentA.model.save('AgentA.h5')
AgentB.model.save('AgentB.h5')
pygame.quit()
sys.exit()
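The ten hand-written ranges in `evaluate_state_from_last_coordinate` above are uniform 30-pixel bins over [150, 450], so the whole if/elif chain collapses to one arithmetic expression (an equivalent sketch, not the code used above):

```python
def state_from_x(c):
    """Map an x-coordinate in [150, 450] to one of 10 equal 30-pixel bins."""
    return min((int(c) - 150) // 30, 9)  # 450 would land in bin 10; clamp to 9
```

The clamp only matters at the right edge: 450 is included in state 9 rather than opening an eleventh bin.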
| pytennis/play.py | 15,682 | cmax: 450
cmin: 150
c definately will be between 150 and 450.
state0 - (150 - 179)
state1 - (180 - 209)
state2 - (210 - 239)
state3 - (240 - 269)
state4 - (270 - 299)
state5 - (300 - 329)
state6 - (330 - 359)
state7 - (360 - 389)
state8 - (390 - 419)
state9 - (420 - 450)
cmax: 450
cmin: 150
c definately will be between 150 and 450.
state0 - (150 - 179)
state1 - (180 - 209)
state2 - (210 - 239)
state3 - (240 - 269)
state4 - (270 - 299)
state5 - (300 - 329)
state6 - (330 - 359)
state7 - (360 - 389)
state8 - (390 - 419)
state9 - (420 - 450)
Testing Network A Network B NetworkA display test plot of network Asns.jointplot(NetworkA[0], NetworkA[1]) display test plot of network Bsns.jointplot(NetworkB[0], NetworkB[1]) set up the window set up the colorsplayerA should playplayerax = lastxcoordinateNetwork Aupdate lastxcoordinatehereself.playerax = self.out[count] soundObj = pygame.mixer.Sound('sound/sound.wav') soundObj.play() time.sleep(0.4) soundObj.stop() move playerbx with respect to action move playerbx with respect to action self.playerbx = self.randomVal(action) last state of the ballplayerB can playplayerbx = lastxcoordinateNetwork Bupdate lastxcoordinateself.playerax = self.out[count] soundObj = pygame.mixer.Sound('sound/sound.wav') soundObj.play() time.sleep(0.4) soundObj.stop() move playerbx with respect to action self.playerbx = self.randomVal(action) last state of the ball diplay team playersplayer A starts by playing with state 0 Online DQN evaluates what to do Online DQN plays Let's memorize what just happened Online DQN evaluates what to do Online DQN plays Let's memorize what just happened AgentA.replay_memory.append((state, action, reward, next_state, 1.0 - done))restart the game if player A fails to get the ball, and let B start the game Sample memories and use the target DQN to produce the target Q-Value Train the online DQNevaluate A Online DQN evaluates what to do Online DQN plays Let's memorize what just happened Online DQN evaluates what to do Online DQN plays Let's memorize what just happened AgentB.replay_memory.append((state, action, reward, next_state, 1.0 - done))restart the game if player A fails to get the ball, and let B start the game Sample memories and use the target DQN to produce the target Q-Value Train the online DQNevaluate BCHECK BALL MOVEMENTupdate last coordinate | 2,506 | en | 0.648987 |
"""
Optuna example that optimizes a classifier configuration for cancer dataset
using XGBoost.
In this example, we optimize the validation accuracy of cancer detection
using XGBoost. We optimize both the choice of booster model and their hyper
parameters.
We have following two ways to execute this example:
(1) Execute this code directly.
$ python xgboost_simple.py
(2) Execute through CLI.
$ STUDY_NAME=`optuna create-study --storage sqlite:///example.db`
$ optuna study optimize xgboost_simple.py objective --n-trials=100 --study $STUDY_NAME \
--storage sqlite:///example.db
"""
from __future__ import division
import numpy as np
import sklearn.datasets
import sklearn.metrics
from sklearn.model_selection import train_test_split
import xgboost as xgb
import optuna
# FYI: Objective functions can take additional arguments
# (https://optuna.readthedocs.io/en/stable/faq.html#objective-func-additional-args).
def objective(trial):
(data, target) = sklearn.datasets.load_breast_cancer(return_X_y=True)
train_x, test_x, train_y, test_y = train_test_split(data, target, test_size=0.25)
dtrain = xgb.DMatrix(train_x, label=train_y)
dtest = xgb.DMatrix(test_x, label=test_y)
param = {
'silent': 1,
'objective': 'binary:logistic',
'booster': trial.suggest_categorical('booster', ['gbtree', 'gblinear', 'dart']),
'lambda': trial.suggest_loguniform('lambda', 1e-8, 1.0),
'alpha': trial.suggest_loguniform('alpha', 1e-8, 1.0)
}
if param['booster'] == 'gbtree' or param['booster'] == 'dart':
param['max_depth'] = trial.suggest_int('max_depth', 1, 9)
param['eta'] = trial.suggest_loguniform('eta', 1e-8, 1.0)
param['gamma'] = trial.suggest_loguniform('gamma', 1e-8, 1.0)
param['grow_policy'] = trial.suggest_categorical('grow_policy', ['depthwise', 'lossguide'])
if param['booster'] == 'dart':
param['sample_type'] = trial.suggest_categorical('sample_type', ['uniform', 'weighted'])
param['normalize_type'] = trial.suggest_categorical('normalize_type', ['tree', 'forest'])
param['rate_drop'] = trial.suggest_loguniform('rate_drop', 1e-8, 1.0)
param['skip_drop'] = trial.suggest_loguniform('skip_drop', 1e-8, 1.0)
bst = xgb.train(param, dtrain)
preds = bst.predict(dtest)
pred_labels = np.rint(preds)
accuracy = sklearn.metrics.accuracy_score(test_y, pred_labels)
return 1.0 - accuracy
if __name__ == '__main__':
study = optuna.create_study()
study.optimize(objective, n_trials=100)
print(study.best_trial)
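The `trial.suggest_loguniform` calls above sample hyperparameters uniformly in log space, so that values spanning many orders of magnitude (here 1e-8 to 1.0) are covered evenly. A minimal stdlib stand-in for that sampler (a hypothetical sketch, not Optuna's implementation):

```python
import math
import random

def suggest_loguniform(low, high):
    # Sample uniformly in log space, then map back with exp, so each
    # order of magnitude between low and high is equally likely.
    return math.exp(random.uniform(math.log(low), math.log(high)))

random.seed(0)
sample = suggest_loguniform(1e-8, 1.0)  # always lies within [1e-8, 1.0]
```

Optuna's real sampler additionally records the drawn value in the trial so the study can optimize over it.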
| examples/xgboost_simple.py | 2,599 | Optuna example that optimizes a classifier configuration for cancer dataset
import re,sys
class Instruction:
def __init__(self,defn):
m = re.match("^([A-Fa-f0-9\-\,]+)\s+\"(.*?)\"\s+(.*)$",defn)
assert m is not None,"Bad line "+defn
range = m.group(1)
range = range+"-"+range if len(range) == 2 else range
range = range+",1" if len(range) == 5 else range
self.first = int(range[:2],16)
self.last = int(range[3:5],16)
self.step = int(range[-1],16)
self.name = m.group(2).strip()
self.code = m.group(3).strip()
#print(defn,range,self.first,self.last,self.step,self.getOpcodes())
def getOpcodes(self):
return range(self.first,self.last+self.step,self.step)
def getMnemonics(self,opcode):
base = self.name
base = self.process(base,opcode)
return base.lower()
def getCode(self,opcode,type = "C"):
base = self.process(self.code,opcode)
if (opcode & 0xF0) == 0xC0:
base = base + ";$CYCLES++"
isFirst = True
while base.find("$") >= 0:
if isFirst:
mWord = "$DF"
isFirst = False
else:
m = re.search("(\$[A-Za-z]+)",base)
mWord = m.group(1)
if type == "C":
base = base.replace(mWord,mWord[1:].upper())
elif type == "T":
base = base.replace(mWord,"this."+mWord[1:].lower())
else:
raise Exception()
while base.find(";;") >= 0:
base = base.replace(";;",";")
if base[0] == ';':
base = base[1:]
return base
def process(self,s,opc):
s = s.replace("@R","{0:X}".format(opc & 0x0F))
s = s.replace("@P","{0:X}".format(opc & 0x07))
s = s.replace("@E","{0:X}".format((opc & 0x03)+1))
s = s.replace("@BRANCH","$R[$P] = ($R[$P] & 0xFF00) | $T8")
s = s.replace("@LBRANCH","$R[$P] = $T16")
s = s.replace("@FETCH16","$T16=$FETCH();$T16=($T16 << 8)|$FETCH()")
s = s.replace("@LSKIP","$R[$P] = ($R[$P]+2) & 0xFFFF")
if s[:4] == "@ADD":
params = ["("+x+")" for x in s.strip()[5:-1].split(",")]
s = "$T16 = "+("+".join(params))+";$D = $T16 & 0xFF;$DF = ($T16 >> 8) & 1"
#print(s,params)
#sys.exit(0)
return s
src = open("1802.def").readlines()
src = [x if x.find("//") < 0 else x[:x.find("//")] for x in src]
src = [x.replace("\t"," ").strip() for x in src]
src = [x for x in src if x != ""]
instructions = [ None ] * 256
for l in src:
instr = Instruction(l)
for opc in instr.getOpcodes():
assert instructions[opc] is None,"Duplicate opcode : "+l
instructions[opc] = instr
mList = ",".join(['"'+instructions[x].getMnemonics(x)+'"' for x in range(0,256)])
open("_1802_mnemonics.h","w").write("{ "+mList+ " };\n\n")
h = open("_1802_case.h","w")
for i in range(0,256):
h.write("case 0x{0:02x}: /*** {1} ***/\n".format(i,instructions[i].getMnemonics(i)))
h.write(" "+instructions[i].getCode(i,"C")+";break;\n")
h.close()
h = open("_1802_opcodes.ts","w")
h.write("class CPU1802_Opcodes extends CPU1802_Base {\n\n")
# Emit method references (not calls) and return them as an array, so the
# generated TypeScript compiles and the table can be indexed by opcode.
h.write("public getOpcodeList():Function[] {\n    return [")
h.write(",".join("this.opcode_{0:02x}".format(n) for n in range(0,256)))
h.write("];\n}\n\n")
for i in range(0,256):
h.write("private opcode_{0:02x}(): void {{ /*** {1} ***/\n".format(i,instructions[i].getMnemonics(i)))
h.write(" "+instructions[i].getCode(i,"T")+";\n}\n")
h.write("}\n")
h.close()
h = open("_1802_ports.h","w")
for p in range(1,8):
h.write("#ifndef INPUT{0}\n#define INPUT{0}() (0)\n#endif\n".format(p))
h.write("#ifndef OUTPUT{0}\n#define OUTPUT{0}(x) {{}}\n#endif\n".format(p))
for p in range(1,5):
h.write("#ifndef EFLAG{0}\n#define EFLAG{0}() (0)\n#endif\n".format(p))
h.write("#ifndef UPDATEQ\n#define UPDATEQ(x) {{}}\n#endif\n".format(p))
h.close()
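The `Instruction` class above normalizes a definition such as `30-3F` (optionally with a `,step` suffix, all hex) into a first/last/step triple and expands it with `range`, treating `last` as inclusive. A standalone sketch of that expansion for a hypothetical `30-3F,1` definition:

```python
# "30-3F,1": first opcode 0x30, last opcode 0x3F, step 1 (all hex digits).
first, last, step = int("30", 16), int("3F", 16), int("1", 16)
# range() excludes its stop value, so add `step` to make `last` inclusive,
# mirroring getOpcodes() above.
opcodes = list(range(first, last + step, step))
```

This is why a two-character definition like `C4` is first padded to `C4-C4,1` before parsing: every definition reduces to the same triple form.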
"""Writes the given metrics in a csv."""
import numpy as np
import os
import pandas as pd
import sys
models_dir = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
sys.path.append(models_dir)
from baseline_constants import CLIENT_ID_KEY, NUM_ROUND_KEY, NUM_SAMPLES_KEY
COLUMN_NAMES = [
CLIENT_ID_KEY, NUM_ROUND_KEY, 'hierarchy', NUM_SAMPLES_KEY]
def print_metrics(
round_number,
client_ids,
metrics,
hierarchies,
num_samples,
path):
"""Prints or appends the given metrics in a csv.
The resulting dataframe is of the form:
client_id, round_number, hierarchy, num_samples, metric1, metric2
twebbstack, 0, , 18, 0.5, 0.89
Args:
round_number: Number of the round the metrics correspond to. If
0, then the file in path is overwritten. If not 0, we append to
that file.
client_ids: Ids of the clients. Not all ids must be in the following
dicts.
metrics: Dict keyed by client id. Each element is a dict of metrics
for that client in the specified round. The dicts for all clients
are expected to have the same set of keys.
hierarchies: Dict keyed by client id. Each element is a list of hierarchies
to which the client belongs.
num_samples: Dict keyed by client id. Each element is the number of test
samples for the client.
"""
columns = COLUMN_NAMES + get_metrics_names(metrics)
client_data = pd.DataFrame(columns=columns)
for i, c_id in enumerate(client_ids):
current_client = {
'client_id': c_id,
'round_number': round_number,
'hierarchy': ','.join(hierarchies.get(c_id, [])),
'num_samples': num_samples.get(c_id, np.nan)
}
current_metrics = metrics.get(c_id, {})
for metric, metric_value in current_metrics.items():
current_client[metric] = metric_value
client_data.loc[len(client_data)] = current_client
mode = 'w' if round_number == 0 else 'a'
print_dataframe(client_data, path, mode)
def print_dataframe(df, path, mode='w'):
"""Writes the given dataframe in path as a csv"""
header = mode == 'w'
df.to_csv(path, mode=mode, header=header, index=False)
def get_metrics_names(metrics):
"""Gets the names of the metrics.
Args:
metrics: Dict keyed by client id. Each element is a dict of metrics
for that client in the specified round. The dicts for all clients
are expected to have the same set of keys."""
if len(metrics) == 0:
return []
metrics_dict = next(iter(metrics.values()))
return list(metrics_dict.keys())
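`get_metrics_names` relies on the documented invariant that every client's metrics dict shares the same set of keys, so it can read the column names off an arbitrary client. A standalone sketch of the same lookup with hypothetical metric values:

```python
# All clients share the same metric keys, so one client's dict is enough.
metrics = {'twebbstack': {'accuracy': 0.89, 'loss': 0.5}}
names = list(next(iter(metrics.values())).keys()) if metrics else []
```

Since Python 3.7 dicts preserve insertion order, so the resulting column order is stable across rounds as long as clients report metrics in the same order.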
| models/metrics/writer.py | 2,742 | Gets the names of the metrics.
import numpy as np
import pandas as pd
import sys # can use sys to take command line arguments
class Recommender():
'''
What is this class all about - write a really good doc string here
'''
def __init__(self, ):
'''
what do we need to start out our recommender system
'''
def fit(self, ):
'''
fit the recommender to your dataset and also have this save the results
to pull from when you need to make predictions
'''
def predict_rating(self, ):
'''
makes predictions of a rating for a user on a movie-user combo
'''
def make_recs(self,):
'''
given a user id or a movie that an individual likes
make recommendations
'''
if __name__ == '__main__':
    # test different parts to make sure it works
    pass
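In a matrix-factorization recommender like the one this template outlines, `predict_rating` typically reduces to a dot product between a user's and a movie's latent vectors (learned by `fit`). A tiny sketch with hypothetical factor values:

```python
# Hypothetical 2-dimensional latent factors a fit() step might learn.
user_factors = [0.5, 1.0]
movie_factors = [2.0, 0.5]
# Predicted rating = dot product of the user and movie latent vectors.
predicted = sum(u * m for u, m in zip(user_factors, movie_factors))  # 1.5
```

With real data the factor matrices would be learned (e.g. by FunkSVD-style gradient descent on known ratings) rather than hand-set as here.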
| lessons/Recommendations/2_Matrix_Factorization_for_Recommendations/recommender_template.py | 843 | can use sys to take command line arguments test different parts to make sure it works | 85 | en | 0.834695 |
import tensorflow as tf
import sys
sys.path.insert(0,'..')
import vtrace_popart as vtrace
nest = tf.contrib.framework.nest
from .flags import *
def compute_baseline_loss(advantages):
# Loss for the baseline, summed over the time dimension.
# Multiply by 0.5 to match the standard update rule:
# d(loss) / d(baseline) = advantage
return .5 * tf.reduce_sum(tf.square(advantages))
def compute_entropy_loss(logits):
policy = tf.nn.softmax(logits)
log_policy = tf.nn.log_softmax(logits)
entropy_per_timestep = tf.reduce_sum(-policy * log_policy, axis=-1)
return -tf.reduce_sum(entropy_per_timestep)
def compute_policy_gradient_loss(logits, actions, advantages):
cross_entropy = tf.nn.sparse_softmax_cross_entropy_with_logits(
labels=actions, logits=logits)
advantages = tf.stop_gradient(advantages)
policy_gradient_loss_per_timestep = cross_entropy * advantages
return tf.reduce_sum(policy_gradient_loss_per_timestep)
def build_learner(agent, env_outputs, agent_outputs, env_id):
"""Builds the learner loop.
Args:
agent: A snt.RNNCore module outputting `AgentOutput` named tuples, with an
`unroll` call for computing the outputs for a whole trajectory.
    env_id: The environment (game) id for each sequence in the batch, used
      to select that game's PopArt normalization statistics.
env_outputs: A `StepOutput` namedtuple where each field is of shape
[T+1, ...].
agent_outputs: An `AgentOutput` namedtuple where each field is of shape
[T+1, ...].
Returns:
    A tuple of (done, infos, num_env_frames_and_train, mean, mean_squared),
    where evaluating the environment-frames tensor also applies a training
    update.
"""
learner_outputs = agent.unroll(agent_outputs.action, env_outputs, env_id)
# Use last baseline value (from the value function) to bootstrap.
bootstrap_value = learner_outputs.un_normalized_vf[-1]
# At this point, the environment outputs at time step `t` are the inputs that
# lead to the learner_outputs at time step `t`. After the following shifting,
# the actions in agent_outputs and learner_outputs at time step `t` is what
# leads to the environment outputs at time step `t`.
agent_outputs = nest.map_structure(lambda t: t[1:], agent_outputs)
rewards, infos, done, _ = nest.map_structure(
lambda t: t[1:], env_outputs)
learner_outputs = nest.map_structure(lambda t: t[:-1], learner_outputs)
if FLAGS.reward_clipping == 'abs_one':
clipped_rewards = tf.clip_by_value(rewards, -1, 1)
  elif FLAGS.reward_clipping == 'soft_asymmetric':
    squeezed = tf.tanh(rewards / 5.0)
    # Negative rewards are given less weight than positive rewards.
    clipped_rewards = tf.where(rewards < 0, .3 * squeezed, squeezed) * 5.
  else:
    # Fail fast on an unknown flag value instead of hitting a NameError below.
    raise ValueError('Unknown reward clipping: ' + FLAGS.reward_clipping)
discounts = tf.to_float(~done) * FLAGS.discounting
game_specific_mean = tf.gather(agent._mean, env_id)
game_specific_std = tf.gather(agent._std, env_id)
# Compute V-trace returns and weights.
# Note, this is put on the CPU because it's faster than on GPU. It can be
# improved further with XLA-compilation or with a custom TensorFlow operation.
with tf.device('/cpu'):
vtrace_returns = vtrace.from_logits(
behaviour_policy_logits=agent_outputs.policy_logits,
target_policy_logits=learner_outputs.policy_logits,
actions=agent_outputs.action,
discounts=discounts,
rewards=clipped_rewards,
un_normalized_values=learner_outputs.un_normalized_vf,
normalized_values=learner_outputs.normalized_vf,
mean=game_specific_mean,
std=game_specific_std,
bootstrap_value=bootstrap_value)
# First term of equation (7) in (Hessel et al., 2018)
normalized_vtrace = (vtrace_returns.vs - game_specific_mean) / game_specific_std
normalized_vtrace = nest.map_structure(tf.stop_gradient, normalized_vtrace)
# Compute loss as a weighted sum of the baseline loss, the policy gradient
# loss and an entropy regularization term.
total_loss = compute_policy_gradient_loss(
learner_outputs.policy_logits, agent_outputs.action,
vtrace_returns.pg_advantages)
baseline_loss = compute_baseline_loss(
normalized_vtrace - learner_outputs.normalized_vf)
total_loss += FLAGS.baseline_cost * baseline_loss
total_loss += FLAGS.entropy_cost * compute_entropy_loss(
learner_outputs.policy_logits)
# Optimization
num_env_frames = tf.train.get_global_step()
learning_rate = tf.train.polynomial_decay(FLAGS.learning_rate, num_env_frames,
FLAGS.total_environment_frames, 0)
optimizer = tf.train.RMSPropOptimizer(learning_rate, FLAGS.decay,
FLAGS.momentum, FLAGS.epsilon)
# Use reward clipping for atari games only
if FLAGS.gradient_clipping > 0.0:
variables = tf.trainable_variables()
gradients = tf.gradients(total_loss, variables)
gradients, _ = tf.clip_by_global_norm(gradients, FLAGS.gradient_clipping)
train_op = optimizer.apply_gradients(zip(gradients, variables))
else:
train_op = optimizer.minimize(total_loss)
# Merge updating the network and environment frames into a single tensor.
with tf.control_dependencies([train_op]):
num_env_frames_and_train = num_env_frames.assign_add(
FLAGS.batch_size * FLAGS.unroll_length)
# Adding a few summaries.
tf.summary.scalar('learning_rate', learning_rate)
tf.summary.scalar('total_loss', total_loss)
tf.summary.histogram('action', agent_outputs.action)
# I'm not sure if it's really necessary to put this operation on the CPU.
with tf.device('/cpu'):
(mean, mean_squared) = (agent.update_moments(vtrace_returns.vs, env_id))
return (done, infos, num_env_frames_and_train) + (mean, mean_squared) | popart/build_learner.py | 5,645 | Builds the learner loop.
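As a sanity check on `compute_entropy_loss` above (it returns the *negative* entropy, so minimizing the total loss pushes the policy toward higher entropy, i.e. more exploration): a uniform policy over two actions should have entropy ln 2. The same arithmetic in plain Python, without TensorFlow:

```python
import math

logits = [1.0, 1.0]                         # equal logits -> uniform policy
z = sum(math.exp(l) for l in logits)
policy = [math.exp(l) / z for l in logits]  # softmax
entropy = -sum(p * math.log(p) for p in policy)
loss = -entropy  # sign convention of compute_entropy_loss above
```

Because `loss` is negative whenever the policy is not deterministic, adding `FLAGS.entropy_cost * loss` to the total loss rewards keeping the action distribution spread out.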