| markdown (string, lengths 0–37k) | code (string, lengths 1–33.3k) | path (string, lengths 8–215) | repo_name (string, lengths 6–77) | license (string, 15 classes) |
|---|---|---|---|---|
13.6. Number Of Stratospheric Heterogeneous Reactions
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of reactions in the stratospheric heterogeneous chemistry scheme. | # PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_stratospheric_heterogenous_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
| notebooks/cccma/cmip6/models/canesm5/atmoschem.ipynb | ES-DOC/esdoc-jupyterhub | gpl-3.0 |
13.7. Number Of Advected Species
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of advected species in the gas phase chemistry scheme. | # PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_advected_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
| notebooks/cccma/cmip6/models/canesm5/atmoschem.ipynb | ES-DOC/esdoc-jupyterhub | gpl-3.0 |
13.8. Number Of Steady State Species
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of gas phase species for which the concentration is updated in the chemical solver assuming photochemical steady state. | # PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_steady_state_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
| notebooks/cccma/cmip6/models/canesm5/atmoschem.ipynb | ES-DOC/esdoc-jupyterhub | gpl-3.0 |
13.9. Interactive Dry Deposition
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is dry deposition interactive (as opposed to prescribed)? Dry deposition describes the dry processes by which gaseous species deposit themselves on solid surfaces thus decreasing their concent... | # PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.interactive_dry_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
| notebooks/cccma/cmip6/models/canesm5/atmoschem.ipynb | ES-DOC/esdoc-jupyterhub | gpl-3.0 |
13.10. Wet Deposition
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is wet deposition included? Wet deposition describes the moist processes by which gaseous species deposit themselves on solid surfaces thus decreasing their concentration in the air. | # PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.wet_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
| notebooks/cccma/cmip6/models/canesm5/atmoschem.ipynb | ES-DOC/esdoc-jupyterhub | gpl-3.0 |
13.11. Wet Oxidation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is wet oxidation included? Oxidation describes the loss of electrons or an increase in oxidation state by a molecule. | # PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.wet_oxidation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
| notebooks/cccma/cmip6/models/canesm5/atmoschem.ipynb | ES-DOC/esdoc-jupyterhub | gpl-3.0 |
14. Stratospheric Heterogeneous Chemistry
Atmospheric chemistry stratospheric heterogeneous chemistry
14.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of stratospheric heterogeneous atmospheric chemistry | # PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
| notebooks/cccma/cmip6/models/canesm5/atmoschem.ipynb | ES-DOC/esdoc-jupyterhub | gpl-3.0 |
14.2. Gas Phase Species
Is Required: FALSE Type: ENUM Cardinality: 0.N
Gas phase species included in the stratospheric heterogeneous chemistry scheme. | # PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.gas_phase_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Cly"
# "Bry"
# "NOy"
# TODO - please enter value(s)
| notebooks/cccma/cmip6/models/canesm5/atmoschem.ipynb | ES-DOC/esdoc-jupyterhub | gpl-3.0 |
14.3. Aerosol Species
Is Required: FALSE Type: ENUM Cardinality: 0.N
Aerosol species included in the stratospheric heterogeneous chemistry scheme. | # PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.aerosol_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Polar stratospheric ice"
# "NAT (Nitric acid trihydrate)"
# "NAD (Nitric aci... | notebooks/cccma/cmip6/models/canesm5/atmoschem.ipynb | ES-DOC/esdoc-jupyterhub | gpl-3.0 |
14.4. Number Of Steady State Species
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of steady state species in the stratospheric heterogeneous chemistry scheme. | # PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.number_of_steady_state_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
| notebooks/cccma/cmip6/models/canesm5/atmoschem.ipynb | ES-DOC/esdoc-jupyterhub | gpl-3.0 |
14.5. Sedimentation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is sedimentation included in the stratospheric heterogeneous chemistry scheme? | # PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.sedimentation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
| notebooks/cccma/cmip6/models/canesm5/atmoschem.ipynb | ES-DOC/esdoc-jupyterhub | gpl-3.0 |
14.6. Coagulation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is coagulation included in the stratospheric heterogeneous chemistry scheme? | # PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.coagulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
| notebooks/cccma/cmip6/models/canesm5/atmoschem.ipynb | ES-DOC/esdoc-jupyterhub | gpl-3.0 |
15. Tropospheric Heterogeneous Chemistry
Atmospheric chemistry tropospheric heterogeneous chemistry
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of tropospheric heterogeneous atmospheric chemistry | # PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
| notebooks/cccma/cmip6/models/canesm5/atmoschem.ipynb | ES-DOC/esdoc-jupyterhub | gpl-3.0 |
15.2. Gas Phase Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of gas phase species included in the tropospheric heterogeneous chemistry scheme. | # PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.gas_phase_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
| notebooks/cccma/cmip6/models/canesm5/atmoschem.ipynb | ES-DOC/esdoc-jupyterhub | gpl-3.0 |
15.3. Aerosol Species
Is Required: FALSE Type: ENUM Cardinality: 0.N
Aerosol species included in the tropospheric heterogeneous chemistry scheme. | # PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.aerosol_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Nitrate"
# "Sea salt"
# "Dust"
# "Ice"
# "Organic"
# "Bl... | notebooks/cccma/cmip6/models/canesm5/atmoschem.ipynb | ES-DOC/esdoc-jupyterhub | gpl-3.0 |
15.4. Number Of Steady State Species
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of steady state species in the tropospheric heterogeneous chemistry scheme. | # PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.number_of_steady_state_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
| notebooks/cccma/cmip6/models/canesm5/atmoschem.ipynb | ES-DOC/esdoc-jupyterhub | gpl-3.0 |
15.5. Interactive Dry Deposition
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is dry deposition interactive (as opposed to prescribed)? Dry deposition describes the dry processes by which gaseous species deposit themselves on solid surfaces thus decreasing their concent... | # PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.interactive_dry_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
| notebooks/cccma/cmip6/models/canesm5/atmoschem.ipynb | ES-DOC/esdoc-jupyterhub | gpl-3.0 |
15.6. Coagulation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is coagulation included in the tropospheric heterogeneous chemistry scheme? | # PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.coagulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
| notebooks/cccma/cmip6/models/canesm5/atmoschem.ipynb | ES-DOC/esdoc-jupyterhub | gpl-3.0 |
16. Photo Chemistry
Atmospheric chemistry photo chemistry
16.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of atmospheric photo chemistry | # PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
| notebooks/cccma/cmip6/models/canesm5/atmoschem.ipynb | ES-DOC/esdoc-jupyterhub | gpl-3.0 |
16.2. Number Of Reactions
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of reactions in the photo-chemistry scheme. | # PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.number_of_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
| notebooks/cccma/cmip6/models/canesm5/atmoschem.ipynb | ES-DOC/esdoc-jupyterhub | gpl-3.0 |
17. Photo Chemistry --> Photolysis
Photolysis scheme
17.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Photolysis scheme | # PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.photolysis.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Offline (clear sky)"
# "Offline (with clouds)"
# "Online"
# TODO - please enter value(s)
| notebooks/cccma/cmip6/models/canesm5/atmoschem.ipynb | ES-DOC/esdoc-jupyterhub | gpl-3.0 |
17.2. Environmental Conditions
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe any environmental conditions taken into account by the photolysis scheme (e.g. whether pressure- and temperature-sensitive cross-sections and quantum yields in the photolysis calculatio... | # PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.photolysis.environmental_conditions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
| notebooks/cccma/cmip6/models/canesm5/atmoschem.ipynb | ES-DOC/esdoc-jupyterhub | gpl-3.0 |
Load the component using KFP SDK | import kfp.components as comp
dataflow_template_op = comp.load_component_from_url(
'https://raw.githubusercontent.com/kubeflow/pipelines/1.7.0-rc.3/components/gcp/dataflow/launch_template/component.yaml')
help(dataflow_template_op) | components/gcp/dataflow/launch_template/sample.ipynb | kubeflow/pipelines | apache-2.0 |
Sample
Note: The following sample code works in an IPython notebook or directly in Python code.
In this sample, we run a Google-provided word count template from gs://dataflow-templates/latest/Word_Count. The template takes a text file as input and outputs word counts to a Cloud Storage bucket. Here is the sample input... | !gsutil cat gs://dataflow-samples/shakespeare/kinglear.txt | components/gcp/dataflow/launch_template/sample.ipynb | kubeflow/pipelines | apache-2.0 |
Set sample parameters | # Required Parameters
PROJECT_ID = '<Please put your project ID here>'
GCS_WORKING_DIR = 'gs://<Please put your GCS path here>' # No ending slash
# Optional Parameters
EXPERIMENT_NAME = 'Dataflow - Launch Template'
OUTPUT_PATH = '{}/out/wc'.format(GCS_WORKING_DIR) | components/gcp/dataflow/launch_template/sample.ipynb | kubeflow/pipelines | apache-2.0 |
Example pipeline that uses the component | import kfp.dsl as dsl
import json
@dsl.pipeline(
name='Dataflow launch template pipeline',
description='Dataflow launch template pipeline'
)
def pipeline(
project_id = PROJECT_ID,
gcs_path = 'gs://dataflow-templates/latest/Word_Count',
launch_parameters = json.dumps({
'parameters': {
... | components/gcp/dataflow/launch_template/sample.ipynb | kubeflow/pipelines | apache-2.0 |
Compile the pipeline | pipeline_func = pipeline
pipeline_filename = pipeline_func.__name__ + '.zip'
import kfp.compiler as compiler
compiler.Compiler().compile(pipeline_func, pipeline_filename) | components/gcp/dataflow/launch_template/sample.ipynb | kubeflow/pipelines | apache-2.0 |
Submit the pipeline for execution | #Specify pipeline argument values
arguments = {}
#Get or create an experiment and submit a pipeline run
import kfp
client = kfp.Client()
experiment = client.create_experiment(EXPERIMENT_NAME)
#Submit a pipeline run
run_name = pipeline_func.__name__ + ' run'
run_result = client.run_pipeline(experiment.id, run_name, pi... | components/gcp/dataflow/launch_template/sample.ipynb | kubeflow/pipelines | apache-2.0 |
Inspect the output | !gsutil cat $OUTPUT_PATH* | components/gcp/dataflow/launch_template/sample.ipynb | kubeflow/pipelines | apache-2.0 |
Plot Original Data Set
Plot the Sepal width vs. Sepal length on the original data. | # Plot the first two features BEFORE doing the PCA
plt.figure(2, figsize=(8, 6))
plt.scatter(X[:, 0], X[:, 1], c=Y, cmap=plt.cm.Paired)
plt.xlabel('Sepal length (cm)')
plt.ylabel('Sepal width (cm)')
plt.show() | app/.ipynb_checkpoints/my_notebook-checkpoint.ipynb | eric-svds/flask-with-docker | gpl-2.0 |
Plot Data After PCA
After performing a PCA, the first two components are plotted. Note that the two components plotted are linear combinations of the original 4 features of the data set. | # Plot the first two principal components AFTER the PCA
plt.figure(2, figsize=(8, 6))
plt.scatter(X_PCA[:, 0], X_PCA[:, 1], c=Y, cmap=plt.cm.Paired)
plt.xlabel('Component 1')
plt.ylabel('Component 2')
plt.show() | app/.ipynb_checkpoints/my_notebook-checkpoint.ipynb | eric-svds/flask-with-docker | gpl-2.0 |
Save Output
The Flask application will make use of the following D3 Scatterplot example. Data has to be in a particular format (see link for example), this cell flips the data sets into that format and pickles the output. | # Pickle pre- and post-PCA data
import pickle
features = []
for full_label in iris.feature_names:
name = full_label[:-5].split() # remove trailing ' (cm)'
features.append(name[0]+name[1].capitalize())
features.append("species")
# Create full set for Iris data
data1 = []
data_PCA = []
for i, vals in enumerate(... | app/.ipynb_checkpoints/my_notebook-checkpoint.ipynb | eric-svds/flask-with-docker | gpl-2.0 |
The line below creates a list of three pairs, each pair containing two pandas.Series objects.
A Series is like a dictionary, only its items are ordered and its values must share a data type. The ordered keys of a Series form its index. It is easy to compose Series objects into a DataFrame. | series = [ordered_words(archive.data) for archive in archives]
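To make the Series/DataFrame relationship concrete, here is a minimal sketch (the word counts are made up, not data from the archives) showing how `pd.concat` aligns two Series on their shared index:

```python
import pandas as pd

# A Series is an ordered mapping with homogeneously typed values;
# its ordered keys form the index. Word counts here are illustrative.
counts_a = pd.Series({"the": 120, "ietf": 40, "policy": 25}, name="listA counts")
counts_b = pd.Series({"the": 95, "policy": 60, "dns": 30}, name="listB counts")

# Concatenating along axis=1 aligns the Series on their shared index;
# words missing from one list become NaN in that column.
df = pd.concat([counts_a, counts_b], axis=1)
print(df)
```

Missing keys turning into NaN is exactly what happens when word-count Series from different mailing lists are combined below.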
This creates a single DataFrame from the Series objects.
The columns alternate between representing word rankings and representing word counts. | rankings = pd.concat([series[0][0],
series[0][1],
series[1][0],
series[1][1],
series[2][0],
series[2][1]],axis=1)
# display the first 5 rows of the DataFrame
rankings[:5] | examples/obsolete_notebooks/SummerSchoolCompareWordRankings.ipynb | sbenthall/bigbang | agpl-3.0 |
We should rename the columns to be more descriptive of the data. | rankings.rename(columns={0: 'ipc-gnso rankings',
1: 'ipc-gnso counts',
2: 'wp4 rankings',
3: 'wp4 counts',
4: 'ncuc-discuss rankings',
5: 'ncuc-discuss counts'},inplace=True)
rankings[:5] | examples/obsolete_notebooks/SummerSchoolCompareWordRankings.ipynb | sbenthall/bigbang | agpl-3.0 |
Use the to_csv() function on the DataFrame object to export the data to CSV format, which you can open easily in Excel. | rankings.to_csv("rankings_all.csv",encoding="utf-8") | examples/obsolete_notebooks/SummerSchoolCompareWordRankings.ipynb | sbenthall/bigbang | agpl-3.0 |
To filter the data by certain authors before computing the word rankings, provide a list of author names as an argument.
Only emails whose From header includes one of the author names within it will be included in the calculation.
Note that for detecting the author name, the program for now uses simple string inclusion.... | authors = ["Greg Shatan",
"Niels ten Oever"]
ordered_words(archives[0].data, authors=authors) | examples/obsolete_notebooks/SummerSchoolCompareWordRankings.ipynb | sbenthall/bigbang | agpl-3.0 |
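As a rough sketch of what substring-based author filtering means in practice (the email records and field layout below are hypothetical, not BigBang's internal representation):

```python
# Minimal sketch of substring-based author filtering, as described above.
# The records and the "From" field layout are hypothetical examples.
emails = [
    {"From": "Greg Shatan <gshatan@example.com>", "Body": "first draft"},
    {"From": "Niels ten Oever <niels@example.org>", "Body": "comments"},
    {"From": "Someone Else <other@example.net>", "Body": "unrelated"},
]
authors = ["Greg Shatan", "Niels ten Oever"]

# Keep an email if any author name appears verbatim inside its From header.
selected = [e for e in emails if any(a in e["From"] for a in authors)]
print(len(selected))  # 2 of the 3 messages match
```

Note that plain string inclusion also matches partial names, which is why the caveat above about name detection matters.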
Find the symmetries of the qubit operator | [symmetries, sq_paulis, cliffords, sq_list] = qubit_op.find_Z2_symmetries()
print('Z2 symmetries found:')
for symm in symmetries:
print(symm.to_label())
print('single qubit operators found:')
for sq in sq_paulis:
print(sq.to_label())
print('cliffords found:')
for clifford in cliffords:
print(clifford.print_... | community/aqua/chemistry/LiH_with_qubit_tapering_and_uccsd.ipynb | antoniomezzacapo/qiskit-tutorial | apache-2.0 |
Use the found symmetries, single qubit operators, and cliffords to taper qubits from the original qubit operator. For each Z2 symmetry one can taper one qubit. However, different tapered operators can be built, corresponding to different symmetry sectors. | tapered_ops = []
for coeff in itertools.product([1, -1], repeat=len(sq_list)):
tapered_op = Operator.qubit_tapering(qubit_op, cliffords, sq_list, list(coeff))
tapered_ops.append((list(coeff), tapered_op))
print("Number of qubits of tapered qubit operator: {}".format(tapered_op.num_qubits)) | community/aqua/chemistry/LiH_with_qubit_tapering_and_uccsd.ipynb | antoniomezzacapo/qiskit-tutorial | apache-2.0 |
The user has to specify the symmetry sector he is interested in. Since we are interested in finding the ground state here, let us get the original ground state energy as a reference. | ee = get_algorithm_instance('ExactEigensolver')
ee.init_args(qubit_op, k=1)
result = core.process_algorithm_result(ee.run())
for line in result[0]:
print(line) | community/aqua/chemistry/LiH_with_qubit_tapering_and_uccsd.ipynb | antoniomezzacapo/qiskit-tutorial | apache-2.0 |
Now, let us iterate through all tapered qubit operators to find out the one whose ground state energy matches the original (un-tapered) one. | smallest_eig_value = 99999999999999
smallest_idx = -1
for idx in range(len(tapered_ops)):
ee.init_args(tapered_ops[idx][1], k=1)
curr_value = ee.run()['energy']
if curr_value < smallest_eig_value:
smallest_eig_value = curr_value
smallest_idx = idx
print("Lowest eigenvalue of the {}-th ta... | community/aqua/chemistry/LiH_with_qubit_tapering_and_uccsd.ipynb | antoniomezzacapo/qiskit-tutorial | apache-2.0 |
Alternatively, one can run multiple VQE instances to find the lowest eigenvalue sector.
Here we just validate that the_tapered_op reaches the smallest eigenvalue in one VQE execution with the UCCSD variational form, modified to take into account the tapered symmetries. | # setup initial state
init_state = get_initial_state_instance('HartreeFock')
init_state.init_args(num_qubits=the_tapered_op.num_qubits, num_orbitals=core._molecule_info['num_orbitals'],
qubit_mapping=core._qubit_mapping, two_qubit_reduction=core._two_qubit_reduction,
num_particle... | community/aqua/chemistry/LiH_with_qubit_tapering_and_uccsd.ipynb | antoniomezzacapo/qiskit-tutorial | apache-2.0 |
Naive concept of simultaneous deformation
Here we try to split simple shear and pure shear into several incremental steps and mutually superpose those increments to simulate simultaneous deformation. We will use the following deformation gradients for total simple shear and pure shear: | gamma = 1
Sx = 2
Fs = array([[1, gamma], [0, 1]])
Fp = array([[Sx, 0], [0, 1/Sx]]) | 14_Simultaneous_deformation.ipynb | ondrolexa/sg2 | mit |
To divide the simple shear deformation with $\gamma$=1 into n incremental steps | n = 10
Fsi = array([[1, gamma/n], [0, 1]])
print('Incremental deformation gradient:')
print(Fsi) | 14_Simultaneous_deformation.ipynb | ondrolexa/sg2 | mit |
To check that the superposition of those increments gives the total deformation, we can use the numpy array_equal and allclose functions | array_equal(matrix_power(Fsi, n), Fs)
Fpi = array([[Sx**(1/n), 0], [0, Sx**(-1/n)]])
print('Incremental deformation gradient:')
print(Fpi)
allclose(matrix_power(Fpi, n), Fp) | 14_Simultaneous_deformation.ipynb | ondrolexa/sg2 | mit |
Knowing that deformation superposition is not commutative, we can check that the axial ratios of finite strain resulting from simple shear superposed on pure shear and vice versa are really different: | u,s,v = svd(Fs @ Fp)
print('Axial ratio of finite strain resulting from simple shear superposed on pure shear: {}'.format(s[0]/s[1]))
u,s,v = svd(Fp @ Fs)
print('Axial ratio of finite strain resulting from pure shear superposed on simple shear: {}'.format(s[0]/s[1])) | 14_Simultaneous_deformation.ipynb | ondrolexa/sg2 | mit |
Let's try to split those deformations into two increments and mix them together: | Fsi = array([[1, gamma/2], [0, 1]])
Fpi = array([[Sx**(1/2), 0], [0, Sx**(-1/2)]])
u,s,v = svd(Fsi @ Fpi @ Fsi @ Fpi)
print('Axial ratio of finite strain of superposed increments starting with pure shear: {}'.format(s[0]/s[1]))
u,s,v = svd(Fpi @ Fsi @ Fpi @ Fsi)
print('Axial ratio of finite strain of superposed increme... | 14_Simultaneous_deformation.ipynb | ondrolexa/sg2 | mit |
The results are now close to each other, but still quite different. So let's split it into many more increments. | n = 100
Fsi = array([[1, gamma/n], [0, 1]])
Fpi = array([[Sx**(1/n), 0], [0, Sx**(-1/n)]])
u,s,v = svd(matrix_power(Fsi @ Fpi, n))
print('Axial ratio of finite strain of superposed increments starting with pure shear: {}'.format(s[0]/s[1]))
u,s,v = svd(matrix_power(Fpi @ Fsi, n))
print('Axial ratio of finite strain of ... | 14_Simultaneous_deformation.ipynb | ondrolexa/sg2 | mit |
Now it is very close. Let's visualize how the finite strain converges with an increasing number of increments: | arp = []
ars = []
ninc = range(1, 201)
for n in ninc:
Fsi = array([[1, gamma/n], [0, 1]])
Fpi = array([[Sx**(1/n), 0], [0, Sx**(-1/n)]])
u,s,v = svd(matrix_power(Fsi @ Fpi, n))
arp.append(s[0]/s[1])
u,s,v = svd(matrix_power(Fpi @ Fsi, n))
ars.append(s[0]/s[1])
figure(figsize=(16, 4))
semilogy(ni... | 14_Simultaneous_deformation.ipynb | ondrolexa/sg2 | mit |
Using spatial velocity gradient
We need to import matrix exponential and matrix logarithm functions from scipy.linalg | from scipy.linalg import expm, logm | 14_Simultaneous_deformation.ipynb | ondrolexa/sg2 | mit |
The spatial velocity gradient can be obtained as the matrix logarithm of the deformation gradient | Lp = logm(Fp)
Ls = logm(Fs) | 14_Simultaneous_deformation.ipynb | ondrolexa/sg2 | mit |
The total spatial velocity gradient of simultaneous deformation can be calculated by summing the individual ones | L = Lp + Ls
The resulting deformation gradient can be calculated as the matrix exponential of the total spatial velocity gradient | F = expm(L)
u,s,v = svd(F)
sar = s[0]/s[1]
print('Axial ratio of finite strain of simultaneous pure shear and simple shear: {}'.format(sar)) | 14_Simultaneous_deformation.ipynb | ondrolexa/sg2 | mit |
Let's overlay it on the previous diagram | arp = []
ars = []
ninc = range(1, 201)
for n in ninc:
Fsi = array([[1, gamma/n], [0, 1]])
Fpi = array([[Sx**(1/n), 0], [0, Sx**(-1/n)]])
u,s,v = svd(matrix_power(Fsi @ Fpi, n))
arp.append(s[0]/s[1])
u,s,v = svd(matrix_power(Fpi @ Fsi, n))
ars.append(s[0]/s[1])
figure(figsize=(16, 4))
semilogy(ni... | 14_Simultaneous_deformation.ipynb | ondrolexa/sg2 | mit |
Decomposition of spatial velocity gradient
Here we will decompose the spatial velocity gradient of simple shear into the rate of deformation tensor and the spin tensor. | L = logm(Fs)
D = (L + L.T)/2
W = (L - L.T)/2 | 14_Simultaneous_deformation.ipynb | ondrolexa/sg2 | mit |
Check that the decomposition gives the total spatial velocity gradient | allclose(D + W, L) | 14_Simultaneous_deformation.ipynb | ondrolexa/sg2 | mit |
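Beyond checking that D + W recovers L, the defining properties of the two parts can be verified directly. A minimal numpy-only sketch, exploiting the fact that for simple shear F = I + N with N nilpotent, so the matrix logarithm is exactly F - I (no scipy needed here):

```python
import numpy as np

gamma = 1.0
Fs = np.array([[1.0, gamma], [0.0, 1.0]])

# For simple shear, F = I + N with N nilpotent (N @ N == 0),
# so logm(F) = N exactly and we can skip scipy in this sketch.
N = Fs - np.eye(2)
assert np.allclose(N @ N, 0)
L = N

D = (L + L.T) / 2  # rate of deformation tensor (symmetric part)
W = (L - L.T) / 2  # spin tensor (antisymmetric part)

assert np.allclose(D, D.T)    # D is symmetric
assert np.allclose(W, -W.T)   # W is antisymmetric
assert np.allclose(D + W, L)  # the parts sum back to L
print("decomposition checks pass")
```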
Visualize spatial velocity gradients for rate of deformation tensor | vel_field(D) | 14_Simultaneous_deformation.ipynb | ondrolexa/sg2 | mit |
Visualize spatial velocity gradients for spin tensor | vel_field(W) | 14_Simultaneous_deformation.ipynb | ondrolexa/sg2 | mit |
A simple plot with the pyplot API | from bqplot import pyplot as plt
plt.figure(1)
n = 100
plt.plot(np.linspace(0.0, 10.0, n), np.cumsum(np.random.randn(n)),
axes_options={'y': {'grid_lines': 'dashed'}})
plt.show() | 2019-05-22-pydata-frankfurt/notebooks/bqplot.ipynb | QuantStack/quantstack-talks | bsd-3-clause |
Scatter Plot | plt.figure(title='Scatter Plot with colors')
plt.scatter(y_data_2, y_data_3, color=y_data)
plt.show() | 2019-05-22-pydata-frankfurt/notebooks/bqplot.ipynb | QuantStack/quantstack-talks | bsd-3-clause |
Histogram | plt.figure()
plt.hist(y_data, colors=['OrangeRed'])
plt.show() | 2019-05-22-pydata-frankfurt/notebooks/bqplot.ipynb | QuantStack/quantstack-talks | bsd-3-clause |
Every component of the figure is an independent widget | xs = bq.LinearScale()
ys = bq.LinearScale()
x = np.arange(100)
y = np.cumsum(np.random.randn(2, 100), axis=1) #two random walks
line = bq.Lines(x=x, y=y, scales={'x': xs, 'y': ys}, colors=['red', 'green'])
xax = bq.Axis(scale=xs, label='x', grid_lines='solid')
yax = bq.Axis(scale=ys, orientation='vertical', tick_forma... | 2019-05-22-pydata-frankfurt/notebooks/bqplot.ipynb | QuantStack/quantstack-talks | bsd-3-clause |
The same holds for the attributes of scales, axes | xs.min = 4
xs.min = None
xax.label = 'Some label for the x axis' | 2019-05-22-pydata-frankfurt/notebooks/bqplot.ipynb | QuantStack/quantstack-talks | bsd-3-clause |
Use bqplot figures as input widgets | xs = bq.LinearScale()
ys = bq.LinearScale()
x = np.arange(100)
y = np.cumsum(np.random.randn(2, 100), axis=1) #two random walks
line = bq.Lines(x=x, y=y, scales={'x': xs, 'y': ys}, colors=['red', 'green'])
xax = bq.Axis(scale=xs, label='x', grid_lines='solid')
yax = bq.Axis(scale=ys, orientation='vertical', tick_forma... | 2019-05-22-pydata-frankfurt/notebooks/bqplot.ipynb | QuantStack/quantstack-talks | bsd-3-clause |
Selections | def interval_change_callback(change):
db.value = str(change['new'])
intsel = bq.interacts.FastIntervalSelector(scale=xs, marks=[line])
intsel.observe(interval_change_callback, names=['selected'] )
db = widgets.Label()
db.value = str(intsel.selected)
display(db)
fig = bq.Figure(marks=[line], axes=[xax, yax], anim... | 2019-05-22-pydata-frankfurt/notebooks/bqplot.ipynb | QuantStack/quantstack-talks | bsd-3-clause |
Handdraw | handdraw = bq.interacts.HandDraw(lines=line)
fig.interaction = handdraw
line.y[0] | 2019-05-22-pydata-frankfurt/notebooks/bqplot.ipynb | QuantStack/quantstack-talks | bsd-3-clause |
Moving points around | from bqplot import *
size = 100
np.random.seed(0)
x_data = range(size)
y_data = np.cumsum(np.random.randn(size) * 100.0)
## Enabling moving of points in scatter. Try to click and drag any of the points in the scatter and
## notice the line representing the mean of the data update
sc_x = LinearScale()
sc_y = LinearS... | 2019-05-22-pydata-frankfurt/notebooks/bqplot.ipynb | QuantStack/quantstack-talks | bsd-3-clause |
Load config from default location. | config.load_kube_config() | examples/notebooks/intro_notebook.ipynb | kubernetes-client/python | apache-2.0 |
Create API endpoint instance as well as API resource instances (body and specification). | api_instance = client.AppsV1Api()
dep = client.V1Deployment()
spec = client.V1DeploymentSpec() | examples/notebooks/intro_notebook.ipynb | kubernetes-client/python | apache-2.0 |
Fill required object fields (apiVersion, kind, metadata and spec). | name = "my-busybox"
dep.metadata = client.V1ObjectMeta(name=name)
spec.template = client.V1PodTemplateSpec()
spec.template.metadata = client.V1ObjectMeta(name="busybox")
spec.template.metadata.labels = {"app":"busybox"}
spec.template.spec = client.V1PodSpec()
dep.spec = spec
container = client.V1Container()
containe... | examples/notebooks/intro_notebook.ipynb | kubernetes-client/python | apache-2.0 |
Create Deployment using create_xxxx command for Deployments. | api_instance.create_namespaced_deployment(namespace="default",body=dep) | examples/notebooks/intro_notebook.ipynb | kubernetes-client/python | apache-2.0 |
Use list_xxxx command for Deployment, to list Deployments. | deps = api_instance.list_namespaced_deployment(namespace="default")
for item in deps.items:
print("%s %s" % (item.metadata.namespace, item.metadata.name)) | examples/notebooks/intro_notebook.ipynb | kubernetes-client/python | apache-2.0 |
Use read_xxxx command for Deployment, to display the detailed state of the created Deployment resource. | api_instance.read_namespaced_deployment(namespace="default",name=name) | examples/notebooks/intro_notebook.ipynb | kubernetes-client/python | apache-2.0 |
Use patch_xxxx command for Deployment, to make specific update to the Deployment. | dep.metadata.labels = {"key": "value"}
api_instance.patch_namespaced_deployment(name=name, namespace="default", body=dep) | examples/notebooks/intro_notebook.ipynb | kubernetes-client/python | apache-2.0 |
Use replace_xxxx command for Deployment, to update Deployment with a completely new version of the object. | dep.spec.template.spec.containers[0].image = "busybox:1.26.2"
api_instance.replace_namespaced_deployment(name=name, namespace="default", body=dep) | examples/notebooks/intro_notebook.ipynb | kubernetes-client/python | apache-2.0 |
Use delete_xxxx command for Deployment, to delete created Deployment. | api_instance.delete_namespaced_deployment(name=name, namespace="default", body=client.V1DeleteOptions(propagation_policy="Foreground", grace_period_seconds=5)) | examples/notebooks/intro_notebook.ipynb | kubernetes-client/python | apache-2.0 |
Create Data | # Create a list of 20 observations drawn from a normal distribution
# with mean 1 and a standard deviation of 1.5
x = np.random.normal(1, 1.5, 20)
# Create a list of 20 observations drawn from a normal distribution
# with mean 0 and a standard deviation of 1.5
y = np.random.normal(0, 1.5, 20) | statistics/t-tests.ipynb | tpin3694/tpin3694.github.io | mit |
One Sample Two-Sided T-Test
Imagine the one sample T-test and drawing a (normally shaped) hill centered at 1 and "spread" out with a standard deviation of 1.5, then placing a flag at 0 and looking at where on the hill the flag is located. Is it near the top? Far away from the hill? If the flag is near the very bottom ... | # Run a t-test to test if the mean of x is statistically significantly different from 0
pvalue = stats.ttest_1samp(x, 0)[1]
# View the p-value
pvalue | statistics/t-tests.ipynb | tpin3694/tpin3694.github.io | mit |
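Under the hood, ttest_1samp computes the statistic t = (mean(x) - mu0) / (s / sqrt(n)), with s the sample standard deviation (ddof=1). A small sketch with illustrative data (not the random draws above):

```python
import numpy as np

# One-sample t statistic behind stats.ttest_1samp:
# t = (mean(x) - mu0) / (s / sqrt(n)), with s the sample std (ddof=1).
x = np.array([1.2, 0.8, 1.5, 0.9, 1.1, 1.4, 0.7, 1.3])
mu0 = 0.0

n = len(x)
t_stat = (x.mean() - mu0) / (x.std(ddof=1) / np.sqrt(n))
print(round(t_stat, 2))  # ≈ 10.85
```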
Two Variable Unpaired Two-Sided T-Test With Equal Variances
Similar to the one sample T-test, imagine drawing two (normally shaped) hills centered at their means, with their 'flatness' (individual spread) based on the standard deviation. The T-test looks at how much the two hills are overlapping. Are they basically on top of ea... | stats.ttest_ind(x, y)[1]
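For the equal-variance case, the statistic behind ttest_ind uses a pooled standard deviation. A sketch with illustrative data:

```python
import numpy as np

# Pooled two-sample t statistic behind stats.ttest_ind (equal variances):
# t = (mean(x) - mean(y)) / (s_p * sqrt(1/n_x + 1/n_y)),
# where s_p**2 is the weighted average of the two sample variances.
x = np.array([2.1, 1.8, 2.5, 1.9, 2.2])
y = np.array([1.0, 1.4, 0.8, 1.2, 1.1])

nx, ny = len(x), len(y)
sp2 = ((nx - 1) * x.var(ddof=1) + (ny - 1) * y.var(ddof=1)) / (nx + ny - 2)
t_stat = (x.mean() - y.mean()) / np.sqrt(sp2 * (1 / nx + 1 / ny))
print(round(t_stat, 2))  # ≈ 6.32
```

The unequal-variance (Welch) version below replaces the pooled variance with each sample's own variance and adjusts the degrees of freedom.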
Two Variable Unpaired Two-Sided T-Test With Unequal Variances | stats.ttest_ind(x, y, equal_var=False)[1] | statistics/t-tests.ipynb | tpin3694/tpin3694.github.io | mit |
Two Variable Paired Two-Sided T-Test
Paired T-tests are used when we are taking repeated samples and want to take into account the fact that the two distributions we are testing are paired. | stats.ttest_rel(x, y)[1] | statistics/t-tests.ipynb | tpin3694/tpin3694.github.io | mit |
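A paired t-test is equivalent to a one-sample t-test on the per-pair differences. A stdlib sketch with made-up before/after pairs:

```python
import math
from statistics import mean, stdev

def paired_t(a, b):
    """Paired t statistic: one-sample t-test on the differences."""
    diffs = [x - y for x, y in zip(a, b)]
    n = len(diffs)
    return mean(diffs) / (stdev(diffs) / math.sqrt(n))

after = [2.1, 2.9, 3.0]
before = [1.0, 2.0, 2.0]
print(paired_t(after, before))  # ≈ 17.32
```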
Outline
The outline of the idea is:
Find the red lines that represent parallel synchronization signal above
Calculate their size
"Synchromize with rows below" (according to the rules of the code)
...
PROFIT!
!!! Things to keep in mind:
deviations of red
deviations of black
noise - it might just break everything!
beg... | # Let us first define colour red
# We'll work with RGB for colours
# So for accepted variants we'll make a list of 3-lists.
class colourlist(list):
"""Just lists of 3-lists with some fancy methods to work with RGB colours
"""
def add_deviations(self, d=8): # Magical numbers are so magical!
"""... | Decoder.ipynb | fedor1113/LineCodes | mit |
We found the sync (clock) line length in our graph! | print("It is", line_length) | Decoder.ipynb | fedor1113/LineCodes | mit |
Now the information transfer signal itself is ~"black", so we need to find the black colour range as well! | # Let's do just that
black = colourlist([[0, 0, 0], [0, 1, 0], [7, 2, 8]])
# black.add_deviations(60) # experimentally it is somewhere around that
# experimentally the max deviation is somewhere around 60
print(black) | Decoder.ipynb | fedor1113/LineCodes | mit |
The signal we are currently interested in is Manchester code (as per G.E. Thomas).
It is a self-clocking signal, but since we do have a clock with it, we use it.
Let us find the height of the Manchester signal in our PNG - just because... | fb = find_first_pixel_of_colour(img, black)
def signal_height(pxls, fib):
signal_height = 1
# if ([img[fb[0]+1][fb[1]], img[fb[0]+1][fb[1]+1], img[fb[0]+1][fb[1]+2]] in black):
if withinDeviation([pxls[fib[0]+1][fib[1]], pxls[fib[0]+1][fib[1]+1]
, pxls[fib[0]+1][fib[1]+2]], black, 6... | Decoder.ipynb | fedor1113/LineCodes | mit |
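For reference, the decoding rule itself is small. In the G.E. Thomas convention a 1 is a high-to-low transition within the bit period and a 0 is low-to-high; a minimal sketch over half-bit samples (assumed already quantised to 0/1):

```python
def manchester_decode(halves):
    """Decode Manchester (G.E. Thomas): 1 -> (high, low), 0 -> (low, high)."""
    bits = []
    for first, second in zip(halves[0::2], halves[1::2]):
        if (first, second) == (1, 0):
            bits.append('1')
        elif (first, second) == (0, 1):
            bits.append('0')
        else:
            raise ValueError("invalid Manchester pair: %r" % ((first, second),))
    return ''.join(bits)

print(manchester_decode([1, 0, 0, 1, 1, 0]))  # → '101'
```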
Huzzah!
And that is how we decode it.
Let us now look at some specific examples. | # Here is a helper function to automate all that
def parse_code(path_to_file, code, inv=False):
"""Guess what... Parses a line code PNG
Input: str, function
(~coincides with the name of the code)
Output: str (of '1' and '0') or (maybe?) None
"""
r1 = png.Reader(path_to_file)... | Decoder.ipynb | fedor1113/LineCodes | mit |
Manchester Code
(a rather tricky example)
Here is a tricky example of Manchester code - where we have ASCII '0's and '1's with which a 3-letter "word" is encoded. | ans1 = print_nums(parse_code("Line_Code_PNGs/Manchester.png", manchester))
res2d = ""
for i in range(0, len(ans1)):
res2d += chr(ans1[i])
ans2d = []
for i in range(0, len(res2d), 8):
print(int('0b'+res2d[i:i+8], 2)) | Decoder.ipynb | fedor1113/LineCodes | mit |
NRZ | ans2 = print_nums(parse_code("Line_Code_PNGs/NRZ.png", nrz))
| Decoder.ipynb | fedor1113/LineCodes | mit |
2B1Q
Warning! 2B1Q is currently almost completely broken. Pull requests with correct solutions are welcome :) | ans3 = print_nums(parse_code("Line_Code_PNGs/2B1Q.png", code2B1Q))
res2d3 = ""
for i in range(0, len(ans3)):
res2d3 += chr(ans3[i])
ans2d3 = []
for i in range(0, len(res2d3), 8):
print(int('0b'+res2d3[i:i+8], 2)) | Decoder.ipynb | fedor1113/LineCodes | mit |
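As a starting point for fixing the 2B1Q path, the level-to-bits table is small. The mapping below follows the usual 2B1Q convention (first bit encodes the sign, second the magnitude); verify it against the encoder before relying on it:

```python
# Conventional 2B1Q table: +3 -> 10, +1 -> 11, -1 -> 01, -3 -> 00
LEVEL_TO_BITS = {3: '10', 1: '11', -1: '01', -3: '00'}

def decode_2b1q(levels):
    """Map a sequence of quaternary levels back to a bit string."""
    return ''.join(LEVEL_TO_BITS[lvl] for lvl in levels)

print(decode_2b1q([3, -3, 1]))  # → '100011'
```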
Processing a single file
We will start with processing one of the downloaded files (BETR8010000800100hour.1-1-1990.31-12-2012). Looking at the data, you will see it does not look like a nice csv file: | with open("data/BETR8010000800100hour.1-1-1990.31-12-2012") as f:
print(f.readline()) | notebooks/case4_air_quality_processing.ipynb | jorisvandenbossche/DS-python-data-analysis | bsd-3-clause |
So we will need to do some manual processing.
Just reading the tab-delimited data: | data = pd.read_csv("data/BETR8010000800100hour.1-1-1990.31-12-2012", sep='\t')#, header=None)
data.head() | notebooks/case4_air_quality_processing.ipynb | jorisvandenbossche/DS-python-data-analysis | bsd-3-clause |
The above data is clearly not ready to be used! Each row contains the 24 measurements for each hour of the day, and also contains a flag (0/1) indicating the quality of the data. Furthermore, there is no header row with column names.
<div class="alert alert-success">
<b>EXERCISE 1</b>: <br><br> Clean up this dataframe... | # Column names: list consisting of 'date' and then intertwined the hour of the day and 'flag'
hours = ["{:02d}".format(i) for i in range(24)]
column_names = ['date'] + [item for pair in zip(hours, ['flag' + str(i) for i in range(24)]) for item in pair]
# %load _solutions/case4_air_quality_processing1.py
# %load _solu... | notebooks/case4_air_quality_processing.ipynb | jorisvandenbossche/DS-python-data-analysis | bsd-3-clause |
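The interleaving trick above can be checked on a smaller example, using 3 hours instead of 24:

```python
hours = ["{:02d}".format(i) for i in range(3)]
flags = ['flag' + str(i) for i in range(3)]
cols = ['date'] + [item for pair in zip(hours, flags) for item in pair]
print(cols)  # → ['date', '00', 'flag0', '01', 'flag1', '02', 'flag2']
```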
For the sake of this tutorial, we will disregard the 'flag' columns (indicating the quality of the data).
<div class="alert alert-success">
**EXERCISE 2**:
Drop all 'flag' columns ('flag1', 'flag2', ...)
</div> | flag_columns = [col for col in data.columns if 'flag' in col]
# we can now use this list to drop these columns
# %load _solutions/case4_air_quality_processing3.py
data.head() | notebooks/case4_air_quality_processing.ipynb | jorisvandenbossche/DS-python-data-analysis | bsd-3-clause |
Now, we want to reshape it: our goal is to have the different hours as row indices, merged with the date into a datetime-index. Here we have a wide and long dataframe, and want to make this a long, narrow timeseries.
<div class="alert alert-info">
<b>REMEMBER</b>:
Recap: reshaping your data with [`stack` / `melt` a... | # %load _solutions/case4_air_quality_processing4.py | notebooks/case4_air_quality_processing.ipynb | jorisvandenbossche/DS-python-data-analysis | bsd-3-clause |
Reshaping using stack: | # %load _solutions/case4_air_quality_processing5.py
# %load _solutions/case4_air_quality_processing6.py | notebooks/case4_air_quality_processing.ipynb | jorisvandenbossche/DS-python-data-analysis | bsd-3-clause |
Combine date and hour: | # %load _solutions/case4_air_quality_processing7.py
# %load _solutions/case4_air_quality_processing8.py
# %load _solutions/case4_air_quality_processing9.py
data_stacked.head() | notebooks/case4_air_quality_processing.ipynb | jorisvandenbossche/DS-python-data-analysis | bsd-3-clause |
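Combining the date string with the hour label into a real timestamp can be done with an explicit format string; a sketch of the idea, assuming pandas:

```python
import pandas as pd

# Glue a date column value and an hour level together, then parse
ts = pd.to_datetime('1990-01-01' + ' ' + '05', format="%Y-%m-%d %H")
print(ts)  # → 1990-01-01 05:00:00
```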
Our final data is now a time series. In pandas, this means that the index is a DatetimeIndex: | data_stacked.index
data_stacked.plot() | notebooks/case4_air_quality_processing.ipynb | jorisvandenbossche/DS-python-data-analysis | bsd-3-clause |
Processing a collection of files
We have now seen the code steps to process one of the files. However, we have multiple files for the different stations, all with the same structure. Therefore, to avoid repeating the code, let's turn the steps we have seen above into a function.
<div class="alert alert-success">
<... | def read_airbase_file(filename, station):
"""
Read hourly AirBase data files.
Parameters
----------
filename : string
Path to the data file.
station : string
Name of the station.
Returns
-------
DataFrame
Processed dataframe.
"""
...
... | notebooks/case4_air_quality_processing.ipynb | jorisvandenbossche/DS-python-data-analysis | bsd-3-clause |
Test the function on the data file from above: | import os
filename = "data/BETR8010000800100hour.1-1-1990.31-12-2012"
station = os.path.split(filename)[-1][:7]
station
test = read_airbase_file(filename, station)
test.head() | notebooks/case4_air_quality_processing.ipynb | jorisvandenbossche/DS-python-data-analysis | bsd-3-clause |
We now want to use this function to read in all the different data files from AirBase, and combine them into one DataFrame.
<div class="alert alert-success">
**EXERCISE 5**:
Use the [pathlib module](https://docs.python.org/3/library/pathlib.html) `Path` class in combination with the `glob` method to list all 4 AirBas... | from pathlib import Path
# %load _solutions/case4_air_quality_processing11.py | notebooks/case4_air_quality_processing.ipynb | jorisvandenbossche/DS-python-data-analysis | bsd-3-clause |
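A sketch of the listing step with `Path.glob`, exercised on a throwaway directory (the file names are made up to mimic the AirBase pattern):

```python
import tempfile
from pathlib import Path

with tempfile.TemporaryDirectory() as d:
    for name in ["BETR8010000800100hour.csv",
                 "BETN0290000800100hour.csv",
                 "readme.txt"]:
        (Path(d) / name).touch()
    # glob keeps only the hourly data files; the station id is the
    # first 7 characters of the file name
    data_files = sorted(Path(d).glob("*hour*"))
    stations = [p.name[:7] for p in data_files]

print(stations)  # → ['BETN029', 'BETR801']
```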