Dataset columns: markdown (stringlengths 0–1.02M), code (stringlengths 0–832k), output (stringlengths 0–1.02M), license (stringlengths 3–36), path (stringlengths 6–265), repo_name (stringlengths 6–127)
That was easy! All the information required by ``ionize_box`` was given directly by the ``perturbed_field`` object. If we had _also_ passed a redshift explicitly, it would have been checked against that from the ``perturbed_field``, and an error raised if they were incompatible. Let's see the fieldnames:
ionized_field.fieldnames
_____no_output_____
MIT
docs/tutorials/coeval_cubes.ipynb
daviesje/21cmFAST
Here the ``first_box`` field is actually just a flag telling the C code whether this box has been *evolved* or not. Here it hasn't been: it is the "first box" of an evolutionary chain. Let's plot the neutral fraction:
plotting.coeval_sliceplot(ionized_field, "xH_box");
_____no_output_____
MIT
docs/tutorials/coeval_cubes.ipynb
daviesje/21cmFAST
Brightness Temperature

Now we can use what we have to get the brightness temperature:
brightness_temp = p21c.brightness_temperature(ionized_box=ionized_field, perturbed_field=perturbed_field)
_____no_output_____
MIT
docs/tutorials/coeval_cubes.ipynb
daviesje/21cmFAST
This has only a single field, ``brightness_temp``:
plotting.coeval_sliceplot(brightness_temp);
_____no_output_____
MIT
docs/tutorials/coeval_cubes.ipynb
daviesje/21cmFAST
The Problem

And there you have it -- you've computed each of the four steps individually (there's actually another, `spin_temperature`, that you require if you don't assume the saturated limit). However, some problems quickly arise. What if you want the `perturb_field`, but don't care about the initial conditions? We ...
perturbed_field = p21c.perturb_field(
    redshift=8.0,
    user_params={"HII_DIM": 100, "BOX_LEN": 100},
    cosmo_params=p21c.CosmoParams(SIGMA_8=0.8),
)
plotting.coeval_sliceplot(perturbed_field, "density");
2020-02-29 15:10:45,367 | INFO | Existing z=8.0 perturb_field boxes found and read in (seed=12345).
MIT
docs/tutorials/coeval_cubes.ipynb
daviesje/21cmFAST
Note that here we pass exactly the same parameters as were used in the previous section. It gives a message that the full box was found in the cache and immediately returns. However, if we change the redshift:
perturbed_field = p21c.perturb_field(
    redshift=7.0,
    user_params={"HII_DIM": 100, "BOX_LEN": 100},
    cosmo_params=p21c.CosmoParams(SIGMA_8=0.8),
)
plotting.coeval_sliceplot(perturbed_field, "density");
2020-02-29 15:10:45,748 | INFO | Existing init_boxes found and read in (seed=12345).
MIT
docs/tutorials/coeval_cubes.ipynb
daviesje/21cmFAST
Now it finds the initial conditions, but it must compute the perturbed field at the new redshift. If we had changed the initial parameters as well, it would have to calculate everything:
perturbed_field = p21c.perturb_field(
    redshift=8.0,
    user_params={"HII_DIM": 50, "BOX_LEN": 100},
    cosmo_params=p21c.CosmoParams(SIGMA_8=0.8),
)
plotting.coeval_sliceplot(perturbed_field, "density");
_____no_output_____
MIT
docs/tutorials/coeval_cubes.ipynb
daviesje/21cmFAST
This shows that we don't need to perform the *previous* step to do any of the steps; they will be calculated automatically. Now, let's get an ionized box, but this time we won't assume the saturated limit, so we need to use the spin temperature. We can do this directly in the ``ionize_box`` function, but let's do it exp...
spin_temp = p21c.spin_temperature(
    perturbed_field=perturbed_field,
    zprime_step_factor=1.05,
)
plotting.coeval_sliceplot(spin_temp, "Ts_box");
_____no_output_____
MIT
docs/tutorials/coeval_cubes.ipynb
daviesje/21cmFAST
Let's note here that each of the functions accepts a few of the same arguments that modify how the boxes are cached. There is a ``write`` argument which, if set to ``False``, will disable writing that box to cache (and it is passed through the recursive hierarchy). There is also ``regenerate``, which if ``True``, for...
ionized_box = p21c.ionize_box(
    spin_temp=spin_temp,
    zprime_step_factor=1.05,
)
plotting.coeval_sliceplot(ionized_box, "xH_box");
_____no_output_____
MIT
docs/tutorials/coeval_cubes.ipynb
daviesje/21cmFAST
Great! So again, we can just get the brightness temp:
brightness_temp = p21c.brightness_temperature(
    ionized_box=ionized_box,
    perturbed_field=perturbed_field,
    spin_temp=spin_temp,
)
_____no_output_____
MIT
docs/tutorials/coeval_cubes.ipynb
daviesje/21cmFAST
Now let's plot our brightness temperature, which has been evolved from high redshift with spin temperature fluctuations:
plotting.coeval_sliceplot(brightness_temp);
_____no_output_____
MIT
docs/tutorials/coeval_cubes.ipynb
daviesje/21cmFAST
We can also check what the result would have been if we had limited the maximum redshift of heating. Note that this *recalculates* all previous spin temperature and ionized boxes, because they depend on both ``z_heat_max`` and ``zprime_step_factor``.
ionized_box = p21c.ionize_box(
    spin_temp=spin_temp,
    zprime_step_factor=1.05,
    z_heat_max=20.0,
)
brightness_temp = p21c.brightness_temperature(
    ionized_box=ionized_box,
    perturbed_field=perturbed_field,
    spin_temp=spin_temp,
)
plotting.coeval_sliceplot(brightness_temp);
2020-02-29 15:13:08,824 | INFO | Existing init_boxes found and read in (seed=521414794440). 2020-02-29 15:13:08,840 | INFO | Existing z=19.62816486020931 perturb_field boxes found and read in (seed=521414794440). 2020-02-29 15:13:11,438 | INFO | Existing z=18.645871295437438 perturb_field boxes found and read in (seed=...
MIT
docs/tutorials/coeval_cubes.ipynb
daviesje/21cmFAST
Composing a pipeline from reusable, pre-built, and lightweight components

This tutorial describes how to build a Kubeflow pipeline from reusable, pre-built, and lightweight components. The following provides a summary of the steps involved in creating and using a reusable component:

- Write the program that contains you...
import kfp
import kfp.gcp as gcp
import kfp.dsl as dsl
import kfp.compiler as compiler
import kfp.components as comp
import datetime
import kubernetes as k8s

# Required Parameters
PROJECT_ID='<ADD GCP PROJECT HERE>'
GCS_BUCKET='gs://<ADD STORAGE LOCATION HERE>'
_____no_output_____
Apache-2.0
samples/contrib/mnist/04_Reusable_and_Pre-build_Components_as_Pipeline.ipynb
danishsamad/pipelines
Create client

If you run this notebook **outside** of a Kubeflow cluster, run the following command:

- `host`: The URL of your Kubeflow Pipelines instance, for example "https://``.endpoints.``.cloud.goog/pipeline"
- `client_id`: The client ID used by Identity-Aware Proxy
- `other_client_id`: The client ID used to obtain t...
# Optional Parameters, but required for running outside Kubeflow cluster
# The host for 'AI Platform Pipelines' ends with 'pipelines.googleusercontent.com'
# The host for pipeline endpoint of 'full Kubeflow deployment' ends with '/pipeline'
# Examples are:
# https://7c021d0340d296aa-dot-us-central2.pipelines.googleuse...
_____no_output_____
Apache-2.0
samples/contrib/mnist/04_Reusable_and_Pre-build_Components_as_Pipeline.ipynb
danishsamad/pipelines
Build reusable components

Writing the program code

The following cell creates a file `app.py` that contains a Python script. The script downloads the MNIST dataset, trains a neural-network classification model, writes the training log, and exports the trained model to Google Cloud Storage. Your component can create o...
%%bash

# Create folders if they don't exist.
mkdir -p tmp/reuse_components_pipeline/mnist_training

# Create the Python file that lists GCS blobs.
cat > ./tmp/reuse_components_pipeline/mnist_training/app.py <<HERE
import argparse
from datetime import datetime
import tensorflow as tf

parser = argparse.ArgumentParser()...
_____no_output_____
Apache-2.0
samples/contrib/mnist/04_Reusable_and_Pre-build_Components_as_Pipeline.ipynb
danishsamad/pipelines
Create a Docker container

Create your own container image that includes your program.

Creating a Dockerfile

Now create a container that runs the script. Start by creating a Dockerfile. A Dockerfile contains the instructions to assemble a Docker image. The `FROM` statement specifies the Base Image from which you are b...
%%bash

# Create Dockerfile.
# AI Platform only supports TensorFlow 1.14.
cat > ./tmp/reuse_components_pipeline/mnist_training/Dockerfile <<EOF
FROM tensorflow/tensorflow:1.14.0-py3
WORKDIR /app
COPY . /app
EOF
_____no_output_____
Apache-2.0
samples/contrib/mnist/04_Reusable_and_Pre-build_Components_as_Pipeline.ipynb
danishsamad/pipelines
Build the Docker image

Now that we have created our Dockerfile, we need to build the image and push it to a registry that hosts the image. There are three possible options:

- Use `kfp.containers.build_image_from_working_dir` to build the image and push it to the Container Registry (GCR). This ...
IMAGE_NAME="mnist_training_kf_pipeline"
TAG="latest" # "v_$(date +%Y%m%d_%H%M%S)"

GCR_IMAGE="gcr.io/{PROJECT_ID}/{IMAGE_NAME}:{TAG}".format(
    PROJECT_ID=PROJECT_ID,
    IMAGE_NAME=IMAGE_NAME,
    TAG=TAG
)

APP_FOLDER='./tmp/reuse_components_pipeline/mnist_training/'

# In the following, for the purpose of demonstra...
_____no_output_____
Apache-2.0
samples/contrib/mnist/04_Reusable_and_Pre-build_Components_as_Pipeline.ipynb
danishsamad/pipelines
If you want to use Docker to build the image, run the following in a cell:

```bash
%%bash -s "{PROJECT_ID}"
IMAGE_NAME="mnist_training_kf_pipeline"
TAG="latest" # "v_$(date +%Y%m%d_%H%M%S)"

# Create script to build docker image and push it.
cat > ./tmp/components/mnist_training/build_image.sh <<HERE
PROJECT_ID="${1}"
IMAGE_NAME="$...
```
image_name = GCR_IMAGE
_____no_output_____
Apache-2.0
samples/contrib/mnist/04_Reusable_and_Pre-build_Components_as_Pipeline.ipynb
danishsamad/pipelines
Writing your component definition fileTo create a component from your containerized program, you must write a component specification in YAML that describes the component for the Kubeflow Pipelines system.For the complete definition of a Kubeflow Pipelines component, see the [component specification](https://www.kubef...
%%bash -s "{image_name}"
GCR_IMAGE="${1}"
echo ${GCR_IMAGE}

# Create Yaml
# the image uri should be changed according to the above docker image push output
cat > mnist_pipeline_component.yaml <<HERE
name: Mnist training
description: Train a mnist model and save to GCS
inputs:
  - name: model_path
    description: 'P...
_____no_output_____
Apache-2.0
samples/contrib/mnist/04_Reusable_and_Pre-build_Components_as_Pipeline.ipynb
danishsamad/pipelines
Define deployment operation on AI Platform
mlengine_deploy_op = comp.load_component_from_url(
    'https://raw.githubusercontent.com/kubeflow/pipelines/1.1.2/components/gcp/ml_engine/deploy/component.yaml')

def deploy(
    project_id,
    model_uri,
    model_id,
    runtime_version,
    python_version):
    return mlengine_deploy_op(
        model_uri=mo...
_____no_output_____
Apache-2.0
samples/contrib/mnist/04_Reusable_and_Pre-build_Components_as_Pipeline.ipynb
danishsamad/pipelines
Kubeflow serving deployment component as an option. **Note that the deployed Endpoint URI is not available as an output of this component.**

```python
kubeflow_deploy_op = comp.load_component_from_url(
    'https://raw.githubusercontent.com/kubeflow/pipelines/1.1.2/components/gcp/ml_engine/deploy/component.yaml')

def deploy_...
```
def deployment_test(project_id: str, model_name: str, version: str) -> str: model_name = model_name.split("/")[-1] version = version.split("/")[-1] import googleapiclient.discovery def predict(project, model, data, version=None): """Run predictions on a list of instances. Args: ...
_____no_output_____
Apache-2.0
samples/contrib/mnist/04_Reusable_and_Pre-build_Components_as_Pipeline.ipynb
danishsamad/pipelines
Create your workflow as a Python function

Define your pipeline as a Python function. `@kfp.dsl.pipeline` is a required decorator, and must include `name` and `description` properties. Then compile the pipeline function. After the compilation is completed, a pipeline file is created.
# Define the pipeline
@dsl.pipeline(
    name='Mnist pipeline',
    description='A toy pipeline that performs mnist model training.'
)
def mnist_reuse_component_deploy_pipeline(
    project_id: str = PROJECT_ID,
    model_path: str = 'mnist_model',
    bucket: str = GCS_BUCKET
):
    train_task = mnist_train_op(
        ...
_____no_output_____
Apache-2.0
samples/contrib/mnist/04_Reusable_and_Pre-build_Components_as_Pipeline.ipynb
danishsamad/pipelines
Submit a pipeline run
pipeline_func = mnist_reuse_component_deploy_pipeline experiment_name = 'minist_kubeflow' arguments = {"model_path":"mnist_model", "bucket":GCS_BUCKET} run_name = pipeline_func.__name__ + ' run' # Submit pipeline directly from pipeline function run_result = client.create_run_from_pipeline_func(pipeline_...
_____no_output_____
Apache-2.0
samples/contrib/mnist/04_Reusable_and_Pre-build_Components_as_Pipeline.ipynb
danishsamad/pipelines
prepared by Abuzer Yakaryilmaz (QLatvia)

This cell contains some macros. If there is a problem with displaying mathematical formulas, please run this cell to load these macros. $ \newcommand{\bra}[1]{\langle #1|} $ $ \newcommand{\ket}[1]{|#1\rangle} $ $ \newcommand{...
# import the drawing methods
from matplotlib.pyplot import plot, figure, show

# draw a figure
figure(figsize=(6,6), dpi=80)

# include our predefined functions
%run qlatvia.py

# draw the axes
draw_axes()

# draw the origin
plot(0,0,'ro') # a point in red color

# draw these quantum states as points (in blue color)
pl...
_____no_output_____
Apache-2.0
bronze/B30_Visualization_of_a_Qubit.ipynb
dilyaraahmetshina/quantum_computings
Now, we draw the quantum states as arrows (vectors):
# import the drawing methods
from matplotlib.pyplot import figure, arrow, show

# draw a figure
figure(figsize=(6,6), dpi=80)

# include our predefined functions
%run qlatvia.py

# draw the axes
draw_axes()

# draw the quantum states as vectors (in blue color)
arrow(0,0,0.92,0,head_width=0.04, head_length=0.08, color="...
_____no_output_____
Apache-2.0
bronze/B30_Visualization_of_a_Qubit.ipynb
dilyaraahmetshina/quantum_computings
Task 1

Write a function that returns a randomly created 2-dimensional (real-valued) quantum state. _You can use your code written for [a task given in notebook "Quantum State"](B28_Quantum_State.ipynb#task2)._ Create 100 random quantum states by using your function and then draw all of them as points. Create 1000 random qu...
# randomly creating a 2-dimensional quantum state
from random import randrange

def random_quantum_state():
    first_entry = randrange(-100,101)
    second_entry = randrange(-100,101)
    length_square = first_entry**2+second_entry**2
    while length_square == 0:
        first_entry = randrange(-100,101)
        secon...
_____no_output_____
Apache-2.0
bronze/B30_Visualization_of_a_Qubit.ipynb
dilyaraahmetshina/quantum_computings
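The sampler above is truncated before the normalization step. A complete, self-contained version (assuming the usual approach of dividing each entry by the vector's length, as in the notebook's earlier quantum-state task) looks like this:

```python
# Pick random integer entries, reject the zero vector, then normalize
# so the returned state has length 1 (a valid real-valued quantum state).
from random import randrange
from math import sqrt

def random_quantum_state():
    first_entry = randrange(-100, 101)
    second_entry = randrange(-100, 101)
    while first_entry == 0 and second_entry == 0:
        first_entry = randrange(-100, 101)
        second_entry = randrange(-100, 101)
    length = sqrt(first_entry**2 + second_entry**2)
    return [first_entry / length, second_entry / length]
```

Every returned pair `[x, y]` satisfies `x**2 + y**2 == 1` up to floating-point error.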
click for our solution

Task 2

Repeat the previous task by drawing the quantum states as vectors (arrows) instead of points. Different colors can be used when drawing the points ([matplotlib.colors](https://matplotlib.org/2.0.2/api/colors_api.html)). _Please keep the code below for drawing axes, for getting a better ...
# randomly creating a 2-dimensional quantum state
from random import randrange

def random_quantum_state():
    first_entry = randrange(-1000,1001)
    second_entry = randrange(-1000,1001)
    length_square = first_entry**2+second_entry**2
    while length_square == 0:
        first_entry = randrange(-1000,1001)
        s...
_____no_output_____
Apache-2.0
bronze/B30_Visualization_of_a_Qubit.ipynb
dilyaraahmetshina/quantum_computings
click for our solution

Unit circle

All quantum states of a qubit form the unit circle. The length of each quantum state is 1. All points that are 1 unit away from the origin form the circle with radius 1 unit. We can draw the unit circle with Python. We have a predefined function for drawing the unit circle: "draw_unit_ci...
# import the drawing methods
from matplotlib.pyplot import figure

figure(figsize=(6,6), dpi=80) # size of the figure

# include our predefined functions
%run qlatvia.py

# draw axes
draw_axes()

# draw the unit circle
draw_unit_circle()
_____no_output_____
Apache-2.0
bronze/B30_Visualization_of_a_Qubit.ipynb
dilyaraahmetshina/quantum_computings
Quantum state of a qubit

Suppose that we have a single qubit. Each possible (real-valued) quantum state of this qubit is a point on the 2-dimensional plane. It can also be represented as a vector from the origin to that point. We draw the quantum state $ \myvector{3/5 \\ 4/5} $ and its elements. Our predefined function "draw_qub...
%run qlatvia.py draw_qubit() draw_quantum_state(3/5,4/5,"|v>")
_____no_output_____
Apache-2.0
bronze/B30_Visualization_of_a_Qubit.ipynb
dilyaraahmetshina/quantum_computings
Now, we draw its angle with $ \ket{0} $-axis and its projections on both axes. For drawing the angle, we use the method "Arc" from library "matplotlib.patches".
%run qlatvia.py

draw_qubit()
draw_quantum_state(3/5,4/5,"|v>")

from matplotlib.pyplot import arrow, text, gca

# the projection on |0>-axis
arrow(0,0,3/5,0,color="blue",linewidth=1.5)
arrow(0,4/5,3/5,0,color="blue",linestyle='dotted')
text(0.1,-0.1,"cos(a)=3/5")

# the projection on |1>-axis
arrow(0,0,0,4/5,color="b...
_____no_output_____
Apache-2.0
bronze/B30_Visualization_of_a_Qubit.ipynb
dilyaraahmetshina/quantum_computings
Observations:

- The angle of the quantum state with state $ \ket{0} $ is $a$.
- The amplitude of state $ \ket{0} $ is $ \cos(a) = \frac{3}{5} $.
- The probability of observing state $ \ket{0} $ is $ \cos^2(a) = \frac{9}{25} $.
- The amplitude of state $ \ket{1} $ is $ \sin(a) = \frac{4}{5} $.
- The probabil...
# set the angle
from random import randrange
myangle = randrange(361)

################################################

from matplotlib.pyplot import figure,gca
from matplotlib.patches import Arc
from math import sin,cos,pi

# draw a figure
figure(figsize=(6,6), dpi=60)

%run qlatvia.py

draw_axes()

print("the selec...
the selected angle is 27 degrees it is 0.075 of a full circle its length is 0.075 x 2π = 0.47123889803846897 its radian value is 0.47123889803846897 the amplitude of state |0> is 0.8910065241883679 the amplitude of state |1> is 0.45399049973954675 the probability of observing state |0> is 0.7938926261462367 the proba...
Apache-2.0
bronze/B30_Visualization_of_a_Qubit.ipynb
dilyaraahmetshina/quantum_computings
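The printed numbers above follow directly from the angle-to-amplitude relations in the observations. This small self-contained check recomputes them for the 27-degree example:

```python
# Recompute the amplitudes and probabilities for a 27-degree state:
# amplitude of |0> is cos(a), amplitude of |1> is sin(a),
# and the probabilities are their squares (which sum to 1).
from math import cos, sin, pi

angle_degree = 27
angle_radian = 2 * pi * angle_degree / 360  # 27/360 = 0.075 of a full circle

amp0 = cos(angle_radian)  # amplitude of state |0>
amp1 = sin(angle_radian)  # amplitude of state |1>
prob0 = amp0**2           # probability of observing |0>
prob1 = amp1**2           # probability of observing |1>
```

The values match the cell output: radian value ≈ 0.4712, amplitudes ≈ 0.8910 and 0.4540, probabilities ≈ 0.7939 and 0.2061.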
Random quantum states

Any quantum state of a (real-valued) qubit is a point on the unit circle. We use this fact to create random quantum states by picking a random point on the unit circle. For this purpose, we randomly pick an angle between zero and 360 degrees and then find the amplitudes of the quantum state by usi...
# %%writefile FILENAME.py

# randomly creating a 2-dimensional quantum state
from random import randrange
from math import pi, cos, sin

def random_quantum_state2():
    angle_degree = randrange(360)
    angle_radian = 2*pi*angle_degree/360
    return [cos(angle_radian),sin(angle_radian)]
_____no_output_____
Apache-2.0
bronze/B30_Visualization_of_a_Qubit.ipynb
dilyaraahmetshina/quantum_computings
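Before plotting, the angle-based sampler can be checked numerically: since $\cos^2 a + \sin^2 a = 1$ for any angle, every state it returns must lie on the unit circle. A self-contained version of the sampler with that check:

```python
# Angle-based random state: pick an angle, return [cos, sin].
# Every output has unit length, so it is a valid quantum state.
from random import randrange
from math import pi, cos, sin

def random_quantum_state2():
    angle_degree = randrange(360)
    angle_radian = 2 * pi * angle_degree / 360
    return [cos(angle_radian), sin(angle_radian)]
```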
Our predefined function "draw_qubit()" draws a figure, the origin, the axes, the unit circle, and base quantum states. Our predefined function "draw_quantum_state(x,y,name)" draws an arrow from (0,0) to (x,y) and associates it with name. We include our predefined functions with the following line of code: %run qla...
# visually test your function
%run qlatvia.py

# draw the axes
draw_qubit()

from random import randrange
for i in range(6):
    [x,y]=random_quantum_state2()
    draw_quantum_state(x,y,"|v"+str(i)+">")

# include our predefined functions
%run qlatvia.py

# draw the axes
draw_qubit()

for i in range(100):
    [x,y]=rand...
_____no_output_____
Apache-2.0
bronze/B30_Visualization_of_a_Qubit.ipynb
dilyaraahmetshina/quantum_computings
'power_act_.csv': In total we have 18 columns and 64328 rows. Column names: ['dt_start_utc', 'power_act_21', 'power_act_24', 'power_act_47' .....]. Dates range from '2019-06-30' to '2021-04-30' and seem to be recorded every 15 minutes. All the columns contain missing values; for 'power_act_21' it is only 1.5%, wher...
df1.tail(10)
df1.info()
df1.describe()
df1.isnull().sum().sort_values(ascending=False)/len(df1)*100

df1['dt_start_utc'] = df1['dt_start_utc'].apply(pd.to_datetime)
df1 = df1.set_index('dt_start_utc')

plt.figure(figsize=(10,6))
df1['power_act_21'].plot()

plt.figure(figsize=(10,6))
df1['power_act_47'].plot()

plt.figure(...
_____no_output_____
MIT
notebooks/eda_ravi.ipynb
Windbenders/capstone_energy
'power_fc_.csv': In total we have 23 columns and 66020 rows. Column names: ['dt_start_utc', 'power_act_21', 'power_act_24', 'power_act_47' .....]. Dates range from '2019-06-13 07:00' to '2021-04-30 23:45' and seem to be recorded every 15 minutes. No null values for 'power_act_21', whereas for other features >...
df2['dt_start_utc'].max()
df2.head()
df2.shape
df2.info()
df2.isnull().sum().sort_values(ascending=False)/len(df1)*100

df2['dt_start_utc'] = df2['dt_start_utc'].apply(pd.to_datetime)
#df2 = df2.reset_index('dt_start_utc')
df2['dt_start_utc'] = df2['dt_start_utc'].apply(pd.to_datetime)
df2.head()

df3 = pd.read_csv('data...
_____no_output_____
MIT
notebooks/eda_ravi.ipynb
Windbenders/capstone_energy
'regelleistung_aggr_results.csv': In total we have 17 columns and 16068 rows. Column names: ['date_start', 'date_end', 'product', 'reserve_type', 'total_min_capacity_price_eur_mw', 'total_average_capacity_price_eur_mw', 'total_marginal_capacity_price_eur_mw', 'total_min_energy_price_eur_mwh', 'total_average_energy...
df3.shape
df3.info()
df3.groupby(by='date_start').count().head(2)
df3['reserve_type'].unique()
df3['product'].unique()
#sns.pairplot(df3)
df3.info()
df3.isnull().sum().sort_values(ascending=False)/len(df1)*100
df3.shape

df4 = pd.read_csv('data/regelleistung_demand.csv')
df4.head()
_____no_output_____
MIT
notebooks/eda_ravi.ipynb
Windbenders/capstone_energy
'regelleistung_demand.csv': In total we have 6 columns and 16188 rows. Column names: ['date_start', 'date_end', 'product', 'total_demand_mw', 'germany_block_demand_mw', 'reserve_type']. 2 unique reserve types ['MRL', 'SRL']; 12 unique product types ['NEG_00_04', 'NEG_04_08', 'NEG_08_12', 'NEG_12_16', 'NEG_16_20', 'N...
df4.isnull().sum().sort_values(ascending=False)

def check_unique(df):
    '''check unique values for each column and print them if they are below 15'''
    for col in df.columns:
        n = df[col].nunique()
        print(f'{col} has {n} unique values')
        if n < 15:
            print(df[col].unique())

df...
_____no_output_____
MIT
notebooks/eda_ravi.ipynb
Windbenders/capstone_energy
2.1
A=[[5],[7],[0]]; A=Matrix(A); show("A=",A)
B=[[-4]]; B=Matrix(B); show("B=",B)
show("AB=",A*B)
show("BA=",B*A)
#2.2
A=[[2,7,2],[8,-6,6]]; A=Matrix(A); show("A=",A)
B=[[6,6],[-2,-3],[-6,8]]; B=Matrix(B); show("B=",B)
show("AB=",A*B)
show("BA=",B*A)

#2.3
A=[[-4,-6],[0,6]]; A=Matrix(A); show("A=",A)
B=[[-2],[-2]]; B=Matrix(B); show("B=",B)
show("AB=",A*B)
#show("BA=",B*A)

#2.4
A=[[7],[6]]; A=Matrix(A); show("A=",A)
B=[[-9,8]]; B=M...
_____no_output_____
MIT
latex/MD_SMC/MD03 Matrices.ipynb
asimovian-academy/optimizacion-combinatoria
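Exercise 2.1 can be reproduced without Sage to see why only one product is defined: an (m×n)(n×p) product needs the inner dimensions to match. Here `Matrix`/`show` are replaced by plain nested lists and a hand-rolled multiply (an illustrative sketch, not Sage's API):

```python
# Plain-Python matrix product: A (3x1) times B (1x1) is defined,
# but B*A is not, since B has 1 column while A has 3 rows.
def matmul(A, B):
    rows, inner, cols = len(A), len(B), len(B[0])
    assert len(A[0]) == inner, "dimension mismatch"
    return [[sum(A[i][k] * B[k][j] for k in range(inner)) for j in range(cols)]
            for i in range(rows)]

A = [[5], [7], [0]]
B = [[-4]]
AB = matmul(A, B)   # [[-20], [-28], [0]]
```

The same helper reproduces 2.2, where both AB (2×2) and BA (3×3) exist because the dimensions match in both orders.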
Text Cleaning

From http://udallasclassics.org/wp-content/uploads/maurer_files/APPARATUSABBREVIATIONS.pdf

[...] Square brackets, or in recent editions wavy brackets ʺ{...}ʺ, enclose words etc. that an editor thinks should be deleted (see ʺdel.ʺ) or marked as out of place (see ʺsecl.ʺ). [...] Square brackets in a papyr...
truecase_file = 'truecase_counter.latin.pkl'

if os.path.exists(truecase_file):
    with open(truecase_file, 'rb') as fin:
        case_counts = pickle.load(fin)
else:
    tesserae = get_corpus_reader(corpus_name='latin_text_tesserae', language='latin')
    case_counts = Counter()
    jv_replacer = JVReplacer()...
_____no_output_____
Apache-2.0
probablistic_language_modeling/cicero_corpus_counts.ipynb
todd-cook/ML-You-Can-Use
Visualization of our corpus comparison: if you took one page from one author and placed it into Cicero, how surprising would it be? If the other author's vocabulary was substantially different, it would be noticeable. We can quantify this. As a result, since we want to predict as closely as possible to the author, we shou...
results = []

for author in author_index:
    files = get_phi5_author_files(author, author_index)
    # cicero_lemmas, cicero_inflected_words = get_word_counts(cicero_files)
    author_lemmas, author_inflected_words = get_word_counts(files)
    author_words = set(author_lemmas.keys())
    cicero_words = se...
_____no_output_____
Apache-2.0
probablistic_language_modeling/cicero_corpus_counts.ipynb
todd-cook/ML-You-Can-Use
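The comparison sketched above boils down to set arithmetic on vocabularies: how large a fraction of an author's word types is absent from the Cicero corpus. A minimal sketch of that measure, with toy word lists standing in for the real PHI5 counts:

```python
# Fraction of an author's word types not found in a reference corpus.
# The word lists are toy stand-ins, not real corpus data.
def novelty(author_words, reference_words):
    author_words = set(author_words)
    reference_words = set(reference_words)
    return len(author_words - reference_words) / len(author_words)
```

For example, if two of an author's three word types also occur in the reference, the novelty is 1/3; a page from a high-novelty author dropped into Cicero would be "surprising" in exactly this sense.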
T81-558: Applications of Deep Neural Networks

**Module 7: Generative Adversarial Networks**

* Instructor: [Jeff Heaton](https://sites.wustl.edu/jeffheaton/), McKelvey School of Engineering, [Washington University in St. Louis](https://engineering.wustl.edu/Programs/Pages/default.aspx)
* For more information visit the [c...
# Nicely formatted time string
def hms_string(sec_elapsed):
    h = int(sec_elapsed / (60 * 60))
    m = int((sec_elapsed % (60 * 60)) / 60)
    s = sec_elapsed % 60
    return "{}:{:>02}:{:>05.2f}".format(h, m, s)
_____no_output_____
MIT
gans_keras.ipynb
andresvilla86/diadx-ia-ml
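As a quick usage check, the helper splits elapsed seconds into hours, minutes and zero-padded seconds (redefined here so the snippet is self-contained):

```python
# Same helper as in the notebook: format elapsed seconds as H:MM:SS.ss
def hms_string(sec_elapsed):
    h = int(sec_elapsed / (60 * 60))
    m = int((sec_elapsed % (60 * 60)) / 60)
    s = sec_elapsed % 60
    return "{}:{:>02}:{:>05.2f}".format(h, m, s)

formatted = hms_string(3661.5)  # 1 hour, 1 minute, 1.5 seconds -> "1:01:01.50"
```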
Part 7.2: Implementing DCGANs in Keras

Paper that described the type of DCGAN that we will create in this module: [[Cite:radford2015unsupervised]](https://arxiv.org/abs/1511.06434). This paper implements a DCGAN as follows:

* No pre-processing was applied to training images besides scaling to the range of the tanh activa...
try:
    from google.colab import drive
    drive.mount('/content/drive', force_remount=True)
    COLAB = True
    print("Note: using Google CoLab")
    %tensorflow_version 2.x
except:
    print("Note: not using Google CoLab")
    COLAB = False
Go to this URL in a browser: https://accounts.google.com/o/oauth2/auth?client_id=947318989803-6bn6qk8qdgf4n4g3pfee6491hc0brc4i.apps.googleusercontent.com&redirect_uri=urn%3aietf%3awg%3aoauth%3a2.0%3aoob&response_type=code&scope=email%20https%3a%2f%2fwww.googleapis.com%2fauth%2fdocs.test%20https%3a%2f%2fwww.googleapis.c...
MIT
gans_keras.ipynb
andresvilla86/diadx-ia-ml
The following packages will be used to implement a basic GAN system in Python/Keras.
import tensorflow as tf
from tensorflow.keras.layers import Input, Reshape, Dropout, Dense
from tensorflow.keras.layers import Flatten, BatchNormalization
from tensorflow.keras.layers import Activation, ZeroPadding2D
from tensorflow.keras.layers import LeakyReLU
from tensorflow.keras.layers import UpSampling2D, Conv2D...
_____no_output_____
MIT
gans_keras.ipynb
andresvilla86/diadx-ia-ml
These are the constants that define how the GANs will be created for this example. The higher the resolution, the more memory that will be needed. Higher resolution will also result in longer run times. For Google CoLab (with GPU) 128x128 resolution is as high as can be used (due to memory). Note that the resolutio...
# Generation resolution - Must be square
# Training data is also scaled to this.
# Note GENERATE_RES 4 or higher
# will blow Google CoLab's memory and has not
# been tested extensively.

GENERATE_RES = 3 # Generation resolution factor
# (1=32, 2=64, 3=96, 4=128, etc.)
GENERATE_SQUARE = 32 * GENERATE_RES # rows/cols ...
Will generate 96px square images.
MIT
gans_keras.ipynb
andresvilla86/diadx-ia-ml
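The resolution formula in the constants cell above is simply a multiple of 32, which is how `GENERATE_RES = 3` yields the reported 96px square images:

```python
# GENERATE_SQUARE = 32 * GENERATE_RES, so factors 1..4 give
# 32, 64, 96 and 128 pixel squares respectively.
def generate_square(generate_res):
    return 32 * generate_res

sizes = [generate_square(r) for r in (1, 2, 3, 4)]  # [32, 64, 96, 128]
```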
Next we will load and preprocess the images. This can take a while; Google CoLab took around an hour to process. Because of this we store the processed file as a binary, so we can simply reload the processed training data and use it quickly. It is most efficient to only perform this operation once. The dimen...
# Image set has 11,682 images. Can take over an hour
# for initial preprocessing.
# Because of this time needed, save a Numpy preprocessed file.
# Note, that file is large enough to cause problems for
# some versions of Pickle,
# so Numpy binary files are used.
training_binary_path = os.path.join(DATA_PATH, ...
Looking for file: /content/drive/My Drive/projects/faces/training_data_96_96.npy Loading previous training pickle...
MIT
gans_keras.ipynb
andresvilla86/diadx-ia-ml
We will use a TensorFlow **Dataset** object to actually hold the images. This allows the data to be quickly shuffled and divided into the appropriate batch sizes for training.
# Batch and shuffle the data train_dataset = tf.data.Dataset.from_tensor_slices(training_data) \ .shuffle(BUFFER_SIZE).batch(BATCH_SIZE)
_____no_output_____
MIT
gans_keras.ipynb
andresvilla86/diadx-ia-ml
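What `Dataset.batch()` does can be sketched in plain Python: the data is sliced into consecutive groups of `BATCH_SIZE` items, keeping a final partial batch (TF's default `drop_remainder=False` behaves the same way; the shuffle step is omitted here for clarity):

```python
# Plain-Python sketch of batching: slice a list into fixed-size groups,
# keeping the final partial batch.
def batch(items, batch_size):
    return [items[i:i + batch_size] for i in range(0, len(items), batch_size)]

batches = batch(list(range(10)), 4)  # [[0,1,2,3], [4,5,6,7], [8,9]]
```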
The code below creates the generator and discriminator. Next we actually build the discriminator and the generator. Both will be trained with the Adam optimizer.
def build_generator(seed_size, channels):
    model = Sequential()

    model.add(Dense(4*4*256,activation="relu",input_dim=seed_size))
    model.add(Reshape((4,4,256)))

    model.add(UpSampling2D())
    model.add(Conv2D(256,kernel_size=3,padding="same"))
    model.add(BatchNormalization(momentum=0.8))
    model.add(A...
_____no_output_____
MIT
gans_keras.ipynb
andresvilla86/diadx-ia-ml
As we progress through training, images will be produced to show the progress. These images will contain a number of rendered faces that show how good the generator has become. These faces will be
def save_images(cnt,noise):
    image_array = np.full((
        PREVIEW_MARGIN + (PREVIEW_ROWS * (GENERATE_SQUARE+PREVIEW_MARGIN)),
        PREVIEW_MARGIN + (PREVIEW_COLS * (GENERATE_SQUARE+PREVIEW_MARGIN)), 3),
        255, dtype=np.uint8)

    generated_images = generator.predict(noise)
    generated_images = 0.5 * generat...
_____no_output_____
MIT
gans_keras.ipynb
andresvilla86/diadx-ia-ml
generator = build_generator(SEED_SIZE, IMAGE_CHANNELS)

noise = tf.random.normal([1, SEED_SIZE])
generated_image = generator(noise, training=False)

plt.imshow(generated_image[0, :, :, 0])
_____no_output_____
MIT
gans_keras.ipynb
andresvilla86/diadx-ia-ml
image_shape = (GENERATE_SQUARE,GENERATE_SQUARE,IMAGE_CHANNELS)

discriminator = build_discriminator(image_shape)
decision = discriminator(generated_image)
print(decision)
tf.Tensor([[0.50032634]], shape=(1, 1), dtype=float32)
MIT
gans_keras.ipynb
andresvilla86/diadx-ia-ml
Loss functions must be developed that allow the generator and discriminator to be trained in an adversarial way. Because these two neural networks are trained independently, they must be trained in two separate passes. This requires two separate loss functions and two separate updates to the gradients. Whe...
# This method returns a helper function to compute cross entropy loss
cross_entropy = tf.keras.losses.BinaryCrossentropy()

def discriminator_loss(real_output, fake_output):
    real_loss = cross_entropy(tf.ones_like(real_output), real_output)
    fake_loss = cross_entropy(tf.zeros_like(fake_output), fake_output)
    t...
_____no_output_____
MIT
gans_keras.ipynb
andresvilla86/diadx-ia-ml
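The two losses above can be written out in plain Python for a single prediction each, which makes the adversarial structure explicit: binary cross-entropy is the negative log of the probability assigned to the correct label, the discriminator labels real samples 1 and fakes 0, and the generator wants its fakes labeled 1. This mirrors the logic of the TF cell with scalar floats; it is a sketch, not a replacement:

```python
# Scalar binary cross-entropy and the two GAN losses built from it.
from math import log

def bce(label, p):
    # -log of the probability the model assigns to the true label
    return -(label * log(p) + (1 - label) * log(1 - p))

def discriminator_loss(real_output, fake_output):
    # real samples should be scored 1, fake samples 0
    return bce(1.0, real_output) + bce(0.0, fake_output)

def generator_loss(fake_output):
    # the generator wants its fakes scored as real (label 1)
    return bce(1.0, fake_output)
```

A discriminator scoring a real image 0.9 and a fake 0.1 incurs a small loss of -2·log(0.9) ≈ 0.21, while a generator whose fake only scores 0.1 incurs -log(0.1) ≈ 2.30, so each network's gradient pushes against the other's objective.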
Both the generator and discriminator use Adam and the same learning rate and momentum. This does not need to be the case. If you use a **GENERATE_RES** greater than 3 you may need to tune these learning rates, as well as other training and hyperparameters.
generator_optimizer = tf.keras.optimizers.Adam(1.5e-4,0.5)
discriminator_optimizer = tf.keras.optimizers.Adam(1.5e-4,0.5)
_____no_output_____
MIT
gans_keras.ipynb
andresvilla86/diadx-ia-ml
The following function is where most of the training takes place for both the discriminator and the generator. This function was based on the GAN provided by the [TensorFlow Keras examples](https://www.tensorflow.org/tutorials/generative/dcgan) documentation. The first thing you should notice about this function is th...
# Notice the use of `tf.function`
# This annotation causes the function to be "compiled".
@tf.function
def train_step(images):
    seed = tf.random.normal([BATCH_SIZE, SEED_SIZE])

    with tf.GradientTape() as gen_tape, tf.GradientTape() as disc_tape:
        generated_images = generator(seed, training=True)

        real_output ...
Epoch 1, gen loss=0.6737478375434875,disc loss=1.2876468896865845, 0:01:33.86 Epoch 2, gen loss=0.6754337549209595,disc loss=1.2804691791534424, 0:01:26.96 Epoch 3, gen loss=0.6792119741439819,disc loss=1.2002451419830322, 0:01:27.23 Epoch 4, gen loss=0.6704637408256531,disc loss=1.1011184453964233, 0:01:26.74 Epoch 5,...
MIT
gans_keras.ipynb
andresvilla86/diadx-ia-ml
generator.save(os.path.join(DATA_PATH,"face_generator.h5"))
_____no_output_____
MIT
gans_keras.ipynb
andresvilla86/diadx-ia-ml
Preprocess data
nb_classes = 10 # input image dimensions img_rows, img_cols = 28, 28 # number of convolutional filters to use nb_filters = 32 # size of pooling area for max pooling pool_size = (2, 2) # convolution kernel size kernel_size = (3, 3) # the data, shuffled and split between train and test sets (X_train, y_train), (X_test,...
X_train shape: (60000, 28, 28, 1) 60000 train samples 10000 test samples
MIT
notebooks/keras/train.ipynb
Petr-By/qtpyvis
Build a Keras model using the `Sequential API`
batch_size = 50 nb_epoch = 10 model = Sequential() model.add(Convolution2D(nb_filters, kernel_size, padding='valid', input_shape=input_shape, activation='relu')) model.add(MaxPooling2D(pool_size=(2, 2))) model.add(Convolution2D(nb_filters, kernel...
_________________________________________________________________ Layer (type) Output Shape Param # ================================================================= conv2d_1 (Conv2D) (None, 26, 26, 32) 320 ________________________________________________________...
MIT
notebooks/keras/train.ipynb
Petr-By/qtpyvis
Train and evaluate the model
model.fit(X_train[0:10000, ...], Y_train[0:10000, ...], batch_size=batch_size, epochs=nb_epoch, verbose=1, validation_data=(X_test, Y_test)) score = model.evaluate(X_test, Y_test, verbose=0) print('Test score:', score[0]) print('Test accuracy:', score[1])
Train on 10000 samples, validate on 10000 samples Epoch 1/10 10000/10000 [==============================] - 5s - loss: 0.4915 - acc: 0.8532 - val_loss: 0.2062 - val_acc: 0.9433 Epoch 2/10 10000/10000 [==============================] - 5s - loss: 0.1562 - acc: 0.9527 - val_loss: 0.1288 - val_acc: 0.9605 Epoch 3/10 10000...
MIT
notebooks/keras/train.ipynb
Petr-By/qtpyvis
Save the model
model.save('example_keras_mnist_model.h5')
_____no_output_____
MIT
notebooks/keras/train.ipynb
Petr-By/qtpyvis
Compute forcing for 1%CO2 data
import os import numpy as np import pandas as pd import matplotlib.pyplot as plt filedir1 = '/Users/hege-beatefredriksen/OneDrive - UiT Office 365/Data/CMIP5_globalaverages/Forcingpaperdata' storedata = False # store anomalies in file? storeforcingdata = False createnewfile = False # if it is the first time this is ru...
_____no_output_____
MIT
Compute_forcing_1pctCO2.ipynb
Hegebf/CMIP5-forcing
Load my estimated parameters
filename = 'best_estimated_parameters.txt' parameter_table = pd.read_table(filename,index_col=0) GregoryT2x = parameter_table.loc[model,'GregoryT2x'] GregoryF2x = parameter_table.loc[model,'GregoryF2x'] fbpar = GregoryF2x/GregoryT2x #feedback parameter from Gregory plot print(fbpar) F = deltaN + fbpar*deltaT fig, ax...
_____no_output_____
MIT
Compute_forcing_1pctCO2.ipynb
Hegebf/CMIP5-forcing
How to do video classification

In this tutorial, we will show how to train a video classification model in Classy Vision. Given an input video, the video classification task is to predict the most probable class label. This is very similar to image classification, which was covered in other tutorials, but there are a...
from classy_vision.dataset import build_dataset # set it to the folder where video files are saved video_dir = "[PUT YOUR VIDEO FOLDER HERE]" # set it to the folder where dataset splitting files are saved splits_dir = "[PUT THE FOLDER WHICH CONTAINS SPLITTING FILES HERE]" # set it to the file path for saving the metad...
_____no_output_____
MIT
tutorials/video_classification.ipynb
isunjin/ClassyVision
Note we specify different transforms for training and testing split. For training split, we first randomly select a size from `size_range` [128, 160], and resize the video clip so that its short edge is equal to the random size. After that, we take a random crop of spatial size 112 x 112. We find such data augmentation...
from classy_vision.models import build_model model = build_model({ "name": "resnext3d", "frames_per_clip": 8, # The number of frames we have in each video clip "input_planes": 3, # We use RGB video frames. So the input planes is 3 "clip_crop_size": 112, # We take croppings of siz...
_____no_output_____
MIT
tutorials/video_classification.ipynb
isunjin/ClassyVision
We also need to create a model head, which consists of an average pooling layer and a linear layer, by using the `fully_convolutional_linear` head. At test time, the shape (channels, frames, height, width) of input tensor is typically `(3 x 8 x 128 x 173)`. The shape of input tensor to the average pooling layer is `(20...
from classy_vision.heads import build_head from collections import defaultdict unique_id = "default_head" head = build_head({ "name": "fully_convolutional_linear", "unique_id": unique_id, "pool_size": [1, 7, 7], "num_classes": 101, "in_plane": 512 }) # In Classy Vision, the head can be attached...
_____no_output_____
MIT
tutorials/video_classification.ipynb
isunjin/ClassyVision
3. Choose the meters

This is the biggest difference between video and image classification. For images we used `AccuracyMeter` to measure top-1 and top-5 accuracy. For videos you can also use both `AccuracyMeter` and `VideoAccuracyMeter`, but they behave differently:
* `AccuracyMeter` takes one clip-level prediction an...
from classy_vision.meters import build_meters, AccuracyMeter, VideoAccuracyMeter meters = build_meters({ "accuracy": { "topk": [1, 5] }, "video_accuracy": { "topk": [1, 5], "clips_per_video_train": 1, "clips_per_video_test": 10 } })
_____no_output_____
MIT
tutorials/video_classification.ipynb
isunjin/ClassyVision
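The idea behind video-level accuracy can be illustrated with a minimal numpy sketch (all names, shapes, and values here are assumed for illustration and are not the Classy Vision API): average the scores of every clip belonging to a video before taking the argmax, instead of scoring each clip independently.

```python
import numpy as np

# Assumed toy setup: 4 videos, 10 clips per video, 5 classes.
clips_per_video, num_videos, num_classes = 10, 4, 5
rng = np.random.default_rng(0)
clip_scores = rng.random((num_videos, clips_per_video, num_classes))
video_labels = rng.integers(0, num_classes, size=num_videos)

video_scores = clip_scores.mean(axis=1)       # aggregate the clips of each video
video_preds = video_scores.argmax(axis=1)     # one prediction per video
video_acc = (video_preds == video_labels).mean()
```

Averaging scores over many clips is what makes the video-level meter more robust than judging a video from a single clip.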
4. Build a task

Great! We have defined the minimal set of components necessary for video classification, including dataset, model, loss function, meters and optimizer. We proceed to define a video classification task, and populate it with all the components.
from classy_vision.tasks import ClassificationTask from classy_vision.optim import build_optimizer from classy_vision.losses import build_loss loss = build_loss({"name": "CrossEntropyLoss"}) optimizer = build_optimizer({ "name": "sgd", "lr": { "name": "multistep", "values": [0.005, 0.0005], ...
_____no_output_____
MIT
tutorials/video_classification.ipynb
isunjin/ClassyVision
5. Start training

After creating a task, you can simply pass that to a Trainer to start training. Here we will train on a single node and configure logging and checkpoints for training:
import time import os from classy_vision.trainer import LocalTrainer from classy_vision.hooks import CheckpointHook from classy_vision.hooks import LossLrMeterLoggingHook hooks = [LossLrMeterLoggingHook(log_freq=4)] checkpoint_dir = f"/tmp/classy_checkpoint_{time.time()}" os.mkdir(checkpoint_dir) hooks.append(Checkp...
_____no_output_____
MIT
tutorials/video_classification.ipynb
isunjin/ClassyVision
Bayesian Parametric Regression

Notebook version: 1.5 (Sep 24, 2019)

Author: Jerónimo Arenas García (jarenas@tsc.uc3m.es)
        Jesús Cid-Sueiro (jesus.cid@uc3m.es)

Changes:
v.1.0 - First version
v.1.1 - ML Model selection included
v.1.2 - Some typos corrected
v.1.3 - ...
# Import some libraries that will be necessary for working with data and displaying plots # To visualize plots in the notebook %matplotlib inline from IPython import display import matplotlib import matplotlib.pyplot as plt import numpy as np import scipy.io # To read matlab files import pylab import time
_____no_output_____
MIT
R5.Bayesian_Regression/Bayesian_regression_professor.ipynb
ML4DS/ML4all
A quick note on the mathematical notation

In this notebook we will make extensive use of probability distributions. In general, we will use capital letters ${\bf X}$, $S$, $E$ ..., to denote random variables, and lower-case letters ${\bf x}$, $s$, $\epsilon$ ..., to denote the values they can take. In general, we will u...
n_grid = 200 degree = 3 nplots = 20 # Prior distribution parameters mean_w = np.zeros((degree+1,)) v_p = 0.2 ### Try increasing this value var_w = v_p * np.eye(degree+1) xmin = -1 xmax = 1 X_grid = np.linspace(xmin, xmax, n_grid) fig = plt.figure() ax = fig.add_subplot(111) for k in range(nplots): #Dra...
_____no_output_____
MIT
R5.Bayesian_Regression/Bayesian_regression_professor.ipynb
ML4DS/ML4all
The data observation will modify our belief about the true data model according to the posterior distribution. In the following we will analyze this in a Gaussian case.

4. Bayesian regression for a Gaussian model.

We will apply the above steps to derive a Bayesian regression algorithm for a Gaussian model.

4.1. Step 1:...
# True data parameters w_true = 3 std_n = 0.4 # Generate the whole dataset n_max = 64 X_tr = 3 * np.random.random((n_max,1)) - 0.5 S_tr = w_true * X_tr + std_n * np.random.randn(n_max,1) # Plot data plt.figure() plt.plot(X_tr, S_tr, 'b.') plt.xlabel('$x$') plt.ylabel('$s$') plt.show()
_____no_output_____
MIT
R5.Bayesian_Regression/Bayesian_regression_professor.ipynb
ML4DS/ML4all
Fit a Bayesian linear regression model assuming $z= x$ and
# Model parameters sigma_eps = 0.4 mean_w = np.zeros((1,)) sigma_p = 1e6 Var_p = sigma_p**2* np.eye(1)
_____no_output_____
MIT
R5.Bayesian_Regression/Bayesian_regression_professor.ipynb
ML4DS/ML4all
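As a complement, here is a minimal self-contained sketch of the closed-form posterior for this linear-Gaussian model with $z = x$ and a zero-mean prior (data and parameter values are assumed for illustration; variable names mirror the cell above): $\text{Var}_w = (Z^\top Z/\sigma_\epsilon^2 + V_p^{-1})^{-1}$ and $\bar w = \text{Var}_w Z^\top s / \sigma_\epsilon^2$.

```python
import numpy as np

# Assumed synthetic data: s = w_true * x + noise, as in the dataset above.
rng = np.random.default_rng(0)
w_true, std_n = 3.0, 0.4
Z = 3 * rng.random((64, 1)) - 0.5
s = w_true * Z + std_n * rng.standard_normal((64, 1))

sigma_eps, sigma_p = 0.4, 1e6                  # model noise std, prior std
# Closed-form Gaussian posterior over the single weight w:
Var_w = np.linalg.inv(Z.T @ Z / sigma_eps**2 + np.eye(1) / sigma_p**2)
mean_w = (Var_w @ Z.T @ s / sigma_eps**2).item()
# With this much data and a broad prior, mean_w should be close to w_true = 3.
```

With a very broad prior (`sigma_p = 1e6`), the posterior mean essentially coincides with the least-squares estimate.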
To do so, compute the posterior weight distribution using the first $k$ samples in the complete dataset, for $k = 1,2,4,8,\ldots 128$. Draw all these posteriors along with the prior distribution in the same plot.
# No. of points to analyze n_points = [1, 2, 4, 8, 16, 32, 64] # Prepare plots w_grid = np.linspace(2.7, 3.4, 5000) # Sample the w axis plt.figure() # Compute the prior distribution over the grid points in w_grid # p = <FILL IN> p = 1.0/(sigma_p*np.sqrt(2*np.pi)) * np.exp(-(w_grid**2)/(2*sigma_p**2)) plt.plot(w_gri...
_____no_output_____
MIT
R5.Bayesian_Regression/Bayesian_regression_professor.ipynb
ML4DS/ML4all
Exercise 3: Note that, in the example above, the model assumptions are correct: the target variables have been generated by a linear model with noise standard deviation `sigma_n` which is exactly equal to the value assumed by the model, stored in variable `sigma_eps`. Check what happens if we take `sigma_eps=4*sigma_n...
# <SOL> x = np.array([-1.0, 3.0]) s_pred = w_MSE * x plt.figure() plt.plot(X_tr, S_tr,'b.') plt.plot(x, s_pred) plt.show() # </SOL>
_____no_output_____
MIT
R5.Bayesian_Regression/Bayesian_regression_professor.ipynb
ML4DS/ML4all
5. Maximum likelihood vs Bayesian Inference.

5.1. The Maximum Likelihood Estimate.

For comparative purposes, it is interesting to see here that the likelihood function is enough to compute the Maximum Likelihood (ML) estimate \begin{align}{\bf w}_\text{ML} &= \arg \max_{\bf w} p(\mathcal{D}|{\bf w}) \\ &=...
n_points = 15 n_grid = 200 frec = 3 std_n = 0.2 # Data generation X_tr = 3 * np.random.random((n_points,1)) - 0.5 S_tr = - np.cos(frec*X_tr) + std_n * np.random.randn(n_points,1) # Signal xmin = np.min(X_tr) - 0.1 xmax = np.max(X_tr) + 0.1 X_grid = np.linspace(xmin, xmax, n_grid) S_grid = - np.cos(frec*X_grid) #Noise...
_____no_output_____
MIT
R5.Bayesian_Regression/Bayesian_regression_professor.ipynb
ML4DS/ML4all
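For Gaussian noise, the ML estimate reduces to the least-squares solution ${\bf w}_\text{ML} = ({\bf Z}^\top{\bf Z})^{-1}{\bf Z}^\top{\bf s}$. A minimal sketch (data generation assumed, following the cell above, here with a degree-2 polynomial):

```python
import numpy as np

# Assumed data: noisy cosine samples, as generated above.
rng = np.random.default_rng(0)
X = 3 * rng.random(15) - 0.5
s = -np.cos(3 * X) + 0.2 * rng.standard_normal(15)

Z = np.vander(X, 3, increasing=True)           # feature columns: 1, x, x^2
w_ML, *_ = np.linalg.lstsq(Z, s, rcond=None)   # least squares = ML for Gaussian noise
```

`np.linalg.lstsq` solves the normal equations in a numerically stable way, which is preferable to forming $({\bf Z}^\top{\bf Z})^{-1}$ explicitly.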
Let us assume that the cosine form of the noise-free signal is unknown, and we assume a polynomial model with a high degree. The following code plots the LS estimate
degree = 12 # We plot also the least square solution w_LS = np.polyfit(X_tr.flatten(), S_tr.flatten(), degree) S_grid_LS = np.polyval(w_LS,X_grid) # Plot data fig = plt.figure() ax = fig.add_subplot(111) ax.plot(X_tr,S_tr,'b.',markersize=10) # Plot noise-free function ax.plot(X_grid, S_grid, 'b:', label='Noise-free ...
_____no_output_____
MIT
R5.Bayesian_Regression/Bayesian_regression_professor.ipynb
ML4DS/ML4all
The following fragment of code computes the posterior weight distribution, draws random vectors from $p({\bf w}|{\bf s})$, and plots the corresponding regression curves along with the training points. Compare these curves with those extracted from the prior distribution of ${\bf w}$ and with the LS solution.
nplots = 6 # Prior distribution parameters sigma_eps = 0.2 mean_w = np.zeros((degree+1,)) sigma_p = .5 Var_p = sigma_p**2 * np.eye(degree+1) # Compute matrix with training input data for the polynomial model Z = [] for x_val in X_tr.tolist(): Z.append([x_val[0]**k for k in range(degree+1)]) Z = np.asmatrix(Z) #C...
_____no_output_____
MIT
R5.Bayesian_Regression/Bayesian_regression_professor.ipynb
ML4DS/ML4all
Not only do we obtain a better predictive model, but we also have confidence intervals (error bars) for the predictions.
# Compute standard deviation std_x = [] for el in X_grid: x_ast = np.array([el**k for k in range(degree+1)]) std_x.append(np.sqrt(x_ast.dot(Var_w).dot(x_ast)[0,0])) std_x = np.array(std_x) # Plot data fig = plt.figure(figsize=(10,6)) ax = fig.add_subplot(111) ax.plot(X_tr,S_tr,'b.',markersize=10) # Plot the p...
_____no_output_____
MIT
R5.Bayesian_Regression/Bayesian_regression_professor.ipynb
ML4DS/ML4all
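The error-bar computation can be sketched compactly (all names and values assumed for illustration): the predictive standard deviation at a new input $x$ is $\sqrt{{\bf z}(x)^\top \text{Var}_w\, {\bf z}(x)}$, where ${\bf z}(x)$ is the polynomial feature vector.

```python
import numpy as np

# Assumed setup mirroring the notebook: noisy cosine data, polynomial features.
degree, sigma_eps, sigma_p = 3, 0.2, 0.5
rng = np.random.default_rng(0)
X = 3 * rng.random(15) - 0.5
s = -np.cos(3 * X) + sigma_eps * rng.standard_normal(15)

Z = np.vander(X, degree + 1, increasing=True)
Var_w = np.linalg.inv(Z.T @ Z / sigma_eps**2 + np.eye(degree + 1) / sigma_p**2)

# Predictive std over a grid of new inputs, vectorized with einsum:
x_grid = np.linspace(-0.5, 2.5, 50)
Z_grid = np.vander(x_grid, degree + 1, increasing=True)
std_x = np.sqrt(np.einsum('ij,jk,ik->i', Z_grid, Var_w, Z_grid))
```

The `einsum` call evaluates the quadratic form ${\bf z}^\top \text{Var}_w {\bf z}$ for every grid point at once, replacing the explicit loop used in the notebook cell.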
Exercise 5: Assume the dataset ${\cal{D}} = \left\{ x_k, s_k \right\}_{k=0}^{K-1}$ containing $K$ i.i.d. samples from a distribution $$p(s|x,w) = w x \exp(-w x s), \qquad s>0,\quad x> 0,\quad w> 0$$ We also model our uncertainty about the value of $w$ assuming a prior distribution for $w$ following a Gamma distribution w...
from math import pi n_points = 15 frec = 3 std_n = 0.2 max_degree = 12 #Prior distribution parameters sigma_eps = 0.2 mean_w = np.zeros((degree+1,)) sigma_p = 0.5 X_tr = 3 * np.random.random((n_points,1)) - 0.5 S_tr = - np.cos(frec*X_tr) + std_n * np.random.randn(n_points,1) #Compute matrix with training input data...
_____no_output_____
MIT
R5.Bayesian_Regression/Bayesian_regression_professor.ipynb
ML4DS/ML4all
The above curve may change the position of its maximum from run to run. We conclude the notebook by plotting the result of the Bayesian inference for $M=6$.
n_points = 15 n_grid = 200 frec = 3 std_n = 0.2 degree = 5 #M-1 nplots = 6 #Prior distribution parameters sigma_eps = 0.1 mean_w = np.zeros((degree+1,)) sigma_p = .5 * np.eye(degree+1) X_tr = 3 * np.random.random((n_points,1)) - 0.5 S_tr = - np.cos(frec*X_tr) + std_n * np.random.randn(n_points,1) X_grid = np.linspace...
_____no_output_____
MIT
R5.Bayesian_Regression/Bayesian_regression_professor.ipynb
ML4DS/ML4all
Goals

1. Learn to implement Resnet V2 Block (Type - 1) using monk
    - Monk's Keras
    - Monk's Pytorch
    - Monk's Mxnet
2. Use Monk's network debugger to create complex blocks
3. Understand how syntactically different it is to implement the same using
    - Traditional Keras
    - Traditional Pytorch
    - Traditi...
from IPython.display import Image Image(filename='imgs/resnet_v2_with_downsample.png')
_____no_output_____
Apache-2.0
study_roadmaps/3_image_processing_deep_learning_roadmap/3_deep_learning_advanced/1_Blocks in Deep Learning Networks/3) Resnet V2 Block (Type - 1).ipynb
arijitgupta42/monk_v1
Table of contents

[1. Install Monk](1)
[2. Block basic Information](2)
- [2.1) Visual structure](2-1)
- [2.2) Layers in Branches](2-2)
[3) Creating Block using monk visual debugger](3)
- [3.1) Create the first branch](3-1)
- [3.2) Create the second branch](3-2)
- [3.3) Merge the branches](3-3)
- [3.4) Debug t...
!git clone https://github.com/Tessellate-Imaging/monk_v1.git
_____no_output_____
Apache-2.0
study_roadmaps/3_image_processing_deep_learning_roadmap/3_deep_learning_advanced/1_Blocks in Deep Learning Networks/3) Resnet V2 Block (Type - 1).ipynb
arijitgupta42/monk_v1
Imports
# Common import numpy as np import math import netron from collections import OrderedDict from functools import partial # Monk import os import sys sys.path.append("monk_v1/monk/");
_____no_output_____
Apache-2.0
study_roadmaps/3_image_processing_deep_learning_roadmap/3_deep_learning_advanced/1_Blocks in Deep Learning Networks/3) Resnet V2 Block (Type - 1).ipynb
arijitgupta42/monk_v1
Block Information Visual structure
from IPython.display import Image Image(filename='imgs/resnet_v2_with_downsample.png')
_____no_output_____
Apache-2.0
study_roadmaps/3_image_processing_deep_learning_roadmap/3_deep_learning_advanced/1_Blocks in Deep Learning Networks/3) Resnet V2 Block (Type - 1).ipynb
arijitgupta42/monk_v1
Layers in Branches

- Number of branches: 2
- Common element
  - batchnorm -> relu
- Branch 1
  - conv_1x1
- Branch 2
  - conv_3x3 -> batchnorm -> relu -> conv_3x3
- Branches merged using
  - Elementwise addition

(See Appendix to read blogs on resnets)

Creating Block using mon...
# Imports and setup a project # To use pytorch backend - replace gluon_prototype with pytorch_prototype # To use keras backend - replace gluon_prototype with keras_prototype from gluon_prototype import prototype # Create a sample project gtf = prototype(verbose=1); gtf.Prototype("sample-project-1", "sample-experiment...
Mxnet Version: 1.5.1 Experiment Details Project: sample-project-1 Experiment: sample-experiment-1 Dir: /home/abhi/Desktop/Work/tess_tool/gui/v0.3/finetune_models/Organization/development/v5.0_blocks/study_roadmap/blocks/workspace/sample-project-1/sample-experiment-1/
Apache-2.0
study_roadmaps/3_image_processing_deep_learning_roadmap/3_deep_learning_advanced/1_Blocks in Deep Learning Networks/3) Resnet V2 Block (Type - 1).ipynb
arijitgupta42/monk_v1
Create the first branch
def first_branch(output_channels=128, stride=1): network = []; network.append(gtf.convolution(output_channels=output_channels, kernel_size=1, stride=stride)); return network; # Debug the branch branch_1 = first_branch(output_channels=128, stride=1) network = []; network.append(branch_1); gtf.debug_custom_mo...
_____no_output_____
Apache-2.0
study_roadmaps/3_image_processing_deep_learning_roadmap/3_deep_learning_advanced/1_Blocks in Deep Learning Networks/3) Resnet V2 Block (Type - 1).ipynb
arijitgupta42/monk_v1
Create the second branch
def second_branch(output_channels=128, stride=1): network = []; network.append(gtf.convolution(output_channels=output_channels, kernel_size=3, stride=stride)); network.append(gtf.batch_normalization()); network.append(gtf.relu()); network.append(gtf.convolution(output_channels=output_channels, kerne...
_____no_output_____
Apache-2.0
study_roadmaps/3_image_processing_deep_learning_roadmap/3_deep_learning_advanced/1_Blocks in Deep Learning Networks/3) Resnet V2 Block (Type - 1).ipynb
arijitgupta42/monk_v1
Merge the branches
def final_block(output_channels=128, stride=1): network = []; # Common elements network.append(gtf.batch_normalization()); network.append(gtf.relu()); #Create subnetwork and add branches subnetwork = []; branch_1 = first_branch(output_channels=output_channels, stride=stride) b...
_____no_output_____
Apache-2.0
study_roadmaps/3_image_processing_deep_learning_roadmap/3_deep_learning_advanced/1_Blocks in Deep Learning Networks/3) Resnet V2 Block (Type - 1).ipynb
arijitgupta42/monk_v1
Debug the merged network
final = final_block(output_channels=128, stride=1) network = []; network.append(final); gtf.debug_custom_model_design(network);
_____no_output_____
Apache-2.0
study_roadmaps/3_image_processing_deep_learning_roadmap/3_deep_learning_advanced/1_Blocks in Deep Learning Networks/3) Resnet V2 Block (Type - 1).ipynb
arijitgupta42/monk_v1
Compile the network
gtf.Compile_Network(network, data_shape=(3, 224, 224), use_gpu=False);
Model Details Loading pretrained model Model Loaded on device Model name: Custom Model Num of potentially trainable layers: 5 Num of actual trainable layers: 5
Apache-2.0
study_roadmaps/3_image_processing_deep_learning_roadmap/3_deep_learning_advanced/1_Blocks in Deep Learning Networks/3) Resnet V2 Block (Type - 1).ipynb
arijitgupta42/monk_v1
Run data through the network
import mxnet as mx x = np.zeros((1, 3, 224, 224)); x = mx.nd.array(x); y = gtf.system_dict["local"]["model"].forward(x); print(x.shape, y.shape)
(1, 3, 224, 224) (1, 128, 224, 224)
Apache-2.0
study_roadmaps/3_image_processing_deep_learning_roadmap/3_deep_learning_advanced/1_Blocks in Deep Learning Networks/3) Resnet V2 Block (Type - 1).ipynb
arijitgupta42/monk_v1
Visualize network using netron
gtf.Visualize_With_Netron(data_shape=(3, 224, 224))
Using Netron To Visualize Not compatible on kaggle Compatible only for Jupyter Notebooks Stopping http://localhost:8080 Serving 'model-symbol.json' at http://localhost:8080
Apache-2.0
study_roadmaps/3_image_processing_deep_learning_roadmap/3_deep_learning_advanced/1_Blocks in Deep Learning Networks/3) Resnet V2 Block (Type - 1).ipynb
arijitgupta42/monk_v1
Creating the block using Monk's low-code API

Mxnet backend
from gluon_prototype import prototype gtf = prototype(verbose=1); gtf.Prototype("sample-project-1", "sample-experiment-1"); network = []; # Single line addition of blocks network.append(gtf.resnet_v2_block(output_channels=128)); gtf.Compile_Network(network, data_shape=(3, 224, 224), use_gpu=False);
Mxnet Version: 1.5.1 Experiment Details Project: sample-project-1 Experiment: sample-experiment-1 Dir: /home/abhi/Desktop/Work/tess_tool/gui/v0.3/finetune_models/Organization/development/v5.0_blocks/study_roadmap/blocks/workspace/sample-project-1/sample-experiment-1/ Model Details Loading pretrained m...
Apache-2.0
study_roadmaps/3_image_processing_deep_learning_roadmap/3_deep_learning_advanced/1_Blocks in Deep Learning Networks/3) Resnet V2 Block (Type - 1).ipynb
arijitgupta42/monk_v1
Pytorch backend - Only the import changes
#Change gluon_prototype to pytorch_prototype from pytorch_prototype import prototype gtf = prototype(verbose=1); gtf.Prototype("sample-project-1", "sample-experiment-1"); network = []; # Single line addition of blocks network.append(gtf.resnet_v2_block(output_channels=128)); gtf.Compile_Network(network, data_sha...
Pytorch Version: 1.2.0 Experiment Details Project: sample-project-1 Experiment: sample-experiment-1 Dir: /home/abhi/Desktop/Work/tess_tool/gui/v0.3/finetune_models/Organization/development/v5.0_blocks/study_roadmap/blocks/workspace/sample-project-1/sample-experiment-1/ Model Details Loading pretrained...
Apache-2.0
study_roadmaps/3_image_processing_deep_learning_roadmap/3_deep_learning_advanced/1_Blocks in Deep Learning Networks/3) Resnet V2 Block (Type - 1).ipynb
arijitgupta42/monk_v1
Keras backend - Only the import changes
#Change gluon_prototype to keras_prototype from keras_prototype import prototype gtf = prototype(verbose=1); gtf.Prototype("sample-project-1", "sample-experiment-1"); network = []; # Single line addition of blocks network.append(gtf.resnet_v2_block(output_channels=128)); gtf.Compile_Network(network, data_shape=(...
Keras Version: 2.2.5 Tensorflow Version: 1.12.0 Experiment Details Project: sample-project-1 Experiment: sample-experiment-1 Dir: /home/abhi/Desktop/Work/tess_tool/gui/v0.3/finetune_models/Organization/development/v5.0_blocks/study_roadmap/blocks/workspace/sample-project-1/sample-experiment-1/ Model Detai...
Apache-2.0
study_roadmaps/3_image_processing_deep_learning_roadmap/3_deep_learning_advanced/1_Blocks in Deep Learning Networks/3) Resnet V2 Block (Type - 1).ipynb
arijitgupta42/monk_v1
Appendix

Study links
- https://towardsdatascience.com/residual-blocks-building-blocks-of-resnet-fd90ca15d6ec
- https://medium.com/@MaheshNKhatri/resnet-block-explanation-with-a-terminology-deep-dive-989e15e3d691
- https://medium.com/analytics-vidhya/understanding-and-implementation-of-residual-networks-resnets-b80...
# Traditional-Mxnet-gluon import mxnet as mx from mxnet.gluon import nn from mxnet.gluon.nn import HybridBlock, BatchNorm from mxnet.gluon.contrib.nn import HybridConcurrent, Identity from mxnet import gluon, init, nd def _conv3x3(channels, stride, in_channels): return nn.Conv2D(channels, kernel_size=3, strides=st...
_____no_output_____
Apache-2.0
study_roadmaps/3_image_processing_deep_learning_roadmap/3_deep_learning_advanced/1_Blocks in Deep Learning Networks/3) Resnet V2 Block (Type - 1).ipynb
arijitgupta42/monk_v1
Creating block using traditional Pytorch - Code credits: https://pytorch.org/
# Traiditional-Pytorch import torch from torch import nn from torch.jit.annotations import List import torch.nn.functional as F def conv3x3(in_planes, out_planes, stride=1, groups=1, dilation=1): """3x3 convolution with padding""" return nn.Conv2d(in_planes, out_planes, kernel_size=3, stride=stride, ...
torch.Size([1, 3, 224, 224]) torch.Size([1, 64, 224, 224]) Serving 'model.onnx' at http://localhost:9998
Apache-2.0
study_roadmaps/3_image_processing_deep_learning_roadmap/3_deep_learning_advanced/1_Blocks in Deep Learning Networks/3) Resnet V2 Block (Type - 1).ipynb
arijitgupta42/monk_v1
Creating block using traditional Keras - Code credits: https://keras.io/
# Traditional-Keras import keras import keras.layers as kla import keras.models as kmo import tensorflow as tf from keras.models import Model backend = 'channels_last' from keras import layers def resnet_conv_block(input_tensor, kernel_size, filters, stage, bl...
(1, 224, 224, 3) (1, 112, 112, 64) Stopping http://localhost:8082 Serving 'final.h5' at http://localhost:8082
Apache-2.0
study_roadmaps/3_image_processing_deep_learning_roadmap/3_deep_learning_advanced/1_Blocks in Deep Learning Networks/3) Resnet V2 Block (Type - 1).ipynb
arijitgupta42/monk_v1
Equations of General Relativistic Hydrodynamics (GRHD) Authors: Zach Etienne & Patrick Nelson This notebook documents and constructs a number of quantities useful for buil...
# Step 1: Import needed core NRPy+ modules from outputC import nrpyAbs # NRPy+: Core C code output module import NRPy_param_funcs as par # NRPy+: parameter interface import sympy as sp # SymPy: The Python computer algebra package upon which NRPy+ depends import indexedexp as ixp # NRPy+: Sy...
_____no_output_____
BSD-2-Clause
Tutorial-GRHD_Equations-Cartesian.ipynb
rhaas80/nrpytutorial
Step 2: Define the stress-energy tensor $T^{\mu\nu}$ and $T^\mu{}_\nu$ \[Back to [top](toc)\]

$$\label{stressenergy}$$

Recall from above that
$$T^{\mu\nu} = \rho_0 h u^{\mu} u^{\nu} + P g^{\mu\nu},$$
where $h = 1 + \epsilon + \frac{P}{\rho_0}$. Also
$$T^\mu{}_{\nu} = T^{\mu\delta} g_{\delta \nu}.$$
# Step 2.a: First define h, the enthalpy: def compute_enthalpy(rho_b,P,epsilon): global h h = 1 + epsilon + P/rho_b # Step 2.b: Define T^{mu nu} (a 4-dimensional tensor) def compute_T4UU(gammaDD,betaU,alpha, rho_b,P,epsilon,u4U): global T4UU compute_enthalpy(rho_b,P,epsilon) # Then define g^{mu nu...
_____no_output_____
BSD-2-Clause
Tutorial-GRHD_Equations-Cartesian.ipynb
rhaas80/nrpytutorial
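The index-lowering step $T^\mu{}_\nu = T^{\mu\delta} g_{\delta\nu}$ can be checked numerically with a small sketch (metric and tensor values assumed; this is a flat-spacetime sanity check, not the NRPy+ symbolic code): with $g_{\mu\nu} = \mathrm{diag}(-1,1,1,1)$, lowering the second index just flips the sign of the time column.

```python
import numpy as np

g4DD = np.diag([-1.0, 1.0, 1.0, 1.0])      # g_{mu nu}: flat (Minkowski) metric
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
T4UU = A + A.T                              # symmetric, like a stress-energy tensor

# T^mu_nu = T^{mu delta} g_{delta nu}, i.e. a contraction over the last index:
T4UD = np.einsum('md,dn->mn', T4UU, g4DD)
```

For a diagonal metric the contraction is just a matrix product, which makes the sign-flip on the time column easy to verify.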
Step 3: Writing the conservative variables in terms of the primitive variables \[Back to [top](toc)\]

$$\label{primtoconserv}$$

Recall from above that the conservative variables may be written as
\begin{align}
\rho_* &= \alpha\sqrt{\gamma} \rho_0 u^0 \\
\tilde{\tau} &= \alpha^2\sqrt{\gamma} T^{00} - \rho_* \\
\tilde{S}_i &...
# Step 3: Writing the conservative variables in terms of the primitive variables def compute_sqrtgammaDET(gammaDD): global sqrtgammaDET gammaUU, gammaDET = ixp.symm_matrix_inverter3x3(gammaDD) sqrtgammaDET = sp.sqrt(gammaDET) def compute_rho_star(alpha, sqrtgammaDET, rho_b,u4U): global rho_star # C...
_____no_output_____
BSD-2-Clause
Tutorial-GRHD_Equations-Cartesian.ipynb
rhaas80/nrpytutorial
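A quick flat-space sanity check of the first conservative variable (all values assumed for illustration): $\rho_* = \alpha\sqrt{\gamma}\,\rho_0 u^0$, so with $\alpha = 1$, $\gamma_{ij} = \delta_{ij}$ and a fluid at rest ($u^0 = 1$), $\rho_*$ reduces to $\rho_0$.

```python
import numpy as np

gammaDD = np.eye(3)                  # spatial metric gamma_ij (flat)
alpha = 1.0                          # lapse
rho_b = 1.3                          # baryonic rest-mass density (assumed value)
u0 = 1.0                             # u^0 for a fluid at rest

sqrtgammaDET = np.sqrt(np.linalg.det(gammaDD))
rho_star = alpha * sqrtgammaDET * rho_b * u0   # reduces to rho_b in flat space
```

This mirrors the structure of `compute_rho_star` above, but with concrete numbers instead of SymPy expressions.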
Step 4: Define the fluxes for the GRHD equations \[Back to [top](toc)\]

$$\label{grhdfluxes}$$

Step 4.a: Define $\rho_*$ flux term for GRHD equations \[Back to [top](toc)\]

$$\label{rhostarfluxterm}$$

Recall from above that
$$\partial_t \rho_* + \partial_j \left(\rho_* v^j\right) = 0.$$
Here we will ...
# Step 4: Define the fluxes for the GRHD equations # Step 4.a: vU from u4U may be needed for computing rho_star_flux from u4U def compute_vU_from_u4U__no_speed_limit(u4U): global vU # Now compute v^i = u^i/u^0: vU = ixp.zerorank1(DIM=3) for j in range(3): vU[j] = u4U[j+1]/u4U[0] # Step 4.b: rho...
_____no_output_____
BSD-2-Clause
Tutorial-GRHD_Equations-Cartesian.ipynb
rhaas80/nrpytutorial
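The velocity entering the $\rho_*$ flux is $v^i = u^i/u^0$, as computed by `compute_vU_from_u4U__no_speed_limit` above. A numerical sketch (4-velocity components assumed for illustration):

```python
import numpy as np

u4U = np.array([2.0, 0.4, 0.0, -0.2])   # assumed components [u^0, u^1, u^2, u^3]
vU = u4U[1:] / u4U[0]                   # v^i = u^i / u^0
```

Note that, as the function name in the cell above indicates, no speed limit is imposed here; limiting $v^i$ is handled elsewhere in the GRHD pipeline.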