The types in the first text are:
print(types1)
notebooks/Python for Text Similarities.ipynb
dcavar/python-tutorial-for-ipython
apache-2.0
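As a minimal sketch (with hypothetical texts, since the notebook's inputs are not shown here), the "types" of a text can be taken as the set of its unique tokens:

```python
# Hypothetical stand-ins for the notebook's two documents.
# "Types" here means the vocabulary: the set of unique tokens.
text1 = "the quick brown fox jumps over the lazy dog"
text2 = "the lazy dog sleeps all day"

types1 = set(text1.lower().split())
types2 = set(text2.lower().split())

print(types1)  # 8 unique tokens
```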
We can generate the intersection of the two sets of types in the following way:
print(set.intersection(types1, types2))
notebooks/Python for Text Similarities.ipynb
dcavar/python-tutorial-for-ipython
apache-2.0
To calculate the Jaccard coefficient we divide the length of the intersection of the sets of types by the length of the union of these sets:
lenIntersect = len(set.intersection(types1, types2))
lenUnion = len(set.union(types1, types2))
print(lenIntersect / lenUnion)
notebooks/Python for Text Similarities.ipynb
dcavar/python-tutorial-for-ipython
apache-2.0
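A self-contained sketch of the whole Jaccard computation, using two small hypothetical type sets:

```python
# Hypothetical type sets for illustration
types1 = {"the", "quick", "brown", "fox"}
types2 = {"the", "lazy", "brown", "dog"}

lenIntersect = len(set.intersection(types1, types2))  # {"the", "brown"} -> 2
lenUnion = len(set.union(types1, types2))             # 6 distinct types
print(lenIntersect / lenUnion)                        # Jaccard coefficient
```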
onset_detect
def test_onset_detect(y, hop_length):
    y2 = y.copy()
    rms_tot = np.sqrt(np.mean(y**2))
    y2[(np.abs(y) < (rms_tot * 1.5))] = 0.0
    onsets = librosa.onset.onset_detect(y=y2, sr=sr, hop_length=hop_length)
    index_list = librosa.frames_to_samples(onsets, hop_length=hop_length)
    return index_list

y, sr = l...
notebooks/experimental/librosa_detect_bat_pulses_in_time_domain.ipynb
cloudedbats/cloudedbats_dsp
mit
rmse + localmax
def test_rmse_localmax(y, hop_length):
    y2 = y.copy()
    rms_tot = np.sqrt(np.mean(y**2))
    y2[(np.abs(y) < (rms_tot * 1.5))] = 0.0
    rmse = librosa.feature.rms(y=y2, hop_length=384, frame_length=1024, center=True)
    locmax = librosa.util.localmax(rmse.T)
    maxindexlist = []
    for index, a in enumerate(lo...
notebooks/experimental/librosa_detect_bat_pulses_in_time_domain.ipynb
cloudedbats/cloudedbats_dsp
mit
onset_strength and peak_pick
def test_onset_strength_and_peak_pick(y, hop_length):
    y2 = y.copy()
    rms_tot = np.sqrt(np.mean(y**2))
    y2[(np.abs(y) < (rms_tot * 1.5))] = 0.0
    onset_env = librosa.onset.onset_strength(y=y2, sr=sr, hop_length=384, agg...
notebooks/experimental/librosa_detect_bat_pulses_in_time_domain.ipynb
cloudedbats/cloudedbats_dsp
mit
peak_pick
def test_peak_pick(y, hop_length):
    y2 = y.copy()
    rms_tot = np.sqrt(np.mean(y**2))
    y2[(np.abs(y) < (rms_tot * 1.5))] = 0.0
    frames_per_ms = hop_length
    minmax_window = frames_per_ms / 4
    mean_window = frames_per_ms / 8
    sensitivity = rms_tot * 1.5  # 0.1
    skip_ms = 1
    index_list = libr...
notebooks/experimental/librosa_detect_bat_pulses_in_time_domain.ipynb
cloudedbats/cloudedbats_dsp
mit
In the code below, we resize the image to a specific resolution.
import numpy as np
from scipy.misc import imread, imresize
import matplotlib.pyplot as plt
import tensorflow as tf

raw_image = imread('model/datasets/nudity_dataset/3.jpg')
image = tf.placeholder("uint8", [None, None, 3])
image1 = tf.image.convert_image_dtype(image, dtype=tf.float32)
image1_t = tf.expand_dims(image...
VNG_MODEL_EXPERIMENT.ipynb
taiducvu/NudityDetection
apache-2.0
1.1 Create a standard training dataset
%matplotlib inline
%load_ext autoreload
%autoreload 2
import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt
import cPickle as pickle
from model.datasets.data import generate_standard_dataset

# Load Normal and Nude images into the train dataset
image_normal_ls, file_name_normal = generate_standar...
VNG_MODEL_EXPERIMENT.ipynb
taiducvu/NudityDetection
apache-2.0
Generate tfrecords
import os
import tensorflow as tf

def _int64_feature(value):
    return tf.train.Feature(int64_list=tf.train.Int64List(value=[value]))

def _bytes_feature(value):
    return tf.train.Feature(bytes_list=tf.train.BytesList(value=[value]))

def convert_to(data_dir, dataset, labels, name):
    """Converts a dataset to tfr...
VNG_MODEL_EXPERIMENT.ipynb
taiducvu/NudityDetection
apache-2.0
Read a batch images
import tensorflow as tf
import matplotlib.pyplot as plt

def read_and_decode(filename_queue):
    reader = tf.TFRecordReader()
    _, serialized_example = reader.read(filename_queue)
    features = tf.parse_single_example(
        serialized_example,
        features={
            'image_raw': tf.FixedLenFeature([], tf.string)...
VNG_MODEL_EXPERIMENT.ipynb
taiducvu/NudityDetection
apache-2.0
Example shuffle dataset
import tensorflow as tf

f = ["f1", "f2", "f3", "f4", "f5", "f6", "f7", "f8"]
l = ["l1", "l2", "l3", "l4", "l5", "l6", "l7", "l8"]
fv = tf.constant(f)
lv = tf.constant(l)
rsq = tf.RandomShuffleQueue(10, 0, [tf.string, tf.string], shapes=[[],[]])
do_enqueues = rsq.enqueue_many([fv, lv])
gotf, gotl = rsq.dequeue()
wi...
VNG_MODEL_EXPERIMENT.ipynb
taiducvu/NudityDetection
apache-2.0
Example cPickle
import cPickle as pickle

dict1 = {'name': [], 'id': []}
dict2 = {'local': [], 'paza': []}

#with open('test.p', 'wb') as fp:
#    pickle.dump(dict1, fp)
#    pickle.dump(dict2, fp)

with open('test.p', 'rb') as fp:
    d1 = pickle.load(fp)
    d2 = pickle.load(fp)

print(len(d1))
print(len(d2))
VNG_MODEL_EXPERIMENT.ipynb
taiducvu/NudityDetection
apache-2.0
Example reshape
import tensorflow as tf import numpy as np a = tf.constant(np.array([[.1]])) init = tf.initialize_all_variables() with tf.Session() as session: session.run(init) b = session.run(tf.nn.softmax(a)) c = session.run(tf.nn.softmax_cross_entropy_with_logits([0.6, 0.4],[0,1])) #print b #print c label = n...
VNG_MODEL_EXPERIMENT.ipynb
taiducvu/NudityDetection
apache-2.0
Character counting and entropy. Write a function char_probs that takes a string and computes the probabilities of each character in the string:
- First do a character count and store the result in a dictionary.
- Then divide each character count by the total number of characters to compute the normalized probabilities.
- Retu...
def char_probs(s):
    """Find the probabilities of the unique characters in the string s.

    Parameters
    ----------
    s : str
        A string of characters.

    Returns
    -------
    probs : dict
        A dictionary whose keys are the unique characters in s and whose values are the probabil...
midterm/AlgorithmsEx03.ipynb
edwardd1/phys202-2015-work
mit
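A minimal complete implementation along the lines described above (the notebook's own version is truncated here) might look like:

```python
def char_probs(s):
    """Return a dict mapping each character in s to its relative frequency."""
    counts = {}
    for c in s:
        counts[c] = counts.get(c, 0) + 1   # character count in a dictionary
    total = len(s)
    return {c: n / total for c, n in counts.items()}  # normalize by total length

print(char_probs("aab"))  # {'a': 0.666..., 'b': 0.333...}
```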
The entropy is a quantitative measure of the disorder of a probability distribution. It is used extensively in Physics, Statistics, Machine Learning, Computer Science and Information Science. Given a set of probabilities $P_i$, the entropy is defined as: $$H = -\sum_i P_i \log_2(P_i)$$ In this expression $\log_2$ is...
def entropy(d):
    """Compute the entropy of a dict d whose values are probabilities."""
    #t = np.array(d)
    #t = np.sort(t)
    H = 0
    l = [(i, d[i]) for i in d]
    t = sorted(l, key=lambda x: x[1], reverse=True)
    for n in...
midterm/AlgorithmsEx03.ipynb
edwardd1/phys202-2015-work
mit
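A short sketch of the entropy formula above; `math.log2` is used, and zero probabilities are skipped since $0 \log_2 0$ is taken as 0:

```python
import math

def entropy(d):
    """Compute H = -sum_i P_i * log2(P_i) over the probabilities in dict d."""
    return -sum(p * math.log2(p) for p in d.values() if p > 0)

print(entropy({'a': 0.5, 'b': 0.5}))  # 1.0 -- one bit for a fair coin
```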
Use IPython's interact function to create a user interface that allows you to type a string into a text box and see the entropy of the character probabilities of the string.
def z(x):
    print(entropy(char_probs(x)))
    return entropy(char_probs(x))

interact(z, x='string');

assert True # use this for grading the pi digits histogram
midterm/AlgorithmsEx03.ipynb
edwardd1/phys202-2015-work
mit
- Print the variable a in all uppercase
- Print the variable a with every other letter in uppercase
- Print the variable a in reverse, i.e. god yzal ...
- Print the variable a with the words reversed, i.e. ehT kciuq ...
- Print the variable b in scientific notation with 4 decimal places
people = [{'name': 'Charlie', 'age': 35}, {'name': 'Alice', 'age': 30}, {'name': 'Eve', 'age': 20}, {'name': 'Gail', 'age': 30}, {'name': 'Dennis', 'age': 25}, {'name': 'Bob', 'age': 35}, {'name': 'Fred', 'age': 25},]
Wk01-Overview.ipynb
streety/biof509
mit
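A sketch of these five exercises, assuming `a` is the usual "The quick brown fox..." sentence (the notebook does not show its definition here) and `b` is an arbitrary float:

```python
# Assumed values: a and b are not defined in the excerpt above
a = "The quick brown fox jumps over the lazy dog"
b = 3.14159265

print(a.upper())                                     # all uppercase
print("".join(c.upper() if i % 2 == 0 else c.lower()
              for i, c in enumerate(a)))            # every other letter uppercase
print(a[::-1])                                       # reversed characters: 'god yzal ...'
print(" ".join(w[::-1] for w in a.split()))          # reversed words: 'ehT kciuq ...'
print("{:.4e}".format(b))                            # scientific notation, 4 decimals
```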
- Print the items in people as comma separated values
- Sort people so that they are ordered by age, and print
- Sort people so that they are ordered by age first, and then their names, i.e. Bob and Charlie should be next to each other due to their ages, with Bob first due to his name.
coords = [(0,0), (10,5), (10,10), (5,10), (3,3), (3,7), (12,3), (10,11)]
Wk01-Overview.ipynb
streety/biof509
mit
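One possible solution sketch for the sorting exercises, using `sorted` with a tuple key so that ties on age fall back to name (a shortened `people` list is used for illustration):

```python
people = [{'name': 'Charlie', 'age': 35}, {'name': 'Alice', 'age': 30},
          {'name': 'Eve', 'age': 20}, {'name': 'Bob', 'age': 35}]

# Comma-separated values
print(", ".join(p['name'] for p in people))

# Sort by age only
by_age = sorted(people, key=lambda p: p['age'])

# Sort by age, then name: Bob comes before Charlie within age 35
by_age_then_name = sorted(people, key=lambda p: (p['age'], p['name']))
print([p['name'] for p in by_age_then_name])
```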
- Write a function that returns the first n prime numbers
- Given a list of coordinates, calculate the distance covered travelling between all the points in the order given, using the Euclidean distance
- Given a list of coordinates, arrange them in such a way that the distance traveled is minimized (the itertools module may be use...
np.random.seed(0)
a = np.random.randint(0, 100, size=(10, 20))
Wk01-Overview.ipynb
streety/biof509
mit
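The first two exercises can be sketched as follows (the minimize-distance variant is omitted); note `math.dist` requires Python 3.8+:

```python
import math

def first_n_primes(n):
    """Return the first n primes by trial division against the primes found so far."""
    primes = []
    candidate = 2
    while len(primes) < n:
        if all(candidate % p for p in primes):
            primes.append(candidate)
        candidate += 1
    return primes

def path_length(coords):
    """Total Euclidean distance visiting the points in the order given."""
    return sum(math.dist(p, q) for p, q in zip(coords, coords[1:]))

print(first_n_primes(5))                      # [2, 3, 5, 7, 11]
print(path_length([(0, 0), (3, 4), (3, 0)]))  # 5.0 + 4.0 = 9.0
```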
VXLAN and EVPN

This category of questions allows you to query aspects of VXLAN and EVPN configuration and behavior.

- VXLAN VNI Properties
- VXLAN Edges
- L3 EVPN VNIs
bf.set_network('generate_questions')
bf.set_snapshot('aristaevpn')
docs/source/notebooks/vxlan_evpn.ipynb
batfish/pybatfish
apache-2.0
VXLAN VNI Properties. Returns configuration settings of VXLANs. Lists VNI-level network segment settings configured for VXLANs.

Inputs

Name | Description | Type | Optional | Default Value
--- | --- | --- | --- | ---
nodes | Include nodes matching this specifier. | NodeSpec | True |
properties | Include properties matc...
result = bf.q.vxlanVniProperties().answer().frame()
docs/source/notebooks/vxlan_evpn.ipynb
batfish/pybatfish
apache-2.0
Return Value

Name | Description | Type
--- | --- | ---
Node | Node | str
VRF | VRF | str
VNI | VXLAN Segment ID | int
Local_VTEP_IP | IPv4 address of the local VTEP | str
Multicast_Group | IPv4 address of the multicast group | str
VLAN | VLAN number for the VNI | int
VTEP_Flood_List | All IPv4 addresses in the VTEP flo...
result.head(5)
docs/source/notebooks/vxlan_evpn.ipynb
batfish/pybatfish
apache-2.0
VXLAN Edges. Returns VXLAN edges. Lists all VXLAN edges in the network.

Inputs

Name | Description | Type | Optional | Default Value
--- | --- | --- | --- | ---
nodes | Include edges whose first node matches this name or regex. | NodeSpec | True | .
remoteNodes | Include edges whose second node matches this name or rege...
result = bf.q.vxlanEdges().answer().frame()
docs/source/notebooks/vxlan_evpn.ipynb
batfish/pybatfish
apache-2.0
Return Value

Name | Description | Type
--- | --- | ---
VNI | VNI of the VXLAN tunnel edge | int
Node | Node from which the edge originates | str
Remote_Node | Node at which the edge terminates | str
VTEP_Address | VTEP IP of node from which the edge originates | str
Remote_VTEP_Address | VTEP IP of node at which the ed...
result.head(5)
docs/source/notebooks/vxlan_evpn.ipynb
batfish/pybatfish
apache-2.0
L3 EVPN VNIs. Returns configuration settings of VXLANs. Lists VNI-level network segment settings configured for VXLANs.

Inputs

Name | Description | Type | Optional | Default Value
--- | --- | --- | --- | ---
nodes | Include nodes matching this specifier. | NodeSpec | True |

Invocation
result = bf.q.evpnL3VniProperties().answer().frame()
docs/source/notebooks/vxlan_evpn.ipynb
batfish/pybatfish
apache-2.0
Return Value

Name | Description | Type
--- | --- | ---
Node | Node | str
VRF | VRF | str
VNI | VXLAN Segment ID | int
Route_Distinguisher | Route distinguisher | str
Import_Route_Target | Import route target | str
Export_Route_Target | Export route target | str

Print the first 5 rows of the returned Dataframe
result.head(5)
docs/source/notebooks/vxlan_evpn.ipynb
batfish/pybatfish
apache-2.0
Time-frequency with Stockwell transform in sensor space

This script shows how to compute induced power and intertrial coherence using the Stockwell transform, a.k.a. S-transform.
# Authors: Denis A. Engemann <denis.engemann@gmail.com>
#          Alexandre Gramfort <alexandre.gramfort@telecom-paristech.fr>
#
# License: BSD (3-clause)

import mne
from mne import io
from mne.time_frequency import tfr_stockwell
from mne.datasets import somato

print(__doc__)
0.14/_downloads/plot_stockwell.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Set parameters
data_path = somato.data_path()
raw_fname = data_path + '/MEG/somato/sef_raw_sss.fif'
event_id, tmin, tmax = 1, -1., 3.

# Setup for reading the raw data
raw = io.Raw(raw_fname)
baseline = (None, 0)
events = mne.find_events(raw, stim_channel='STI 014')

# picks MEG gradiometers
picks = mne.pick_types(raw.info, meg='grad...
0.14/_downloads/plot_stockwell.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Calculate power and intertrial coherence
epochs = epochs.pick_channels([epochs.ch_names[82]])  # reduce computation
power, itc = tfr_stockwell(epochs, fmin=6., fmax=30., decim=4, n_jobs=1,
                           width=.3, return_itc=True)
power.plot([0], baseline=(-0.5, 0), mode=None, title='S-transform (power)')
itc.plot([0], baseline=None, mode=None,...
0.14/_downloads/plot_stockwell.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
The effective relative permittivity of the geometry shows a dispersion effect at low frequency, which can be modelled by a wideband Debye model such as the Djordjevic/Svensson implementation of skrf's microstripline media. The value then increases slowly with frequency, which corresponds roughly to the Kirschning and Jansen disp...
from skrf.media import MLine

W = 3.00e-3
H = 1.51e-3
T = 50e-6
L = 0.1
Er0 = 4.5
tand0 = 0.02
f_epr_tand = 1e9
x0 = [Er0, tand0]

def model(x, freq, Er_eff, L, W, H, T, f_epr_tand, Loss_mea):
    ep_r = x[0]
    tand = x[1]
    m = MLine(frequency=freq, z0=50, w=W, h=H, t=T,
              ep_r=ep_r, mu_r=1, rho=1....
doc/source/examples/networktheory/Correlating microstripline model to measurement.ipynb
temmeand/scikit-rf
bsd-3-clause
As a sanity check, the model data are compared with the computed parameters
m = MLine(frequency=MSL100.frequency, z0=50, w=W, h=H, t=T,
          ep_r=Er, mu_r=1, rho=1.712e-8, tand=tand, rough=0.15e-6,
          f_low=1e3, f_high=1e12, f_epr_tand=f_epr_tand,
          diel='djordjevicsvensson', disp='kirschningjansen')
DUT = m.line(L, 'm', embed=True, z0=m.Z0_f)
DUT.name = 'DUT'
Loss_mod = 20 * lo...
doc/source/examples/networktheory/Correlating microstripline model to measurement.ipynb
temmeand/scikit-rf
bsd-3-clause
The phase of the model shows good agreement, while the insertion loss shows reasonable agreement and is small in any case. Connector impedance adjustment by time-domain reflectometry: time-domain step responses of measurement and model are used to adjust the connector model characteristic impedance. The plot...
mod = left ** DUT ** right
MSL100_dc = MSL100.extrapolate_to_dc(kind='linear')
DUT_dc = mod.extrapolate_to_dc(kind='linear')

plt.figure()
plt.suptitle('Left-right and right-left TDR')
plt.subplot(2,1,1)
MSL100_dc.s11.plot_s_time_step(pad=2000, window='hamming', label='Measured L-R')
DUT_dc.s11.plot_s_time_step(pad=20...
doc/source/examples/networktheory/Correlating microstripline model to measurement.ipynb
temmeand/scikit-rf
bsd-3-clause
Data Wrangling: data extraction
with open('LocationHistory.json', 'r') as fh:
    raw = json.loads(fh.read())

# use location_data as an abbreviation for location data
location_data = pd.DataFrame(raw['locations'])
del raw  # free up some memory

# convert to typical units
location_data['latitudeE7'] = location_data['latitudeE7']/float(1e7)
location_d...
output/downloads/notebooks/map_of_flights.ipynb
kpolimis/kpolimis.github.io-src
gpl-3.0
Explore Data: view data and datatypes
location_data.head()
location_data.dtypes
location_data.describe()
output/downloads/notebooks/map_of_flights.ipynb
kpolimis/kpolimis.github.io-src
gpl-3.0
Data manipulation: degrees and radians. We're going to convert the degree-based geo data to radians to calculate distance traveled. I'm going to paraphrase an explanation (source below) about why the degree-to-radians conversion is necessary: degrees are arbitrary because they're based on the sun, and backwards because t...
degrees_to_radians = np.pi/180.0
location_data['phi'] = (90.0 - location_data.latitude) * degrees_to_radians
location_data['theta'] = location_data.longitude * degrees_to_radians

# Compute distance between two GPS points on a unit sphere
location_data['distance'] = np.arccos(
    np.sin(location_data.phi)*np.sin(loca...
output/downloads/notebooks/map_of_flights.ipynb
kpolimis/kpolimis.github.io-src
gpl-3.0
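The conversion and distance formula above can be sketched for two scalar points with a hypothetical helper (the Earth radius constant is an assumption, not from the notebook):

```python
import numpy as np

def great_circle_km(lat1, lon1, lat2, lon2, radius_km=6378.137):
    """Great-circle distance between two (lat, lon) points given in degrees."""
    d2r = np.pi / 180.0
    phi1, phi2 = (90.0 - lat1) * d2r, (90.0 - lat2) * d2r   # colatitudes
    theta1, theta2 = lon1 * d2r, lon2 * d2r
    cos_arc = (np.sin(phi1) * np.sin(phi2) * np.cos(theta1 - theta2)
               + np.cos(phi1) * np.cos(phi2))
    # clip guards against floating-point values just outside [-1, 1]
    return np.arccos(np.clip(cos_arc, -1.0, 1.0)) * radius_km

print(great_circle_km(47.6, -122.3, 47.6, -122.3))  # same point -> ~0 km
```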
calculate speed during trips (in km/hr)
location_data['speed'] = location_data.distance/(location_data.timestamp - location_data.timestamp.shift(-1))*3600 #km/hr
output/downloads/notebooks/map_of_flights.ipynb
kpolimis/kpolimis.github.io-src
gpl-3.0
Flight algorithm: filter flights. Remove flights using conservative selection criteria.
flights = flight_data[(flight_data.speed > 40) & (flight_data.distance > 80)].reset_index()

# Combine instances of flight that are directly adjacent
# Find the indices of flights that are directly adjacent
_f = flights[flights['index'].diff() == 1]
adjacent_flight_groups = np.split(_f, (_f['index'].diff() > 1).nonzer...
output/downloads/notebooks/map_of_flights.ipynb
kpolimis/kpolimis.github.io-src
gpl-3.0
This algorithm worked 100% of the time for me - no false positives or negatives. But the adjacency criteria of the algorithm are fairly brittle. The core assumption is that inter-flight GPS data will be directly adjacent to one another. That's why the initial screening on line 1 of the previous cel...
fig = plt.figure(figsize=(18,12))

# Plotting across the international dateline is tough. One option is to break up flights
# by hemisphere. Otherwise, you'd need to plot using a different projection like 'robin'
# and potentially center on the Int'l Dateline (lon_0=-180)
# flights = flights[(flights.start_lon < 0) & (...
output/downloads/notebooks/map_of_flights.ipynb
kpolimis/kpolimis.github.io-src
gpl-3.0
You can draw entertaining conclusions from the flight visualization. For instance, you can see some popular layover locations, all those lines in/out of Seattle, plus a recent trip to Germany. And Basemap has made it so simple for us - no Shapefiles to import because all map information is included in the Basemap modul...
flights_in_miles = round(flights.distance.sum()*.621371)  # distance column is in km, convert to miles
flights_in_miles
print("{0} miles traveled from {1} to {2}".format(flights_in_miles, earliest_obs, latest_obs))
output/downloads/notebooks/map_of_flights.ipynb
kpolimis/kpolimis.github.io-src
gpl-3.0
Conclusion

You've now got the code to go ahead and reproduce these maps. I'm working on creating functions to automate these visualizations.

Potential future directions:
- Figure out where you usually go on the weekends
- Calculate your fastest commute route
- Measure the amount of time you spend driving vs. walking
- Downloa...
import time
print("last updated: {}".format(time.strftime("%a, %d %b %Y %H:%M", time.localtime())))
output/downloads/notebooks/map_of_flights.ipynb
kpolimis/kpolimis.github.io-src
gpl-3.0
Demo 2: Plotting a candlestick chart for any stock in 11 lines of code
# Choose a start and end date in a slightly different format to before (YYYY/MM/DD)
start = (2015, 10, 2)
end = (2016, 4, 2)
company = "S&P 500"
ticker = "^GSPC"
quotes = mpf.quotes_historical_yahoo_ohlc(ticker, start, end)
print(quotes[:2])

# We use Matplotlib to generate plots
fig, ax = plt.subplots(figsize=(8, 5))
...
giag.ipynb
trsherborne/learn-python
mit
Complete Code for Demo 1
# -*- coding: utf-8 -*-
%matplotlib inline
import numpy as np
import pandas as pd
from pandas_datareader import data as web

# Choose a stock
ticker = 'GOOG'
# Choose a start date in US format MM/DD/YYYY
stock_start = '10/2/2015'
# Choose an end date in US format MM/DD/YYYY
stock_end = '10/2/2016'

# Retrieve the Data...
giag.ipynb
trsherborne/learn-python
mit
Complete Code for Demo 2
# -*- coding: utf-8 -*-
%matplotlib inline
import numpy as np
import matplotlib as mpl
import matplotlib.pyplot as plt
import matplotlib.finance as mpf

start = (2015, 10, 2)
end = (2016, 4, 2)
company = "S&P 500"
ticker = "^GSPC"
quotes = mpf.quotes_historical_yahoo_ohlc(ticker, start, end)
print(quotes[:2])
fig, ax =...
giag.ipynb
trsherborne/learn-python
mit
Now turn on info messages just for the OPF module.
pypsa.opf.logger.setLevel(logging.INFO)
out = network.lopf()
examples/notebooks/logging-demo.ipynb
PyPSA/PyPSA
mit
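The same level-filtering idea can be sketched with a plain standard-library logger (a hypothetical logger name, not pypsa's actual module logger):

```python
import logging

# Hypothetical module logger, mirroring how pypsa.opf.logger's level is adjusted
logger = logging.getLogger("opf_demo")
logger.setLevel(logging.INFO)

print(logger.isEnabledFor(logging.INFO))   # INFO and above pass the filter
print(logger.isEnabledFor(logging.DEBUG))  # DEBUG is suppressed
```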
Now turn on warnings just for the OPF module.
pypsa.opf.logger.setLevel(logging.WARNING)
out = network.lopf()
examples/notebooks/logging-demo.ipynb
PyPSA/PyPSA
mit
Now turn on all messages for the PF module
pypsa.pf.logger.setLevel(logging.DEBUG)
out = network.lpf()
examples/notebooks/logging-demo.ipynb
PyPSA/PyPSA
mit
Now turn off all messages for the PF module again
pypsa.pf.logger.setLevel(logging.ERROR)
out = network.lpf()
examples/notebooks/logging-demo.ipynb
PyPSA/PyPSA
mit
Google
start = datetime.datetime(2010, 1, 1)
end = datetime.datetime(2016, 7, 15)
google_df = data.DataReader("F", 'google', start, end)
google_df.plot()
python-scripts/data_analytics_learn/link_pandas/Ex_Files_Pandas_Data/Exercise Files/02_08/Begin/Remote Data.ipynb
adityaka/misc_scripts
bsd-3-clause
To compute the roots of the characteristic polynomial of the matrix $A$ and the corresponding eigenvectors, we call the function np.linalg.eig(A), which returns the 1D array Lamb, containing the roots of the characteristic polynomial, and the 2D array V, whose column j holds the coordinates of an eigenvector corresponding to the value...
Lamb, V = np.linalg.eig(A)
print('The roots of the characteristic polynomial are\n', Lamb)
print('\n and the corresponding eigenvectors: \n', V.round(2))
print('The eigenvector corresponding to the value', Lamb[3], 'is:\n', V[:,3].round(2))
Networks.ipynb
empet/LinAlgCS
bsd-3-clause
The given matrix is binary, so it can be interpreted as the adjacency matrix of a graph. Being a non-negative matrix associated with a connected graph, the Perron-Frobenius theorem can be applied to it. Let us determine the dominant eigenvalue, that is, the real, strictly positive eigenvalue $\lambda_d$ with the property that $|...
print(np.fabs(Lamb))
Networks.ipynb
empet/LinAlgCS
bsd-3-clause
More compactly, we can write:
print(np.amax(np.fabs(Lamb)))  # np.amax(array) returns the maximum element of a 1D array
Networks.ipynb
empet/LinAlgCS
bsd-3-clause
So the dominant eigenvalue is:
lambD = np.amax(np.fabs(Lamb))  # the computed dominant eigenvalue
Networks.ipynb
empet/LinAlgCS
bsd-3-clause
and its position in the array Lamb is returned by np.argmax(np.fabs(Lamb)):
j = np.argmax(np.fabs(Lamb))
print('The dominant eigenvalue is located at position:', j)
# the corresponding eigenvector:
x = V[:,j]
print('The dominant eigenvalue is:', lambD,
      '\n\n and the dominant eigenvector is\n', V[:,j].round(2))
Networks.ipynb
empet/LinAlgCS
bsd-3-clause
We observe that this vector has all coordinates negative, so -x is the eigenvector with all positive coordinates, in accordance with the Perron-Frobenius theorem. The normalized vector $x$ is $r=x/\sum_{i=0}^{n-1}x[i]$ and represents the rating vector, whose coordinates are the popularity/importance coefficients of the nodes of the ne...
r = x/np.sum(x)
print('The popularity coefficients of the nodes of the network with connectivity matrix ' +
      '$A$ are\n', r.round(2))  # the + sign between two strings concatenates them
Networks.ipynb
empet/LinAlgCS
bsd-3-clause
Let us now rank the nodes, sorting the elements of the vector $r$ in decreasing order and keeping the indices that give the initial position in r of the sorted elements.
ranking = np.argsort(r)[::-1]
# np.argsort sorts the 1D array, rating, in increasing order
# and returns the indices in r of the sorted elements
# To get the index order for a decreasing sort,
# the index array is reversed...
Networks.ipynb
empet/LinAlgCS
bsd-3-clause
So the network node with the highest popularity is node 4, followed by 0, 3, 2, 1. Let us now apply this procedure to undirected and then to directed networks, using the networkx package. Defining a graph in networkx: we import the networkx module as follows:
import networkx as nx
Networks.ipynb
empet/LinAlgCS
bsd-3-clause
The following line defines an empty undirected graph, G (G is an object of the Graph class):
G=nx.Graph()
Networks.ipynb
empet/LinAlgCS
bsd-3-clause
1. Building the graph from the list of nodes and the list of edges. We define the list of nodes, V, and the list of edges, E, and then call the methods add_nodes_from(V) and add_edges_from(E) on the graph G. Individual nodes/edges can be added by calling add_node()/add_edge():
n = 9
V = [i for i in range(n)]
G.add_nodes_from(V)
E = [(0,1), (0,2), (1,3), (1,4), (1,7), (2,5), (2,8), (3,4), (3,5), (4,6), (4,7), (4,8), (5,7)]
G.add_edges_from(E)
G.add_edge(6,8)
Networks.ipynb
empet/LinAlgCS
bsd-3-clause
Once the defining elements have been set, the graph is generated/drawn using the function nx.draw, which relies on functions from the matplotlib graphics library.
%matplotlib inline
# the "%matplotlib inline" command is given to embed the generated figures inline in the notebook
import matplotlib.pyplot as plt  # import the graphics library
nx.draw(G, node_color='c', edge_color='b', with_labels=True)  # by default the graph is drawn ...
Networks.ipynb
empet/LinAlgCS
bsd-3-clause
The relative positioning of the nodes is done according to the so-called spring layout algorithm. There are several other ways to place the nodes in space, but this one is the most convenient for our presentation. We extract the adjacency matrix of the graph:
A = nx.adjacency_matrix(G)  # A is an object of a special class in networkx
# A.todense() defines the adjacency matrix as an object of a numpy class,
# but NOT the `numpy.array` class
print(A.todense())
print(type(A.todense()))
Networks.ipynb
empet/LinAlgCS
bsd-3-clause
To work only with numpy.array, we convert A.todense() (the eigenvalues and eigenvectors of A.todense() can also be determined, but the workflow differs slightly from that of numpy.array):
A = np.array(A.todense())  # read this line as a cast
print(type(A))
Networks.ipynb
empet/LinAlgCS
bsd-3-clause
Let us determine the popularity coefficient of the nodes of this network. Since the associated graph is undirected, the adjacency matrix is symmetric, and therefore all the roots of its characteristic polynomial are certainly real (Lecture 12).
Lamb, V = np.linalg.eig(A)
lamb = np.amax(Lamb)  # the roots being real, the dominant value is the maximum of the eigenvalues
j = np.argmax(Lamb)  # the position in Lamb of the maximum value
print(j)
x = V[:,j]
print('The dominant eigenvalue is:', lamb)
print('The corresponding eigenvector:\n', x.round(3))
Networks.ipynb
empet/LinAlgCS
bsd-3-clause
Let us determine the rating vector associated with the nodes of the network:
s = np.sum(x)
rating = x/s  # the dominant eigenvector, normalized
print('The rating vector of the nodes\n', rating.round(3))
ranking = np.argsort(rating)[::-1]
print(ranking)
Networks.ipynb
empet/LinAlgCS
bsd-3-clause
It thus follows that the node with the highest popularity is node 4. Its popularity coefficient is:
print(rating[ranking[0]])
Networks.ipynb
empet/LinAlgCS
bsd-3-clause
Each node in a network is assigned a degree: the number of nodes it is connected to by an edge (a path of length 1). The function grad=nx.degree(nod) returns the degree of a node, and grad=nx.degree(G) the degrees of all the nodes of the network. In this second case, grad is a dictionary, that is, a data structure...
dictionar = {'grupa1': 35, 'grupa2': 40, 'grupa3': 43, 'grupa4': 45}
print(dictionar)
print(dictionar.keys())
print('In grupa2 there are', dictionar['grupa2'], 'students')
grad = nx.degree(G)
print('The dictionary of node degrees:', grad)
print('The degree of node 4, which has the highest popularity, is:', grad[4])
Networks.ipynb
empet/LinAlgCS
bsd-3-clause
We note that node 4, which has the highest popularity coefficient, also has the highest degree (it is the "most connected" node in the network). 2. Building the undirected graph from its adjacency matrix. If the adjacency matrix $A$ of a graph is given, then the graph is created by the function: G = nx.from_numpy_matr...
Ad=np.array([[0,1,1,1,0,0,0,1], [1,0,1,0,1,1,1,0], [1,1,0,0,0,0,1,1], [1,0,0,0,1,1,1,1], [0,1,0,1,0,1,1,0], [0,1,0,1,1,0,1,0], [0,1,1,1,1,1,0,1], [1,0,1,1,0,0,1,0]], float) Gr=nx.from_numpy_matrix(Ad) print 'Nodurile grafului sun...
Networks.ipynb
empet/LinAlgCS
bsd-3-clause
Popularity of the nodes of a directed network. A directed network (graph) is built in the same way as an undirected one, except that the object is declared of type DiGraph rather than Graph.
H=nx.DiGraph() n=5 Noduri=[k for k in range(n)] Arce=[(0,3), (0,4), (1,2),(1,3), (1,4), (2,3), (4,1), (4,3)] H.add_nodes_from(Noduri) H.add_edges_from(Arce) nx.draw(H, node_color='r', with_labels=True, alpha=0.5)
Networks.ipynb
empet/LinAlgCS
bsd-3-clause
Let us build a directed network from its adjacency matrix and determine the popularity of its nodes:
plt.rcParams['figure.figsize'] = 8, 8  # set the figure dimensions
W = np.array([[0,1,1,1,0,0,0,0],[0,0,1,0,1,1,1,0],[0,0,0,0,0,0,0,1],[0,0,0,0,1,1,0,0], [0,0,0,0,0,0,1,0], [0,0,0,0,1,0,1,0],[0,1,1,1,0,0,0,1], [1,0,0,1,0,0,0,0]], float)
GW = nx.from_numpy_matrix(W, create_using=nx.DiGraph())
print 'Nodurile gr...
Networks.ipynb
empet/LinAlgCS
bsd-3-clause
According to the theory in Lecture 11, the rating vector associated with a directed network is the eigenvector of the dominant eigenvalue of the transposed connectivity matrix:
Lamb, V = np.linalg.eig(W.transpose())  # find the roots of the characteristic polynomial of the matrix W^T
print(Lamb.round(3))
Networks.ipynb
empet/LinAlgCS
bsd-3-clause
Since the adjacency matrix is not symmetric, its characteristic polynomial may also have complex-conjugate roots. The real roots are displayed in complex form as well, $a = a + 0.j$ (in Python the complex unit $i=\sqrt{-1}$ is denoted $j$, as in electronics). We now determine the dominant eigenvalue, that is, the real root...
absLamb = np.abs(Lamb)
j = np.argmax(absLamb)
if not np.isreal(Lamb[j]):  # if the maximum absolute value is not real
    raise ValueError("the matrix A does not satisfy the Perron-Frobenius conditions, or some other cause")
else:
    lamD = np.real(Lamb[j])  # display the real number without 0*j
    print('the dominant eigenvalue is:', lamD)
    ...
Networks.ipynb
empet/LinAlgCS
bsd-3-clause
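The dominant-eigenvalue selection described above can be sketched on a small symmetric example (the triangle graph, whose adjacency spectrum is {2, -1, -1}, so the dominant eigenvalue is 2):

```python
import numpy as np

# Small hypothetical adjacency matrix: the triangle graph K3
W = np.array([[0., 1., 1.],
              [1., 0., 1.],
              [1., 1., 0.]])

Lamb, V = np.linalg.eig(W.transpose())
absLamb = np.abs(Lamb)
j = np.argmax(absLamb)          # position of the largest modulus
if not np.isreal(Lamb[j]):
    raise ValueError("no real dominant eigenvalue found")
lamD = np.real(Lamb[j])
print(lamD)                     # 2.0 for the triangle graph
```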
Project: Determining the popularity of the players of a football team at the World Cup, Brazil 2014. Determine the popularity of the players of a football team in one of the matches played at the Football World Cup, Brazil 2014. The network associated with a team involved in a game has as nodes the player...
Jucatori={ 0: 'Manuel NEUER', 1: 'Benedikt HOEWEDES', 2: 'Mats HUMMELS'}# etc
Networks.ipynb
empet/LinAlgCS
bsd-3-clause
Having this dictionary, once we have computed the ranking vector we print the information as follows: the most popular player (the one who received the most passes during the match) is the player Jucatori[ranking[0]]. i=ranking[0] is the numeric code of the best player, $i \in \{0,1,\ldots,n-1\}$, and J...
from IPython.core.display import HTML

def css_styling():
    styles = open("./custom.css", "r").read()
    return HTML(styles)

css_styling()
Networks.ipynb
empet/LinAlgCS
bsd-3-clause
Now that we have the TensorFlow code working on a subset of the data, we can package the TensorFlow code up as a Python module and train it on Cloud AI Platform. <p> <h2> Train on Cloud AI Platform</h2> <p> Training on Cloud AI Platform requires: <ol> <li> Making the code a Python package <li> Using gcloud to submit th...
%%writefile babyweight/trainer/task.py
import argparse
import json
import os

from . import model

import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()

if __name__ == '__main__':
    parser = argparse.ArgumentParser()
    parser.add_argument(
        '--bucket',
        help = 'GCS path to data. We assume that d...
courses/machine_learning/deepdive/06_structured/labs/5_train.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
Lab Task 2 Address all the TODOs in the following code in babyweight/trainer/model.py with the cell below. This code is similar to the model training code we wrote in Lab 3. After addressing all TODOs, run the cell to write the code to the model.py file.
%%writefile babyweight/trainer/model.py
import shutil
import numpy as np
import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()

tf.logging.set_verbosity(tf.logging.INFO)

BUCKET = None  # set from task.py
PATTERN = 'of'  # gets all files

# Determine CSV, label, and key columns
CSV_COLUMNS = 'weight_pounds,is_male...
courses/machine_learning/deepdive/06_structured/labs/5_train.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
Lab Task 5 Once the code works in standalone mode, you can run it on Cloud AI Platform. Change the parameters to the model (-train_examples for example may not be part of your model) appropriately. Because this is on the entire dataset, it will take a while. The training run took about <b> 2 hours </b> for me. You can...
%%bash OUTDIR=gs://${BUCKET}/babyweight/trained_model JOBNAME=babyweight_$(date -u +%y%m%d_%H%M%S) echo $OUTDIR $REGION $JOBNAME gsutil -m rm -rf $OUTDIR gcloud ai-platform jobs submit training $JOBNAME \ --region=$REGION \ --module-name=trainer.task \ --package-path=$(pwd)/babyweight/trainer \ --job-dir=$OUTDI...
courses/machine_learning/deepdive/06_structured/labs/5_train.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
When I ran it, I used train_examples=20000. When training finished, I filtered in the Stackdriver log on the word "dict" and saw that the last line was: <pre> Saving dict for global step 5714290: average_loss = 1.06473, global_step = 5714290, loss = 34882.4, rmse = 1.03186 </pre> The final RMSE was 1.03 pounds. <h2> Re...
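As a quick sanity check on the logged numbers: RMSE is just the square root of the reported average loss (the mean squared error), so the two figures in the log line should agree:

```python
import math

average_loss = 1.06473          # from the Stackdriver log line above
rmse = math.sqrt(average_loss)  # root mean squared error, in pounds
print(round(rmse, 5))           # matches the logged rmse of 1.03186
```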
%%bash OUTDIR=gs://${BUCKET}/babyweight/trained_model_tuned JOBNAME=babyweight_$(date -u +%y%m%d_%H%M%S) echo $OUTDIR $REGION $JOBNAME gsutil -m rm -rf $OUTDIR gcloud ai-platform jobs submit training $JOBNAME \ --region=$REGION \ --module-name=trainer.task \ --package-path=$(pwd)/babyweight/trainer \ --job-dir=...
courses/machine_learning/deepdive/06_structured/labs/5_train.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
Compute EBTEL Results Run the single- and two-fluid EBTEL models for a variety of inputs. This will be the basis for the rest of our analysis. First, import any needed modules.
import sys import os import subprocess import pickle import numpy as np sys.path.append(os.path.join(os.environ['EXP_DIR'],'ebtelPlusPlus/rsp_toolkit/python')) from xml_io import InputHandler,OutputHandler
notebooks/compute_ebtel_results.ipynb
rice-solar-physics/hot_plasma_single_nanoflares
bsd-2-clause
Setup the base dictionary for all of the runs. We'll read in the base dictionary from the ebtel++ example configuration file.
ih = InputHandler(os.path.join(os.environ['EXP_DIR'],'ebtelPlusPlus','config','ebtel.example.cfg.xml')) config_dict = ih.lookup_vars() config_dict['use_adaptive_solver'] = False config_dict['loop_length'] = 40.0e+8 config_dict['adaptive_solver_error'] = 1e-8 config_dict['calculate_dem'] = False config_dict['total_time...
notebooks/compute_ebtel_results.ipynb
rice-solar-physics/hot_plasma_single_nanoflares
bsd-2-clause
Next, construct a function that will make it easy to run all of the different EBTEL configurations.
def run_and_print(tau,h0,f,flux_opt,oh_inst): #create heating event oh_inst.output_dict['heating']['events'] = [ {'event':{'magnitude':h0,'rise_start':0.0,'rise_end':tau/2.0,'decay_start':tau/2.0,'decay_end':tau}} ] #set heat flux options oh_inst.output_dict['saturation_limit'] = f oh_in...
notebooks/compute_ebtel_results.ipynb
rice-solar-physics/hot_plasma_single_nanoflares
bsd-2-clause
Configure instances of the XML output handler for printing files.
oh = OutputHandler(config_dict['output_filename']+'.xml',config_dict)
notebooks/compute_ebtel_results.ipynb
rice-solar-physics/hot_plasma_single_nanoflares
bsd-2-clause
Finally, run the model for varying pulse duration.
tau_h = [20,40,200,500] tau_h_results = [] for t in tau_h: results = run_and_print(t,20.0/t,1.0,True,oh) results['loop_length'] = config_dict['loop_length'] tau_h_results.append(results)
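Note that setting the magnitude to `20.0/t` keeps the total energy of each heating event fixed: for the triangular profile defined in `run_and_print` above, the pulse area is `h0 * tau / 2 = 10` regardless of duration. A quick check of that arithmetic:

```python
tau_h = [20, 40, 200, 500]

# Triangular event: rises over tau/2, decays over tau/2, peak h0 = 20/tau,
# so the area under the profile is 0.5 * h0 * tau.
energies = [0.5 * (20.0 / tau) * tau for tau in tau_h]
print(energies)  # the same total energy for every pulse duration
```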
notebooks/compute_ebtel_results.ipynb
rice-solar-physics/hot_plasma_single_nanoflares
bsd-2-clause
And then run the models for varying flux-limiter, $f$.
flux_lim = [{'f':1.0,'opt':True},{'f':0.53,'opt':True},{'f':1.0/6.0,'opt':True},{'f':0.1,'opt':True}, {'f':1.0/30.0,'opt':True},{'f':1.0,'opt':False}] flux_lim_results = [] for i in range(len(flux_lim)): results = run_and_print(200.0,0.1,flux_lim[i]['f'],flux_lim[i]['opt'],oh) results['loop_length']...
notebooks/compute_ebtel_results.ipynb
rice-solar-physics/hot_plasma_single_nanoflares
bsd-2-clause
Save both data structures to serialized files.
with open(__dest__[0],'wb') as f: pickle.dump(tau_h_results,f) with open(__dest__[1],'wb') as f: pickle.dump(flux_lim_results,f)
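Reading the results back later is the mirror of the dump above. A minimal round-trip sketch, using a throwaway temporary file in place of the `__dest__` paths and a stand-in for the real results list:

```python
import os
import pickle
import tempfile

tau_h_results = [{'loop_length': 40.0e+8}]  # stand-in for the real results

path = os.path.join(tempfile.mkdtemp(), 'tau_h_results.pickle')
with open(path, 'wb') as f:
    pickle.dump(tau_h_results, f)
with open(path, 'rb') as f:
    restored = pickle.load(f)

print(restored == tau_h_results)  # the round trip preserves the data
```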
notebooks/compute_ebtel_results.ipynb
rice-solar-physics/hot_plasma_single_nanoflares
bsd-2-clause
Run training To help ensure this example runs quickly, we train for only 10000 steps, even though in our paper we used 40000 steps.
! gsutil cp gs://data-driven-discretization-public/training-data/burgers.h5 . %%time ! python data-driven-discretization-1d/pde_superresolution/scripts/run_training.py \ --checkpoint_dir burgers-checkpoints \ --equation burgers \ --hparams resample_factor=16,learning_stops=[5000,10000] \ --input_path burgers.h5
notebooks/time-integration.ipynb
google/data-driven-discretization-1d
apache-2.0
Run evaluation One key parameter here is the "warmup" time cutoff, which we use to ensure that we are only asking the neural network to make predictions on fully developed solutions, after all transients have died out. We used warmup=10 for Burgers, warmup=100 for KS and warmup=50 for KdV.
# Use pre-computed "exact" solution from WENO. # You could also run this yourself using scripts/create_exact_data.py ! gsutil cp gs://data-driven-discretization-public/time-evolution/exact/burgers_weno.nc .
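The warmup cutoff amounts to discarding all samples with time earlier than the warmup value before computing any statistics. A minimal sketch in plain numpy (the variable names and array shapes are illustrative, not taken from the evaluation script):

```python
import numpy as np

warmup = 10.0                         # Burgers warmup, as noted above
time = np.linspace(0.0, 60.0, 121)    # hypothetical solution time axis
solution = np.random.rand(121, 32)    # hypothetical (time, x) field

keep = time >= warmup
time_eval, solution_eval = time[keep], solution[keep]
print(time_eval[0])  # first retained time is at the warmup cutoff
```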
notebooks/time-integration.ipynb
google/data-driven-discretization-1d
apache-2.0
See also ks_spectral.nc and kdv_spectral.nc in the same directory for reference simulations with KS and KdV equations.
import xarray # remove extra samples, so evaluation runs faster reference = xarray.open_dataset('burgers_weno.nc').isel(sample=slice(10)).load() reference.to_netcdf('burgers_weno_10samples.nc') %%time ! python data-driven-discretization-1d/pde_superresolution/scripts/run_evaluation.py \ --checkpoint_dir burgers-check...
notebooks/time-integration.ipynb
google/data-driven-discretization-1d
apache-2.0
Very simple evaluation Evaluations have been saved to burgers-checkpoints/results.nc, but we'll download them from cloud storage instead:
! gsutil cp gs://data-driven-discretization-public/time-evolution/model/burgers_16x_samples.nc . import xarray results = xarray.open_dataset('burgers_16x_samples.nc').load() results reference
notebooks/time-integration.ipynb
google/data-driven-discretization-1d
apache-2.0
An example solution from our reference model, at high resolution:
reference.y[0].sel(time=slice(10, 60)).plot.imshow()
notebooks/time-integration.ipynb
google/data-driven-discretization-1d
apache-2.0
Coarse-grained simulation with our neural network:
results.y[0].plot.imshow()
notebooks/time-integration.ipynb
google/data-driven-discretization-1d
apache-2.0
Difference between the neural network results and coarse-grained reference results:
(results.y.sel(sample=0) - reference.y.sel(sample=0, time=slice(10, 60)).coarsen(x=16).mean() .assign_coords(x=results.x)).plot.imshow()
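The `coarsen(x=16).mean()` call block-averages the high-resolution reference down to the coarse model grid. The same operation in plain numpy, assuming the spatial size is a multiple of the resample factor (the profile here is made up):

```python
import numpy as np

factor = 16
y_fine = np.arange(64, dtype=float)  # hypothetical high-resolution profile

# Group consecutive blocks of 16 fine-grid points and average each block.
y_coarse = y_fine.reshape(-1, factor).mean(axis=1)
print(y_coarse)  # one value per block of 16 fine-grid points
```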
notebooks/time-integration.ipynb
google/data-driven-discretization-1d
apache-2.0
We'll just check that the pulse area is what we want.
print('The input pulse area is {0:.3f}.'.format( np.trapz(mbs.Omegas_zt[0,0,:].real, mbs.tlist)/np.pi))
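For a sech pulse $\Omega(t) = (n/w)\,\mathrm{sech}(t/w)$ the time integral is $n\pi$, so the printed value should be close to 6 for the $6\pi$ pulse here. A quick numerical check of that identity (the width `w` and time grid below are arbitrary choices, not the notebook's actual parameters):

```python
import numpy as np

n, w = 6, 1.0                          # pulse area n*pi, pulse width w
t = np.linspace(-20.0, 20.0, 2001)
omega = (n / w) / np.cosh(t / w)       # sech(t/w) = 1/cosh(t/w)
area = np.trapz(omega, t) / np.pi
print('{0:.3f}'.format(area))          # close to 6.000
```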
docs/examples/mbs-two-sech-6pi.ipynb
tommyogden/maxwellbloch
mit
Solve the Problem
Omegas_zt, states_zt = mbs.mbsolve(recalc=False)
docs/examples/mbs-two-sech-6pi.ipynb
tommyogden/maxwellbloch
mit
Plot Output
import matplotlib.pyplot as plt %matplotlib inline import seaborn as sns import numpy as np sns.set_style('darkgrid') fig = plt.figure(1, figsize=(16, 6)) ax = fig.add_subplot(111) cmap_range = np.linspace(0.0, 4.0, 11) cf = ax.contourf(mbs.tlist, mbs.zlist, np.abs(mbs.Omegas_zt[0]/(2*np.pi)), ...
docs/examples/mbs-two-sech-6pi.ipynb
tommyogden/maxwellbloch
mit
Analysis The $6 \pi$ sech pulse breaks up into three $2 \pi$ pulses, each travelling at a speed determined by its width. Movie
# C = 0.1 # speed of light # Y_MIN = 0.0 # Y-axis min # Y_MAX = 4.0 # y-axis max # ZOOM = 2 # level of linear interpolation # FPS = 60 # frames per second # ATOMS_ALPHA = 0.2 # Atom indicator transparency # FNAME = "images/mb-solve-two-sech-6pi" # FNAME_JSON = FNAME + '.json' # with open(FNAME_JSON, "w") as f: # f...
docs/examples/mbs-two-sech-6pi.ipynb
tommyogden/maxwellbloch
mit
Import the Cem class, and instantiate it. In Python, a model with a BMI will have no arguments for its constructor. Note that although the class has been instantiated, it's not yet ready to be run. We'll get to that later!
import pymt.models cem = pymt.models.Cem()
docs/demos/cem.ipynb
csdms/coupling
mit
Even though we can't run our waves model yet, we can still get some information about it. Just don't try to run it. One thing we can do with our model is get the names of its input and output variables.
cem.output_var_names cem.input_var_names
docs/demos/cem.ipynb
csdms/coupling
mit
OK. We're finally ready to run the model. Well, not quite. First we initialize the model with the BMI initialize method. Normally we would pass it the name of an input file; for this example we use the setup method to generate a default configuration and pass its result to initialize.
args = cem.setup(number_of_rows=100, number_of_cols=200, grid_spacing=200.) cem.initialize(*args)
docs/demos/cem.ipynb
csdms/coupling
mit
With the grid_id, we can now get information about the grid. For instance, the number of dimensions and the type of grid (structured, unstructured, etc.). This grid happens to be uniform rectilinear. If you were to look at the "grid" types for wave height and period, you would see that they aren't on grids at all but in...
grid_type = cem.get_grid_type(grid_id) grid_rank = cem.get_grid_ndim(grid_id) print('Type of grid: %s (%dD)' % (grid_type, grid_rank))
docs/demos/cem.ipynb
csdms/coupling
mit
Because this grid is uniform rectilinear, it is described by a set of BMI methods that are only available for grids of this type. These methods include: * get_grid_shape * get_grid_spacing * get_grid_origin
spacing = np.empty((grid_rank, ), dtype=float) shape = cem.get_grid_shape(grid_id) cem.get_grid_spacing(grid_id, out=spacing) print('The grid has %d rows and %d columns' % (shape[0], shape[1])) print('The spacing between rows is %f and between columns is %f' % (spacing[0], spacing[1]))
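Given the shape, spacing, and origin, the cell coordinates of a uniform rectilinear grid follow directly. A sketch with made-up numbers standing in for the values the BMI calls above would return (origin is assumed to be zero here):

```python
import numpy as np

shape = (100, 200)         # rows, columns (as reported above)
spacing = (200.0, 200.0)   # row and column spacing
origin = (0.0, 0.0)        # assumed coordinates of the first row/column

# Coordinates of row centers (y) and column centers (x).
y = origin[0] + spacing[0] * np.arange(shape[0])
x = origin[1] + spacing[1] * np.arange(shape[1])
print(y[-1], x[-1])  # extent of the grid along each axis
```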
docs/demos/cem.ipynb
csdms/coupling
mit