Did we get all the rows and columns? Take a look at your Excel file, note the number of rows and columns, and compare with the numbers you get using df.shape.
df.shape
notebooks/02 Vasking av data.ipynb
BergensTidende/dataskup-2017-notebooks
mit
The first number is rows, the second is columns. So we have 61,757 rows, and that matches the row count in the Excel file. So far so good! Check that the data look OK by inspecting the top and the bottom.
df.head(n=3)
df.tail(n=3)
notebooks/02 Vasking av data.ipynb
BergensTidende/dataskup-2017-notebooks
mit
Which columns do we have, and what data type does each hold?
df.dtypes
notebooks/02 Vasking av data.ipynb
BergensTidende/dataskup-2017-notebooks
mit
Explanation: int64 means integer, object usually means text, and float64 means a number with decimals. Remove columns you don't need; it makes the data easier to work with. Here we can drop the Lat and Lon columns, which hold map coordinates.
df = df.drop(['Lat', 'Lon'], axis='columns')
notebooks/02 Vasking av data.ipynb
BergensTidende/dataskup-2017-notebooks
mit
Explanation: here we create a new DataFrame with the same name, dropping the columns Lat and Lon. axis='columns' means that it is columns we are dropping, not rows. Rename columns: sometimes columns have long, odd names, so let's shorten them. We create an object that lists which columns we want to rename ...
df = df.rename(columns={'Voksne hunnlus': 'hunnlus', 'Sjøtemperatur': 'sjotemp'})
notebooks/02 Vasking av data.ipynb
BergensTidende/dataskup-2017-notebooks
mit
Do we have missing data? Are there rows without data, or columns without data? Let's first look at the first 5 rows.
df.head(n=5)
notebooks/02 Vasking av data.ipynb
BergensTidende/dataskup-2017-notebooks
mit
Already here we see the recurring NaN, which stands for Not a Number, meaning this cell has no numeric value (unlike the others in the column). Let's see how many rows are missing a value (isnull) in the hunnlus field.
df['hunnlus'].isnull().sum()
notebooks/02 Vasking av data.ipynb
BergensTidende/dataskup-2017-notebooks
mit
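The same isnull/sum pattern works on the whole DataFrame at once. A minimal, self-contained sketch on a made-up stand-in table (the column names mirror the notebook, the numbers are invented):

```python
import pandas as pd
import numpy as np

# Hypothetical stand-in for the lice DataFrame
df = pd.DataFrame({
    'hunnlus': [0.2, np.nan, 1.5, np.nan],
    'sjotemp': [8.1, 7.9, np.nan, 8.4],
})

# isnull() marks missing cells; sum() counts them per column
missing_per_column = df.isnull().sum()
print(missing_per_column)
```

This gives one line per column, which is often a faster overview than checking fields one by one.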
Oops, quite a few without a value. They probably didn't report lice counts that week; we'll come back to that later. Fill in missing data: let's look at a new example, an Excel file with missing data in many cells.
df2 = pd.read_excel('data/bord4_20171028_kommunedummy.xlsx')
df2
notebooks/02 Vasking av data.ipynb
BergensTidende/dataskup-2017-notebooks
mit
Here we see a typical pattern where only the first row of each county has a value in the Fylke column. To process these data in pandas, every row needs a value, so let's fill the empty cells downwards (fillna), a so-called forward fill, or ffill.
df2['Fylke'] = df2['Fylke'].fillna(method='ffill')
df2
notebooks/02 Vasking av data.ipynb
BergensTidende/dataskup-2017-notebooks
mit
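The forward fill can be tried on a tiny made-up table; a sketch assuming the same Fylke column layout (the county and municipality names here are invented):

```python
import pandas as pd
import numpy as np

# Hypothetical mini-version of the county table
df2 = pd.DataFrame({
    'Fylke': ['Hordaland', np.nan, np.nan, 'Rogaland', np.nan],
    'Kommune': ['Bergen', 'Os', 'Fjell', 'Stavanger', 'Sandnes'],
})

# Forward fill: each empty cell inherits the value above it
df2['Fylke'] = df2['Fylke'].ffill()
print(df2)
```

`.ffill()` is equivalent to `fillna(method='ffill')` and is the spelling recommended by newer pandas versions.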
Normalize the data. Were your data punched in by humans? Then there are guaranteed to be words spelled ALMOST alike, and they will mess up your analysis. Here we use a dataset from UiB's citizen panel (medborgerpanelet) recording voters' attitudes towards other parties. The dataset is assembled from several Excel files, created ...
df = pd.read_csv('data/uib_medborgerpanelet_20170601_partiomparti.csv')
df.head()
notebooks/02 Vasking av data.ipynb
BergensTidende/dataskup-2017-notebooks
mit
It is often useful to see which unique values a column contains. You can do that like this.
df['omtaler_parti'].value_counts().to_frame().sort_index()
notebooks/02 Vasking av data.ipynb
BergensTidende/dataskup-2017-notebooks
mit
Quite a mix! Note that the party names are written in several different ways. That means trouble if we want to group on them later. We need to normalize these values, i.e. settle on one way of writing each party name. One way to do this is to build a from-to mapping of values.
partimapping = {
    'FRP': 'Frp',
    'FrP': 'Frp',
    'AP': 'Ap',
    'Høyre': 'H',
    'SP': 'Sp',
    'Venstre': 'V',
    'KRF': 'KrF',
}
notebooks/02 Vasking av data.ipynb
BergensTidende/dataskup-2017-notebooks
mit
Then we tell pandas to replace the contents of the columns containing party names with the correct form of each name.
df = df.replace({
    'parti_valgt_2013': partimapping,
    'omtaler_parti': partimapping,
})
notebooks/02 Vasking av data.ipynb
BergensTidende/dataskup-2017-notebooks
mit
Then we check the unique values once more.
df['omtaler_parti'].value_counts().to_frame().sort_index()
notebooks/02 Vasking av data.ipynb
BergensTidende/dataskup-2017-notebooks
mit
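The whole normalize-then-count round trip can be checked on a toy column; the mapping below is a shortened copy of partimapping and the rows are invented:

```python
import pandas as pd

# Shortened mapping, messy spellings in, one canonical spelling out
partimapping = {'FRP': 'Frp', 'FrP': 'Frp', 'AP': 'Ap'}

df = pd.DataFrame({'omtaler_parti': ['FRP', 'FrP', 'Frp', 'AP']})
df = df.replace({'omtaler_parti': partimapping})

counts = df['omtaler_parti'].value_counts()
print(counts)
```

After the replace, the three spellings of Frp collapse into one group, which is exactly what the later grouping step needs.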
Making an HTTP Connection
import requests

req = requests.get('http://google.com')
print(req.text)

def connect(prot='http', **q):
    """
    Makes a connection with CAPE. Requires that at least one query is made.

    Parameters
    ----------
    :params prot: Either HTTP or HTTPS
    :params q: Query Dictionary

    Returns
    ----...
notebooks/cape.ipynb
jjangsangy/GraphUCSD
apache-2.0
Running the Code

`**q` is a variable set of keyword arguments that will be applied to the URL:

```python
connect(department="CHEM")
```

will make a request to http://cape.ucsd.edu/responses/Results.aspx?department=CHEM and return the result.
# URL: http://cape.ucsd.edu/responses/Results.aspx?department=CHEM
req = connect(department="CHEM")
print(req.text)
notebooks/cape.ipynb
jjangsangy/GraphUCSD
apache-2.0
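The way `**q` becomes a query string can be sketched without hitting the network, using requests' own URL preparation. `build_cape_url` is a hypothetical helper for illustration, not the notebook's truncated `connect()`:

```python
import requests

# Hypothetical helper: turn keyword arguments into the CAPE query URL
def build_cape_url(**q):
    base = 'http://cape.ucsd.edu/responses/Results.aspx'
    # Request(...).prepare() encodes params into the final URL without sending anything
    return requests.Request('GET', base, params=q).prepare().url

print(build_cape_url(department='CHEM'))
```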
Cleaning up the result using BeautifulSoup4. BeautifulSoup is an HTML parser. Let's grab all the class listings within the HTML:

<option value="">Select a Department</option>
<option value="ANTH">ANTH - Anthropology</option>
<option value="BENG">BENG - Bioengineering</option>
...
from bs4 import BeautifulSoup

# Grab the HTML
req = connect(department="CHEM")

# Shove it into BeautifulSoup
soup = BeautifulSoup(req.text, 'lxml')

# Find all option tags; returns a list of options
options = soup.find_all('option')
options

# Grab the `value=` attribute
for option in options:
    print(option.at...
notebooks/cape.ipynb
jjangsangy/GraphUCSD
apache-2.0
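The option-scraping step can be sketched against a static HTML snippet, so it runs without a live request; the stdlib 'html.parser' stands in for lxml here:

```python
from bs4 import BeautifulSoup

# Static snippet shaped like the department dropdown (contents invented)
html = '''
<select>
  <option value="">Select a Department</option>
  <option value="ANTH">ANTH - Anthropology</option>
  <option value="BENG">BENG - Bioengineering</option>
</select>
'''

soup = BeautifulSoup(html, 'html.parser')
# Skip the empty "Select a Department" placeholder value
codes = [opt['value'] for opt in soup.find_all('option') if opt['value']]
print(codes)
```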
Now grab all the departments. Kind of...
def departments():
    """
    Gets a mapping of all the departments by key.
    """
    logging.info('Grabbing a list of Departments')
    prototype = connect("http", department="CHEM")
    soup = BeautifulSoup(prototype.content, 'lxml')
    options = list(reversed(soup.find_all('option')))
    options.pop()
    ...
notebooks/cape.ipynb
jjangsangy/GraphUCSD
apache-2.0
Data Munging
def create_table(courses):
    """
    Generates a pandas DataFrame by querying the UCSD CAPE website.

    Parameters
    ==========
    :params courses: Either Course or Path to HTML File

    Returns
    =======
    :returns df: Query Results
    :rtype: pandas.DataFrame
    """
    header = [
        'inst...
notebooks/cape.ipynb
jjangsangy/GraphUCSD
apache-2.0
Make It Go Fast with Multithreading
def main(threads=6):
    """
    Get all departments
    """
    logging.info('Program is Starting')

    # Get departments
    deps = departments()
    keys = [department.strip() for department in deps.keys()]

    # Run the scraper concurrently using a ThreadPool
    pool = ThreadPool(threads)
    logging.info('Initiali...
notebooks/cape.ipynb
jjangsangy/GraphUCSD
apache-2.0
Target Configuration
# Set up a target configuration
my_target_conf = {
    # Target platform and board
    "platform" : 'linux',
    "board"    : 'aboard',

    # Target board IP/MAC address
    "host" : '192.168.0.1',

    # Login credentials
    "username" : 'root',
    "password" : 'test0000',
}
ipynb/tutorial/04_ExecutorUsage.ipynb
JaviMerino/lisa
apache-2.0
Tests Configuration
my_tests_conf = {
    # Folder where all the results will be collected
    "results_dir" : "ExecutorExample",

    # Platform configurations to test
    "confs" : [
        {
            "tag" : "base",
            "flags" : "ftrace",  # Enable FTrace events
            "sched_features" : ...
ipynb/tutorial/04_ExecutorUsage.ipynb
JaviMerino/lisa
apache-2.0
Tests execution
from executor import Executor

executor = Executor(my_target_conf, my_tests_conf)
executor.run()

!tree {executor.te.res_dir}
ipynb/tutorial/04_ExecutorUsage.ipynb
JaviMerino/lisa
apache-2.0
Exercise 2: json. A first attempt.
import jsonpickle

obj = dict(a=[50, "r"], gg=(5, 't'))
frozen = jsonpickle.encode(obj)
frozen
_doc/notebooks/td2a/td2a_correction_session_2E.ipynb
sdpython/ensae_teaching_cs
mit
This module is equivalent to the json module for the standard Python types (lists, dictionaries, numbers, ...), but the json module does not work on DataFrames.
frozen = jsonpickle.encode(df)
len(frozen), type(frozen), frozen[:55]
_doc/notebooks/td2a/td2a_correction_session_2E.ipynb
sdpython/ensae_teaching_cs
mit
The to_json method would also give a satisfactory result, but it cannot be applied to a machine learning model produced by scikit-learn.
def to_json(obj, filename):
    frozen = jsonpickle.encode(obj)
    with open(filename, "w", encoding="utf-8") as f:
        f.write(frozen)

def read_json(filename):
    with open(filename, "r", encoding="utf-8") as f:
        enc = f.read()
    return jsonpickle.decode(enc)

to_json(df, "df_text.json")

try:
    ...
_doc/notebooks/td2a/td2a_correction_session_2E.ipynb
sdpython/ensae_teaching_cs
mit
Apparently this does not work on DataFrames; one would have to take inspiration from the numpyson module. json + scikit-learn: read issue 147 to understand the point of the next two lines.
import jsonpickle.ext.numpy as jsonpickle_numpy
jsonpickle_numpy.register_handlers()

from sklearn import datasets
iris = datasets.load_iris()
X = iris.data[:, :2]  # we only take the first two features
y = iris.target

from sklearn.linear_model import LogisticRegression
clf = LogisticRegression()
clf.fit(X, y)
clf.pr...
_doc/notebooks/td2a/td2a_correction_session_2E.ipynb
sdpython/ensae_teaching_cs
mit
So we try another way. If the previous code does not work and the following does, it is a bug in jsonpickle.
class EncapsulateLogisticRegression:
    def __init__(self, obj):
        self.obj = obj
    def __getstate__(self):
        return {k: v for k, v in sorted(self.obj.__getstate__().items())}
    def __setstate__(self, data):
        self.obj = LogisticRegression()
        self.obj.__setstate__(data)

enc = Enca...
_doc/notebooks/td2a/td2a_correction_session_2E.ipynb
sdpython/ensae_teaching_cs
mit
fit_transform: 1) fits the model and learns the vocabulary; 2) transforms the data into feature vectors.
# Using only the "Text Feed" column to build the features
features = vector_data.fit_transform(anomaly_data.TextFeed.tolist())

# Converting the data into an array
features = features.toarray()
features.shape

# Printing the words in the vocabulary
vocab = vector_data.get_feature_names()
print(vocab)

# Sum up the counts...
AnomaliesTwitterText/anomalies_in_tweets.ipynb
manojkumar-github/NLP-TextAnalytics
mit
Analytic I

Within the classic PowerShell log, event ID 400 indicates when a new PowerShell host process has started. Excluding PowerShell.exe is a good way to find alternate PowerShell hosts.

| Data source | Event Provider | Relationship | Event |
|:------------|:---------------|--------------|-------|
| Powershell | Wi...
df = spark.sql(
'''
SELECT `@timestamp`, Hostname, Channel
FROM sdTable
WHERE (Channel = "Microsoft-Windows-PowerShell/Operational"
        OR Channel = "Windows PowerShell")
    AND (EventID = 400 OR EventID = 4103)
    AND NOT Message LIKE "%Host Application%powershell%"
'''
)
df.show(10, False)
docs/notebooks/windows/02_execution/WIN-190610201010.ipynb
VVard0g/ThreatHunter-Playbook
mit
Analytic II

Looking for processes loading a specific PowerShell DLL is a very effective way to document the use of PowerShell in your environment.

| Data source | Event Provider | Relationship | Event |
|:------------|:---------------|--------------|-------|
| Module | Microsoft-Windows-Sysmon/Operational | Process load...
df = spark.sql(
'''
SELECT `@timestamp`, Hostname, Image, Description
FROM sdTable
WHERE Channel = "Microsoft-Windows-Sysmon/Operational"
    AND EventID = 7
    AND (lower(Description) = "system.management.automation"
        OR lower(ImageLoaded) LIKE "%system.management.automation%")
    AND NOT Image LIKE "%powershell.exe"...
docs/notebooks/windows/02_execution/WIN-190610201010.ipynb
VVard0g/ThreatHunter-Playbook
mit
Analytic III

Monitoring for PSHost* pipes is another interesting way to find other alternate PowerShell hosts in your environment.

| Data source | Event Provider | Relationship | Event |
|:------------|:---------------|--------------|-------|
| Named pipe | Microsoft-Windows-Sysmon/Operational | Process created Pipe | ...
df = spark.sql(
'''
SELECT `@timestamp`, Hostname, Image, PipeName
FROM sdTable
WHERE Channel = "Microsoft-Windows-Sysmon/Operational"
    AND EventID = 17
    AND lower(PipeName) LIKE "\\\pshost%"
    AND NOT Image LIKE "%powershell.exe"
'''
)
df.show(10, False)
docs/notebooks/windows/02_execution/WIN-190610201010.ipynb
VVard0g/ThreatHunter-Playbook
mit
Reference for LaTeX commands in MathJax:
http://www.onemathematicalcat.org/MathJaxDocumentation/TeXSyntax.htm
http://oeis.org/wiki/List_of_LaTeX_mathematical_symbols
import sympy

# define symbol
x = sympy.symbols('x')
print(type(x))
x

# define function
f = x**2 + 4*x
f

# differentiation
sympy.diff(f)

# simplify function
sympy.simplify(f)

# solve equation
from sympy import solve
solve(f)

# factorize
from sympy import factor
sympy.factor(f)

# partial differentiation
x, y = sympy.symbols('...
scripts/[HYStudy 14th] SymPy, Matplotlib 1.ipynb
Lattecom/HYStudy
mit
Draw function graph
import numpy as np
import matplotlib.pyplot as plt

# draw a 3rd-degree function
def f2(x):
    return x**3 + 2*x**2 - 20

x = np.linspace(-21, 21, 500)
y = f2(x)
plt.plot(x, y)
plt.show()
scripts/[HYStudy 14th] SymPy, Matplotlib 1.ipynb
Lattecom/HYStudy
mit
Gradient vector, quiver & contour plot
import numpy as np
import matplotlib as mpl
import matplotlib.pylab as plt

# function definition
def f(x, y):
    return 3*x**2 + 4*x*y + 4*y**2 - 50*x - 20*y + 100

# coordinate range
xx = np.linspace(-11, 16, 500)
yy = np.linspace(-11, 16, 500)

# make coordinate points
X, Y = np.meshgrid(xx, yy)

# dependent variabl...
scripts/[HYStudy 14th] SymPy, Matplotlib 1.ipynb
Lattecom/HYStudy
mit
2. Read data The data are read from numpy npy files and wrapped as Datasets. Features (vertices) are normalized to have unit variance.
dss_train = []
dss_test = []
subjects = ['rid000005', 'rid000011', 'rid000014']
for subj in subjects:
    ds = Dataset(np.load('raiders/{subj}_run00_lh.npy'.format(subj=subj)))
    ds.fa['node_indices'] = np.arange(ds.shape[1], dtype=int)
    zscore(ds, chunks_attr=None)
    dss_train.append(ds)
    ds = Dataset(np.lo...
Tutorials/hyperalignment/hyperalignment_tutorial.ipynb
Summer-MIND/mind_2017
mit
3. Create SearchlightHyperalignment instance The QueryEngine is used to find voxels/vertices within a searchlight. This SurfaceQueryEngine uses a searchlight radius of 5 mm based on the fsaverage surface.
sl_radius = 5.0
qe = SurfaceQueryEngine(read_surface('fsaverage.lh.surf.gii'), radius=sl_radius)
hyper = SearchlightHyperalignment(
    queryengine=qe,
    compute_recon=False,  # We don't need to project back from common space to subject space
    nproc=1,  # Number of processes to use. Change "Docker - Preferences - A...
Tutorials/hyperalignment/hyperalignment_tutorial.ipynb
Summer-MIND/mind_2017
mit
4. Create common template space with training data This step may take a long time. In my case it's 10 minutes with nproc=1.
# mappers = hyper(dss_train)
# h5save('mappers.hdf5.gz', mappers, compression=9)
mappers = h5load('mappers.hdf5.gz')  # load pre-computed mappers
Tutorials/hyperalignment/hyperalignment_tutorial.ipynb
Summer-MIND/mind_2017
mit
5. Project testing data to the common space
dss_aligned = [mapper.forward(ds) for ds, mapper in zip(dss_test, mappers)]
_ = [zscore(ds, chunks_attr=None) for ds in dss_aligned]
Tutorials/hyperalignment/hyperalignment_tutorial.ipynb
Summer-MIND/mind_2017
mit
6. Benchmark inter-subject correlations
def compute_average_similarity(dss, metric='correlation'):
    """
    Returns
    =======
    sim : ndarray
        A 1-D array with n_features elements; each element is the average
        pairwise correlation similarity on the corresponding feature.
    """
    n_features = dss[0].shape[1]
    sim = np.zeros((n_feat...
Tutorials/hyperalignment/hyperalignment_tutorial.ipynb
Summer-MIND/mind_2017
mit
7. Benchmark movie segment classifications
def movie_segment_classification_no_overlap(dss, window_size=6, dist_metric='correlation'):
    """
    Parameters
    ==========
    dss : list of ndarray or Datasets
    window_size : int, optional
    dist_metric : str, optional

    Returns
    =======
    cv_results : ndarray
        An n_subjects x n_segments boo...
Tutorials/hyperalignment/hyperalignment_tutorial.ipynb
Summer-MIND/mind_2017
mit
<h3> Simulate some time-series data </h3> Essentially a set of sinusoids with random amplitudes and frequencies.
import tensorflow as tf
print(tf.__version__)
import numpy as np
import seaborn as sns

def create_time_series():
    freq = (np.random.random()*0.5) + 0.1  # 0.1 to 0.6
    ampl = np.random.random() + 0.5  # 0.5 to 1.5
    noise = [np.random.random()*0.3 for i in range(SEQ_LEN)]  # -0.3 to +0.3 uniformly distributed
    ...
courses/machine_learning/deepdive/09_sequence_keras/sinewaves.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
<h3> Train model locally </h3> Make sure the code works as intended.
%%bash
DATADIR=$(pwd)/data/sines
OUTDIR=$(pwd)/trained/sines
rm -rf $OUTDIR
gcloud ml-engine local train \
    --module-name=sinemodel.task \
    --package-path=${PWD}/sinemodel \
    -- \
    --train_data_path="${DATADIR}/train-1.csv" \
    --eval_data_path="${DATADIR}/valid-1.csv" \
    --output_dir=${OUTDIR} \
    ...
courses/machine_learning/deepdive/09_sequence_keras/sinewaves.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
<h3> Cloud ML Engine </h3> Now to train on Cloud ML Engine with more data.
import shutil
shutil.rmtree(path="data/sines", ignore_errors=True)
os.makedirs("data/sines/")
np.random.seed(1)  # makes data generation reproducible
for i in range(0, 10):
    to_csv("data/sines/train-{}.csv".format(i), 1000)  # 1000 sequences
    to_csv("data/sines/valid-{}.csv".format(i), 250)

%%bash
gsutil -m rm...
courses/machine_learning/deepdive/09_sequence_keras/sinewaves.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
Exercise: write code for one more iteration with these same parameters and display the result.
x3 =  # Write the code for your calculation here

from pruebas_2 import prueba_2_1
prueba_2_1(x0, x1, x2, x3, _)
Practicas/.ipynb_checkpoints/Practica 2 - Solucion de ecuaciones diferenciales-checkpoint.ipynb
robblack007/clase-dinamica-robot
mit
Wait... what is going on? It turns out this $\Delta t$ is too large; let's try with 20 iterations:

$$
\begin{align}
\Delta t &= 0.5 \\
x(0) &= 1
\end{align}
$$
x0 = 1
n = 20
Δt = 10/n
F = lambda x: -x

x1 = x0 + F(x0)*Δt
x1
x2 = x1 + F(x1)*Δt
x2
x3 = x2 + F(x2)*Δt
x3
Practicas/.ipynb_checkpoints/Practica 2 - Solucion de ecuaciones diferenciales-checkpoint.ipynb
robblack007/clase-dinamica-robot
mit
This is going to get tedious. Better to tell Python what to do and let it work without bothering us until it's done; we can use a for loop and a list to store all the values of the trajectory:
xs = [x0]
for t in range(20):
    xs.append(xs[-1] + F(xs[-1])*Δt)
xs
Practicas/.ipynb_checkpoints/Practica 2 - Solucion de ecuaciones diferenciales-checkpoint.ipynb
robblack007/clase-dinamica-robot
mit
Now that we have these values, we can plot the behavior of this system. First we import the matplotlib library:
%matplotlib inline
from matplotlib.pyplot import plot
Practicas/.ipynb_checkpoints/Practica 2 - Solucion de ecuaciones diferenciales-checkpoint.ipynb
robblack007/clase-dinamica-robot
mit
We call the plot function:
plot(xs);
Practicas/.ipynb_checkpoints/Practica 2 - Solucion de ecuaciones diferenciales-checkpoint.ipynb
robblack007/clase-dinamica-robot
mit
However, because the integration step we used is too large, the solution is quite inaccurate. We can see this by plotting it against what we know to be the true solution of our problem:
from numpy import linspace, exp

ts = linspace(0, 10, 20)
plot(xs)
plot(exp(-ts));
Practicas/.ipynb_checkpoints/Practica 2 - Solucion de ecuaciones diferenciales-checkpoint.ipynb
robblack007/clase-dinamica-robot
mit
If we now use a very large number of pieces, we can improve our approximation:
xs = [x0]
n = 100
Δt = 10/n
for t in range(100):
    xs.append(xs[-1] + F(xs[-1])*Δt)

ts = linspace(0, 10, 100)
plot(xs)
plot(exp(-ts));
Practicas/.ipynb_checkpoints/Practica 2 - Solucion de ecuaciones diferenciales-checkpoint.ipynb
robblack007/clase-dinamica-robot
mit
odeint This method works so well that it already comes implemented in the scipy library, so we only have to import it to use the method. However, we must be careful when declaring the function $F(x, t)$: the first argument of the function must refer to the state of the ...
from scipy.integrate import odeint

F = lambda x, t: -x
x0 = 1
ts = linspace(0, 10, 100)
xs = odeint(func=F, y0=x0, t=ts)
plot(ts, xs);
Practicas/.ipynb_checkpoints/Practica 2 - Solucion de ecuaciones diferenciales-checkpoint.ipynb
robblack007/clase-dinamica-robot
mit
Exercise: plot the behavior of the following differential equation. $$ \dot{x} = x^2 - 5 x + \frac{1}{2} \sin{x} - 2 $$ Note: make sure to import every library you may need.
ts =  # Write here the code that generates an array of equally spaced points (linspace)
x0 =  # Write the value of the initial condition

# Import the library functions you need here

G = lambda x, t:  # Write here the code describing the calculation the function must perform

xs =  # Write here the ...
Practicas/.ipynb_checkpoints/Practica 2 - Solucion de ecuaciones diferenciales-checkpoint.ipynb
robblack007/clase-dinamica-robot
mit
Sympy Finally, there are times when we can even obtain an analytic solution of a differential equation, as long as it satisfies certain simplicity conditions.
from sympy import var, Function, dsolve
from sympy.physics.mechanics import mlatex, mechanics_printing
mechanics_printing()

var("t")
x = Function("x")(t)
x, x.diff(t)

solucion = dsolve(x.diff(t) + x, x)
solucion
Practicas/.ipynb_checkpoints/Practica 2 - Solucion de ecuaciones diferenciales-checkpoint.ipynb
robblack007/clase-dinamica-robot
mit
Exercise: implement the code needed to obtain the analytic solution of the following differential equation: $$ \dot{x} = x^2 - 5x $$
# Declare the independent variable of the differential equation
var("")

# Declare the dependent variable of the differential equation
 = Function("")()

# Write the differential equation in the required format (Equation = 0)
# inside the dsolve function
sol = dsolve()
sol

from pruebas_2 import prueba_2_3
prueba_2_3...
Practicas/.ipynb_checkpoints/Practica 2 - Solucion de ecuaciones diferenciales-checkpoint.ipynb
robblack007/clase-dinamica-robot
mit
Solving higher-order differential equations If we now want the behavior of a higher-order differential equation, such as $$ \ddot{x} = -\dot{x} - x + 1 $$ we have to convert it into a first-order differential equation in order to solve it numerically, so we will need to conve...
from numpy import matrix, array

def F(X, t):
    A = matrix([[0, 1], [-1, -1]])
    B = matrix([[0], [1]])
    return array((A*matrix(X).T + B).T).tolist()[0]

ts = linspace(0, 10, 100)
xs = odeint(func=F, y0=[0, 0], t=ts)
plot(xs);
Practicas/.ipynb_checkpoints/Practica 2 - Solucion de ecuaciones diferenciales-checkpoint.ipynb
robblack007/clase-dinamica-robot
mit
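The state-space form above can also be stepped with the plain Euler loop from earlier in this practical; a numpy-only sketch checking that the position settles at the equilibrium x = 1 of ẍ = -ẋ - x + 1:

```python
import numpy as np

# State-space form of x'' = -x' - x + 1:  X = [x, x'],  X' = A X + B
A = np.array([[0.0, 1.0], [-1.0, -1.0]])
B = np.array([0.0, 1.0])

X = np.array([0.0, 0.0])  # initial position and velocity
dt = 0.01
for _ in range(int(30 / dt)):  # Euler steps over 30 time units
    X = X + (A @ X + B) * dt

print(X[0])  # position approaches the equilibrium x = 1
```

Setting X' = 0 gives x' = 0 and x = 1, which is the value the loop converges to; the damped oscillation has died out long before t = 30.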
Exercise: implement the solution of the following differential equation using a state-space model: $$ \ddot{x} = -8\dot{x} - 15x + 1 $$ Note: take it slow, step by step. * Start by writing the differential equation in your notebook, next to the same identity as in the example * Extract the ...
def G(X, t):
    A =  # Write here the code for the matrix A
    B =  # Write here the code for the vector B
    return array((A*matrix(X).T + B).T).tolist()[0]

ts = linspace(0, 10, 100)
xs = odeint(func=G, y0=[0, 0], t=ts)
plot(xs);

from pruebas_2 import prueba_2_4
prueba_2_4(xs)
Practicas/.ipynb_checkpoints/Practica 2 - Solucion de ecuaciones diferenciales-checkpoint.ipynb
robblack007/clase-dinamica-robot
mit
Transfer functions However, that is not the easiest way to obtain the solution. We can also apply a Laplace transform and use the functions of the control library to simulate the transfer function of this equation; applying the Laplace transform, we obtain: $$ G(s) = \frac{1}{s^2 + s + 1} $$
from control import tf, step

F = tf([0, 0, 1], [1, 1, 1])
xs, ts = step(F)
plot(ts, xs);
Practicas/.ipynb_checkpoints/Practica 2 - Solucion de ecuaciones diferenciales-checkpoint.ipynb
robblack007/clase-dinamica-robot
mit
Exercise: model the differential equation from the previous exercise using a transfer-function representation. Note: again, don't despair; write out your differential equation and apply the Laplace transform just as your grandparents taught you all those years ago...
G = tf([], [])  # Write the coefficients of the transfer function
xs, ts = step(G)
plot(ts, xs);

from pruebas_2 import prueba_2_5
prueba_2_5(ts, xs)
Practicas/.ipynb_checkpoints/Practica 2 - Solucion de ecuaciones diferenciales-checkpoint.ipynb
robblack007/clase-dinamica-robot
mit
Introduction to Divide-and-Conquer Algorithms The subfamily of Divide-and-Conquer algorithms is one of the main paradigms of algorithmic problem solving next to Dynamic Programming and Greedy Algorithms. The main goal behind greedy algorithms is to implement an efficient procedure for often computationally more complex...
def linear_search(lst, item):
    for i in range(len(lst)):
        if lst[i] == item:
            return i
    return -1

lst = [1, 5, 8, 12, 13]
for k in [8, 1, 23, 11]:
    print(linear_search(lst=lst, item=k))
ipython_nbs/essentials/divide-and-conquer-algorithm-intro.ipynb
rasbt/algorithms_in_ipython_notebooks
gpl-3.0
The runtime of linear search is obviously $O(n)$ since we are checking each element in the array -- remember that big-Oh is our upper bound. Now, a cleverer way of implementing a search algorithm would be binary search, which is a simple, yet nice example of a divide-and-conquer algorithm. The idea behind divide-and-co...
def binary_search(lst, item):
    first = 0
    last = len(lst) - 1
    found = False
    while first <= last and not found:
        midpoint = (first + last) // 2
        if lst[midpoint] == item:
            found = True
        else:
            if item < lst[midpoint]:
                last = midpoint - 1
            ...
ipython_nbs/essentials/divide-and-conquer-algorithm-intro.ipynb
rasbt/algorithms_in_ipython_notebooks
gpl-3.0
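For comparison, Python's standard library already ships the halving step: a sketch of the same membership search built on the bisect module, returning the index or -1 just like the functions above:

```python
import bisect

def binary_search_bisect(lst, item):
    # bisect_left does the divide-and-conquer halving for us:
    # it returns the leftmost position where item could be inserted
    i = bisect.bisect_left(lst, item)
    return i if i < len(lst) and lst[i] == item else -1

lst = [1, 5, 8, 12, 13]
print([binary_search_bisect(lst, k) for k in [8, 1, 23, 11]])  # → [2, 0, -1, -1]
```

Like the hand-written version, this requires the input list to be sorted.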
Example 2 -- Finding the Majority Element "Finding the majority element" is a problem where we want to find an element in an array of positive integers with length n that occurs more than n/2 times in that array. For example, if we have an array $a = [1, 2, 3, 3, 3]$, $3$ would be the majority element. In another array, b = [1,...
def majority_ele_lin(lst):
    cnt = {}
    for ele in lst:
        if ele not in cnt:
            cnt[ele] = 1
        else:
            cnt[ele] += 1
    for ele, c in cnt.items():
        if c > (len(lst) // 2):
            return (ele, c, cnt)
    return (-1, -1, cnt)

############################################...
ipython_nbs/essentials/divide-and-conquer-algorithm-intro.ipynb
rasbt/algorithms_in_ipython_notebooks
gpl-3.0
Now, "finding the majority element" is a nice task for a Divide and Conquer algorithm. Here, we use the fact that if a list has a majority element it is also the majority element of one of its two sublists, if we split it into 2 halves. More concretely, what we do is: Split the array into 2 halves Run the majority el...
def majority_ele_dac(lst):
    n = len(lst)
    left = lst[:n // 2]
    right = lst[n // 2:]
    l_maj = majority_ele_lin(left)
    r_maj = majority_ele_lin(right)
    # case 3A
    if l_maj[0] == -1 and r_maj[0] == -1:
        return -1
    # case 3B
    elif l_maj[0] == -1 and r_maj[0] > -1:
        ...
ipython_nbs/essentials/divide-and-conquer-algorithm-intro.ipynb
rasbt/algorithms_in_ipython_notebooks
gpl-3.0
In algorithms such as binary search that we saw at the beginning of this notebook, we recursively break down our problem into smaller subproblems. Thus, we have a recurrence with time complexity $T(n) = T(\frac{n}{2}) + O(1) \rightarrow T(n) = O(\log n).$ In this example, finding the majority element, we break ...
import multiprocessing as mp

def majority_ele_dac_mp(lst):
    n = len(lst)
    left = lst[:n // 2]
    right = lst[n // 2:]
    results = (pool.apply_async(majority_ele_lin, args=(x,))
               for x in (left, right))
    l_maj, r_maj = [p.get() for p in results]
    if l_maj[0] == -1 and r_ma...
ipython_nbs/essentials/divide-and-conquer-algorithm-intro.ipynb
rasbt/algorithms_in_ipython_notebooks
gpl-3.0
SQLAlchemy SQLAlchemy is a commonly used database toolkit. Unlike many database libraries, it not only provides an ORM (object-relational mapping) layer but also a generalized API for writing database-agnostic code without raw SQL. $ pip install sqlalchemy Example
from sqlalchemy import create_engine, ForeignKey
from sqlalchemy import Column, Date, Integer, String
from sqlalchemy.ext.declarative import declarative_base

# engine.dispose()
engine = create_engine('sqlite:///userlist.db', echo=True)
Base = declarative_base()

class User(Base):
    __tablename__ = 'users'
    id =...
Section 2 - Advance Python/Chapter S2.04 - Database/Databases.ipynb
mayankjohri/LetsExplorePython
gpl-3.0
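A minimal end-to-end sketch of the ORM layer, assuming SQLAlchemy 1.4+ (where declarative_base is importable from sqlalchemy.orm); the in-memory SQLite database and the User row are illustrative:

```python
from sqlalchemy import create_engine, Column, Integer, String
from sqlalchemy.orm import declarative_base, Session

# In-memory SQLite keeps the sketch self-contained
engine = create_engine('sqlite:///:memory:')
Base = declarative_base()

class User(Base):
    __tablename__ = 'users'
    id = Column(Integer, primary_key=True)
    name = Column(String)

# Create the table, insert a row, and read it back, all without raw SQL
Base.metadata.create_all(engine)

with Session(engine) as session:
    session.add(User(name='ada'))
    session.commit()
    names = [u.name for u in session.query(User).all()]

print(names)
```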
Records Records is a minimalist SQL library designed for sending raw SQL queries to various databases. Data can be used programmatically or exported to a number of useful data formats. $ pip install records Also included is a command-line tool for exporting SQL data.
import json      # https://docs.python.org/3/library/json.html
import requests  # https://github.com/kennethreitz/requests
import records   # https://github.com/kennethreitz/records

# randomuser.me generates random 'user' data (name, email, addr, phone number, etc)
r = requests.get('http://api.randomuser.me/0.6/?nat=us&result...
Section 2 - Advance Python/Chapter S2.04 - Database/Databases.ipynb
mayankjohri/LetsExplorePython
gpl-3.0
SQLObject SQLObject is yet another ORM. It supports a wide variety of databases: common systems such as MySQL, Postgres and SQLite, and more exotic ones like SAP DB, Sybase and MSSQL. SQLObject is a popular object-relational manager providing an object interface to your database, with tables as classes, rows as...
import sqlobject
from sqlobject.sqlite import builder

conn = builder()('sqlobject_demo.db')

class PhoneNumber(sqlobject.SQLObject):
    _connection = conn
    number = sqlobject.StringCol(length=14, unique=True)
    owner = sqlobject.StringCol(length=255)
    lastCall = sqlobject.DateTimeCol(default=None)

Phone...
Section 2 - Advance Python/Chapter S2.04 - Database/Databases.ipynb
mayankjohri/LetsExplorePython
gpl-3.0
Defining relationships among tables SQLObject lets you define relationships among tables as foreign keys
import sqlobject
from sqlobject.sqlite import builder

conn = builder()('sqlobject_demo_relationships.db')

class PhoneNumber(sqlobject.SQLObject):
    _connection = conn
    number = sqlobject.StringCol(length=14, unique=True)
    owner = sqlobject.ForeignKey('Person')
    lastCall = sqlobject.DateTimeCol(default=No...
Section 2 - Advance Python/Chapter S2.04 - Database/Databases.ipynb
mayankjohri/LetsExplorePython
gpl-3.0
There are many query services nowadays; for each service and query method, we can implement a corresponding stock-query class through inheritance. For example, the query classes for WebA and WebB can be built as follows:
class WebAStockQueryDevice(StockQueryDevice):
    def login(self, usr, pwd):
        if usr == "myStockA" and pwd == "myPwdA":
            print("Web A: Login OK... user:%s pwd:%s" % (usr, pwd))
            return True
        else:
            print("Web A: Login ERROR... user:%s pwd:%s" % (usr, pwd))
            return False
    ...
DesignPattern/TemplatePattern.ipynb
gaufung/Data_Analytics_Learning_Note
mit
In this scenario, querying a stock on site A requires the following steps:
web_a_query_dev = WebAStockQueryDevice()
web_a_query_dev.login("myStockA", "myPwdA")
web_a_query_dev.setCode("12345")
web_a_query_dev.queryPrice()
web_a_query_dev.showPrice()
DesignPattern/TemplatePattern.ipynb
gaufung/Data_Analytics_Learning_Note
mit
Every query goes through the same steps: log in, set the code, query, display. Isn't that a bit tedious? If so, why not wrap the whole procedure into one interface? Since each subclass follows essentially the same flow, this method can be written in the parent class:
class StockQueryDevice():
    stock_code = "0"
    stock_price = 0.0

    def login(self, usr, pwd):
        pass

    def setCode(self, code):
        self.stock_code = code

    def queryPrice(self):
        pass

    def showPrice(self):
        pass

    def operateQuery(self, usr, pwd, code):
        self.login(usr, pwd)
        se...
DesignPattern/TemplatePattern.ipynb
gaufung/Data_Analytics_Learning_Note
mit
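The design described above — the fixed call sequence hoisted into the parent class — is the template method pattern. A self-contained, runnable restatement of it; the network query is faked with a hard-coded quote so the sketch runs on its own:

```python
class StockQueryDevice:
    def login(self, usr, pwd):
        raise NotImplementedError

    def setCode(self, code):
        self.stock_code = code

    def queryPrice(self):
        raise NotImplementedError

    def showPrice(self):
        return "%s: %.2f" % (self.stock_code, self.stock_price)

    def operateQuery(self, usr, pwd, code):
        # The template method: the step order is fixed once, in the base class
        if not self.login(usr, pwd):
            return None
        self.setCode(code)
        self.queryPrice()
        return self.showPrice()

class WebAStockQueryDevice(StockQueryDevice):
    def login(self, usr, pwd):
        return usr == "myStockA" and pwd == "myPwdA"

    def queryPrice(self):
        self.stock_price = 20.0  # a faked quote for the sketch

dev = WebAStockQueryDevice()
print(dev.operateQuery("myStockA", "myPwdA", "12345"))  # 12345: 20.00
```

Subclasses only fill in the varying steps (login, queryPrice); callers interact with the one operateQuery entry point.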
Setting up the time-space properties and creating the domain:
t = 27 / 365
dx = 0.2
L = 40
phi = 0.8
dt = 1e-4
ftc = Column(L, dx, t, dt)
examples/Column - Freeze-Thaw.ipynb
biogeochemistry/PorousMediaLab
mit
To make things interesting, let's create non-trivial initial conditions for iron:
x = np.linspace(0, L, int(L / dx) + 1)
Fe3_init = np.zeros(x.size)
Fe3_init[x > 5] = 75
Fe3_init[x > 15] = 0
Fe3_init[x > 25] = 75
Fe3_init[x > 35] = 0
examples/Column - Freeze-Thaw.ipynb
biogeochemistry/PorousMediaLab
mit
Adding species with names, diffusion coefficients, initial concentrations, and top and bottom boundary conditions:
ftc.add_species(theta=phi, name='O2', D=368, init_conc=0,
                bc_top_value=0.231, bc_top_type='dirichlet',
                bc_bot_value=0, bc_bot_type='flux')
ftc.add_species(theta=phi, name='TIC', D=320, init_conc=0,
                bc_top_value=0, bc_top_type='flux',
                bc_bot_value=0, bc_bot_type='flux')
ftc.add_species(theta=phi, name='Fe2', D=127, init...
examples/Column - Freeze-Thaw.ipynb
biogeochemistry/PorousMediaLab
mit
Specify the constants used in the rates:
ftc.constants['k_OM'] = 1
ftc.constants['Km_O2'] = 1e-3
ftc.constants['Km_FeOH3'] = 2
ftc.constants['k8'] = 1.4e+5
ftc.constants['Q10'] = 4  # added
ftc.constants['CF'] = (1 - phi) / phi  # conversion factor
examples/Column - Freeze-Thaw.ipynb
biogeochemistry/PorousMediaLab
mit
Simulate temperature with a thermal diffusivity coefficient of 281000 and an initial and boundary temperature of 5 °C:
ftc.add_species(theta=0.99, name='Temperature', D=281000, init_conc=5,
                bc_top_value=5., bc_top_type='constant',
                bc_bot_value=0, bc_bot_type='flux')
examples/Column - Freeze-Thaw.ipynb
biogeochemistry/PorousMediaLab
mit
Add the Q10 factor to the rates:
ftc.rates['R1'] = 'Q10**((Temperature-5)/10) * k_OM * OM * O2 / (Km_O2 + O2)'
ftc.rates['R2'] = 'Q10**((Temperature-5)/10) * k_OM * OM * FeOH3 / (Km_FeOH3 + FeOH3) * Km_O2 / (Km_O2 + O2)'
ftc.rates['R8'] = 'k8 * O2 * Fe2'
examples/Column - Freeze-Thaw.ipynb
biogeochemistry/PorousMediaLab
mit
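The Q10**((Temperature-5)/10) term in R1 and R2 is just a temperature multiplier on the base rate. A quick sketch of how it scales, using Q10 = 4 and the 5 °C reference from the constants above (the helper function is ours, not part of PorousMediaLab):

```python
def q10_factor(temperature, q10=4.0, t_ref=5.0):
    """Multiplier applied to a base rate defined at t_ref degrees C."""
    return q10 ** ((temperature - t_ref) / 10.0)

print(q10_factor(5))   # 1.0  -> no change at the reference temperature
print(q10_factor(15))  # 4.0  -> one 10-degree warming step quadruples the rate
print(q10_factor(-5))  # 0.25 -> a 10-degree drop quarters it
```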
ODEs for specific species:
ftc.dcdt['OM'] = '-R1-R2'
ftc.dcdt['O2'] = '-R1-R8'
ftc.dcdt['FeOH3'] = '-4*R2+R8/CF'
ftc.dcdt['Fe2'] = '-R8+4*R2*CF'
ftc.dcdt['TIC'] = 'R1+R2*CF'
examples/Column - Freeze-Thaw.ipynb
biogeochemistry/PorousMediaLab
mit
Because we are changing the boundary conditions for temperature and oxygen (when T < 0 there is no oxygen at the top), we need a time loop:
# %pdb
for i in range(1, len(ftc.time)):
    day_of_bi_week = (ftc.time[i] * 365) % 14
    if day_of_bi_week < 7:
        ftc.Temperature.bc_top_value = 5 + 5 * np.sin(np.pi * 2 * ftc.time[i] * 365)
    else:
        ftc.Temperature.bc_top_value = -10 + 5 * np.sin(np.pi * 2 * ftc.time[i] * 365)

    # when T ...
examples/Column - Freeze-Thaw.ipynb
biogeochemistry/PorousMediaLab
mit
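The bi-weekly switching arithmetic in the loop is easy to check in isolation; a sketch with plain floats (the helper name is ours, the day values are illustrative):

```python
def is_thaw_week(time_in_years):
    """First 7 days of every 14-day cycle are the warm half."""
    day_of_bi_week = (time_in_years * 365) % 14
    return day_of_bi_week < 7

print(is_thaw_week(3 / 365))   # True  -> day 3, warm half
print(is_thaw_week(10 / 365))  # False -> day 10, cold half
print(is_thaw_week(17 / 365))  # True  -> day 17 is day 3 of the next cycle
```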
Let's plot what we did with the temperature:
ftc.plot_depths("Temperature",[0,1,3,7,10,40])
examples/Column - Freeze-Thaw.ipynb
biogeochemistry/PorousMediaLab
mit
Concentrations of different species during the whole period of simulation:
ftc.plot_contourplots()
examples/Column - Freeze-Thaw.ipynb
biogeochemistry/PorousMediaLab
mit
The rates of consumption and production of species:
ftc.reconstruct_rates()
ftc.plot_contourplots_of_rates()
ftc.plot_contourplots_of_deltas()
examples/Column - Freeze-Thaw.ipynb
biogeochemistry/PorousMediaLab
mit
Profiles at the end of the simulation
Fx = ftc.estimate_flux_at_top('CO2g')
ftc.custom_plot(ftc.time * 365, 1e+3 * Fx * 1e+4 / 365 / 24 / 60 / 60,
                x_lbl='Days, [day]',
                y_lbl='$F_{CO_2}$, $[\mu mol$ $m^{-2}$ $s^{-1}]$')
Fxco2 = 1e+3 * Fx * 1e+4 / 365 / 24 / 60 / 60
Fxco2nz = (ftc.time * 365 < 7) * Fxco2 + ((ftc.time * 365 > 14) & (ftc.time * 365 < 21)) * Fxco2

import seaborn as sns
fig, ax1 = pl...
examples/Column - Freeze-Thaw.ipynb
biogeochemistry/PorousMediaLab
mit
Collocations between two data arrays

Let's try out the simplest case: you have two xarray datasets with temporal-spatial data and you want to find collocations between them. First, we create two example xarray datasets with faked measurements. Let's assume these data arrays represent measurements from two different...
# Create the data
primary = xr.Dataset(
    coords={
        "lat": (('along_track'), 30. * np.sin(np.linspace(-3.14, 3.14, 24)) + 20),
        "lon": (('along_track'), np.linspace(0, 90, 24)),
        "time": (('along_track'), np.arange("2018-01-01", "2018-01-02", dtype="datetime64[h]")),
    },
    data_vars={
        "T...
doc/tutorials/collocations.ipynb
atmtools/typhon
mit
Now, let’s find all measurements of primary that have a maximum distance of 300 kilometers to the measurements of secondary:
collocator = Collocator(name='primary_secondary_collocator')
collocations = collocator.collocate(
    primary=('primary', primary),
    secondary=('secondary', secondary),
    max_distance=600,  # collocation radius in km
)
print(f'Found collocations are {collocations["Collocations/distance"].values} km apart')
colloca...
doc/tutorials/collocations.ipynb
atmtools/typhon
mit
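Under the hood, a collocation radius like this is a great-circle distance test between coordinate pairs. A standard-library sketch using the haversine formula (typhon's actual implementation is vectorized and may differ in detail):

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2, earth_radius_km=6371.0):
    """Great-circle distance between two lat/lon points in kilometers."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 2 * earth_radius_km * asin(sqrt(a))

# Two points on the equator, one degree of longitude apart: ~111 km
d = haversine_km(0, 0, 0, 1)
print(round(d))  # 111
print(d <= 300)  # True -> these two points would collocate at a 300 km radius
```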
The obtained collocations dataset contains variables of 3 groups: primary, secondary and Collocations. The first two correspond to the variables of the two respective input datasets and contain only the matched data points. The Collocations group adds some new variables containing information about the collocations, e...
def collocations_wmap(collocations):
    fig = plt.figure(figsize=(10, 10))

    # Plot the collocations
    wmap = worldmap(
        collocations['primary/lat'],
        collocations['primary/lon'],
        facecolor="r", s=128, marker='x', bg=True
    )
    worldmap(
        collocations['secondary/lat'],
        col...
doc/tutorials/collocations.ipynb
atmtools/typhon
mit
We can also add a temporal filter that discards all pairs whose time difference is larger than a given interval. We do this with max_interval. Note that our test data is sampled very sparsely in time.
collocations = collocator.collocate(
    primary=('primary', primary),
    secondary=('secondary', secondary),
    max_distance=300,  # collocation radius in km
    max_interval=timedelta(hours=1),  # temporal collocation interval as timedelta
)
print(
    f'Found collocations are {collocations["Collocations/distance"].v...
doc/tutorials/collocations.ipynb
atmtools/typhon
mit
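The temporal criterion is simply an absolute time-difference test. A sketch with the standard library's datetime, independent of typhon's internals:

```python
from datetime import datetime, timedelta

def within_interval(t1, t2, max_interval):
    """True if two timestamps are no further apart than max_interval."""
    return abs(t1 - t2) <= max_interval

t_a = datetime(2018, 1, 1, 12, 0)
t_b = datetime(2018, 1, 1, 12, 40)
print(within_interval(t_a, t_b, timedelta(hours=1)))     # True
print(within_interval(t_a, t_b, timedelta(minutes=30)))  # False
```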
As mentioned in :func:collocate, the collocations are returned in compact format, i.e. an efficient way to store the collocated data. When several data points in the secondary group collocate with a single observation of the primary group, it is not obvious how this should be handled. The compact format accounts for th...
expand(collocations)
doc/tutorials/collocations.ipynb
atmtools/typhon
mit
Applying collapse to the collocations will calculate some generic statistics (mean, std, count) over the datapoints that match with a single data point of the other dataset.
collapse(collocations)
doc/tutorials/collocations.ipynb
atmtools/typhon
mit
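The idea behind collapse — several secondary points matched to one primary point reduced to summary statistics — can be sketched with the standard library's statistics module (a simplified stand-in, not typhon's implementation):

```python
from statistics import mean, pstdev

def collapse_group(values):
    """Reduce all secondary values matched to a single primary point."""
    return {"mean": mean(values), "std": pstdev(values), "count": len(values)}

# Three secondary temperature readings matched to one primary observation
summary = collapse_group([10.0, 12.0, 14.0])
print(summary["mean"], summary["count"])  # 12.0 3
```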
Purely temporal collocations are not implemented yet, and attempts will raise a NotImplementedError.

Find collocations between two filesets

Normally, one has the data stored in a set of many files. typhon provides an object to handle such filesets (see the typhon doc). It is very simple to find collocations between the...
fh = NetCDF4()
fh.write(secondary, 'testdata/secondary/2018/01/01/000000-235959.nc')

# Create the filesets objects and point them to the input files
a_fileset = FileSet(
    name="primary",
    path="testdata/primary/{year}/{month}/{day}/"
         "{hour}{minute}{second}-{end_hour}{end_minute}{end_second}.nc",
    # ...
doc/tutorials/collocations.ipynb
atmtools/typhon
mit
Now, we can search for collocations between a_dataset and b_dataset and store them to ab_collocations.
# Create the output dataset:
ab_collocations = Collocations(
    name="ab_collocations",
    path="testdata/ab_collocations/{year}/{month}/{day}/"
         "{hour}{minute}{second}-{end_hour}{end_minute}{end_second}.nc",
)

ab_collocations.search(
    [a_fileset, b_fileset],
    start="2018", end="2018-01-02",
    max_inter...
doc/tutorials/collocations.ipynb
atmtools/typhon
mit
Exercise

Try out these commands to see what they return:

data.head()
data.tail(3)
data.shape
data.shape
notebooks/Introduction to Pandas.ipynb
fonnesbeck/scientific-python-workshop
cc0-1.0
It's important to note that the Series returned when a DataFrame is indexed is merely a view on the DataFrame, not a copy of the data itself. So you must be cautious when manipulating this data:
vals = data.value
vals
vals[5] = 0
vals
data
notebooks/Introduction to Pandas.ipynb
fonnesbeck/scientific-python-workshop
cc0-1.0
If we plan on modifying an extracted Series, it's a good idea to make a copy.
vals = data.value.copy()
vals[5] = 1000
data
notebooks/Introduction to Pandas.ipynb
fonnesbeck/scientific-python-workshop
cc0-1.0
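The same view-versus-copy trap exists for any shared mutable object in Python. A plain-list sketch of the principle (pandas' exact copy semantics additionally depend on the pandas version and its copy-on-write setting):

```python
numbers = [1, 2, 3, 4, 5, 6]

alias = numbers        # a second name for the SAME list -> writes show through
alias[5] = 0
print(numbers)         # [1, 2, 3, 4, 5, 0]

safe = numbers.copy()  # an independent copy -> writes stay local
safe[5] = 1000
print(numbers)         # [1, 2, 3, 4, 5, 0]  (unchanged by the copy's write)
```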
Exercise

From the data table above, create an index to return all rows for which the phylum name ends in "bacteria" and the value is greater than 1000.
# Write your answer here
notebooks/Introduction to Pandas.ipynb
fonnesbeck/scientific-python-workshop
cc0-1.0
Importing data

A key, but often under-appreciated, step in data analysis is importing the data that we wish to analyze. Though it is easy to load basic data structures into Python using built-in tools or those provided by packages like NumPy, it is non-trivial to import structured data well, and to easily convert this ...
!cat ../data/microbiome.csv
notebooks/Introduction to Pandas.ipynb
fonnesbeck/scientific-python-workshop
cc0-1.0
This table can be read into a DataFrame using read_csv:
mb = pd.read_csv("../data/microbiome.csv")
mb
notebooks/Introduction to Pandas.ipynb
fonnesbeck/scientific-python-workshop
cc0-1.0
Notice that read_csv automatically considered the first row in the file to be a header row. We can override this default behavior by customizing some of the arguments, like header, names or index_col.
pd.read_csv("../data/microbiome.csv", header=None).head()
notebooks/Introduction to Pandas.ipynb
fonnesbeck/scientific-python-workshop
cc0-1.0
read_csv is just a convenience function for read_table, since csv is such a common format:
mb = pd.read_table("../data/microbiome.csv", sep=',')
notebooks/Introduction to Pandas.ipynb
fonnesbeck/scientific-python-workshop
cc0-1.0
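The separator argument maps straight onto delimited-text parsing. A standard-library sketch of what switching the delimiter amounts to (the rows are illustrative, and pandas' readers of course do far more than this):

```python
import csv
import io

comma_text = "Patient,Taxon,Value\n1,Firmicutes,632\n"
tab_text = comma_text.replace(",", "\t")

# The same table, parsed with two different separators
comma_rows = list(csv.reader(io.StringIO(comma_text), delimiter=","))
tab_rows = list(csv.reader(io.StringIO(tab_text), delimiter="\t"))

print(comma_rows == tab_rows)  # True
print(comma_rows[1])           # ['1', 'Firmicutes', '632']
```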
The sep argument can be customized as needed to accommodate arbitrary separators. For example, we can use a regular expression to define a variable amount of whitespace, which is unfortunately very common in some data formats: sep='\s+' For a more useful index, we can specify the first two columns, which together prov...
mb = pd.read_csv("../data/microbiome.csv", index_col=['Patient','Taxon'])
mb.head()
notebooks/Introduction to Pandas.ipynb
fonnesbeck/scientific-python-workshop
cc0-1.0