Did we get all the rows and columns? Take a look at your Excel file, note the number of rows and the number of columns, and compare with the numbers you get from df.shape.
df.shape
notebooks/02 Vasking av data.ipynb
BergensTidende/dataskup-2017-notebooks
mit
The first number is the number of rows, the second the number of columns. So we have 61,757 rows, and that matches the row count in the Excel file. So far so good! Do a sanity check that the data looks OK: we inspect the top and the bottom.
df.head(n=3)
df.tail(n=3)
Which columns do we have, and what data type does each of them have?
df.dtypes
Explanation: int64 means integer, object usually means text, and float64 means a number with decimals. Remove columns you don't need; it makes the data easier to work with. Here we can remove the Lat and Lon columns, which hold map coordinates.
df = df.drop(['Lat', 'Lon'], axis='columns')
Explanation: here we create a new DataFrame with the same name, dropping the columns Lat and Lon. axis='columns' means we are dropping columns, not rows. Rename columns: sometimes columns have long, odd names, so let's shorten them. We create an object that maps each column we want to rename to its new name.
df = df.rename(columns={'Voksne hunnlus': 'hunnlus', 'Sjøtemperatur': 'sjotemp'})
Do we have missing data? Are there rows or columns without data? Let's first look at the first 5 rows.
df.head(n=5)
Already here we see the recurring NaN, which means Not a Number: this cell has no numeric value, unlike the others in the column. Let's see how many rows are missing a value (isnull) in the hunnlus field.
df['hunnlus'].isnull().sum()
Quite a few rows without a value. Most likely they did not report lice counts that week; we will look at that later. Fill in missing data: let's look at a new example, an Excel file with missing data in many cells.
df2 = pd.read_excel('data/bord4_20171028_kommunedummy.xlsx')
df2
Here we see a typical pattern where only the first row of each county has a value in the Fylke column. To process this data in Pandas, every row needs a value, so let's fill the empty cells downward (fillna), a so-called forward fill, or ffill.
df2['Fylke'] = df2['Fylke'].fillna(method='ffill')
df2
Normalize the data. Was your data entered by humans? Then there are guaranteed to be words spelled ALMOST the same, and they will mess up your analysis. Here we use a dataset from the University of Bergen's citizen panel, recording voters' attitudes toward other parties. The dataset was assembled from several Excel files created at different times. Let's look at the data.
df = pd.read_csv('data/uib_medborgerpanelet_20170601_partiomparti.csv')
df.head()
It is often useful to see which unique values occur in a column. You can do that like this.
df['omtaler_parti'].value_counts().to_frame().sort_index()
Quite a mix! Note that the parties are written in several different ways, which means trouble if we want to group later. We must normalize these values, i.e. settle on one way of writing each party name. One way to do it is to create a from-to mapping of values.
partimapping = {
    'FRP': 'Frp',
    'FrP': 'Frp',
    'AP': 'Ap',
    'Høyre': 'H',
    'SP': 'Sp',
    'Venstre': 'V',
    'KRF': 'KrF'
}
Then we tell Pandas to replace the contents of the columns that contain party names with the correct form of each party name.
df = df.replace({
    'parti_valgt_2013': partimapping,
    'omtaler_parti': partimapping
})
Then we again check which unique values we have.
df['omtaler_parti'].value_counts().to_frame().sort_index()
Making an HTTP Connection
import sys

import requests

req = requests.get('http://google.com')
print(req.text)


def connect(prot='http', **q):
    """
    Makes a connection with CAPE.
    Required that at least one query is made.

    Parameters
    ----------
    :params prot: Either HTTP or HTTPS
    :params q: Query Dictionary

    Returns
    -------
    :return: Request
    :rtype : request.Request
    """
    host = 'cape.ucsd.edu'
    inputs = 'Name', 'courseNumber', 'department'
    prot = prot.lower()
    base = '%s://%s/responses/Results.aspx' % (prot, host)

    assert prot in ['http', 'https']
    assert any(val in inputs for val in q)

    headers = {
        "Host": host,
        "Accept": ','.join([
            "text/html",
            "application/xhtml+xml",
            "application/xml;q=0.9,*/*;q=0.8"]),
        "Accept-Language": "en-US,en;q=0.5",
        "User-Agent": ' '.join([
            "Mozilla/5.0",
            "(Macintosh; Intel Mac OS X 10_10_2)",
            "AppleWebKit/600.3.18",
            "(KHTML, like Gecko)",
            "Version/8.0.3 Safari/600.3.18"]),
        "Cache-Control": "no-cache"
    }
    queries = '&'.join(
        [
            '{key}={value}'.format(key=key, value=value)
            for key, value in q.items()
            if key in inputs
        ]
    )
    req = requests.get('?'.join([base, queries]), headers=headers)
    if not req.ok:
        print("Request didn't make it", file=sys.stderr)
        req.raise_for_status()
    return req
notebooks/cape.ipynb
jjangsangy/GraphUCSD
apache-2.0
Running the Code: `**q` is a variable set of keyword arguments that will be applied to the URL.

```python
connect(department="CHEM")
```

will make a request to http://cape.ucsd.edu/responses/Results.aspx?department=CHEM and return the result.
# URL: http://cape.ucsd.edu/responses/Results.aspx?department=CHEM
req = connect(department="CHEM")
print(req.text)
Cleaning up the result using BeautifulSoup4. BeautifulSoup is an HTML parser. Let's grab all the department listings within the HTML:

<option value="">Select a Department</option>
<option value="ANTH">ANTH - Anthropology</option>
<option value="BENG">BENG - Bioengineering</option>
<option value="BIOL">BIOL - Biological Sciences</option>
<option value="CAT">CAT - Sixth College</option>
<option value="CENG">CENG - Chemical Engineering</option>
...
from bs4 import BeautifulSoup

# Grab the HTML
req = connect(department="CHEM")

# Shove it into BeautifulSoup
soup = BeautifulSoup(req.text, 'lxml')

# Find all Option Tags
options = soup.find_all('option')

# Returns a list of options
options

# Grab the `value=` Attribute
for option in options:
    print(option.attrs['value'])
Now Grab all the Departments Kind of.....
def departments():
    """
    Gets a mapping of all the departments by key.
    """
    logging.info('Grabbing a list of Departments')
    prototype = connect("http", department="CHEM")
    soup = BeautifulSoup(prototype.content, 'lxml')
    options = list(reversed(soup.find_all('option')))
    options.pop()

    # Initial Course Mapping
    mapping = dict(option.text.split(' - ') for option in options)

    # Cleanup
    for dept in ['BIOL', 'SOC', 'HIST', 'LING', 'LIT', 'NENG',
                 'RSM ', 'SOE', 'THEA']:
        mapping.pop(dept)

    # Actual Departments
    mapping.update({
        'BIBC': 'Biology Biochemistry',
        'BILD': 'Biology Lower Division',
        'BIMM': 'Biology Molecular, Microbiology',
        'BIPN': 'Biology Physiology and Neuroscience',
        'SOCA': 'Sociology Theory & Methods',
        'SOCB': 'Sociology Cult, Lang, & Soc Interact',
        'SOCC': 'Sociology Organiz & Institutions',
        'SOCD': 'Sociology Comparative & Historical',
        'SOCE': 'Sociology Ind Research & Honors Prog',
        'SOCI': 'Sociology',
        'SOCL': 'Sociology Lower Division',
        'HILD': 'History Lower Division',
        'HIAF': 'History of Africa',
        'HIEA': 'History of East Asia',
        'HIEU': 'History of Europe',
        'HINE': 'History of Near East',
        'HILA': 'History of Latin America',
        'HISC': 'History of Science',
        'HIUS': 'History of the United States',
        'HITO': 'History Topics',
        'LTAF': 'Literature African',
        'LTAM': 'Literature of the Americas',
        'LTCH': 'Literature Chinese',
        'LTCS': 'Literature Cultural Studies',
        'LTEA': 'Literature East Asian',
        'LTEU': 'Literature European/Eurasian',
        'LTFR': 'Literature French',
        'LTGK': 'Literature Greek',
        # note: the original also listed 'LTGM': 'Literature General',
        # which this later entry shadowed in the dict literal
        'LTGM': 'Literature German',
        'LTIT': 'Literature Italian',
        'LTKO': 'Literature Korean',
        'LTLA': 'Literature Latin',
        'LTRU': 'Literature Russian',
        'LTSP': 'Literature Spanish',
        'LTTH': 'Literature Theory',
        'LTWL': 'Literature of the World',
        'LTWR': 'Literature Writing',
        'RELI': 'Literature Study of Religion',
        'TWS' : 'Literature Third World Studies',
        'NANO': 'Nano Engineering',
        'MGT' : 'Rady School of Management',
        'ENG' : 'Jacobs School of Engineering',
        'LIGN': 'Linguistics',
        'TDAC': 'Theatre Acting',
        'TDCH': 'Theatre Dance Choreography',
        'TDDE': 'Theatre Design',
        'TDDR': 'Theatre Directing/Stage Management',
        'TDGE': 'Theatre General',
        'TDHD': 'Theatre Dance History',
        'TDHT': 'Theatre History',
        'TDMV': 'Theatre Dance Movement',
        'TDPF': 'Theatre Dance Performance',
        'TDPW': 'Theatre Playwriting',
        'TDTR': 'Theatre Dance Theory',
    })

    # Create Categorical Series
    dep = pd.Series(name='department_name', data=mapping)

    # Reindexing
    dep = dep.map(lambda x: np.nan if x == '' else x)
    dep = dep.dropna()
    dep.index.name = 'Departments'
    return dep
Data Munging
def create_table(courses):
    """
    Generates a pandas DataFrame by querying UCSD Cape Website.

    Parameters
    ==========
    :params courses: Either Course or Path to HTML File

    Returns
    =======
    :returns df: Query Results
    :rtype: pandas.DataFrame
    """
    header = [
        'instructor', 'course', 'term', 'enroll', 'evals',
        'recommend_class', 'recommend_instructor', 'study_hours_per_week',
        'average_grade_expected', 'average_grade_received'
    ]
    first, second = itemgetter(0), itemgetter(1)
    print('\nGrabbing Classes: {0}'.format(courses))

    # Get Data
    base = 'http://cape.ucsd.edu/responses/'
    req = (
        open(courses).read()
        if os.path.isfile(courses)
        else connect("http", courseNumber=courses).content
    )
    html = BeautifulSoup(req, 'lxml')
    table = first(html.find_all('table'))

    # Create Dataframe
    df = first(pd.read_html(str(table), flavor=None,
                            na_values=['No CAPEs submitted']))

    # Data Clean Up
    df.columns = header
    df['link'] = [
        urljoin(base, link.attrs['href']) if link.has_attr('href') else np.nan
        for link in table.find_all('a')
    ]
    df['instructor'] = df.instructor.map(
        lambda name: str.title(name) if isinstance(name, str) else 'Unknown, Unknown'
    )

    # Data Extraction
    df['first_name'] = df.instructor.map(lambda name: second(name.split(',')).strip('.'))
    df['last_name'] = df.instructor.map(lambda name: first(name.split(',')))
    df['class_id'] = df.course.map(lambda course: first(course.split(' - ')))
    df['department'] = df.class_id.map(lambda course: first(course.split(' ')))
    df['class_name'] = df.course.map(
        lambda course: second(course.split(' - '))[:-4] if ' - ' in course else np.nan
    )

    # Data Types
    df['recommend_class'] = df.recommend_class.map(calculate_percentage)
    df['recommend_instructor'] = df.recommend_instructor.map(calculate_percentage)
    df['average_grade_expected'] = df.average_grade_expected.map(calculate_grades)
    df['average_grade_received'] = df.average_grade_received.map(calculate_grades)

    # Reindexing and Transforms
    df['section_id'] = df.link.map(calculate_section_id)
    df = df.dropna(subset=['section_id'])
    df = df.drop_duplicates(subset='section_id')
    df['section_id'] = df.section_id.astype(np.int32)
    return df.set_index('section_id', drop=True)


def calculate_percentage(element):
    if isinstance(element, str):
        return np.float(element.strip('%').strip()) / 100
    return np.nan


def calculate_grades(element):
    if isinstance(element, str):
        return np.float(element[1:].lstrip('+-').lstrip().strip('()'))
    return np.nan


def calculate_section_id(element):
    if isinstance(element, str):
        return int(element.lower().rsplit('sectionid=')[-1].strip(string.ascii_letters))
    return np.nan


def to_db(df, table, user='postgres', db='graphucsd', resolve='replace', host='localhost'):
    """
    Helper Function to Push DataFrame to Postgresql Database
    """
    url = 'postgresql+psycopg2://{user}@{host}/{db}'.format(user=user, db=db, host=host)
    if not database_exists(url):
        create_database(url)
    engine = create_engine(url)
    return df.to_sql(table, engine, if_exists=resolve)


df = create_table('CHEM')

# Scratch cells repeating the same steps by hand for the CSE department
header = [
    'instructor', 'course', 'term', 'enroll', 'evals',
    'recommend_class', 'recommend_instructor', 'study_hours_per_week',
    'average_grade_expected', 'average_grade_received'
]
first, second = itemgetter(0), itemgetter(1)

base = 'http://cape.ucsd.edu/responses/'
req = connect("http", courseNumber='CSE').content
html = BeautifulSoup(req, 'lxml')
table = first(html.find_all('table'))

import pandas as pd
df = first(pd.read_html(str(table), flavor=None,
                        na_values=['No CAPEs submitted']))
Make it Go Fast with Multi Threading
def main(threads=6):
    """
    Get all departments
    """
    logging.info('Program is Starting')

    # Get Departments
    deps = departments()
    keys = [department.strip() for department in deps.keys()]

    # Run Scraper Concurrently Using ThreadPool
    pool = ThreadPool(threads)
    logging.info('Initialize Scraper with {} Threads'.format(threads))
    table = pool.map(create_table, keys)
    logging.info('Scrape Complete')

    # Manage ThreadPool
    pool.close()
    pool.join()

    df = pd.concat(table)
    return df.groupby(level=0).first()


df = main(threads=4)
df
Target Configuration
# Setup a target configuration
my_target_conf = {

    # Target platform and board
    "platform" : 'linux',
    "board"    : 'aboard',

    # Target board IP/MAC address
    "host"     : '192.168.0.1',

    # Login credentials
    "username" : 'root',
    "password" : 'test0000',
}
ipynb/tutorial/04_ExecutorUsage.ipynb
JaviMerino/lisa
apache-2.0
Tests Configuration
my_tests_conf = {

    # Folder where all the results will be collected
    "results_dir" : "ExecutorExample",

    # Platform configurations to test
    "confs" : [
        {
            "tag" : "base",
            "flags" : "ftrace",                    # Enable FTrace events
            "sched_features" : "NO_ENERGY_AWARE",  # Disable EAS
            "cpufreq" : {                          # Use PERFORMANCE CpuFreq
                "governor" : "performance",
            },
        },
        {
            "tag" : "eas",
            "flags" : "ftrace",                    # Enable FTrace events
            "sched_features" : "ENERGY_AWARE",     # Enable EAS
            "cpufreq" : {                          # Use PERFORMANCE CpuFreq
                "governor" : "performance",
            },
        },
    ],

    # Workloads to run (on each platform configuration)
    "wloads" : {
        # Run hackbench with 1 group using pipes
        "perf" : {
            "type" : "perf_bench",
            "conf" : {
                "class" : "messaging",
                "params" : {
                    "group" : 1,
                    "loop"  : 10,
                    "pipe"  : True,
                    "thread": True,
                }
            }
        },
        # Run a 20% duty-cycle periodic task
        "rta" : {
            "type" : "rt-app",
            "loadref" : "big",
            "conf" : {
                "class" : "profile",
                "params" : {
                    "p20" : {
                        "kind" : "periodic",
                        "params" : {
                            "duty_cycle_pct" : 20,
                        },
                    },
                },
            },
        },
    },

    # Number of iterations for each workload
    "iterations" : 1,

    # FTrace events to collect for all the tests configuration which have
    # the "ftrace" flag enabled
    "ftrace" : {
        "events" : [
            "sched_switch",
            "sched_wakeup",
            "sched_wakeup_new",
            "cpu_frequency",
        ],
        "buffsize" : 80 * 1024,
    },

    # Tools required by the experiments
    "tools" : [ 'trace-cmd', 'perf' ],

    # Modules required by these experiments
    "modules" : [ 'bl', 'cpufreq' ],
}
Tests execution
from executor import Executor

executor = Executor(my_target_conf, my_tests_conf)
executor.run()

!tree {executor.te.res_dir}
Exercise 2: json. A first attempt.
obj = dict(a=[50, "r"], gg=(5, 't'))

import jsonpickle
frozen = jsonpickle.encode(obj)
frozen
_doc/notebooks/td2a/td2a_correction_session_2E.ipynb
sdpython/ensae_teaching_cs
mit
This module is equivalent to the json module for the standard Python types (lists, dictionaries, numbers, ...). But the json module does not work on DataFrames.
frozen = jsonpickle.encode(df)
len(frozen), type(frozen), frozen[:55]
The to_json method would also give a satisfactory result, but it cannot be applied to a machine-learning model produced by scikit-learn.
def to_json(obj, filename):
    frozen = jsonpickle.encode(obj)
    with open(filename, "w", encoding="utf-8") as f:
        f.write(frozen)

def read_json(filename):
    with open(filename, "r", encoding="utf-8") as f:
        enc = f.read()
    return jsonpickle.decode(enc)

to_json(df, "df_text.json")

try:
    df = read_json("df_text.json")
except Exception as e:
    print(e)
Apparently this does not work on DataFrames. We would have to take inspiration from the numpyson module. json + scikit-learn: read issue 147 to understand the point of the next two lines.
import jsonpickle.ext.numpy as jsonpickle_numpy
jsonpickle_numpy.register_handlers()

from sklearn import datasets
iris = datasets.load_iris()
X = iris.data[:, :2]  # we only take the first two features
y = iris.target

from sklearn.linear_model import LogisticRegression
clf = LogisticRegression()
clf.fit(X, y)
clf.predict_proba([[0.1, 0.2]])

to_json(clf, "logreg.json")
try:
    clf2 = read_json("logreg.json")
except AttributeError as e:
    # For an unknown reason, probably a bug, this code does not work.
    print(e)
So we try another way. If the previous code does not work but the following does, it is a bug in jsonpickle.
class EncapsulateLogisticRegression:
    def __init__(self, obj):
        self.obj = obj

    def __getstate__(self):
        return {k: v for k, v in sorted(self.obj.__getstate__().items())}

    def __setstate__(self, data):
        self.obj = LogisticRegression()
        self.obj.__setstate__(data)

enc = EncapsulateLogisticRegression(clf)
to_json(enc, "logreg.json")
enc2 = read_json("logreg.json")
clf2 = enc2.obj
clf2.predict_proba([[0.1, 0.2]])

with open("logreg.json", "r") as f:
    content = f.read()
content
fit_transform: 1) fits the model and learns the vocabulary; 2) transforms the data into feature vectors.
# using only the "Text Feed" column to build the features
# (vector_data, the vectorizer, and anomaly_data are defined in an earlier cell)
features = vector_data.fit_transform(anomaly_data.TextFeed.tolist())

# converting the data into an array
features = features.toarray()
features.shape

# printing the words in the vocabulary
vocab = vector_data.get_feature_names()
print(vocab)

# Sum up the counts of each vocabulary word
dist = np.sum(features, axis=0)

# For each, print the vocabulary word and the number of times it
# appears in the data set
a = zip(vocab, dist)
print(list(a))
AnomaliesTwitterText/anomalies_in_tweets.ipynb
manojkumar-github/NLP-TextAnalytics
mit
Analytic I Within the classic PowerShell log, event ID 400 indicates when a new PowerShell host process has started. Excluding PowerShell.exe is a good way to find alternate PowerShell hosts | Data source | Event Provider | Relationship | Event | |:------------|:---------------|--------------|-------| | Powershell | Windows PowerShell | Application host started | 400 | | Powershell | Microsoft-Windows-PowerShell/Operational | User started Application host | 4103 |
df = spark.sql(
    '''
    SELECT `@timestamp`, Hostname, Channel
    FROM sdTable
    WHERE (Channel = "Microsoft-Windows-PowerShell/Operational"
            OR Channel = "Windows PowerShell")
        AND (EventID = 400 OR EventID = 4103)
        AND NOT Message LIKE "%Host Application%powershell%"
    '''
)
df.show(10, False)
docs/notebooks/windows/02_execution/WIN-190610201010.ipynb
VVard0g/ThreatHunter-Playbook
mit
Analytic II Looking for processes loading a specific PowerShell DLL is a very effective way to document the use of PowerShell in your environment | Data source | Event Provider | Relationship | Event | |:------------|:---------------|--------------|-------| | Module | Microsoft-Windows-Sysmon/Operational | Process loaded Dll | 7 |
df = spark.sql(
    '''
    SELECT `@timestamp`, Hostname, Image, Description
    FROM sdTable
    WHERE Channel = "Microsoft-Windows-Sysmon/Operational"
        AND EventID = 7
        AND (lower(Description) = "system.management.automation"
            OR lower(ImageLoaded) LIKE "%system.management.automation%")
        AND NOT Image LIKE "%powershell.exe"
    '''
)
df.show(10, False)
Analytic III Monitoring for PSHost* pipes is another interesting way to find other alternate PowerShell hosts in your environment. | Data source | Event Provider | Relationship | Event | |:------------|:---------------|--------------|-------| | Named pipe | Microsoft-Windows-Sysmon/Operational | Process created Pipe | 17 |
df = spark.sql(
    '''
    SELECT `@timestamp`, Hostname, Image, PipeName
    FROM sdTable
    WHERE Channel = "Microsoft-Windows-Sysmon/Operational"
        AND EventID = 17
        AND lower(PipeName) LIKE "\\\pshost%"
        AND NOT Image LIKE "%powershell.exe"
    '''
)
df.show(10, False)
References for LaTeX commands in MathJax: http://www.onemathematicalcat.org/MathJaxDocumentation/TeXSyntax.htm and http://oeis.org/wiki/List_of_LaTeX_mathematical_symbols
# define symbol
x = sympy.symbols('x')
print(type(x))
x

# define function
f = x**2 + 4*x
f

# differentiation
sympy.diff(f)

# simplify function
sympy.simplify(f)

# solve equation
from sympy import solve
solve(f)

# factorize
from sympy import factor
sympy.factor(f)

# partial differentiation
x, y = sympy.symbols('x y')
f = x**2 + 4*x*y + y**2
f

# sympy.diff(a, b) -- a: function, b: variable
sympy.diff(f, x)
sympy.diff(f, y)
scripts/[HYStudy 14th] SymPy, Matplotlib 1.ipynb
Lattecom/HYStudy
mit
Draw function graph
# draw a third-degree function
def f2(x):
    return x**3 + 2*x**2 - 20

x = np.linspace(-21, 21, 500)
y = f2(x)

plt.plot(x, y)
plt.show()
Gradient vector, quiver & contour plot
import numpy as np
import matplotlib as mpl
import matplotlib.pylab as plt

# function definition
def f(x, y):
    return 3*x**2 + 4*x*y + 4*y**2 - 50*x - 20*y + 100

# coordinate range
xx = np.linspace(-11, 16, 500)
yy = np.linspace(-11, 16, 500)

# make coordinate grid
X, Y = np.meshgrid(xx, yy)

# dependent variable on the grid
Z = f(X, Y)

from mpl_toolkits.mplot3d import Axes3D

# draw surface plot
fig = plt.figure(figsize=(15, 10))
fig.gca(projection='3d').plot_surface(X, Y, Z)
plt.xlabel('x')
plt.ylabel('y')
plt.show()

# gradient vector (x component)
def gx(x, y):
    return 6*x + 4*y - 50

# gradient vector (y component)
def gy(x, y):
    return 8*y + 4*x - 20

# gradient vector points and coordinates
xx2 = np.linspace(-10, 15, 10)
yy2 = np.linspace(-10, 15, 10)
X2, Y2 = np.meshgrid(xx2, yy2)
GX = gx(X2, Y2)
GY = gy(X2, Y2)

# gradient vector quiver plot
plt.figure(figsize=(10, 10))

## make contour plot
contour = plt.contour(X, Y, Z, cmap='pink',
                      levels=[-100, 0, 100, 200, 400, 800, 1600])

## contour plot labeling
plt.clabel(contour, inline=1, fontsize=10)

## make quiver plot
## plt.quiver(x, y, gx, gy): draw vector from (x, y) toward (gx, gy)
plt.quiver(X2, Y2, GX, GY, color='pink', width=0.003, scale=600)

## plot labeling
plt.axis('equal')
plt.xlabel('x')
plt.ylabel('y')
plt.show()
2. Read data The data are read from numpy npy files and wrapped as Datasets. Features (vertices) are normalized to have unit variance.
dss_train = []
dss_test = []
subjects = ['rid000005', 'rid000011', 'rid000014']

for subj in subjects:
    ds = Dataset(np.load('raiders/{subj}_run00_lh.npy'.format(subj=subj)))
    ds.fa['node_indices'] = np.arange(ds.shape[1], dtype=int)
    zscore(ds, chunks_attr=None)
    dss_train.append(ds)

    ds = Dataset(np.load('raiders/{subj}_run01_lh.npy'.format(subj=subj)))
    ds.fa['node_indices'] = np.arange(ds.shape[1], dtype=int)
    zscore(ds, chunks_attr=None)
    dss_test.append(ds)

# Each run has 336 time points and 10242 features per subject.
print(dss_train[0].shape)
print(dss_test[0].shape)
Tutorials/hyperalignment/hyperalignment_tutorial.ipynb
Summer-MIND/mind_2017
mit
3. Create SearchlightHyperalignment instance. The QueryEngine is used to find voxels/vertices within a searchlight. This SurfaceQueryEngine uses a searchlight radius of 5 mm based on the fsaverage surface.
sl_radius = 5.0
qe = SurfaceQueryEngine(read_surface('fsaverage.lh.surf.gii'), radius=sl_radius)

hyper = SearchlightHyperalignment(
    queryengine=qe,
    compute_recon=False,  # We don't need to project back from common space to subject space
    nproc=1,  # Number of processes to use. Change "Docker - Preferences - Advanced - CPUs" accordingly.
)
4. Create common template space with training data This step may take a long time. In my case it's 10 minutes with nproc=1.
# mappers = hyper(dss_train)
# h5save('mappers.hdf5.gz', mappers, compression=9)

mappers = h5load('mappers.hdf5.gz')  # load pre-computed mappers
5. Project testing data to the common space
dss_aligned = [mapper.forward(ds) for ds, mapper in zip(dss_test, mappers)]
_ = [zscore(ds, chunks_attr=None) for ds in dss_aligned]
6. Benchmark inter-subject correlations
def compute_average_similarity(dss, metric='correlation'):
    """
    Returns
    =======
    sim : ndarray
        A 1-D array with n_features elements, each element is the average
        pairwise correlation similarity on the corresponding feature.
    """
    n_features = dss[0].shape[1]
    sim = np.zeros((n_features, ))
    for i in range(n_features):
        data = np.array([ds.samples[:, i] for ds in dss])
        dist = pdist(data, metric)
        sim[i] = 1 - dist.mean()
    return sim

sim_test = compute_average_similarity(dss_test)
sim_aligned = compute_average_similarity(dss_aligned)

plt.figure(figsize=(6, 6))
plt.scatter(sim_test, sim_aligned)
plt.xlim([-.2, .5])
plt.ylim([-.2, .5])
plt.xlabel('Surface alignment', size='xx-large')
plt.ylabel('SL Hyperalignment', size='xx-large')
plt.title('Average pairwise correlation', size='xx-large')
plt.plot([-1, 1], [-1, 1], 'k--')
plt.show()
7. Benchmark movie segment classifications
def movie_segment_classification_no_overlap(dss, window_size=6, dist_metric='correlation'):
    """
    Parameters
    ==========
    dss : list of ndarray or Datasets
    window_size : int, optional
    dist_metric : str, optional

    Returns
    =======
    cv_results : ndarray
        An n_subjects x n_segments boolean array, 1 means correct classification.
    """
    dss = [ds.samples if hasattr(ds, 'samples') else ds for ds in dss]

    def flattern_movie_segment(ds, window_size=6):
        n_seg = ds.shape[0] // window_size
        ds = ds[:n_seg*window_size, :].reshape((n_seg, window_size, -1))
        ds = ds.reshape((n_seg, -1))
        return ds

    dss = [flattern_movie_segment(ds, window_size=window_size) for ds in dss]
    n_subj, n_seg = len(dss), dss[0].shape[0]
    ds_sum = np.sum(dss, axis=0)

    cv_results = np.zeros((n_subj, n_seg), dtype=bool)
    for i, ds in enumerate(dss):
        dist = cdist(ds, (ds_sum - ds) / float(n_subj - 1), dist_metric)
        predicted = np.argmin(dist, axis=1)
        acc = (predicted == np.arange(n_seg))
        cv_results[i, :] = acc
    return cv_results

acc_test = movie_segment_classification_no_overlap(dss_test)
acc_aligned = movie_segment_classification_no_overlap(dss_aligned)

print('Classification accuracy with surface alignment: %.1f%%' % (acc_test.mean()*100, ))
print('Classification accuracy with SL hyperalignment: %.1f%%' % (acc_aligned.mean()*100, ))
print('Classification accuracy with surface alignment per subject:', acc_test.mean(axis=1))
print('Classification accuracy with SL hyperalignment per subject:', acc_aligned.mean(axis=1))
<h3> Simulate some time-series data </h3> Essentially a set of sinusoids with random amplitudes and frequencies.
import tensorflow as tf
print(tf.__version__)

import numpy as np
import seaborn as sns

# SEQ_LEN is assumed to be defined in an earlier cell
def create_time_series():
    freq = (np.random.random()*0.5) + 0.1  # 0.1 to 0.6
    ampl = np.random.random() + 0.5        # 0.5 to 1.5
    noise = [np.random.random()*0.3 for i in range(SEQ_LEN)]  # 0 to 0.3, uniformly distributed
    x = np.sin(np.arange(0, SEQ_LEN) * freq) * ampl + noise
    return x

flatui = ["#9b59b6", "#3498db", "#95a5a6", "#e74c3c", "#34495e", "#2ecc71"]
for i in range(0, 5):  # 5 series
    sns.tsplot(create_time_series(), color=flatui[i % len(flatui)])

def to_csv(filename, N):
    with open(filename, 'w') as ofp:
        for lineno in range(0, N):
            seq = create_time_series()
            line = ",".join(map(str, seq))
            ofp.write(line + '\n')

import os
try:
    os.makedirs("data/sines/")
except OSError:
    pass

np.random.seed(1)  # makes data generation reproducible
to_csv("data/sines/train-1.csv", 1000)  # 1000 sequences
to_csv("data/sines/valid-1.csv", 250)

!head -5 data/sines/*-1.csv
courses/machine_learning/deepdive/09_sequence_keras/sinewaves.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
<h3> Train model locally </h3> Make sure the code works as intended.
%%bash
DATADIR=$(pwd)/data/sines
OUTDIR=$(pwd)/trained/sines
rm -rf $OUTDIR
gcloud ml-engine local train \
    --module-name=sinemodel.task \
    --package-path=${PWD}/sinemodel \
    -- \
    --train_data_path="${DATADIR}/train-1.csv" \
    --eval_data_path="${DATADIR}/valid-1.csv" \
    --output_dir=${OUTDIR} \
    --model=rnn2 --train_steps=10 --sequence_length=$SEQ_LEN
<h3> Cloud ML Engine </h3> Now to train on Cloud ML Engine with more data.
import shutil
shutil.rmtree(path="data/sines", ignore_errors=True)
os.makedirs("data/sines/")
np.random.seed(1)  # makes data generation reproducible
for i in range(0, 10):
    to_csv("data/sines/train-{}.csv".format(i), 1000)  # 1000 sequences
    to_csv("data/sines/valid-{}.csv".format(i), 250)

%%bash
gsutil -m rm -rf gs://${BUCKET}/sines/*
gsutil -m cp data/sines/*.csv gs://${BUCKET}/sines

%%bash
for MODEL in linear dnn cnn rnn rnn2; do
    OUTDIR=gs://${BUCKET}/sinewaves/${MODEL}
    JOBNAME=sines_${MODEL}_$(date -u +%y%m%d_%H%M%S)
    gsutil -m rm -rf $OUTDIR
    gcloud ml-engine jobs submit training $JOBNAME \
        --region=$REGION \
        --module-name=sinemodel.task \
        --package-path=${PWD}/sinemodel \
        --job-dir=$OUTDIR \
        --scale-tier=BASIC \
        --runtime-version=$TFVERSION \
        -- \
        --train_data_path="gs://${BUCKET}/sines/train*.csv" \
        --eval_data_path="gs://${BUCKET}/sines/valid*.csv" \
        --output_dir=$OUTDIR \
        --train_steps=3000 --sequence_length=$SEQ_LEN --model=$MODEL
done
Exercise: Write code for one more iteration with these same parameters and display the result.
x3 =  # Write the code for your calculations here

from pruebas_2 import prueba_2_1
prueba_2_1(x0, x1, x2, x3, _)
Practicas/.ipynb_checkpoints/Practica 2 - Solucion de ecuaciones diferenciales-checkpoint.ipynb
robblack007/clase-dinamica-robot
mit
Wait... what is going on? It turns out this $\Delta t$ is too large; let's try 20 iterations:
$$
\begin{align}
\Delta t &= 0.5 \\
x(0) &= 1
\end{align}
$$
x0 = 1
n = 20
Δt = 10/n
F = lambda x: -x

x1 = x0 + F(x0)*Δt
x1
x2 = x1 + F(x1)*Δt
x2
x3 = x2 + F(x2)*Δt
x3
Practicas/.ipynb_checkpoints/Practica 2 - Solucion de ecuaciones diferenciales-checkpoint.ipynb
robblack007/clase-dinamica-robot
mit
This is going to take a while; better to tell Python what to do and not be bothered until it finishes. We can use a for loop and a list to store all the values of the trajectory:
xs = [x0] for t in range(20): xs.append(xs[-1] + F(xs[-1])*Δt) xs
Practicas/.ipynb_checkpoints/Practica 2 - Solucion de ecuaciones diferenciales-checkpoint.ipynb
robblack007/clase-dinamica-robot
mit
Now that we have these values, we can plot the behavior of this system. First we import the matplotlib library:
%matplotlib inline from matplotlib.pyplot import plot
Practicas/.ipynb_checkpoints/Practica 2 - Solucion de ecuaciones diferenciales-checkpoint.ipynb
robblack007/clase-dinamica-robot
mit
We call the plot function:
plot(xs);
Practicas/.ipynb_checkpoints/Practica 2 - Solucion de ecuaciones diferenciales-checkpoint.ipynb
robblack007/clase-dinamica-robot
mit
However, because the integration step we used is too large, the solution is quite inaccurate. We can see this by plotting it against what we know is the solution to our problem:
from numpy import linspace, exp ts = linspace(0, 10, 20) plot(xs) plot(exp(-ts));
Practicas/.ipynb_checkpoints/Practica 2 - Solucion de ecuaciones diferenciales-checkpoint.ipynb
robblack007/clase-dinamica-robot
mit
If we now use a very large number of pieces, we can improve our approximation:
xs = [x0] n = 100 Δt = 10/n for t in range(100): xs.append(xs[-1] + F(xs[-1])*Δt) ts = linspace(0, 10, 100) plot(xs) plot(exp(-ts));
Practicas/.ipynb_checkpoints/Practica 2 - Solucion de ecuaciones diferenciales-checkpoint.ipynb
robblack007/clase-dinamica-robot
mit
odeint This method works so well that it already comes programmed in the scipy library, so we only have to import that library to use it. However, we must be careful when declaring the function $F(x, t)$. The first argument of the function must refer to the state of the function, that is $x$, and the second must be the independent variable, in our case the time.
from scipy.integrate import odeint F = lambda x, t : -x x0 = 1 ts = linspace(0, 10, 100) xs = odeint(func=F, y0=x0, t=ts) plot(ts, xs);
Practicas/.ipynb_checkpoints/Practica 2 - Solucion de ecuaciones diferenciales-checkpoint.ipynb
robblack007/clase-dinamica-robot
mit
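As a quick sanity check (a sketch reusing the same F as above), we can compare odeint's result against the known analytic solution $e^{-t}$; the adaptive solver is far more accurate than the fixed-step Euler scheme:

```python
import numpy as np
from scipy.integrate import odeint

F = lambda x, t: -x                      # same ODE as above: dx/dt = -x
ts = np.linspace(0, 10, 100)
xs = odeint(F, 1.0, ts).ravel()
err = np.max(np.abs(xs - np.exp(-ts)))   # maximum deviation from the analytic solution
```

With the default tolerances, err comes out many orders of magnitude smaller than the visible gap of the coarse Euler curve above.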
Exercise Plot the behavior of the following differential equation. $$ \dot{x} = x^2 - 5 x + \frac{1}{2} \sin{x} - 2 $$ Note: Make sure to import all the libraries you may need
ts = # Write here the code that generates an array of equidistant points (linspace) x0 = # Write the value of the initial condition # Import the functions from the libraries you need here G = lambda x, t: # Write here the code describing the calculations the function must perform xs = # Write here the command needed to simulate the differential equation plot(ts, xs); from pruebas_2 import prueba_2_2 prueba_2_2(ts, xs)
Practicas/.ipynb_checkpoints/Practica 2 - Solucion de ecuaciones diferenciales-checkpoint.ipynb
robblack007/clase-dinamica-robot
mit
Sympy And finally, there are times when we can even obtain an analytic solution of a differential equation, as long as it satisfies certain simplicity conditions.
from sympy import var, Function, dsolve from sympy.physics.mechanics import mlatex, mechanics_printing mechanics_printing() var("t") x = Function("x")(t) x, x.diff(t) solucion = dsolve(x.diff(t) + x, x) solucion
Practicas/.ipynb_checkpoints/Practica 2 - Solucion de ecuaciones diferenciales-checkpoint.ipynb
robblack007/clase-dinamica-robot
mit
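A good habit is to verify a symbolic solution by substituting it back into the equation; a short sketch using SymPy's checkodesol helper for the same ODE as above:

```python
from sympy import Function, checkodesol, dsolve, symbols

t = symbols("t")
x = Function("x")(t)
ode = x.diff(t) + x                   # the ODE dx/dt + x = 0 from above
sol = dsolve(ode, x)                  # Eq(x(t), C1*exp(-t))
ok, residual = checkodesol(ode, sol)  # substitute the solution back into the ODE
```

checkodesol returns a pair: a boolean flag and the residual left after substitution, which should simplify to zero.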
Exercise Implement the code needed to obtain the analytic solution of the following differential equation: $$ \dot{x} = x^2 - 5x $$
# Declare the independent variable of the differential equation var("") # Declare the dependent variable of the differential equation = Function("")() # Write the differential equation in the required format (Equation = 0) # inside the dsolve function sol = dsolve() sol from pruebas_2 import prueba_2_3 prueba_2_3(sol)
Practicas/.ipynb_checkpoints/Practica 2 - Solucion de ecuaciones diferenciales-checkpoint.ipynb
robblack007/clase-dinamica-robot
mit
Solving higher-order differential equations If we now want to obtain the behavior of a higher-order differential equation, such as: $$ \ddot{x} = -\dot{x} - x + 1 $$ we have to convert it into a first-order differential equation to be able to solve it numerically, which means turning it into a matrix differential equation. We start by writing it together with the identity $\dot{x} = \dot{x}$ as a system of equations: $$ \begin{align} \dot{x} &= \dot{x} \\ \ddot{x} &= -\dot{x} - x + 1 \end{align} $$ Factoring the derivative operator out on the left-hand side, we have: $$ \begin{align} \frac{d}{dt} x &= \dot{x} \\ \frac{d}{dt} \dot{x} &= -\dot{x} - x + 1 \end{align} $$ Or, in matrix form: $$ \frac{d}{dt} \begin{pmatrix} x \\ \dot{x} \end{pmatrix} = \begin{pmatrix} 0 & 1 \\ -1 & -1 \end{pmatrix} \begin{pmatrix} x \\ \dot{x} \end{pmatrix} + \begin{pmatrix} 0 \\ 1 \end{pmatrix} $$ This equation is no longer second order; it is in fact first order, but our variable has grown into a state vector, which for now we will call $X$, so we can write: $$ \frac{d}{dt} X = A X + B $$ where: $$ A = \begin{pmatrix} 0 & 1 \\ -1 & -1 \end{pmatrix} \quad \text{and} \quad B = \begin{pmatrix} 0 \\ 1 \end{pmatrix} $$ and, similarly, declare a function to hand to odeint.
from numpy import matrix, array def F(X, t): A = matrix([[0, 1], [-1, -1]]) B = matrix([[0], [1]]) return array((A*matrix(X).T + B).T).tolist()[0] ts = linspace(0, 10, 100) xs = odeint(func=F, y0=[0, 0], t=ts) plot(xs);
Practicas/.ipynb_checkpoints/Practica 2 - Solucion de ecuaciones diferenciales-checkpoint.ipynb
robblack007/clase-dinamica-robot
mit
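One way to sanity-check the simulation above: setting $\frac{d}{dt}X = 0$ gives the equilibrium $A X^* + B = 0$, so $x$ should settle at 1 and $\dot{x}$ at 0. A sketch of that check (using plain NumPy arrays instead of np.matrix):

```python
import numpy as np
from scipy.integrate import odeint

A = np.array([[0.0, 1.0], [-1.0, -1.0]])
B = np.array([0.0, 1.0])

def F(X, t):
    return A @ X + B                  # dX/dt = A X + B

ts = np.linspace(0, 40, 400)          # a longer horizon than above, so transients die out
xs = odeint(F, [0.0, 0.0], ts)
# solving A X* + B = 0 by hand gives X* = (1, 0)
```

Since the eigenvalues of A have negative real part, the numeric trajectory converges to that equilibrium.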
Exercise Implement the solution of the following differential equation using a state-space representation: $$ \ddot{x} = -8\dot{x} - 15x + 1 $$ Note: Take it slow and go step by step * Start by writing the differential equation in your notebook, next to the same identity as in the example * Factor the derivative out of the left-hand side to obtain the state of your system * Extract the matrices A and B corresponding to this system * Write the code needed to represent these matrices
def G(X, t): A = # Write here the code for the matrix A B = # Write here the code for the vector B return array((A*matrix(X).T + B).T).tolist()[0] ts = linspace(0, 10, 100) xs = odeint(func=G, y0=[0, 0], t=ts) plot(xs); from pruebas_2 import prueba_2_4 prueba_2_4(xs)
Practicas/.ipynb_checkpoints/Practica 2 - Solucion de ecuaciones diferenciales-checkpoint.ipynb
robblack007/clase-dinamica-robot
mit
Transfer functions However, this is not the easiest way to obtain the solution; we can also apply a Laplace transform and use the functions of the control library to simulate the transfer function of this equation. Applying the Laplace transform, we obtain: $$ G(s) = \frac{1}{s^2 + s + 1} $$
from control import tf, step F = tf([0, 0, 1], [1, 1, 1]) xs, ts = step(F) plot(ts, xs);
Practicas/.ipynb_checkpoints/Practica 2 - Solucion de ecuaciones diferenciales-checkpoint.ipynb
robblack007/clase-dinamica-robot
mit
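The step response above settles at the value predicted by the final-value theorem: for a unit step input, $\lim_{t\to\infty} x(t) = G(0)$. A quick numeric check with the same coefficient lists passed to tf:

```python
import numpy as np

num = [0, 0, 1]   # numerator coefficients of G(s) = 1 / (s^2 + s + 1)
den = [1, 1, 1]   # denominator coefficients
# final-value theorem: the steady state of the step response equals G(0)
g0 = np.polyval(num, 0) / np.polyval(den, 0)
# g0 -> 1.0, matching where the plotted step response flattens out
```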
Exercise Mathematically model the differential equation from the previous exercise using a transfer-function representation. Note: Again, don't despair; write down your differential equation and apply the Laplace transform just as your grandparents taught you so many years ago...
G = tf([], []) # Write the coefficients of the transfer function xs, ts = step(G) plot(ts, xs); from pruebas_2 import prueba_2_5 prueba_2_5(ts, xs)
Practicas/.ipynb_checkpoints/Practica 2 - Solucion de ecuaciones diferenciales-checkpoint.ipynb
robblack007/clase-dinamica-robot
mit
Introduction to Divide-and-Conquer Algorithms The subfamily of divide-and-conquer algorithms is one of the main paradigms of algorithmic problem solving, next to dynamic programming and greedy algorithms. The main idea behind divide-and-conquer algorithms is to replace computationally expensive, often infeasible brute-force methods such as exhaustive search by splitting a task into subtasks that can be solved independently and in parallel; the subsolutions are then combined to yield the final result. Example 1 -- Binary Search Let's say we want to implement an algorithm that returns the index position of an item that we are looking for in an array. Here, we assume that the array is already sorted. The simplest (and computationally most expensive) approach would be to check each element in the array iteratively until we find the desired match, or return -1:
def linear_search(lst, item): for i in range(len(lst)): if lst[i] == item: return i return -1 lst = [1, 5, 8, 12, 13] for k in [8, 1, 23, 11]: print(linear_search(lst=lst, item=k))
ipython_nbs/essentials/divide-and-conquer-algorithm-intro.ipynb
rasbt/algorithms_in_ipython_notebooks
gpl-3.0
The runtime of linear search is obviously $O(n)$, since we are checking each element in the array -- remember that big-Oh is our upper bound. Now, a cleverer way of implementing a search algorithm would be binary search, which is a simple yet nice example of a divide-and-conquer algorithm. The idea behind divide-and-conquer algorithms is to break a problem down into non-overlapping subproblems of the original problem, which we can then solve recursively. Once we have processed these recursive subproblems, we combine the solutions into the end result. Using a divide-and-conquer approach, we can implement an $O(\log n)$ search algorithm called binary search. The idea behind binary search is quite simple: We take the midpoint of the array and compare it to the search key If the search key is equal to the midpoint, we are done; else search key < midpoint? Yes: repeat the search (back to step 1) with the subarray that ends at index position midpoint - 1 No: repeat the search (back to step 1) with the subarray that starts at midpoint + 1 Assuming that we are looking for the search key k=5, the individual steps of binary search can be illustrated as follows: And below follows our Python implementation of this idea:
def binary_search(lst, item): first = 0 last = len(lst) - 1 found = False while first <= last and not found: midpoint = (first + last) // 2 if lst[midpoint] == item: found = True else: if item < lst[midpoint]: last = midpoint - 1 else: first = midpoint + 1 if found: return midpoint else: return -1 for k in [8, 1, 23, 11]: print(binary_search(lst=lst, item=k))
ipython_nbs/essentials/divide-and-conquer-algorithm-intro.ipynb
rasbt/algorithms_in_ipython_notebooks
gpl-3.0
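The iterative implementation above can equivalently be written recursively, which mirrors the divide-and-conquer description more directly (a sketch; the function name is my own):

```python
def binary_search_rec(lst, item, first=0, last=None):
    """Recursive variant of binary search; returns the index of item or -1."""
    if last is None:
        last = len(lst) - 1
    if first > last:                     # empty subarray: item is not present
        return -1
    midpoint = (first + last) // 2
    if lst[midpoint] == item:
        return midpoint
    if item < lst[midpoint]:
        return binary_search_rec(lst, item, first, midpoint - 1)
    return binary_search_rec(lst, item, midpoint + 1, last)

lst = [1, 5, 8, 12, 13]
results = [binary_search_rec(lst, k) for k in (8, 1, 23, 11)]
# results -> [2, 0, -1, -1], matching the iterative version
```

Each call halves the search range, so the recursion depth is $O(\log n)$.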
Example 2 -- Finding the Majority Element "Finding the Majority Element" is a problem where we want to find an element in an array of positive integers with length n that occurs more than n/2 times in that array. For example, if we have an array $a = [1, 2, 3, 3, 3]$, $3$ would be the majority element. In another array, b = [1, 2, 3, 3], there exists no majority element, since $2$ (the count of element $3$) is not greater than $n / 2$. Let's start with a simple implementation where we count how often each unique element occurs in the array. Then, we return the element that meets the criterion "$\text{occurrences} > n / 2$", and if such an element does not exist, we return -1. Note that we return a tuple of three items: (element, number_occurences, count_dictionary), which we will use later ...
def majority_ele_lin(lst): cnt = {} for ele in lst: if ele not in cnt: cnt[ele] = 1 else: cnt[ele] += 1 for ele, c in cnt.items(): if c > (len(lst) // 2): return (ele, c, cnt) return (-1, -1, cnt) ################################################### lst0 = [] print(lst0, '->', majority_ele_lin(lst=lst0)[0]) lst1 = [1, 2, 3, 4, 4, 5] print(lst1, '->', majority_ele_lin(lst=lst1)[0]) lst2 = [1, 2, 4, 4, 4, 5] print(lst2, '->', majority_ele_lin(lst=lst2)[0]) lst3 = [4, 2, 4, 4, 4, 5] print(lst3, '->', majority_ele_lin(lst=lst3)[0]) print(lst3[::-1], '->', majority_ele_lin(lst=lst3[::-1])[0]) lst4 = [2, 3, 9, 2, 2] print(lst4, '->',majority_ele_lin(lst=lst4)[0]) print(lst4[::-1], '->', majority_ele_lin(lst=lst4[::-1])[0]) lst5 = [0, 0, 2, 2, 2] print(lst5, '->',majority_ele_lin(lst=lst5)[0]) print(lst5[::-1], '->', majority_ele_lin(lst=lst5[::-1])[0])
ipython_nbs/essentials/divide-and-conquer-algorithm-intro.ipynb
rasbt/algorithms_in_ipython_notebooks
gpl-3.0
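For reference, the same linear count can be expressed compactly with the standard library's collections.Counter; this sketch returns only the element (the first item of majority_ele_lin's tuple):

```python
from collections import Counter

def majority_element(lst):
    if not lst:
        return -1
    ele, c = Counter(lst).most_common(1)[0]   # most frequent element and its count
    return ele if c > len(lst) // 2 else -1

# majority_element([2, 3, 9, 2, 2]) -> 2
# majority_element([1, 2, 3, 4, 4, 5]) -> -1
```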
Now, "finding the majority element" is a nice task for a divide-and-conquer algorithm. Here, we use the fact that if a list has a majority element, it is also the majority element of at least one of its two sublists if we split the list into 2 halves. More concretely, what we do is: Split the array into 2 halves Run the majority element search on each of the two halves Combine the 2 subresults: A. Neither of the 2 sub-arrays has a majority element; thus, the combined list can't have a majority element, so we return -1. B. The right sub-array has a majority element, whereas the left sub-array hasn't. Now, we need to take the count of this "right" majority element, add the number of times it occurs in the left sub-array, and check whether the combined count satisfies the "$\text{occurrences} > \frac{n}{2}$" criterion. C. Same as above but with the "left" and "right" sub-arrays swapped. D. Both sub-arrays have a majority element. Compute the combined count of each of these elements as before and check whether one of them satisfies the "$\text{occurrences} > \frac{n}{2}$" criterion.
def majority_ele_dac(lst): n = len(lst) left = lst[:n // 2] right = lst[n // 2:] l_maj = majority_ele_lin(left) r_maj = majority_ele_lin(right) # case 3A if l_maj[0] == -1 and r_maj[0] == -1: return -1 # case 3B elif l_maj[0] == -1 and r_maj[0] > -1: cnt = r_maj[1] if r_maj[0] in l_maj[2]: cnt += l_maj[2][r_maj[0]] if cnt > n // 2: return r_maj[0] # case 3C elif r_maj[0] == -1 and l_maj[0] > -1: cnt = l_maj[1] if l_maj[0] in r_maj[2]: cnt += r_maj[2][l_maj[0]] if cnt > n // 2: return l_maj[0] # case 3D else: c1, c2 = l_maj[1], r_maj[1] if l_maj[0] in r_maj[2]: c1 = l_maj[1] + r_maj[2][l_maj[0]] if r_maj[0] in l_maj[2]: c2 = r_maj[1] + l_maj[2][r_maj[0]] # return the majority *element*, not its count if c1 >= c2 and c1 > n // 2: return l_maj[0] if c2 > c1 and c2 > n // 2: return r_maj[0] return -1 ################################################### lst0 = [] print(lst0, '->', majority_ele_dac(lst=lst0)) lst1 = [1, 2, 3, 4, 4, 5] print(lst1, '->', majority_ele_dac(lst=lst1)) lst2 = [1, 2, 4, 4, 4, 5] print(lst2, '->', majority_ele_dac(lst=lst2)) lst3 = [4, 2, 4, 4, 4, 5] print(lst3, '->', majority_ele_dac(lst=lst3)) print(lst3[::-1], '->', majority_ele_dac(lst=lst3[::-1])) lst4 = [2, 3, 9, 2, 2] print(lst4, '->', majority_ele_dac(lst=lst4)) print(lst4[::-1], '->', majority_ele_dac(lst=lst4[::-1])) lst5 = [0, 0, 2, 2, 2] print(lst5, '->', majority_ele_dac(lst=lst5)) print(lst5[::-1], '->', majority_ele_dac(lst=lst5[::-1]))
ipython_nbs/essentials/divide-and-conquer-algorithm-intro.ipynb
rasbt/algorithms_in_ipython_notebooks
gpl-3.0
In algorithms such as binary search that we saw at the beginning of this notebook, we recursively break our problem down into smaller subproblems. Thus, we have a recurrence with time complexity $T(n) = T(\frac{n}{2}) + O(1) \rightarrow T(n) = O(\log n).$ In this example, finding the majority element, we break our problem down into 2 subproblems and do linear work to combine them. Thus, the complexity of our algorithm is $T(n) = 2T(\frac{n}{2}) + O(n) \rightarrow T(n) = O(n \log n).$ Adding multiprocessing Our divide-and-conquer approach above is actually a good candidate for multiprocessing, since we can parallelize the majority element search in the two sub-lists. So, let's make a simple modification and use Python's multiprocessing module for that. Here, we use the apply_async method from the Pool class, which doesn't return the results in order (in contrast to the apply method). Thus, the left sublist and right sublist may be swapped in the variable assignment l_maj, r_maj = [p.get() for p in results]. However, for our implementation, this doesn't make a difference.
import multiprocessing as mp def majority_ele_dac_mp(lst): n = len(lst) left = lst[:n // 2] right = lst[n // 2:] # the worker pool must be created before tasks can be submitted to it with mp.Pool(processes=2) as pool: results = [pool.apply_async(majority_ele_lin, args=(x,)) for x in (left, right)] l_maj, r_maj = [p.get() for p in results] if l_maj[0] == -1 and r_maj[0] == -1: return -1 elif l_maj[0] == -1 and r_maj[0] > -1: cnt = r_maj[1] if r_maj[0] in l_maj[2]: cnt += l_maj[2][r_maj[0]] if cnt > n // 2: return r_maj[0] elif r_maj[0] == -1 and l_maj[0] > -1: cnt = l_maj[1] if l_maj[0] in r_maj[2]: cnt += r_maj[2][l_maj[0]] if cnt > n // 2: return l_maj[0] else: c1, c2 = l_maj[1], r_maj[1] if l_maj[0] in r_maj[2]: c1 = l_maj[1] + r_maj[2][l_maj[0]] if r_maj[0] in l_maj[2]: c2 = r_maj[1] + l_maj[2][r_maj[0]] # return the majority *element*, not its count if c1 >= c2 and c1 > n // 2: return l_maj[0] if c2 > c1 and c2 > n // 2: return r_maj[0] return -1 ################################################### lst0 = [] print(lst0, '->', majority_ele_dac_mp(lst=lst0)) lst1 = [1, 2, 3, 4, 4, 5] print(lst1, '->', majority_ele_dac_mp(lst=lst1)) lst2 = [1, 2, 4, 4, 4, 5] print(lst2, '->', majority_ele_dac_mp(lst=lst2)) lst3 = [4, 2, 4, 4, 4, 5] print(lst3, '->', majority_ele_dac_mp(lst=lst3)) print(lst3[::-1], '->', majority_ele_dac_mp(lst=lst3[::-1])) lst4 = [2, 3, 9, 2, 2] print(lst4, '->', majority_ele_dac_mp(lst=lst4)) print(lst4[::-1], '->', majority_ele_dac_mp(lst=lst4[::-1])) lst5 = [0, 0, 2, 2, 2] print(lst5, '->', majority_ele_dac_mp(lst=lst5)) print(lst5[::-1], '->', majority_ele_dac_mp(lst=lst5[::-1]))
ipython_nbs/essentials/divide-and-conquer-algorithm-intro.ipynb
rasbt/algorithms_in_ipython_notebooks
gpl-3.0
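As an aside, the classic Boyer-Moore voting algorithm solves the same problem in O(n) time and O(1) extra space; a sketch (the second pass is needed because the surviving candidate is only a *possible* majority element):

```python
def majority_boyer_moore(lst):
    candidate, count = None, 0
    for x in lst:                         # voting pass
        if count == 0:
            candidate = x
        count += 1 if x == candidate else -1
    # verification pass
    if candidate is not None and lst.count(candidate) > len(lst) // 2:
        return candidate
    return -1

# majority_boyer_moore([0, 0, 2, 2, 2]) -> 2
# majority_boyer_moore([1, 2, 3, 4, 4, 5]) -> -1
```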
SQLAlchemy SQLAlchemy is a commonly used database toolkit. Unlike many database libraries, it not only provides an ORM (object-relational mapping) layer but also a generalized API for writing database-agnostic code without raw SQL. $ pip install sqlalchemy Example
from sqlalchemy import create_engine, ForeignKey from sqlalchemy import Column, Date, Integer, String from sqlalchemy.ext.declarative import declarative_base # engine.dispose() engine = create_engine('sqlite:///userlist.db', echo=True) Base = declarative_base() class User(Base): __tablename__ = 'users' id = Column(Integer, primary_key=True) name = Column(String) fullname = Column(String) password = Column(String) def __repr__(self): return "<User(name='%s', fullname='%s', password='%s')>" % (self.name, self.fullname, self.password) Base.metadata.create_all(engine) from sqlalchemy.orm import sessionmaker Session = sessionmaker(bind=engine) session = Session() session_dec = Session() ed_user = User(name='Meenu', fullname='Meenakshi Johri', password='meenuInIndia') print(ed_user) session.add(ed_user) print("ed_user.id before flush:", ed_user.id) session.add(User(name='GV', fullname='GV', password='gv@ibm')) session.flush() print("ed_user.id after flush:", ed_user.id) # Now let's commit the changes: session.commit() # SQLAlchemy sends the COMMIT statement that permanently commits the flushed changes and ends the transaction. # Delete # To delete the ed_user object from the database you would use: session.delete(ed_user) session.flush() # At this point you can either commit the transaction or do a rollback.
# Here we commit the delete: session.commit() session.close() engine.dispose() # from sqlalchemy import create_engine, ForeignKey # from sqlalchemy import Column, Date, Integer, String # from sqlalchemy.ext.declarative import declarative_base # from sqlalchemy.orm import relationship # # engine.dispose() # engine = create_engine('sqlite:///userlist.db', echo=True) # Base = declarative_base() # class User(Base): # __tablename__ = 'users' # id = Column(Integer, primary_key=True) # name = Column(String) # def __repr__(self): # return "<User(name='%s', fullname='%s', password='%s')>" % (self.name, self.fullname, self.password) # class Address(Base): # __tablename__ = 'address' # address_id = Column(Integer, primary_key=True) # house_name = Column(String) # house_no = Column(String) # city = Column(String) # user_id = Column(Integer, ForeignKey('User.id')) # Base.metadata.create_all(engine) # from sqlalchemy.orm import sessionmaker # Session = sessionmaker(bind=engine) # session = Session() # session_dec = Session() # ed_user = User(name='Meenu', fullname='Meenakshi Johri', password='meenuInIndia') # # ed_user.address = Address(house_no="20", house_name="Raj Ghar", city= "Jaipur") # session.commit() # session.close() # engine.dispose()
Section 2 - Advance Python/Chapter S2.04 - Database/Databases.ipynb
mayankjohri/LetsExplorePython
gpl-3.0
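For comparison, here is roughly what the ORM calls above translate to in raw SQL, using only the standard library's sqlite3 module (an in-memory database, so the sketch is self-contained):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT, fullname TEXT, password TEXT)"
)
# the ORM's session.add(...) + session.commit() becomes INSERT + COMMIT
conn.execute(
    "INSERT INTO users (name, fullname, password) VALUES (?, ?, ?)",
    ("Meenu", "Meenakshi Johri", "meenuInIndia"),
)
conn.commit()
row = conn.execute(
    "SELECT name, fullname FROM users WHERE name = ?", ("Meenu",)
).fetchone()
# row -> ('Meenu', 'Meenakshi Johri')
conn.close()
```

SQLAlchemy generates and executes SQL like this for you, while keeping your application code in terms of Python objects.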
Records Records is a minimalist SQL library designed for sending raw SQL queries to various databases. Query results can be used programmatically or exported to a number of useful data formats. $ pip install records Also included is a command-line tool for exporting SQL data.
import json # https://docs.python.org/3/library/json.html import requests # https://github.com/kennethreitz/requests import records # https://github.com/kennethreitz/records # randomuser.me generates random 'user' data (name, email, addr, phone number, etc) r = requests.get('http://api.randomuser.me/0.6/?nat=us&results=3') j = r.json()['results'] # Valid SQLite URL forms are: # sqlite:///:memory: (or, sqlite://) # sqlite:///relative/path/to/file.db # sqlite:////absolute/path/to/file.db # records will create this db on disk if 'users.db' doesn't exist already db = records.Database('sqlite:///users.db') db.query('DROP TABLE IF EXISTS persons') db.query('CREATE TABLE persons (key int PRIMARY KEY, fname text, lname text, email text)') for rec in j: user = rec['user'] name = user['name'] key = user['registered'] fname = name['first'] lname = name['last'] email = user['email'] db.query('INSERT INTO persons (key, fname, lname, email) VALUES(:key, :fname, :lname, :email)', key=key, fname=fname, lname=lname, email=email) rows = db.query('SELECT * FROM persons') print(rows.export('csv'))
Section 2 - Advance Python/Chapter S2.04 - Database/Databases.ipynb
mayankjohri/LetsExplorePython
gpl-3.0
SQLObject SQLObject is yet another ORM. It supports a wide variety of databases: common systems such as MySQL, Postgres and SQLite, and more exotic ones like SAP DB, Sybase and MSSQL. SQLObject is a popular object-relational mapper that provides an object interface to your database, with tables as classes, rows as instances, and columns as attributes. SQLObject includes a Python-object-based query language that makes SQL more abstract and provides substantial database independence for applications.
import sqlobject from sqlobject.sqlite import builder conn = builder()('sqlobject_demo.db') class PhoneNumber(sqlobject.SQLObject): _connection = conn number = sqlobject.StringCol(length=14, unique=True) owner = sqlobject.StringCol(length=255) lastCall = sqlobject.DateTimeCol(default=None) PhoneNumber.createTable(ifNotExists=True) myPhone = PhoneNumber(number='(415) 555-1212', owner='Leonard Richardson') # Creating a second row with the same (unique) number results in an error duplicatePhone = PhoneNumber(number="(415) 555-1212")
Section 2 - Advance Python/Chapter S2.04 - Database/Databases.ipynb
mayankjohri/LetsExplorePython
gpl-3.0
Defining relationships among tables SQLObject lets you define relationships among tables as foreign keys
import sqlobject from sqlobject.sqlite import builder conn = builder()('sqlobject_demo_relationships.db') class PhoneNumber(sqlobject.SQLObject): _connection = conn number = sqlobject.StringCol(length=14, unique=True) owner = sqlobject.ForeignKey('Person') lastCall = sqlobject.DateTimeCol(default=None) class Person(sqlobject.SQLObject): _idName='fooID' _connection = conn name = sqlobject.StringCol(length=255) #The SQLObject-defined name for the "owner" field of PhoneNumber #is "owner_id" since it's a reference to another table's primary #key. numbers = sqlobject.MultipleJoin('PhoneNumber', joinColumn='owner_id') Person.createTable(ifNotExists=True) PhoneNumber.createTable(ifNotExists=True) person = Person(name='Vinay') p = PhoneNumber(number="2222", owner=person)
Section 2 - Advance Python/Chapter S2.04 - Database/Databases.ipynb
mayankjohri/LetsExplorePython
gpl-3.0
There are many query services nowadays, so for each service and query method we can implement a corresponding stock-query class through inheritance. For example, the query classes for WebA and WebB can be constructed as follows:
class WebAStockQueryDevice(StockQueryDevice): def login(self,usr,pwd): if usr=="myStockA" and pwd=="myPwdA": print ("Web A:Login OK... user:%s pwd:%s"%(usr,pwd)) return True else: print ("Web A:Login ERROR... user:%s pwd:%s"%(usr,pwd)) return False def queryPrice(self): print ("Web A Querying...code:%s "%self.stock_code) self.stock_price=20.00 def showPrice(self): print ("Web A Stock Price...code:%s price:%s"%(self.stock_code,self.stock_price)) class WebBStockQueryDevice(StockQueryDevice): def login(self,usr,pwd): if usr=="myStockB" and pwd=="myPwdB": print ("Web B:Login OK... user:%s pwd:%s"%(usr,pwd)) return True else: print ("Web B:Login ERROR... user:%s pwd:%s"%(usr,pwd)) return False def queryPrice(self): print ("Web B Querying...code:%s "%self.stock_code) self.stock_price=30.00 def showPrice(self): print ("Web B Stock Price...code:%s price:%s"%(self.stock_code,self.stock_price))
DesignPattern/TemplatePattern.ipynb
gaufung/Data_Analytics_Learning_Note
mit
In this scenario, querying a stock on website A requires the following operations:
web_a_query_dev=WebAStockQueryDevice() web_a_query_dev.login("myStockA","myPwdA") web_a_query_dev.setCode("12345") web_a_query_dev.queryPrice() web_a_query_dev.showPrice()
DesignPattern/TemplatePattern.ipynb
gaufung/Data_Analytics_Learning_Note
mit
Every operation calls login, set the code, query, and display; that is a bit tedious, isn't it? If so, why not wrap these steps into a single interface. Since the operation flow in each subclass basically follows this procedure, the method can be written in the parent class:
class StockQueryDevice(): stock_code="0" stock_price=0.0 def login(self,usr,pwd): pass def setCode(self,code): self.stock_code=code def queryPrice(self): pass def showPrice(self): pass def operateQuery(self,usr,pwd,code): self.login(usr,pwd) self.setCode(code) self.queryPrice() self.showPrice() return True web_a_query_dev=WebAStockQueryDevice() web_a_query_dev.operateQuery("myStockA","myPwdA","12345")
DesignPattern/TemplatePattern.ipynb
gaufung/Data_Analytics_Learning_Note
mit
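This is the Template Method pattern: the base class fixes the skeleton of the algorithm (operateQuery) while subclasses fill in the individual steps. With the abc module the required steps can be made explicit; a minimal sketch (simplified names, not the original example's classes):

```python
from abc import ABC, abstractmethod

class QueryDevice(ABC):
    def operate_query(self, usr, pwd, code):  # the template method
        if not self.login(usr, pwd):
            return False
        self.stock_code = code
        self.query_price()
        return True

    @abstractmethod
    def login(self, usr, pwd): ...            # step supplied by subclasses

    @abstractmethod
    def query_price(self): ...                # step supplied by subclasses

class DemoQueryDevice(QueryDevice):
    def login(self, usr, pwd):
        return usr == "demo"

    def query_price(self):
        self.stock_price = 42.0               # stand-in for a real web query

dev = DemoQueryDevice()
ok = dev.operate_query("demo", "pwd", "12345")
# ok -> True, dev.stock_code -> "12345", dev.stock_price -> 42.0
```

Marking the steps @abstractmethod makes forgetting to implement one a loud error at instantiation time rather than a silent no-op.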
Set up the time-space properties and create the domain:
import numpy as np from porousmedialab.column import Column # assumed imports, per the PorousMediaLab package layout t = 27 / 365 dx = 0.2 L = 40 phi = 0.8 dt = 1e-4 ftc = Column(L, dx, t, dt)
examples/Column - Freeze-Thaw.ipynb
biogeochemistry/PorousMediaLab
mit
To make things interesting, let's create non-uniform initial conditions for iron:
x = np.linspace(0, L, int(L / dx) + 1) Fe3_init = np.zeros(x.size) Fe3_init[x > 5] = 75 Fe3_init[x > 15] = 0 Fe3_init[x > 25] = 75 Fe3_init[x > 35] = 0
examples/Column - Freeze-Thaw.ipynb
biogeochemistry/PorousMediaLab
mit
Adding species with names, diffusion coefficients, initial concentrations, and top and bottom boundary conditions:
ftc.add_species(theta=phi, name='O2', D=368, init_conc=0, bc_top_value=0.231, bc_top_type='dirichlet', bc_bot_value=0, bc_bot_type='flux') ftc.add_species(theta=phi, name='TIC', D=320, init_conc=0, bc_top_value=0, bc_top_type='flux', bc_bot_value=0, bc_bot_type='flux') ftc.add_species(theta=phi, name='Fe2', D=127, init_conc=0, bc_top_value=0, bc_top_type='flux', bc_bot_value=0, bc_bot_type='flux') ftc.add_species(theta=1-phi, name='OM', D=1e-18, init_conc=15, bc_top_value=0, bc_top_type='flux', bc_bot_value=0, bc_bot_type='flux') ftc.add_species(theta=1-phi, name='FeOH3', D=1e-18, init_conc=Fe3_init, bc_top_value=0, bc_top_type='flux', bc_bot_value=0, bc_bot_type='flux') ftc.add_species(theta=phi, name='CO2g', D=320, init_conc=0, bc_top_value=0, bc_top_type='flux', bc_bot_value=0, bc_bot_type='flux') ftc.henry_equilibrium('TIC', 'CO2g', 0.2*0.83)
examples/Column - Freeze-Thaw.ipynb
biogeochemistry/PorousMediaLab
mit
Specify the constants used in the rates:
ftc.constants['k_OM'] = 1 ftc.constants['Km_O2'] = 1e-3 ftc.constants['Km_FeOH3'] = 2 ftc.constants['k8'] = 1.4e+5 ftc.constants['Q10'] = 4 ### added ftc.constants['CF'] = (1-phi)/phi ### conversion factor
examples/Column - Freeze-Thaw.ipynb
biogeochemistry/PorousMediaLab
mit
Simulate temperature with a thermal diffusivity coefficient of 281000 and an initial and boundary temperature of 5 °C:
ftc.add_species(theta=0.99, name='Temperature', D=281000, init_conc=5, bc_top_value=5., bc_top_type='constant', bc_bot_value=0, bc_bot_type='flux')
examples/Column - Freeze-Thaw.ipynb
biogeochemistry/PorousMediaLab
mit
Add Q10 factor:
ftc.rates['R1'] = 'Q10**((Temperature-5)/10) * k_OM * OM * O2 / (Km_O2 + O2)' ftc.rates['R2'] = 'Q10**((Temperature-5)/10) * k_OM * OM * FeOH3 / (Km_FeOH3 + FeOH3) * Km_O2 / (Km_O2 + O2)' ftc.rates['R8'] = 'k8 * O2 * Fe2'
examples/Column - Freeze-Thaw.ipynb
biogeochemistry/PorousMediaLab
mit
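The Q10**((Temperature-5)/10) factor in R1 and R2 scales the rates with temperature relative to the 5 °C reference: every 10 °C change multiplies (or divides) the rate by Q10. A quick numeric check with Q10 = 4, as set in the constants above:

```python
Q10, T_ref = 4.0, 5.0

def q10_factor(T):
    """Temperature scaling of a reaction rate relative to T_ref."""
    return Q10 ** ((T - T_ref) / 10.0)

# q10_factor(5) -> 1.0, q10_factor(15) -> 4.0, q10_factor(-5) -> 0.25
```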
ODEs for specific species:
ftc.dcdt['OM'] = '-R1-R2' ftc.dcdt['O2'] = '-R1-R8' ftc.dcdt['FeOH3'] = '-4*R2+R8/CF' ftc.dcdt['Fe2'] = '-R8+4*R2*CF' ftc.dcdt['TIC'] = 'R1+R2*CF'
examples/Column - Freeze-Thaw.ipynb
biogeochemistry/PorousMediaLab
mit
Because we are changing the boundary conditions for temperature and oxygen (when T < 0, there is no oxygen flux at the top), we need a time loop:
# %pdb for i in range(1, len(ftc.time)): day_of_bi_week = (ftc.time[i]*365) % 14 if day_of_bi_week < 7: ftc.Temperature.bc_top_value = 5 + 5 * np.sin(np.pi * 2 * ftc.time[i] * 365) else: ftc.Temperature.bc_top_value = -10 + 5 * np.sin(np.pi * 2 * ftc.time[i] * 365) # when T < 0 => 0 flux of oxygen and CO2 at the top: if ftc.Temperature.bc_top_value < 0: ftc.change_boundary_conditions('O2', i, bc_top_value=0, bc_top_type='flux', bc_bot_value=0, bc_bot_type='flux') ftc.change_boundary_conditions('CO2g', i, bc_top_value=0, bc_top_type='flux', bc_bot_value=0, bc_bot_type='flux') else: ftc.change_boundary_conditions('O2', i, bc_top_value=0.231, bc_top_type='constant', bc_bot_value=0, bc_bot_type='flux') ftc.change_boundary_conditions('CO2g', i, bc_top_value=0, bc_top_type='constant', bc_bot_value=0, bc_bot_type='flux') # Integrate one timestep: ftc.integrate_one_timestep(i)
examples/Column - Freeze-Thaw.ipynb
biogeochemistry/PorousMediaLab
mit
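The boundary forcing in the loop above alternates a warm week (5 °C base) and a cold week (-10 °C base) with a ±5 °C daily sine on top; a standalone sketch of just that signal (pure stdlib, with day measured in days):

```python
import math

def top_temperature(day):
    """Surface temperature forcing: bi-weekly freeze-thaw cycle plus a daily sine."""
    base = 5.0 if (day % 14) < 7 else -10.0
    return base + 5.0 * math.sin(2 * math.pi * day)

# warm-week peak, e.g. day 3.25: 5 + 5*sin(pi/2) = 10 degC
# cold-week peak, e.g. day 10.25: -10 + 5 = -5 degC, still below freezing
```

Because the cold-week base of -10 °C exceeds the ±5 °C daily amplitude, the surface never thaws during a cold week, which is what shuts off the O2 and CO2 exchange in the loop.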
What we did with temperature
ftc.plot_depths("Temperature",[0,1,3,7,10,40])
examples/Column - Freeze-Thaw.ipynb
biogeochemistry/PorousMediaLab
mit
Concentrations of different species during the whole period of simulation:
ftc.plot_contourplots()
examples/Column - Freeze-Thaw.ipynb
biogeochemistry/PorousMediaLab
mit
The rates of consumption and production of species:
ftc.reconstruct_rates() ftc.plot_contourplots_of_rates() ftc.plot_contourplots_of_deltas()
examples/Column - Freeze-Thaw.ipynb
biogeochemistry/PorousMediaLab
mit
Profiles at the end of the simulation
Fx = ftc.estimate_flux_at_top('CO2g')
ftc.custom_plot(ftc.time*365, 1e+3*Fx*1e+4/365/24/60/60,
                x_lbl='Days, [day]',
                y_lbl='$F_{CO_2}$, $[\mu mol$ $m^{-2}$ $s^{-1}]$')
Fxco2 = 1e+3*Fx*1e+4/365/24/60/60
Fxco2nz = (ftc.time*365 < 7)*Fxco2 + ((ftc.time*365 > 14) & (ftc.time*365 < 21))*Fxco2

import seaborn as sns

fig, ax1 = plt.subplots(figsize=(5, 3), dpi=200)
ax2 = ax1.twinx()
ax1.plot(ftc.time*365, Fxco2nz, label='$F_{CO_2}$', lw=3)
ax2.plot(ftc.time*365, ftc.Temperature.concentration[0, :], 'k', lw=1, label='T at 0 cm')
ax2.plot(ftc.time*365, ftc.Temperature.concentration[100, :], ls='-',
         c=sns.color_palette("deep", 10)[3], lw=2, label='T at 20 cm')
# ax1.scatter(NO3_t, NO3, c=sns.color_palette("deep", 10)[0], lw=1)
ax2.grid(False)
ax1.grid(lw=0.2)
ax2.set_ylim(-20, 20)
ax1.set_xlim(0, 27)
ax1.set_xlabel('Time, [days]')
ax1.set_ylabel('$CO_2(g)$ flux, $[\mu mol$ $m^{-2}$ $s^{-1}]$')
ax2.set_ylabel('Temperature, [C]')
ax1.set_ylim(0, 20)
ax1.legend(frameon=1, loc=2)
ax2.legend(frameon=1, loc=1)

import math
from matplotlib.colors import ListedColormap

lab = ftc
element = 'Fe2'
labels = False
days = False
last_year = False

plt.figure(figsize=(5, 3), dpi=200)
# plt.title('$Fe(II)$ concentration')
resolution = 100
n = math.ceil(lab.time.size / resolution)
if last_year:
    k = n - int(1 / lab.dt)
else:
    k = 1
if days:
    X, Y = np.meshgrid(lab.time[k::n] * 365, -lab.x)
    plt.xlabel('Time')
else:
    X, Y = np.meshgrid(lab.time[k::n] * 365, -lab.x)
    plt.xlabel('Time, [days]')
z = lab.species[element]['concentration'][:, k - 1:-1:n]
CS = plt.contourf(X, Y, z, 51,
                  cmap=ListedColormap(sns.color_palette("Blues", 51)),
                  origin='lower')
if labels:
    plt.clabel(CS, inline=1, fontsize=10, colors='w')
cbar = plt.colorbar(CS)
plt.ylabel('Depth, [cm]')
ax = plt.gca()
ax.ticklabel_format(useOffset=False)
cbar.ax.set_ylabel('%s, [mM]' % element)

plt.figure(figsize=(5, 3), dpi=200)
r = 'R2'
n = math.ceil(lab.time.size / resolution)
if last_year:
    k = n - int(1 / lab.dt)
else:
    k = 1
z = lab.estimated_rates[r][:, k - 1:-1:n]
# lim = np.max(np.abs(z))
# lim = np.linspace(-lim - 0.1, +lim + 0.1, 51)
X, Y = np.meshgrid(lab.time[k::n], -lab.x)
plt.xlabel('Time, [days]')
CS = plt.contourf(X*365, Y, z/365, 20,
                  cmap=ListedColormap(sns.color_palette("Blues", 51)))
if labels:
    plt.clabel(CS, inline=1, fontsize=10, colors='w')
cbar = plt.colorbar(CS)
plt.ylabel('Depth, [cm]')
ax = plt.gca()
ax.ticklabel_format(useOffset=False)
cbar.ax.set_ylabel(r'Rate R2, [$mM$ $d^{-1}$]')

plt.figure(figsize=(5, 3), dpi=200)
element = 'FeOH3'
resolution = 100
n = math.ceil(lab.time.size / resolution)
if last_year:
    k = n - int(1 / lab.dt)
else:
    k = 1
z = lab.species[element]['rates'][:, k - 1:-1:n] / 365
lim = np.max(np.abs(z))
lim = np.linspace(-lim, +lim, 51)
X, Y = np.meshgrid(lab.time[k:-1:n], -lab.x)
plt.xlabel('Time, [days]')
CS = plt.contourf(X*365, Y, z, 20,
                  cmap=ListedColormap(sns.color_palette("RdBu_r", 101)),
                  origin='lower', levels=lim, extend='both')
if labels:
    plt.clabel(CS, inline=1, fontsize=10, colors='w')
cbar = plt.colorbar(CS)
plt.ylabel('Depth, [cm]')
ax = plt.gca()
ax.ticklabel_format(useOffset=False)
cbar.ax.set_ylabel('$\Delta$ $Fe(OH)_3$ [$mM$ $d^{-1}$]')
examples/Column - Freeze-Thaw.ipynb
biogeochemistry/PorousMediaLab
mit
Collocations between two data arrays

Let's try the simplest case: you have two xarray datasets with temporal-spatial data and you want to find collocations between them. First, we create two example xarray datasets with fake measurements. Assume these datasets represent measurements from two different instruments (e.g. on satellites). Each measurement has a time attribute indicating when it was taken and a geo-location (latitude and longitude) indicating where it happened. Note that the lat and lon variables must share their first dimension with the time coordinate.
# Create the data
primary = xr.Dataset(
    coords={
        "lat": (('along_track'), 30.*np.sin(np.linspace(-3.14, 3.14, 24))+20),
        "lon": (('along_track'), np.linspace(0, 90, 24)),
        "time": (('along_track'), np.arange("2018-01-01", "2018-01-02", dtype="datetime64[h]")),
    },
    data_vars={
        "Temperature": (("along_track"), np.random.normal(290, 5, (24))),
    }
)
secondary = xr.Dataset(
    coords={
        "lat": (('along_track'), 30.*np.sin(np.linspace(-3.14, 3.14, 24)+1.)+20),
        "lon": (('along_track'), np.linspace(0, 90, 24)),
        "time": (('along_track'), np.arange("2018-01-01", "2018-01-02", dtype="datetime64[h]")),
    },
    data_vars={
        "Temperature": (("along_track"), np.random.normal(290, 5, (24))),
    }
)

# Plot the data
fig = plt.figure(figsize=(10, 10))
wmap = worldmap(primary["lat"], primary["lon"], s=24, bg=True)
worldmap(secondary["lat"], secondary["lon"], s=24, ax=wmap.axes,)
doc/tutorials/collocations.ipynb
atmtools/typhon
mit
Now, let's find all measurements of primary that have a maximum distance of 600 kilometers to the measurements of secondary:
collocator = Collocator(name='primary_secondary_collocator')
collocations = collocator.collocate(
    primary=('primary', primary),
    secondary=('secondary', secondary),
    max_distance=600,  # collocation radius in km
)
print(f'Found collocations are {collocations["Collocations/distance"].values} km apart')
collocations
doc/tutorials/collocations.ipynb
atmtools/typhon
mit
The obtained collocations dataset contains variables from three groups: primary, secondary and Collocations. The first two correspond to the variables of the two respective input datasets and contain only the matched data points. The Collocations group adds new variables with information about the collocations, e.g. the temporal and spatial distances. Additional information can be found in the typhon documentation.

Let's mark the collocations with red crosses on the map:
def collocations_wmap(collocations):
    fig = plt.figure(figsize=(10, 10))
    # Plot the collocations
    wmap = worldmap(
        collocations['primary/lat'], collocations['primary/lon'],
        facecolor="r", s=128, marker='x', bg=True
    )
    worldmap(
        collocations['secondary/lat'], collocations['secondary/lon'],
        facecolor="r", s=128, marker='x', bg=True, ax=wmap.axes
    )
    # Plot all points:
    worldmap(primary["lat"], primary["lon"], s=24, ax=wmap.axes,)
    worldmap(secondary["lat"], secondary["lon"], s=24, ax=wmap.axes,)
    wmap.axes.set(ylim=[-15, 55], xlim=[-10, 100])

collocations_wmap(collocations)
doc/tutorials/collocations.ipynb
atmtools/typhon
mit
We can also add a temporal filter that discards all matches whose difference in time is larger than a given interval. We do this with max_interval. Note that our test data is sampled very sparsely in time.
collocations = collocator.collocate(
    primary=('primary', primary),
    secondary=('secondary', secondary),
    max_distance=300,  # collocation radius in km
    max_interval=timedelta(hours=1),  # temporal collocation interval as timedelta
)
print(
    f'Found collocations are {collocations["Collocations/distance"].values} km apart in space '
    f'and {collocations["Collocations/interval"].values} hours apart in time.'
)
collocations_wmap(collocations)
doc/tutorials/collocations.ipynb
atmtools/typhon
mit
As mentioned in :func:collocate, the collocations are returned in compact format, i.e. an efficient way to store the collocated data. When several data points in the secondary group collocate with a single observation of the primary group, it is not obvious how this should be handled. The compact format accounts for this by introducing the Collocations/pairs variable, which contains the respective indices of the collocated data points. This might not be the most practical solution. In practice, the two functions expand and collapse offer two convenient ways to handle this. Applying expand to the collocations will repeat data points for cases where one data point matches with several data points of the other dataset.
expand(collocations)
doc/tutorials/collocations.ipynb
atmtools/typhon
mit
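The idea behind the pairs variable can be illustrated with plain NumPy. This is a conceptual sketch with made-up indices, not typhon's actual internals: row 0 of a hypothetical pairs array indexes the primary data, row 1 the secondary, and "expanding" simply gathers data by those indices, repeating points as needed.

```python
import numpy as np

# Hypothetical pairs array: primary point 0 collocates with
# two secondary points (5 and 6), primary point 2 with one (7).
pairs = np.array([[0, 0, 2],
                  [5, 6, 7]])

primary_temp = np.array([290.0, 291.0, 292.0])
secondary_temp = np.arange(280.0, 290.0)

# Gather the data by the pair indices; primary point 0 appears twice
expanded_primary = primary_temp[pairs[0]]
expanded_secondary = secondary_temp[pairs[1]]
print(expanded_primary)
print(expanded_secondary)
```

Each row of the expanded arrays then corresponds to exactly one collocated pair.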
Applying collapse to the collocations will calculate some generic statistics (mean, std, count) over the datapoints that match with a single data point of the other dataset.
collapse(collocations)
doc/tutorials/collocations.ipynb
atmtools/typhon
mit
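Conceptually (a pandas sketch with made-up numbers, not typhon's implementation), collapsing amounts to a group-by over the primary index of each pair, followed by the generic statistics:

```python
import pandas as pd

# Hypothetical collocations: primary point 0 matched two secondary points
df = pd.DataFrame({
    'primary_index': [0, 0, 2],
    'secondary_temp': [285.0, 287.0, 281.0],
})

# mean/std/count per primary data point, as collapse reports them
stats = df.groupby('primary_index')['secondary_temp'].agg(['mean', 'std', 'count'])
print(stats)
```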
Purely temporal collocations are not implemented yet; attempting them raises a NotImplementedError.

Find collocations between two filesets

Normally, one has the data stored in a set of many files. typhon provides an object to handle such filesets (see the typhon documentation). It is very simple to find collocations between them. First, we need to create FileSet objects and let them know where to find their files:
fh = NetCDF4()
fh.write(secondary, 'testdata/secondary/2018/01/01/000000-235959.nc')

# Create the filesets objects and point them to the input files
a_fileset = FileSet(
    name="primary",
    path="testdata/primary/{year}/{month}/{day}/"
         "{hour}{minute}{second}-{end_hour}{end_minute}{end_second}.nc",
    # handler=handlers.NetCDF4,
)
b_fileset = FileSet(
    name="secondary",
    path="testdata/secondary/{year}/{month}/{day}/"
         "{hour}{minute}{second}-{end_hour}{end_minute}{end_second}.nc",
    # handler=handlers.NetCDF4,
)
doc/tutorials/collocations.ipynb
atmtools/typhon
mit
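The placeholders in such a path template are filled from a file's start and end timestamps. Conceptually (plain Python string formatting, not typhon's internals):

```python
from datetime import datetime

# Hypothetical reconstruction of how a FileSet path template is filled
template = ("testdata/primary/{year}/{month:02d}/{day:02d}/"
            "{hour:02d}{minute:02d}{second:02d}-"
            "{end_hour:02d}{end_minute:02d}{end_second:02d}.nc")

start = datetime(2018, 1, 1, 0, 0, 0)
end = datetime(2018, 1, 1, 23, 59, 59)

path = template.format(
    year=start.year, month=start.month, day=start.day,
    hour=start.hour, minute=start.minute, second=start.second,
    end_hour=end.hour, end_minute=end.minute, end_second=end.second,
)
print(path)
```

This is why the fileset above can locate the file written earlier at testdata/secondary/2018/01/01/000000-235959.nc.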
Now, we can search for collocations between a_fileset and b_fileset and store them in ab_collocations.
# Create the output dataset:
ab_collocations = Collocations(
    name="ab_collocations",
    path="testdata/ab_collocations/{year}/{month}/{day}/"
         "{hour}{minute}{second}-{end_hour}{end_minute}{end_second}.nc",
)
ab_collocations.search(
    [a_fileset, b_fileset],
    start="2018", end="2018-01-02",
    max_interval=timedelta(hours=1),
    max_distance=300,
)
fh.read('testdata/primary/2018/01/01/000000-235959.nc')
doc/tutorials/collocations.ipynb
atmtools/typhon
mit
Exercise

Try out these commands to see what they return:

data.head()
data.tail(3)
data.shape
data.shape
notebooks/Introduction to Pandas.ipynb
fonnesbeck/scientific-python-workshop
cc0-1.0
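The three commands can be sketched against a small made-up frame (the real data table in this notebook has more columns, but the behavior is the same):

```python
import pandas as pd

# Hypothetical ten-row frame to exercise the three commands
data = pd.DataFrame({'value': range(10)})

print(data.head())    # first 5 rows by default
print(data.tail(3))   # last 3 rows
print(data.shape)     # (number of rows, number of columns)
```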
It's important to note that the Series returned when a DataFrame is indexed is merely a view on the DataFrame, and not a copy of the data itself. So you must be cautious when manipulating this data:
vals = data.value
vals
vals[5] = 0
vals
data
notebooks/Introduction to Pandas.ipynb
fonnesbeck/scientific-python-workshop
cc0-1.0
If we plan on modifying an extracted Series, it's a good idea to make a copy.
vals = data.value.copy()
vals[5] = 1000
data
notebooks/Introduction to Pandas.ipynb
fonnesbeck/scientific-python-workshop
cc0-1.0
Exercise

From the data table above, create an index to return all rows for which the phylum name ends in "bacteria" and the value is greater than 1000.
# Write your answer here
notebooks/Introduction to Pandas.ipynb
fonnesbeck/scientific-python-workshop
cc0-1.0
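One possible answer, sketched against a hypothetical stand-in for the bacteria table (assuming, as earlier in the notebook, that the frame has phylum and value columns):

```python
import pandas as pd

# Hypothetical stand-in for the table from earlier in the notebook
data = pd.DataFrame({
    'phylum': ['Firmicutes', 'Proteobacteria', 'Actinobacteria', 'Bacteroidetes'],
    'value': [632, 1638, 569, 14],
})

# Combine the two boolean conditions with & (note the parentheses)
mask = data.phylum.str.endswith('bacteria') & (data.value > 1000)
result = data[mask]
print(result)
```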
Importing data

A key, but often under-appreciated, step in data analysis is importing the data that we wish to analyze. Though it is easy to load basic data structures into Python using built-in tools or those provided by packages like NumPy, it is non-trivial to import structured data well, and to easily convert this input into a robust data structure:

genes = np.loadtxt("genes.csv", delimiter=",", dtype=[('gene', '|S10'), ('value', '<f4')])

Pandas provides a convenient set of functions for importing tabular data in a number of formats directly into a DataFrame object. These functions include a slew of options to perform type inference, indexing, parsing, iterating and cleaning automatically as data are imported. Let's start with some more bacteria data, stored in csv format.
!cat ../data/microbiome.csv
notebooks/Introduction to Pandas.ipynb
fonnesbeck/scientific-python-workshop
cc0-1.0
This table can be read into a DataFrame using read_csv:
mb = pd.read_csv("../data/microbiome.csv")
mb
notebooks/Introduction to Pandas.ipynb
fonnesbeck/scientific-python-workshop
cc0-1.0
Notice that read_csv automatically considered the first row in the file to be a header row. We can override the default behavior by customizing some of the arguments, like header, names or index_col.
pd.read_csv("../data/microbiome.csv", header=None).head()
notebooks/Introduction to Pandas.ipynb
fonnesbeck/scientific-python-workshop
cc0-1.0
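The names argument can be sketched with a hypothetical headerless CSV fragment (read from memory here; the real file path and column names differ):

```python
import io
import pandas as pd

# Hypothetical headerless CSV fragment
raw = io.StringIO("1,Firmicutes,632\n2,Proteobacteria,1638")

# With header=None alone, pandas numbers the columns 0, 1, 2;
# names= supplies our own labels instead
mb = pd.read_csv(raw, header=None, names=['Patient', 'Taxon', 'Count'])
print(mb)
```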
read_csv is just a convenience function for read_table, since csv is such a common format:
mb = pd.read_table("../data/microbiome.csv", sep=',')
notebooks/Introduction to Pandas.ipynb
fonnesbeck/scientific-python-workshop
cc0-1.0
The sep argument can be customized as needed to accommodate arbitrary separators. For example, we can use a regular expression to define a variable amount of whitespace, which is unfortunately very common in some data formats:

sep='\s+'

For a more useful index, we can specify the first two columns, which together provide a unique index to the data.
mb = pd.read_csv("../data/microbiome.csv", index_col=['Patient','Taxon'])
mb.head()
notebooks/Introduction to Pandas.ipynb
fonnesbeck/scientific-python-workshop
cc0-1.0
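The variable-whitespace case can be sketched with an in-memory file (hypothetical data, not the microbiome file itself):

```python
import io
import pandas as pd

# Hypothetical table with a varying number of spaces between columns
raw = io.StringIO(
    "Taxon            Patient  Count\n"
    "Firmicutes       1        632\n"
    "Proteobacteria   1        1638\n"
)

# The regex separator swallows any run of whitespace between fields
df = pd.read_table(raw, sep=r'\s+')
print(df)
```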