Exercise 3: The function parse given below is from the notebook Shift-Reduce-Parser.ipynb. Adapt this function so that it does not just return True or False but rather returns a parse tree as a nested list. The key idea is that the list Symbols should now be a list of parse trees and tokens instead of just syntacti...
def parse(self, TL):
    """
    Edit this code so that it returns a parse tree.  Make use of the
    auxiliary function combine_trees that you have to implement in Exercise 4.
    """
    index   = 0      # points to next token
    Symbols = []     # stack of symbols
    States  = ['s0'] # stack of states, s0 is st...
ANTLR4-Python/SLR-Parser-Generator/Shift-Reduce-Parser-AST.ipynb
Danghor/Formal-Languages
gpl-2.0
Exercise 4: Given a list of tokens and parse trees TL the function combine_trees combines these trees into a new parse tree. The parse trees are represented as nested tuples. The data type of a nested tuple is defined recursively: - A nested tuple is a tuple of the form (Head,) + Body where * Head is a string and ...
def combine_trees(TL):
    if len(TL) == 0:
        return ()
    if isinstance(TL, str):
        return (str(TL),)
    Literals = [t for t in TL if isinstance(t, str)]
    Trees    = [t for t in TL if not isinstance(t, str)]
    if len(Literals) > 0:
        label = Literals[0]
    else:
        label = ''
    res...
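The truncated tail of combine_trees presumably assembles the label and the subtrees into one nested tuple. A possible completion is sketched below; the final return statement is my assumption, not the original code:

```python
def combine_trees(TL):
    # combine a list of tokens (strings) and parse trees (tuples) into
    # one tree; the first literal, if any, becomes the label of the new root
    if len(TL) == 0:
        return ()
    if isinstance(TL, str):
        return (str(TL),)
    Literals = [t for t in TL if isinstance(t, str)]
    Trees    = [t for t in TL if not isinstance(t, str)]
    label = Literals[0] if Literals else ''
    return (label,) + tuple(Trees)  # assumed completion

print(combine_trees(['+', ('NUMBER', 1), ('NUMBER', 2)]))
# → ('+', ('NUMBER', 1), ('NUMBER', 2))
```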
Exercise 5: The function simplify_tree(tree) transforms the parse tree tree into an abstract syntax tree. The parse tree tree is represented as a nested tuple of the form tree = (head,) + body The function should simplify the tree as follows: - If head == '' and body is a tuple of length 2 that starts with an empty st...
def simplify_tree(tree):
    if isinstance(tree, int) or isinstance(tree, str):
        return tree
    head, *body = tree
    if body == []:
        return tree
    if head == '' and len(body) == 2 and body[0] == ('',):
        return simplify_tree(body[1])
    if head in VoidKeys and len(body) == 1:
        return si...
Testing The notebook ../AST-2-Dot.ipynb implements the function tuple2dot(nt) that displays the nested tuple nt as a tree via graphviz.
%run ../AST-2-Dot.ipynb

cat -n Examples/sum-for.sl

def test(file):
    with open(file, 'r', encoding='utf-8') as file:
        program = file.read()
    parser = ShiftReduceParser(actionTable, gotoTable)
    TL = tokenize(program)
    st = parser.parse(TL)
    ast = simplify_tree(st)
    return st, ast
Calling the function test below should produce the following nested tuple as parse tree: ('', ('', ('', ('function', ('ID', 'sum'), ('', ('ID', 'n')), ('', ('', ('', ('',), (';', (':=', ('ID', 's'), ('', ('', ('', ('NUMBER', 0))))))), ('for', (':=', ('ID', 'i'), ('', ('', ('', ('NUMBER', 1))))), ('', ('', ('', ('≤', ('...
st, ast = test('Examples/sum-for.sl')
print(st)
print(ast)
display(tuple2dot(st))
display(tuple2dot(ast))
Checkerboard Write a Python function that creates a square (size, size) 2d NumPy array with the values 0.0 and 1.0. Your function should work for both odd and even size. The (0, 0) element should be 1.0. The dtype should be float.
# there's got to be a more efficient way using some sort
# of list comprehension
def checkerboard(size):
    cb = np.ones((size, size), dtype=float)
    for i in range(size):
        for j in range(size):
            if (i + j) % 2 == 1:
                cb[i, j] = 0.0
    return cb

checkerboard(4)
a = checkerboard...
assignments/assignment03/NumpyEx01.ipynb
aschaffn/phys202-2015-work
mit
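As the comment in the cell above suspects, the double loop can be vectorized. One way to do it, sketched with np.indices (not necessarily the intended solution):

```python
import numpy as np

def checkerboard_vec(size):
    # (i + j) even -> 1.0, so the (0, 0) element is 1.0; dtype is float
    i, j = np.indices((size, size))
    return ((i + j) % 2 == 0).astype(float)

print(checkerboard_vec(4))
```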
Use vizarray to visualize a checkerboard of size=20 with a block size of 10px.
va.set_block_size(10)
va.enable()
checkerboard(20)

assert True
Use vizarray to visualize a checkerboard of size=27 with a block size of 5px.
va.set_block_size(5)
va.enable()
checkerboard(27)

assert True
Load the original raw data
import json
import pprint
pp = pprint.PrettyPrinter(indent=2)

path = "./pixnet.txt"
all_content = sc.textFile(path).map(json.loads).map(parseRaw)
3.AnalysisArticle_HTML.ipynb
texib/spark_tutorial
gpl-2.0
Use the lxml parser to analyze the article structure. lxml.html and urlparse must be imported inside the function so that they are available during RDD computation. Other ways to import Python packages: see Submitting Applications, use spark-submit --py-files to add .py, .zip or .egg files to be distributed with your application. lxml.html.fromstring takes an HTML string as input and returns an object that can be processed with XPath. XPath syntax Ref_1, Ref_2 / Selects fr...
def parseImgSrc(x):
    try:
        urls = list()
        import lxml.html
        from urlparse import urlparse
        node = lxml.html.fromstring(x)
        root = node.getroottree()
        for src in root.xpath('//img/@src'):
            try:
                host = urlparse(src).netloc
                if '.' no...
Extract the list of image src values
image_list = all_content.map(lambda x: parseImgSrc(x[1]))
pp.pprint(image_list.first()[:10])
Count the occurrences of each image src
img_src_count = all_content.map(
    lambda x: parseImgSrc(x[1])).flatMap(
    lambda x: x).countByValue()

for i in img_src_count:
    print i, ':', img_src_count[i]
<span style="color: blue">Use reduceByKey and sortBy to compute an img src ranking</span> See the documentation at [http://spark.apache.org/docs/latest/api/python/pyspark.html#pyspark.RDD]. Several ways to sort an RDD: sort by key using sortByKey: sc.parallelize(tmp).sortByKey(True, 1).collect(); sort using sortBy: sc.parallelize(tmp).sortBy(lambda x: x[0]).collect(); using take...
from operator import add

all_content.map(
    lambda x: parseImgSrc(x[1])).flatMap(
    lambda x: x).map(
    lambda x: (x, 1)).reduceByKey(add).sortBy(
    lambda x: x[1], ascending=False).collect()
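What the reduceByKey/sortBy pipeline computes can be mimicked in plain Python, without Spark, using some made-up src values:

```python
from collections import defaultdict

srcs = ['a.jpg', 'b.jpg', 'a.jpg', 'c.jpg', 'a.jpg', 'b.jpg']  # hypothetical sample

# map to (src, 1) and reduceByKey with add: sum the ones per key
counts = defaultdict(int)
for s in srcs:
    counts[s] += 1

# sortBy(lambda x: x[1], ascending=False): order by count, largest first
ranking = sorted(counts.items(), key=lambda kv: kv[1], reverse=True)
print(ranking)  # → [('a.jpg', 3), ('b.jpg', 2), ('c.jpg', 1)]
```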
Packaging the Model In order to train on a TPU, we'll need to set up a python module for training. The skeleton for this has already been built out in tpu_models with the data processing functions from the previous lab copied into <a href="tpu_models/trainer/util.py">util.py</a>. Similarly, the model building and traini...
%%writefile tpu_models/trainer/task.py
import argparse
import json
import os
import sys

import tensorflow as tf

from . import model
from . import util


def _parse_arguments(argv):
    """Parses command-line arguments."""
    parser = argparse.ArgumentParser()
    parser.add_argument(
        '--epochs',
        help...
courses/machine_learning/deepdive2/image_classification/labs/4_tpu_training.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
The TPU server Before we can start training with this code, we need a way to pull in MobileNet. When working with TPUs in the cloud, the TPU will not have access to the VM's local file directory, since the TPU worker acts as a server. Because of this, all data used by our model must be hosted on an outside storage system...
!wget https://tfhub.dev/google/imagenet/mobilenet_v2_100_224/feature_vector/4?tf-hub-format=compressed
This model is still compressed, so let's uncompress it with the tar command below and place it in our tpu_models directory.
%%bash
rm -r tpu_models/hub
mkdir tpu_models/hub
tar xvzf 4?tf-hub-format=compressed -C tpu_models/hub/
Finally, we need to transfer our materials to the TPU. We'll use GCS as a go-between, using gsutil cp to copy everything.
!gsutil rm -r gs://$BUCKET/tpu_models
!gsutil cp -r tpu_models gs://$BUCKET/tpu_models
Spinning up a TPU Time to wake up a TPU! Open the Google Cloud Shell and copy the gcloud compute command below. Say 'Yes' to the prompts to spin up the TPU.

gcloud compute tpus execution-groups create \
 --name=my-tpu \
 --zone=us-central1-b \
 --tf-version=2.3.2 \
 --machine-type=n1-standard-1 \
 --accelerator-type=v3...
!echo "gsutil cp -r gs://$BUCKET/tpu_models ."
Time to shine, TPU! Run the cell below and copy the output into your TPU terminal. Training will be slow at first, but it will pick up speed after a few minutes once the TensorFlow graph has been built out. TODO: Complete the code below by adding flags for tpu_address and the hub_path. Have another look at task.py to s...
%%bash
export TPU_NAME=my-tpu
echo "export TPU_NAME="$TPU_NAME
echo "python3 -m tpu_models.trainer.task \
 # TODO: Your code goes here \
 # TODO: Your code goes here \
 --job-dir=gs://$BUCKET/flowers_tpu_$(date -u +%y%m%d_%H%M%S)"
3. Undirected graph The following undirected graph:

    |---0---|
    |       |
    |       |
    1-------2
    |       |
    |       |
    3-------4
    |
    |
    5

can be defined as:
adj_undirected = np.array([[0, 1, 1, 0, 0, 0],
                           [1, 0, 1, 1, 0, 0],
                           [1, 1, 0, 0, 1, 0],
                           [0, 1, 0, 0, 1, 1],
                           [0, 0, 1, 1, 0, 0],
                           [0, 0, 0, 1, 0, 0]])
undirected_graph = UndirectedGraph(a...
menpo/Shape/Graph.ipynb
grigorisg9gr/menpo-notebooks
bsd-3-clause
or
adj_undirected = csr_matrix(([1] * 14, ([0, 1, 0, 2, 1, 2, 1, 3, 2, 4, 3, 4, 3, 5],
                                        [1, 0, 2, 0, 2, 1, 3, 1, 4, 2, 4, 3, 5, 3])),
                            shape=(6, 6))
undirected_graph = UndirectedGraph(adj_undirected)
print(undirected_graph)
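Independently of Menpo, a quick NumPy sanity check on the matrix above: an undirected adjacency matrix must be symmetric, and each undirected edge contributes two entries, so the edge count is half the sum:

```python
import numpy as np

adj_undirected = np.array([[0, 1, 1, 0, 0, 0],
                           [1, 0, 1, 1, 0, 0],
                           [1, 1, 0, 0, 1, 0],
                           [0, 1, 0, 0, 1, 1],
                           [0, 0, 1, 1, 0, 0],
                           [0, 0, 0, 1, 0, 0]])

assert (adj_undirected == adj_undirected.T).all()  # symmetric
print(adj_undirected.sum() // 2)  # number of undirected edges → 7
```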
4. Isolated vertices Note that any directed or undirected graph (not a tree) can have isolated vertices, i.e. vertices with no edge connections. For example the following undirected graph:

```
0---|
    |
    |
1   2
    |
    |
3-------4
...
```
adj_isolated = np.array([[0, 0, 1, 0, 0, 0],
                         [0, 0, 0, 0, 0, 0],
                         [1, 0, 0, 0, 1, 0],
                         [0, 0, 0, 0, 1, 0],
                         [0, 0, 1, 1, 0, 0],
                         [0, 0, 0, 0, 0, 0]])
isolated_graph = UndirectedGraph(adj_isolated)
pr...
or
adj_isolated = csr_matrix(([1] * 6, ([0, 2, 2, 4, 3, 4],
                                     [2, 0, 4, 2, 4, 3])),
                          shape=(6, 6))
isolated_graph = UndirectedGraph(adj_isolated)
print(isolated_graph)
5. Directed graph The following directed graph:

    |-->0<--|
    |       |
    |       |
    1<----->2
    |       |
    v       v
    3------>4
    |
    v
    5

can be defined as:
adj_directed = np.array([[0, 0, 0, 0, 0, 0],
                         [1, 0, 1, 1, 0, 0],
                         [1, 1, 0, 0, 1, 0],
                         [0, 0, 0, 0, 1, 1],
                         [0, 0, 0, 0, 0, 0],
                         [0, 0, 0, 0, 0, 0]])
directed_graph = DirectedGraph(adj_directed)
prin...
or
adj_directed = csr_matrix(([1] * 8, ([1, 2, 1, 2, 1, 2, 3, 3],
                                     [0, 0, 2, 1, 3, 4, 4, 5])),
                          shape=(6, 6))
directed_graph = DirectedGraph(adj_directed)
print(directed_graph)
6. Tree A Tree in Menpo is defined as a directed graph, thus Tree is a subclass of DirectedGraph. The following tree:

            0
            |
         ___|___
        1       2
        |       |
       _|_      |
      3   4     5
      |   |     |
      |   |     |
      6   7     8

can be defined a...
adj_tree = np.array([[0, 1, 1, 0, 0, 0, 0, 0, 0],
                     [0, 0, 0, 1, 1, 0, 0, 0, 0],
                     [0, 0, 0, 0, 0, 1, 0, 0, 0],
                     [0, 0, 0, 0, 0, 0, 1, 0, 0],
                     [0, 0, 0, 0, 0, 0, 0, 1, 0],
                     [0, 0, 0, 0, 0, 0, 0, 0, 1],
                     ...
or
adj_tree = csr_matrix(([1] * 8, ([0, 0, 1, 1, 2, 3, 4, 5],
                                 [1, 2, 3, 4, 5, 6, 7, 8])),
                      shape=(9, 9))
tree = Tree(adj_tree, root_vertex=0)
print(tree)
7. Basic graph properties Below we show how to retrieve basic properties from all the previously defined graphs, i.e. undirected_graph, isolated_graph, directed_graph and tree of Sections 3, 4, 5 and 6 respectively. Number of vertices and edges For all the above defined graphs, we can get the number of vertices $|V|$ a...
print("The undirected_graph has {} vertices and {} edges.".format(undirected_graph.n_vertices, undirected_graph.n_edges))
print("The isolated_graph has {} vertices and {} edges.".format(isolated_graph.n_vertices, isolated_graph.n_edges))
print("The directed_graph has {} vertices and {} edges.".format(directed_graph.n_v...
Sets of vertices and edges We can also get the sets of vertices and edges, i.e. $V$ and $E$ respectively, as:
print("undirected_graph: The set of vertices $V$ is")
print(undirected_graph.vertices)
print("and the set of edges $E$ is")
print(undirected_graph.edges)
Adjacency list We can also retrieve the adjacency list, i.e. a list that for each vertex stores the list of its neighbours (or children in the case of directed graphs). For example:
print("The adjacency list of the undirected_graph is {}.".format(undirected_graph.get_adjacency_list()))
print("The adjacency list of the directed_graph is {}.".format(directed_graph.get_adjacency_list()))
Isolated vertices There are methods to check and retrieve isolated vertices, for example:
print("Has the undirected_graph any isolated vertices? {}.".format(undirected_graph.has_isolated_vertices()))
print("Has the isolated_graph any isolated vertices? {}, it has {}.".format(isolated_graph.has_isolated_vertices(), isolated_graph.is...
Neighbours and is_edge We can check if a pair of vertices are connected with an edge:
i = 4
j = 7
print("Are vertices {} and {} of the tree connected? {}.".format(i, j, tree.is_edge(i, j)))

i = 5
j = 1
print("Are vertices {} and {} of the directed_graph connected? {}.".format(i, j, directed_graph.is_edge(i, j)))
We can also retrieve whether a vertex has neighbours (or children) and who are they, as:
v = 1
print("How many neighbours does vertex {} of the isolated_graph have? {}.".format(v, isolated_graph.n_neighbours(v)))
print("How many children does vertex {} of the directed_graph have? {}, they are {}.".format(v, directed_graph.n_children(v), ...
Cycles and trees We can check whether a graph has cycles
print("Does the undirected_graph have cycles? {}.".format(undirected_graph.has_cycles()))
print("Does the isolated_graph have cycles? {}.".format(isolated_graph.has_cycles()))
print("Does the directed_graph have cycles? {}.".format(directed_graph.has_cycles()))
print("Does the tree have cycles? {}.".format(tree.has_cyc...
and, of course whether a graph is a tree
print("Is the undirected_graph a tree? {}.".format(undirected_graph.is_tree()))
print("Is the directed_graph a tree? {}.".format(directed_graph.is_tree()))
print("Is the tree a tree? {}.".format(tree.is_tree()))
8. Basic tree properties Menpo's Tree instance has additional basic properties. Predecessors list Apart from the adjacency list mentioned above, a tree can also be represented by a predecessors list, i.e. a list that stores the parent for each vertex. None denotes the root vertex. For example
print(tree.predecessors_list)
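As a Menpo-independent sanity check, the predecessors list can be read straight off the tree's adjacency matrix: the parent of vertex j is the row i with a 1 in column j, and the root has no parent. A sketch for the tree of Section 6:

```python
import numpy as np

# adjacency matrix of the Section 6 tree: entry (i, j) == 1 means i is the parent of j
adj_tree = np.zeros((9, 9), dtype=int)
for i, j in [(0, 1), (0, 2), (1, 3), (1, 4), (2, 5), (3, 6), (4, 7), (5, 8)]:
    adj_tree[i, j] = 1

# parent of j = the unique i with adj_tree[i, j] == 1; None marks the root
predecessors = [int(np.flatnonzero(adj_tree[:, j])[0]) if adj_tree[:, j].any() else None
                for j in range(9)]
print(predecessors)  # → [None, 0, 0, 1, 1, 2, 3, 4, 5]
```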
Depth We can find the maximum depth of a tree
print(tree.maximum_depth)
as well as the depth of a specific vertex
print("The depth of vertex 4 is {}.".format(tree.depth_of_vertex(4)))
print("The depth of vertex 0 is {}.".format(tree.depth_of_vertex(0)))
Leaves Finally, we can get the number of leaves as well as whether a specific vertex is a leaf (has no children):
print("The tree has {} leaves.".format(tree.n_leaves))
print("Is vertex 7 a leaf? {}.".format(tree.is_leaf(7)))
Text Searcher with TensorFlow Lite Model Maker
!sudo apt -y install libportaudio2
!pip install -q tflite-model-maker
!pip install gdown
tensorflow/lite/g3doc/models/modify/model_maker/text_searcher.ipynb
tensorflow/tensorflow
apache-2.0
Import the required packages.
from tflite_model_maker import searcher
Prepare the dataset This tutorial uses the CNN / Daily Mail summarization dataset from the GitHub repo. First, download the text and urls of cnn and dailymail and unzip them. If the download from Google Drive fails, please wait a few minutes and try again, or download the files manually and then upload them to the c...
!gdown https://drive.google.com/uc?id=0BwmD_VLjROrfTHk4NFg2SndKcjQ
!gdown https://drive.google.com/uc?id=0BwmD_VLjROrfM1BxdkxVaTY2bWs
!wget -O all_train.txt https://raw.githubusercontent.com/abisee/cnn-dailymail/master/url_lists/all_train.txt

!tar xzf cnn_stories.tgz
!tar xzf dailymail_stories.tgz
Then, save the data into a CSV file that can be loaded by the tflite_model_maker library. The code is based on the logic used to load this data in tensorflow_datasets. We can't use tensorflow_datasets directly since it doesn't contain the urls, which are used in this colab. Since it takes a long time to process the data into...
#@title Save the highlights and urls to the CSV file
#@markdown Load the highlights from the stories of CNN / Daily Mail, map urls with highlights, and save them to the CSV file.

CNN_FRACTION = 0.05 #@param {type:"number"}
DAILYMAIL_FRACTION = 0.05 #@param {type:"number"}

import csv
import hashlib
import os
import te...
Build the text Searcher model Create a text Searcher model by loading a dataset, creating a model with the data and exporting the TFLite model. Step 1. Load the dataset Model Maker takes the text dataset and the corresponding metadata of each text string (such as urls in this example) in the CSV format. It embeds the t...
!wget -O universal_sentence_encoder.tflite https://storage.googleapis.com/download.tensorflow.org/models/tflite_support/searcher/text_to_image_blogpost/text_embedder.tflite
Create a searcher.TextDataLoader instance and use data_loader.load_from_csv method to load the dataset. It takes ~10 minutes for this step since it generates the embedding feature vector for each text one by one. You can try to upload your own CSV file and load it to build the customized model as well. Specify the name...
data_loader = searcher.TextDataLoader.create("universal_sentence_encoder.tflite", l2_normalize=True)
data_loader.load_from_csv("cnn_dailymail.csv", text_column="highlights", metadata_column="urls")
For image use cases, you can create a searcher.ImageDataLoader instance and then use data_loader.load_from_folder to load images from the folder. The searcher.ImageDataLoader instance needs to be created by a TFLite embedder model because it will be leveraged to encode queries to feature vectors and be exported with th...
scann_options = searcher.ScaNNOptions(
    distance_measure="dot_product",
    tree=searcher.Tree(num_leaves=140, num_leaves_to_search=4),
    score_ah=searcher.ScoreAH(dimensions_per_block=1, anisotropic_quantization_threshold=0.2))

model = searcher.Searcher.create_from_data(data_loader, scann_options)
In the above example, we define the following options:
* distance_measure: we use "dot_product" to measure the distance between two embedding vectors. Note that we actually compute the negative dot product value to preserve the notion that "smaller is closer".
* tree: the dataset is divided into 140 partiti...
model.export(
    export_filename="searcher.tflite",
    userinfo="",
    export_format=searcher.ExportFormat.TFLITE)
Test the TFLite model on your query You can test the exported TFLite model using custom query text. To query text using the Searcher model, initialize the model and run a search with text phrase, as follows:
from tflite_support.task import text

# Initializes a TextSearcher object.
searcher = text.TextSearcher.create_from_file("searcher.tflite")

# Searches the input query.
results = searcher.search("The Airline Quality Rankings Report looks at the 14 largest U.S. airlines.")
print(results)
Write to a file Create a new file overwriting any previous file with the same name, write text, then close the file:
new_file_path = 'hello_world.txt'

with open(new_file_path, 'w') as new_file:
    new_file.write('hello world!')
jup_notebooks/data-science-ipython-notebooks-master/python-data/files.ipynb
steinam/teacher
mit
Read and Write UTF-8
import codecs

with codecs.open("hello_world_new.txt", "a", "utf-8") as new_file:
    with codecs.open("hello_world.txt", "r", "utf-8") as old_file:
        for line in old_file:
            new_file.write(line)  # lines keep their own '\n'; appending another would double-space the copy
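On Python 3 the codecs module is no longer needed for this; the built-in open accepts an encoding argument. A sketch of the same copy:

```python
# write a small UTF-8 file, then append its lines to another file
with open('hello_world.txt', 'w', encoding='utf-8') as f:
    f.write('hello world!\n')

with open('hello_world_new.txt', 'a', encoding='utf-8') as new_file, \
     open('hello_world.txt', 'r', encoding='utf-8') as old_file:
    for line in old_file:
        new_file.write(line)  # each line already ends with '\n'
```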
With the above input problem dictionary for water we now create an AquaChemistry object and call run on it passing in the dictionary to get a result. We use ExactEigensolver first as a reference.
solver = AquaChemistry()
result = solver.run(aqua_chemistry_dict)
community/aqua/chemistry/h2o.ipynb
antoniomezzacapo/qiskit-tutorial
apache-2.0
The run method returns a result dictionary. Some notable fields include 'energy' which is the computed ground state energy.
print('Ground state energy: {}'.format(result['energy']))
There is also a 'printable' field containing a complete, ready-to-print, readable result.
for line in result['printable']:
    print(line)
We update the dictionary, for VQE with UCCSD, and run the computation again.
aqua_chemistry_dict['algorithm']['name'] = 'VQE'
aqua_chemistry_dict['optimizer'] = {'name': 'COBYLA', 'maxiter': 25000}
aqua_chemistry_dict['variational_form'] = {'name': 'UCCSD'}
aqua_chemistry_dict['initial_state'] = {'name': 'HartreeFock'}

solver = AquaChemistry()
result = solver.run(aqua_chemistry_dict)
print('G...
Here are some example values for x and y. I assume that there are no repeated values in x.
xs = np.arange(10, 14)
ys = np.arange(20, 25)
print(xs, ys)
shuffle_pairs.ipynb
AllenDowney/ProbablyOverthinkingIt
mit
indices is the list of indices I'll choose from at random:
n = len(xs)
m = len(ys)
indices = np.arange(n)
Now I'll make an array to hold the values of y:
array = np.tile(ys, (n, 1))
print(array)
And shuffle the rows independently
[np.random.shuffle(array[i]) for i in range(n)]
print(array)
I'll keep track of how many unused ys there are in each row
counts = np.full_like(xs, m)
print(counts)
Now I'll choose a row, using the counts as weights
weights = np.array(counts, dtype=float)
weights /= np.sum(weights)
print(weights)
i is the row I chose, which corresponds to a value of x.
i = np.random.choice(indices, p=weights)
print(i)
Now I decrement the counter associated with i, assemble a pair by choosing a value of x and a value of y. I also clobber the array value I used, which is not necessary, but helps with visualization.
counts[i] -= 1
pair = xs[i], array[i, counts[i]]
array[i, counts[i]] = -1
print(pair)
We can check that the counts got decremented
print(counts)
And one of the values in array got used
print(array)
The next time through is almost the same, except that when we assemble the weights, we give zero weight to the index we just used.
weights = np.array(counts, dtype=float)
weights[i] = 0
weights /= np.sum(weights)
print(weights)
Everything else is the same
i = np.random.choice(indices, p=weights)
counts[i] -= 1
pair = xs[i], array[i, counts[i]]
array[i, counts[i]] = -1
print(pair)
print(counts)
print(array)
Now we can wrap all that up in a function, using a special value for i during the first iteration.
def generate_pairs(xs, ys):
    n = len(xs)
    m = len(ys)
    indices = np.arange(n)
    array = np.tile(ys, (n, 1))
    [np.random.shuffle(array[i]) for i in range(n)]
    counts = np.full_like(xs, m)
    i = -1
    for _ in range(n * m):
        weights = np.array(counts, dtype=float)
        if i != -1:...
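The body of generate_pairs is cut off above; a complete sketch along the same lines (my reconstruction, so details may differ from the original) is:

```python
import numpy as np
from itertools import islice

def generate_pairs(xs, ys):
    n, m = len(xs), len(ys)
    indices = np.arange(n)
    array = np.tile(ys, (n, 1))
    for row in array:                       # shuffle each row independently
        np.random.shuffle(row)
    counts = np.full(n, m)                   # unused ys per row
    i = -1                                   # special value for the first iteration
    for _ in range(n * m):
        weights = np.array(counts, dtype=float)
        if i != -1:
            weights[i] = 0                   # zero weight for the row just used
        weights /= np.sum(weights)           # note: a sketch limitation -- this can
                                             # fail near the end if only the
                                             # just-used row has ys left
        i = np.random.choice(indices, p=weights)
        counts[i] -= 1
        yield xs[i], array[i, counts[i]]

xs = np.arange(10, 14)
ys = np.arange(20, 25)
for pair in islice(generate_pairs(xs, ys), 5):
    print(pair)
```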
And here's how it works:
for pairs in generate_pairs(xs, ys):
    print(pairs)
Propagating a single particle The simulation can now be used to propagate a cosmic ray, which is called a candidate. We create a 100 EeV proton and propagate it using the simulation. The propagation stops when the energy drops below the minimum energy requirement that was specified. The possible propagation distances are...
cosmicray = Candidate(nucleusId(1, 1), 200 * EeV, Vector3d(100 * Mpc, 0, 0))

sim.run(cosmicray)
print(cosmicray)
print('Propagated distance', cosmicray.getTrajectoryLength() / Mpc, 'Mpc')
doc/pages/example_notebooks/basics/basics.v4.ipynb
lukasmerten/CRPropa3
gpl-3.0
Defining an observer To define an observer within the simulation we create an Observer object. The convention of 1D simulations is that cosmic rays, starting from positive coordinates, propagate in the negative direction until they reach the observer at 0. Only the x-coordinate is used in the three-vectors that represent...
# add an observer
obs = Observer()
obs.add(ObserverPoint())  # observer at x = 0
sim.add(obs)
print(obs)
Defining the output file We want to save the propagated cosmic rays to an output file. Plain text output is provided by the TextOutput module. For the type of information being stored we can use one of five presets: Event1D, Event3D, Trajectory1D, Trajectory3D and Everything. We can also fine tune with enable(XXXColumn...
# trajectory output
output1 = TextOutput('trajectories.txt', Output.Trajectory1D)
#sim.add(output1)  # generates a lot of output

#output1.disable(Output.RedshiftColumn)  # don't save the current redshift
#output1.disableAll()  # disable everything to start from scratch
#output1.enable(Output.CurrentEnergyColumn)  # cu...
If in the example above output1 is added to the module list, it is called on every propagation step to write out the cosmic ray information. To save only cosmic rays that reach our observer, we add an output to the observer that we previously defined. This time we are satisfied with the output type Event1D.
# event output
output2 = TextOutput('events.txt', Output.Event1D)
obs.onDetection(output2)

#sim.run(cosmicray)
#output2.close()
Similarly, the output could be linked to the MinimumEnergy module to save those cosmic rays that fall below the minimum energy, and so on. Note: If we want to use the CRPropa output file from within the same script that runs the simulation, the output module should be explicitly closed after the simulation run in order...
# cosmic ray source
source = Source()
source.add(SourcePosition(100 * Mpc))
source.add(SourceParticleType(nucleusId(1, 1)))
source.add(SourcePowerLawSpectrum(1 * EeV, 200 * EeV, -1))
print(source)
Running the simulation Finally we run the simulation to inject and propagate 10000 cosmic rays. An optional progress bar can show the progress of the simulation.
sim.setShowProgress(True)  # switch on the progress bar
sim.run(source, 10000)
(Optional) Plotting This is not part of CRPropa, but since we're at it we can plot the energy spectrum of detected particles to observe the GZK suppression. The plotting is done here using matplotlib, but of course you can use whatever plotting tool you prefer.
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np

output2.close()  # close output file before loading

data = np.genfromtxt('events.txt', names=True)
print('Number of events', len(data))

logE0 = np.log10(data['E0']) + 18
logE  = np.log10(data['E']) + 18

plt.figure(figsize=(10, 7))
h1 = plt.hist(lo...
Field Description
- C/A = Control Area (A002)
- UNIT = Remote Unit for a station (R051)
- SCP = Subunit Channel Position, representing a specific address for a device (02-00-00)
- STATION = the name of the station where the device is located
- LINENAME = all train lines that can be boarded at this stati...
turnstile = {}

# looping through all files in data dir starting with MTA_Turnstile
for filename in os.listdir('data'):
    if filename.startswith('MTA_Turnstile'):
        # reading file and writing each row in a dict
        with open(os.path.join('data', filename), newline='') as csvfile:
            mtareader = cs...
EDA/EDA_MTA_Exercises.ipynb
aleph314/K2
gpl-3.0
Exercise 2 Let's turn this into a time series. For each key (basically the control area, unit, device address and station of a specific turnstile), have a list again, but let the list be comprised of just the point in time and the cumulative count of entries. This basically means keeping only the date, time, and entr...
import numpy as np
import datetime
from dateutil.parser import parse

# With respect to the solutions I converted the cumulative entries into the number of entries in the period
# That's ok I think since it is required below to do so...
turnstile_timeseries = {}

# looping through each key in dict, parsing the date and ...
Exercise 3 These counts are cumulative every n hours. We want total daily entries. Now make it that we again have the same keys, but now we have a single value for a single day, which is not cumulative counts but the total number of passengers that entered through this turnstile on this day.
# In the solutions there's a check for abnormal values, I added it in the exercises below
# because I found out about the problem later in the analysis
turnstile_daily = {}

# looping through each key in the timeseries, tracking if the date changes while cumulating partial counts
for key in turnstile_timeseries:
    va...
EDA/EDA_MTA_Exercises.ipynb
aleph314/K2
gpl-3.0
Exercise 4 We will plot the daily time series for a turnstile. In an IPython notebook, add this to the beginning of your next cell: %matplotlib inline This will make your matplotlib graphs integrate nicely with the notebook. To plot the time series, import matplotlib with import matplotlib.pyplot as plt Take the ...
import matplotlib.pyplot as plt %matplotlib inline # using list comprehension, there are other ways such as dict.keys() and dict.items() dates = [el[0] for el in turnstile_daily[test]] counts = [el[1] for el in turnstile_daily[test]] fig = plt.figure(figsize=(14, 5)) ax = plt.axes() ax.plot(dates, counts) plt.grid('o...
EDA/EDA_MTA_Exercises.ipynb
aleph314/K2
gpl-3.0
Exercise 5 So far we've been operating on a single turnstile level; let's combine turnstiles in the same ControlArea/Unit/Station combo. There are some ControlArea/Unit/Station groups that have a single turnstile, but most have multiple turnstiles -- same value for the C/A, UNIT and STATION columns, different values fo...
temp = {} # for each key I form the new key and check if it's already in the new dict # I append the date in this temp dict to make it easier to sum the values # then I create a new dict with the required keys for key in turnstile_daily: new_key = list(key[0:2]) + list(key[-1:]) for el in turnstile_daily[key]:...
EDA/EDA_MTA_Exercises.ipynb
aleph314/K2
gpl-3.0
Exercise 6 Similarly, combine everything in each station, and come up with a time series of [(date1, count1),(date2,count2),...] type of time series for each STATION, by adding up all the turnstiles in a station.
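The re-keying by station can be sketched as follows, on hypothetical daily data (the keys mimic the (C/A, UNIT, SCP, STATION) tuples used in this notebook; values are made up):

```python
from collections import defaultdict

# Hypothetical daily counts keyed by (C/A, UNIT, SCP, STATION).
turnstile_daily = {
    ('A002', 'R051', '02-00-00', '59 ST'): [('09/27/2014', 125), ('09/28/2014', 40)],
    ('A002', 'R051', '02-00-01', '59 ST'): [('09/27/2014', 75)],
}

# Re-key by station only, summing counts that fall on the same date.
station_totals = defaultdict(lambda: defaultdict(int))
for key, series in turnstile_daily.items():
    station_name = key[-1]
    for day, count in series:
        station_totals[station_name][day] += count

# Convert back to the [(date1, count1), (date2, count2), ...] shape.
station = {name: sorted(days.items()) for name, days in station_totals.items()}
print(station)
```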
temp = {} # for each key I form the new key and check if it's already in the new dict # I append the date in this temp dict to make it easier to sum the values # then I create a new dict with the required keys for key in turnstile_daily: new_key = key[-1] for el in turnstile_daily[key]: # setting singl...
EDA/EDA_MTA_Exercises.ipynb
aleph314/K2
gpl-3.0
Exercise 7 Plot the time series for a station
test_station = '59 ST' dates = [el[0] for el in station[test_station]] counts = [el[1] for el in station[test_station]] fig = plt.figure(figsize=(14, 5)) ax = plt.axes() ax.plot(dates, counts) plt.grid('on');
EDA/EDA_MTA_Exercises.ipynb
aleph314/K2
gpl-3.0
Exercise 8 Make one list of counts for one week for one station. Monday's count, Tuesday's count, etc. so it's a list of 7 counts. Make the same list for another week, and another week, and another week. plt.plot(week_count_list) for every week_count_list you created this way. You should get a rainbow plot of weekly c...
fig = plt.figure(figsize=(16, 6)) ax = plt.axes() n = len(station[test_station]) # creating a list with all the counts for the station all_counts = [el[1] for el in station[test_station]] # splitting counts every 7 values to get weekly data for i in range(int(np.floor(n/7))): ax.plot(all_counts[i*7: 7 + i*7]) ax.se...
EDA/EDA_MTA_Exercises.ipynb
aleph314/K2
gpl-3.0
Exercise 9 Over multiple weeks, sum total ridership for each station and sort them, so you can find out the stations with the highest traffic during the time you investigate
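The sum-and-sort step can be sketched like this, on hypothetical station series (station names and counts are illustrative):

```python
# Hypothetical daily series per station.
station = {
    '59 ST': [('09/27/2014', 200), ('09/28/2014', 40)],
    '34 ST-PENN STA': [('09/27/2014', 500)],
    'CANAL ST': [('09/27/2014', 90)],
}

# Total ridership per station, then rank stations by descending total.
total_ridership = {name: sum(c for _, c in series)
                   for name, series in station.items()}
ranked = sorted(total_ridership.items(), key=lambda kv: kv[1], reverse=True)
print(ranked)
```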
total_ridership = {} # just looping through keys and summing all elements inside the dict for key in station: for el in station[key]: if key in total_ridership: total_ridership[key] += el[1] else: total_ridership[key] = el[1] import operator sorted(total_ridership.items(), ...
EDA/EDA_MTA_Exercises.ipynb
aleph314/K2
gpl-3.0
Exercise 10 Make a single list of these total ridership values and plot it with plt.hist(total_ridership_counts) to get an idea about the distribution of total ridership among different stations. This should show you that most stations have low traffic, and the histogram bins for large traffic volumes have small ...
fig = plt.figure(figsize=(16, 10)) ax = plt.axes() ax.hist(list(total_ridership.values())); fig = plt.figure(figsize=(16, 10)) ax = plt.axes() ax.bar(range(len(total_ridership)), sorted(list(total_ridership.values())));
EDA/EDA_MTA_Exercises.ipynb
aleph314/K2
gpl-3.0
DATA After downloading both files, I created a consolidated sheet in Excel. This was difficult because the UN and the World Bank did not use the same names for the same countries. For example, "PDR Korea" in the UN file was classified as "the people's republic of Korea" in the World Bank file. There were around 25 instances of t...
#after combining both files, I uploaded the excel document to my dropbox at the link below xls_file = pd.ExcelFile('https://dl.dropboxusercontent.com/u/16846867/UN-WB%202010%20PG%20vs%20YU.xlsx') xls_file #Here is the data I will analyze. Population growth is an average from 1985-2005 and youth unemployment data is f...
MBA_S16/Jonathan Broch - Demographics & War.ipynb
NYUDataBootcamp/Projects
mit
The Grand Finale And now back to what I thought was interesting. Here is the scatter plot with countries that have had thousands of deaths in recent years due to armed conflict, as well as some of the largest economies in the world, highlighted: in RED, Afghanistan, Iraq, Yemen, Sudan, Syria, and Libya; in BLUE, The United ...
fig, ax = plt.subplots() Data.plot.scatter(ax=ax, x="Population Growth", y="Youth Unemployment", figsize=(20,10), alpha=.5, color = 'white') Data.iloc[199:201].plot.scatter(ax=ax, x="Population Growth", y="Youth Unemployment", figsize=(20,10), alpha=.9, color = 'red', s=100) Data.iloc[172:173].plot.scatter(ax=ax, x="Po...
MBA_S16/Jonathan Broch - Demographics & War.ipynb
NYUDataBootcamp/Projects
mit
Fanfiction Story Analysis Performance benchmarking and prediction The success of a story is typically judged by the number of reviews, favorites, or followers it receives. Here, we will try to predict how successful a story will be given select observable features, as well as develop a way to benchmark existing stories...
# examines distribution of number of words df_online['reviews'].fillna(0).plot.hist(normed=True, bins=np.arange(0, 50, 1), alpha=0.5, histtype='step', linewidth='2') df_online['favs'].fillna(0).plot.hist(normed=True, bins=np.arange(0, ...
jupyter_notebooks/story_performance.ipynb
lily-tian/fanfictionstatistics
mit
As expected, reviews, favorites, and follows all have heavily right-skewed distributions. However, there are also differences. A story is most likely to have 1 or 2 reviews, not 0. A story is most likely to have 0 favorites, but otherwise the favorites distribution looks very similar to reviews. Follows is the one...
df_online.columns.values # creates regressand variables df_online['ratedM'] = [row == 'M' for row in df_online['rated']] df_online['age'] = [cyear - int(row) for row in df_online['pub_year']] df_online['fansize'] = [fandom[row] for row in df_online['fandom']] df_online['complete'] = [row == 'Complete' for row in df_on...
jupyter_notebooks/story_performance.ipynb
lily-tian/fanfictionstatistics
mit
Multicollinearity
# displays correlation matrix df_active.corr() # creates design_matrix X = df_active X['intercept'] = 1 # displays variance inflation factor vif_results = pd.DataFrame() vif_results['VIF Factor'] = [vif(X.values, i) for i in range(X.shape[1])] vif_results['features'] = X.columns vif_results
jupyter_notebooks/story_performance.ipynb
lily-tian/fanfictionstatistics
mit
Results indicate there is some correlation between two of the independent variables: 'fa' and 'fs', implying one of them may not be necessary in the model. Nonlinearity We know from earlier distributions that some of the variables are heavily right-skewed. We created some scatter plots to confirm that the assumption of...
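The idea behind the log transformation used next can be sketched with NumPy on synthetic data (the variable names and the power-law link below are illustrative, not the notebook's actual variables):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic right-skewed predictor and a response with a power-law link:
# y = 3 * x^0.5 * noise, so log y = log 3 + 0.5 * log x + eps.
x = rng.lognormal(mean=2.0, sigma=1.0, size=500)
y = 3.0 * x ** 0.5 * rng.lognormal(mean=0.0, sigma=0.1, size=500)

# Fitting a straight line in log-log space recovers the exponent; its
# slope reads as an elasticity (% change in y per 1% change in x).
slope, intercept = np.polyfit(np.log(x), np.log(y), 1)
print(round(slope, 2))
```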
# runs OLS regression formula = 'st ~ fa + fs + cc + age' reg = smf.ols(data=df_active, formula=formula).fit() print(reg.summary())
jupyter_notebooks/story_performance.ipynb
lily-tian/fanfictionstatistics
mit
The log transformations helped increase the fit from an R-squared of ~0.05 to ~0.20. From these results, we can see that: A 1% change in number of authors favorited is associated with a ~15% change in the number of stories written. A 1% change in number of stories favorited is associated with a ~4% change in the numb...
# runs OLS regression formula = 'st ~ fa + cc + age' reg = smf.ols(data=df_active, formula=formula).fit() print(reg.summary())
jupyter_notebooks/story_performance.ipynb
lily-tian/fanfictionstatistics
mit
Without 'fs', we lost some information but not much: A 1% change in number of authors favorited is associated with a ~20% change in the number of stories written. Being in a community is associated with a ~0.7 increase in the number of stories written. One more year on the site is associated with a ~3% change in the n...
def graph(formula, x_range): y = np.array(x_range) x = formula(y) plt.plot(y,x) graph(lambda x : (np.exp(reg.params[0]+reg.params[1]*(np.log(x-1)))), range(2,100,1)) graph(lambda x : (np.exp(reg.params[0]+reg.params[1]*(np.log(x-1))+reg.params[2])), range(2,100,1)) plt.show() ages = [0...
jupyter_notebooks/story_performance.ipynb
lily-tian/fanfictionstatistics
mit
Essential Libraries and Tools NumPy
import numpy as np x = np.array([[1,2,3],[4,5,6]]) print("x:\n{}".format(x))
introduction_to_ml_with_python/1_Introduction.ipynb
bgroveben/python3_machine_learning_projects
mit
SciPy
from scipy import sparse # Create a 2D NumPy array with a diagonal of ones, and zeros everywhere else (aka an identity matrix). eye = np.eye(4) print("NumPy array:\n{}".format(eye)) # Convert the NumPy array to a SciPy sparse matrix in CSR format. # The CSR format stores a sparse m × n matrix M in row form using th...
introduction_to_ml_with_python/1_Introduction.ipynb
bgroveben/python3_machine_learning_projects
mit
Usually it isn't possible to create dense representations of sparse data (they won't fit in memory), so we need to create sparse representations directly. Here is a way to create the same sparse matrix as before using the COO format:
data = np.ones(4) row_indices = np.arange(4) col_indices = np.arange(4) eye_coo = sparse.coo_matrix((data, (row_indices, col_indices))) print("COO representation:\n{}".format(eye_coo))
introduction_to_ml_with_python/1_Introduction.ipynb
bgroveben/python3_machine_learning_projects
mit
More details on SciPy sparse matrices can be found in the SciPy Lecture Notes. matplotlib
# %matplotlib inline -- the default, just displays the plot in the browser. # %matplotlib notebook -- provides an interactive environment for the plot. import matplotlib.pyplot as plt # Generate a sequence of numbers from -10 to 10 with 100 steps (points) in between. x = np.linspace(-10, 10, 100) # Create a second arra...
introduction_to_ml_with_python/1_Introduction.ipynb
bgroveben/python3_machine_learning_projects
mit
pandas Here is a small example of creating a pandas DataFrame using a Python dictionary.
import pandas as pd from IPython.display import display # Create a simple dataset of people data = {'Name': ["John", "Anna", "Peter", "Linda"], 'Location' : ["New York", "Paris", "Berlin", "London"], 'Age' : [24, 13, 53, 33] } data_pandas = pd.DataFrame(data) # IPython.display allows for "prett...
introduction_to_ml_with_python/1_Introduction.ipynb
bgroveben/python3_machine_learning_projects
mit
There are several possible ways to query this table. Here is one example:
# Select all rows that have an age column greater than 30: display(data_pandas[data_pandas.Age > 30])
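An equivalent selection, sketched here with `DataFrame.query` (the frame is rebuilt to keep the snippet self-contained):

```python
import pandas as pd

data_pandas = pd.DataFrame({
    'Name': ["John", "Anna", "Peter", "Linda"],
    'Location': ["New York", "Paris", "Berlin", "London"],
    'Age': [24, 13, 53, 33],
})

# query() accepts the condition as a string expression over column names.
result = data_pandas.query('Age > 30')
print(result)
```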
introduction_to_ml_with_python/1_Introduction.ipynb
bgroveben/python3_machine_learning_projects
mit
mglearn The mglearn package is a library of utility functions written specifically for this book, so that the code listings don't become too cluttered with details of plotting and data loading. The mglearn library can be found at the author's Github repository, and can be installed with the command pip install mglearn....
# Make sure your dependencies are similar to the ones in the book. import sys print("Python version: {}".format(sys.version)) import pandas as pd print("pandas version: {}".format(pd.__version__)) import matplotlib print("matplotlib version: {}".format(matplotlib.__version__)) import numpy as np print("NumPy versio...
introduction_to_ml_with_python/1_Introduction.ipynb
bgroveben/python3_machine_learning_projects
mit