Let’s visualize the results. The script plotcount.py reads in a data file and plots the 10 most frequently occurring words as a text-based bar plot:
```
run $code/plotcount.py $repo_path/isles.dat ascii
```
Source: `content/posts/makefile-tutorial/makefile_tutorial_0.ipynb` from `dm-wyncode/zipped-code` (MIT license).
`plotcount.py` can also show the plot graphically:

```
run $code/plotcount.py $repo_path/isles.dat show
```
`plotcount.py` can also create the plot as an image file (e.g. a PNG file):

```
run $code/plotcount.py $repo_path/isles.dat $repo_path/isles.png
```
Import the objects necessary to display the generated PNG file in this notebook.

```python
import os
from IPython.display import Image

Image(filename=os.path.join(repo_path, 'isles.png'))
```
Finally, let’s test Zipf’s law for these books. The most frequently occurring word occurs approximately twice as often as the second most frequent word; this is Zipf’s law.

```
run $code/zipf_test.py $repo_path/abyss.dat $repo_path/isles.dat
```
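The claim is easy to check with plain Python, independently of `zipf_test.py`. The sketch below computes the ratio between the two most frequent word counts in a text; Zipf's law predicts a value near 2.0 for natural-language corpora (the sample strings in the usage are made up for illustration):

```python
from collections import Counter

def zipf_ratio(text):
    # Ratio of the most frequent word count to the second most frequent.
    # Zipf's law predicts a value near 2.0 for natural-language text.
    (w1, c1), (w2, c2) = Counter(text.lower().split()).most_common(2)
    return c1 / c2
```

For a real test you would feed it the full text of a book rather than a short sample.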
What we really want is an executable description of our pipeline that lets software do the tricky part for us: figuring out which steps need to be rerun. Create a file called Makefile with the following contents. Python's built-in `str.format` is used to create the contents of the Makefile.

```python
makefile_contents = """
# Count words.
{repo_path}/isles.dat : {data}/books/isles.txt
{tab_char}python {code}/wordcount.py {data}/books/isles.txt {repo_path}/isles.dat
""".format(code=code, data=data, repo_path=repo_path, tab_char=TAB_CHAR)
```
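Make decides which steps to rerun by comparing file modification times: a target is rebuilt when it is missing or older than any of its prerequisites. A minimal sketch of that decision rule (a simplification for illustration, not Make's actual implementation):

```python
import os

def needs_rebuild(target, prerequisites):
    # Rebuild if the target does not exist, or if any prerequisite has a
    # newer modification time than the target (Make's core decision rule).
    if not os.path.exists(target):
        return True
    target_mtime = os.path.getmtime(target)
    return any(os.path.getmtime(p) > target_mtime for p in prerequisites)
```

This is why touching `isles.txt` causes `isles.dat` to be regenerated while an up-to-date target is left alone.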
Write the contents to a file named Makefile.
```python
with open('Makefile', 'w') as fh:
    fh.write(makefile_contents)
```
Let’s first make sure we start from scratch by deleting the .dat and .png files we created earlier. Run rm in the shell:

```
!rm $repo_path/*.dat $repo_path/*.png
```
Run make in the shell. By default, Make prints out the actions it executes:

```
!make
```
Let’s see if we got what we expected. Run head in the shell:

```
!head -5 $repo_path/isles.dat
```
A simple function that just adds two numbers:

```python
# Define a Python function
def add(a: float, b: float) -> float:
    '''Calculates sum of two arguments'''
    return a + b
```
Source: `samples/core/lightweight_component/lightweight_component.ipynb` from `kubeflow/pipelines` (Apache-2.0 license).
Convert the function to a pipeline operation
```python
add_op = components.create_component_from_func(add)
```
A slightly more advanced function, which demonstrates how to use imports and helper functions, and how to produce multiple outputs.

```python
# Advanced function
# Demonstrates imports, helper functions and multiple outputs
from typing import NamedTuple

def my_divmod(dividend: float, divisor: float) -> NamedTuple(
        'MyDivmodOutput',
        [('quotient', float), ('remainder', float),
         ('mlpipeline_ui_metadata', 'UI_metadata'), ('mlpipeline_metrics', 'Metrics')]):
    '''Divides two numbers and calculates the quotient and remainder'''
    # Imports inside a component function:
    import numpy as np

    # This function demonstrates how to use nested functions inside a component function:
    def divmod_helper(dividend, divisor):
        return np.divmod(dividend, divisor)

    (quotient, remainder) = divmod_helper(dividend, divisor)

    from tensorflow.python.lib.io import file_io
    import json

    # Exports a sample tensorboard:
    metadata = {
        'outputs': [{
            'type': 'tensorboard',
            'source': 'gs://ml-pipeline-dataset/tensorboard-train',
        }]
    }

    # Exports two sample metrics:
    metrics = {
        'metrics': [{
            'name': 'quotient',
            'numberValue': float(quotient),
        }, {
            'name': 'remainder',
            'numberValue': float(remainder),
        }]
    }

    from collections import namedtuple
    divmod_output = namedtuple(
        'MyDivmodOutput',
        ['quotient', 'remainder', 'mlpipeline_ui_metadata', 'mlpipeline_metrics'])
    return divmod_output(quotient, remainder, json.dumps(metadata), json.dumps(metrics))
```
Test the Python function by running it directly:

```python
my_divmod(100, 7)
```
Convert the function to a pipeline operation. You can specify an alternative base container image (the image needs to have Python 3.5+ installed):

```python
divmod_op = components.create_component_from_func(my_divmod, base_image='tensorflow/tensorflow:1.11.0-py3')
```
Define the pipeline. The pipeline function has to be decorated with the @dsl.pipeline decorator:

```python
import kfp.deprecated.dsl as dsl

@dsl.pipeline(
    name='calculation-pipeline',
    description='A toy pipeline that performs arithmetic calculations.'
)
def calc_pipeline(
    a=7,
    b=8,
    c=17,
):
    # Passing a pipeline parameter and a constant value as operation arguments
    add_task = add_op(a, 4)  # Returns a dsl.ContainerOp class instance.

    # Passing a task output reference as an operation argument.
    # For an operation with a single return value, the output reference can be
    # accessed using `task.output` or `task.outputs['output_name']` syntax.
    divmod_task = divmod_op(add_task.output, b)

    # For an operation with multiple return values, the output references can be
    # accessed using `task.outputs['output_name']` syntax.
    result_task = add_op(divmod_task.outputs['quotient'], c)
```
Submit the pipeline for execution
```python
# Specify pipeline argument values
arguments = {'a': 7, 'b': 8}

# Submit a pipeline run
kfp.Client().create_run_from_pipeline_func(calc_pipeline, arguments=arguments)

# Run the pipeline on a separate Kubeflow cluster instead
# (use if your notebook is not running in Kubeflow, e.g. if using AI Platform Notebooks)
# kfp.Client(host='<ADD KFP ENDPOINT HERE>').create_run_from_pipeline_func(calc_pipeline, arguments=arguments)

# The output above includes a link to the run information page.
# (Note: there is a bug in JupyterLab that modifies the URL and makes the link stop working.)
```
Functions

```python
def mapLibSVM(row):
    return (row[5], Vectors.dense(row[:3]))

df = spark.read \
    .format("csv") \
    .option("header", "true") \
    .option("inferSchema", "true") \
    .load("datasets/iris.data")
```
Source: `2019/12-spark/14-spark-mllib-classification/mllibClass_OtacilioBezerra.ipynb` from `InsightLab/data-science-cookbook` (MIT license).
Converting the output from categorical to numeric

```python
indexer = StringIndexer(inputCol="label", outputCol="labelIndex")
indexer = indexer.fit(df).transform(df)
indexer.show()

dfLabeled = indexer.rdd.map(mapLibSVM).toDF(["label", "features"])
dfLabeled.show()

train, test = dfLabeled.randomSplit([0.9, 0.1], seed=12345)
```
Defining the logistic regression model

```python
lr = LogisticRegression(labelCol="label", maxIter=15)
```
Cross-validation: TrainValidationSplit and CrossValidator

```python
paramGrid = ParamGridBuilder() \
    .addGrid(lr.regParam, [0.1, 0.001]) \
    .build()

tvs = TrainValidationSplit(estimator=lr,
                           estimatorParamMaps=paramGrid,
                           evaluator=MulticlassClassificationEvaluator(),
                           trainRatio=0.8)

cval = CrossValidator(estimator=lr,
                      estimatorParamMaps=paramGrid,
                      evaluator=MulticlassClassificationEvaluator(),
                      numFolds=10)
```
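The practical difference between the two validators is cost. A rough count of the model fits performed during the hyperparameter search above (an illustration of the search strategy, not Spark's internals; both also do one final refit on the full training set):

```python
# Number of model fits during hyperparameter search for the grid above.
n_grid_points = 2   # regParam in [0.1, 0.001]
n_folds = 10

# TrainValidationSplit uses a single train/validation split:
# one fit per grid point.
fits_train_validation_split = n_grid_points

# CrossValidator fits one model per (fold, grid point) pair.
fits_cross_validator = n_grid_points * n_folds
```

With 10 folds, CrossValidator does ten times the work of TrainValidationSplit for the same grid, in exchange for a lower-variance estimate of each candidate's quality.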
Training the model and predicting on the test set

```python
result_tvs = tvs.fit(train).transform(test)
result_cval = cval.fit(train).transform(test)

preds_tvs = result_tvs.select(["prediction", "label"])
preds_cval = result_cval.select(["prediction", "label"])
```
Evaluating the models

```python
# Instantiate the metrics objects
metrics_tvs = MulticlassMetrics(preds_tvs.rdd)
metrics_cval = MulticlassMetrics(preds_cval.rdd)

# Summary statistics for the TrainValidationSplit method
print("Summary Stats")
print("F1 Score = %s" % metrics_tvs.fMeasure())
print("Accuracy = %s" % metrics_tvs.accuracy)
print("Weighted recall = %s" % metrics_tvs.weightedRecall)
print("Weighted precision = %s" % metrics_tvs.weightedPrecision)
print("Weighted F(1) Score = %s" % metrics_tvs.weightedFMeasure())
print("Weighted F(0.5) Score = %s" % metrics_tvs.weightedFMeasure(beta=0.5))
print("Weighted false positive rate = %s" % metrics_tvs.weightedFalsePositiveRate)

# Summary statistics for the CrossValidator method
print("Summary Stats")
print("F1 Score = %s" % metrics_cval.fMeasure())
print("Accuracy = %s" % metrics_cval.accuracy)
print("Weighted recall = %s" % metrics_cval.weightedRecall)
print("Weighted precision = %s" % metrics_cval.weightedPrecision)
print("Weighted F(1) Score = %s" % metrics_cval.weightedFMeasure())
print("Weighted F(0.5) Score = %s" % metrics_cval.weightedFMeasure(beta=0.5))
print("Weighted false positive rate = %s" % metrics_cval.weightedFalsePositiveRate)
```
Conclusion: since both cross-validation approaches wrap the same predictive model (logistic regression), and given that the dataset is relatively small, it is natural for both cross-validation methods to find the same (or nearly the same) optimal value for the hyperparameters tested. For that reason, once that hyperparameter value has been found, the two models show very similar results when evaluated on the test set (which is also the same for both models).

Random Forest

Use the previous exercise as a base, but now with pyspark.ml.classification.RandomForestClassifier. Use Pipeline and CrossValidator to evaluate the resulting model.

Libraries

```python
from pyspark.ml.classification import RandomForestClassifier
```
Defining the random forest model

```python
rf = RandomForestClassifier(labelCol="label", featuresCol="features")
```
Cross-validation: CrossValidator

```python
paramGrid = ParamGridBuilder() \
    .addGrid(rf.numTrees, [1, 100]) \
    .build()

cval = CrossValidator(estimator=rf,
                      estimatorParamMaps=paramGrid,
                      evaluator=MulticlassClassificationEvaluator(),
                      numFolds=10)
```
Training the model and predicting on the test set

```python
results = cval.fit(train).transform(test)
predictions = results.select(["prediction", "label"])
```
Evaluating the model

```python
# Instantiate the metrics object
metrics = MulticlassMetrics(predictions.rdd)

# Summary statistics for the cross-validated random forest
print("Summary Stats")
print("F1 Score = %s" % metrics.fMeasure())
print("Accuracy = %s" % metrics.accuracy)
print("Weighted recall = %s" % metrics.weightedRecall)
print("Weighted precision = %s" % metrics.weightedPrecision)
print("Weighted F(1) Score = %s" % metrics.weightedFMeasure())
print("Weighted F(0.5) Score = %s" % metrics.weightedFMeasure(beta=0.5))
print("Weighted false positive rate = %s" % metrics.weightedFalsePositiveRate)
```
Learn the model structure using PC
```python
est = PC(data=samples)
estimated_model = est.estimate(variant="stable", max_cond_vars=4)
get_f1_score(estimated_model, model)

est = PC(data=samples)
estimated_model = est.estimate(variant="orig", max_cond_vars=4)
get_f1_score(estimated_model, model)
```
Source: `examples/Structure Learning in Bayesian Networks.ipynb` from `pgmpy/pgmpy` (MIT license).
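`get_f1_score` compares the estimated structure against the true model. The notebook's actual helper is not shown here; a hypothetical version of such a score over undirected edge recovery (treating an edge as correct regardless of orientation) might look like:

```python
def edge_f1(true_edges, est_edges):
    # F1 over undirected edges: an edge counts as recovered regardless of
    # orientation. Hypothetical stand-in for the get_f1_score helper used above.
    true_set = {frozenset(e) for e in true_edges}
    est_set = {frozenset(e) for e in est_edges}
    tp = len(true_set & est_set)
    if tp == 0:
        return 0.0
    precision = tp / len(est_set)
    recall = tp / len(true_set)
    return 2 * precision * recall / (precision + recall)
```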
Learn the model structure using Hill-Climb Search
```python
scoring_method = K2Score(data=samples)
est = HillClimbSearch(data=samples)
estimated_model = est.estimate(
    scoring_method=scoring_method, max_indegree=4, max_iter=int(1e4)
)
get_f1_score(estimated_model, model)
```
The Geobase knowledge base

Geobase is a small knowledge base about the geography of the United States. It contains (almost) all the information needed to answer queries in the Geo880 dataset, including facts about:

- states: capital, area, population, major cities, neighboring states, highest and lowest points and elevations
- cities: containing state and population
- rivers: length and states traversed
- mountains: containing state and height
- roads: states traversed
- lakes: area, states traversed

SippyCup contains a class called GeobaseReader (in geobase.py) which facilitates working with Geobase in Python. It reads and parses the Geobase Prolog file, and creates a set of tuples representing its content. Let's take a look.

```python
from geobase import GeobaseReader

reader = GeobaseReader()
unaries = [str(t) for t in reader.tuples if len(t) == 2]
print('\nSome unaries:\n  ' + '\n  '.join(unaries[:10]))
binaries = [str(t) for t in reader.tuples if len(t) == 3]
print('\nSome binaries:\n  ' + '\n  '.join(binaries[:10]))
```
Source: `sippycup-unit-3.ipynb` from `sloanesturz/cs224u-final-project` (GPL-2.0 license).
Some observations here:

- Unaries are pairs consisting of a unary predicate (a type) and an entity.
- Binaries are triples consisting of a binary predicate (a relation) and two entities (or an entity and a numeric or string value).
- Entities are named by unique identifiers of the form /type/name. This is a GeobaseReader convention; these identifiers are not used in the original Prolog file.
- Some entities have the generic type place because they occur in the Prolog file only as the highest or lowest point in a state, and it's hard to reliably assign such points to one of the more specific types.
- The original Prolog file is inconsistent about units. For example, the area of states is expressed in square miles, but the area of lakes is expressed in square kilometers. GeobaseReader converts everything to SI units: meters and square meters.

Semantic representation <a id="geoquery-semantic-representation"></a>

GeobaseReader merely reads the data in Geobase into a set of tuples. It doesn't provide any facility for querying that data. That's where GraphKB and GraphKBExecutor come in. GraphKB is a graph-structured knowledge base, with indexing for fast lookups. GraphKBExecutor defines a representation for formal queries against that knowledge base, and supports query execution. The formal query language defined by GraphKBExecutor will serve as our semantic representation for the geography domain.

The GraphKB class

A GraphKB is a generic graph-structured knowledge base, or equivalently, a set of relational pairs and triples, with indexing for fast lookups. It represents a knowledge base as a set of tuples, each either:

- a pair, consisting of a unary relation and an element which belongs to it, or
- a triple, consisting of a binary relation and a pair of elements which belong to it.

For example, we can construct a GraphKB representing facts about The Simpsons:

```python
from graph_kb import GraphKB

simpsons_tuples = [
    # unaries
    ('male', 'homer'),
    ('female', 'marge'),
    ('male', 'bart'),
    ('female', 'lisa'),
    ('female', 'maggie'),
    ('adult', 'homer'),
    ('adult', 'marge'),
    ('child', 'bart'),
    ('child', 'lisa'),
    ('child', 'maggie'),

    # binaries
    ('has_age', 'homer', 36),
    ('has_age', 'marge', 34),
    ('has_age', 'bart', 10),
    ('has_age', 'lisa', 8),
    ('has_age', 'maggie', 1),
    ('has_brother', 'lisa', 'bart'),
    ('has_brother', 'maggie', 'bart'),
    ('has_sister', 'bart', 'maggie'),
    ('has_sister', 'bart', 'lisa'),
    ('has_sister', 'lisa', 'maggie'),
    ('has_sister', 'maggie', 'lisa'),
    ('has_father', 'bart', 'homer'),
    ('has_father', 'lisa', 'homer'),
    ('has_father', 'maggie', 'homer'),
    ('has_mother', 'bart', 'marge'),
    ('has_mother', 'lisa', 'marge'),
    ('has_mother', 'maggie', 'marge'),
]

simpsons_kb = GraphKB(simpsons_tuples)
```
The GraphKB object now contains three indexes:

- unaries[U]: all entities belonging to unary relation U
- binaries_fwd[B][E]: all entities X such that (E, X) belongs to binary relation B
- binaries_rev[B][E]: all entities X such that (X, E) belongs to binary relation B

For example:

```python
simpsons_kb.unaries['child']
simpsons_kb.binaries_fwd['has_sister']['lisa']
simpsons_kb.binaries_rev['has_sister']['lisa']
```
The GraphKBExecutor class A GraphKBExecutor executes formal queries against a GraphKB and returns their denotations. Queries are represented by Python tuples, and can be nested. Denotations are also represented by Python tuples, but are conceptually sets (possibly empty). The elements of these tuples are always sorted in canonical order, so that they can be reliably compared for set equality. The query language defined by GraphKBExecutor is perhaps most easily explained by example:
```python
queries = [
    'bart',
    'male',
    ('has_sister', 'lisa'),    # who has sister lisa?
    ('lisa', 'has_sister'),    # lisa has sister who, i.e., who is a sister of lisa?
    ('lisa', 'has_brother'),   # lisa has brother who, i.e., who is a brother of lisa?
    ('.and', 'male', 'child'),
    ('.or', 'male', 'adult'),
    ('.not', 'child'),
    ('.any',),                 # anything
    ('.any', 'has_sister'),    # anything has sister who, i.e., who is a sister of anything?
    ('.and', 'child', ('.not', ('.any', 'has_sister'))),
    ('.count', ('bart', 'has_sister')),
    ('has_age', ('.gt', 21)),
    ('has_age', ('.lt', 2)),
    ('has_age', ('.eq', 10)),
    ('.max', 'has_age', 'female'),
    ('.min', 'has_age', ('bart', 'has_sister')),
    ('.max', 'has_age', '.any'),
    ('.argmax', 'has_age', 'female'),
    ('.argmin', 'has_age', ('bart', 'has_sister')),
    ('.argmax', 'has_age', '.any'),
]

executor = simpsons_kb.executor()
for query in queries:
    print()
    print('Q ', query)
    print('D ', executor.execute(query))
```
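To make the operator semantics concrete, here is a toy evaluator for a small fragment of this query language (entity names, unary names, `.and`, `.not`, and the two join forms), built on the same three indexes described earlier. This is a sketch for intuition only; the real GraphKBExecutor supports many more operators and returns sorted tuples rather than sets:

```python
from collections import defaultdict

def build_indexes(tuples):
    # Build the three GraphKB-style indexes from pairs and triples.
    unaries, fwd, rev = defaultdict(set), defaultdict(dict), defaultdict(dict)
    for t in tuples:
        if len(t) == 2:
            unaries[t[0]].add(t[1])
        else:
            rel, e1, e2 = t
            fwd[rel].setdefault(e1, set()).add(e2)
            rev[rel].setdefault(e2, set()).add(e1)
    return unaries, fwd, rev

def evaluate(q, unaries, fwd, rev, universe):
    # Toy evaluator: strings are unary names or entity names; tuples are
    # ('.and', q1, q2), ('.not', q1), (R, E) "who has R to E", or
    # (E, R) "E has R to whom".
    if isinstance(q, str):
        return set(unaries[q]) if q in unaries else {q}
    if q[0] == '.and':
        return (evaluate(q[1], unaries, fwd, rev, universe) &
                evaluate(q[2], unaries, fwd, rev, universe))
    if q[0] == '.not':
        return universe - evaluate(q[1], unaries, fwd, rev, universe)
    if q[0] in rev:  # (R, E): entities having relation R to E
        rel, sub = q
        ents = evaluate(sub, unaries, fwd, rev, universe)
        return set().union(*[rev[rel].get(e, set()) for e in ents])
    sub, rel = q     # (E, R): entities to which E has relation R
    ents = evaluate(sub, unaries, fwd, rev, universe)
    return set().union(*[fwd[rel].get(e, set()) for e in ents])
```

Run against a few Simpsons tuples, `('has_sister', 'lisa')` resolves through the reverse index and `('lisa', 'has_sister')` through the forward index, mirroring the distinction drawn in the next paragraph.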
Note that the query (R E) denotes entities having relation R to entity E, whereas the query (E R) denotes entities to which entity E has relation R. For a more detailed understanding of the style of semantic representation defined by GraphKBExecutor, take a look at the source code.

Using GraphKBExecutor with Geobase

```python
geobase = GraphKB(reader.tuples)
executor = geobase.executor()

queries = [
    ('/state/texas', 'capital'),                      # capital of texas
    ('.and', 'river', ('traverses', '/state/utah')),  # rivers that traverse utah
    ('.argmax', 'height', 'mountain'),                # tallest mountain
]

for query in queries:
    print()
    print(query)
    print(executor.execute(query))
```
Grammar engineering

It's time to start developing a grammar for the geography domain. As in Unit 2, the performance metric we'll focus on during grammar engineering is oracle accuracy (the proportion of examples for which any parse is correct), not accuracy (the proportion of examples for which the first parse is correct). Remember that oracle accuracy is an upper bound on accuracy, and is a measure of the expressive power of the grammar: does it have the rules it needs to generate the correct parse? The gap between oracle accuracy and accuracy, on the other hand, reflects the ability of the scoring model to bring the correct parse to the top of the candidate list. <!-- (TODO: rewrite.) -->

As always, we're going to take a data-driven approach to grammar engineering. We want to introduce rules which will enable us to handle the lexical items and syntactic structures that we actually observe in the Geo880 training data. To that end, let's count the words that appear among the 600 training examples. (We do not examine the test data!)

```python
from collections import defaultdict
from operator import itemgetter
from geo880 import geo880_train_examples

words = [word for example in geo880_train_examples for word in example.input.split()]
counts = defaultdict(int)
for word in words:
    counts[word] += 1
counts = sorted([(count, word) for word, count in counts.items()], reverse=True)

print('There were %d tokens of %d types:\n' % (len(words), len(counts)))
print(', '.join(['%s (%d)' % (word, count) for count, word in counts[:50]] + ['...']))
```
There are at least four major categories of words here:

- Words that refer to entities, such as "texas", "mississippi", "usa", and "austin".
- Words that refer to types, such as "state", "river", and "cities".
- Words that refer to relations, such as "in", "borders", "capital", and "long".
- Other function words, such as "the", "what", "how", and "are".

One might make finer distinctions, but this seems like a reasonable starting point. Note that these categories do not always correspond to traditional syntactic categories. While the entities are typically proper nouns, and the types are typically common nouns, the relations include prepositions, verbs, nouns, and adjectives. The design of our grammar will roughly follow this schema. The major categories will include $Entity, $Type, $Collection, $Relation, and $Optional.

Optionals

In Unit 2, our grammar engineering process didn't really start cooking until we introduced optionals. This time around, let's begin with the optionals. We'll define as $Optional every word in the Geo880 training data which does not plainly refer to an entity, type, or relation. And we'll let any query be preceded or followed by a sequence of one or more $Optionals.

```python
from parsing import Grammar, Rule

optional_words = [
    'the', '?', 'what', 'is', 'in', 'of', 'how', 'many', 'are', 'which',
    'that', 'with', 'has', 'major', 'does', 'have', 'where', 'me', 'there',
    'give', 'name', 'all', 'a', 'by', 'you', 'to', 'tell', 'other', 'it',
    'do', 'whose', 'show', 'one', 'on', 'for', 'can', 'whats', 'urban',
    'them', 'list', 'exist', 'each', 'could', 'about'
]

rules_optionals = [
    Rule('$ROOT', '?$Optionals $Query ?$Optionals', lambda sems: sems[1]),
    Rule('$Optionals', '$Optional ?$Optionals'),
] + [Rule('$Optional', word) for word in optional_words]
```
Because $Query has not yet been defined, we won't be able to parse anything yet. Entities and collections Our grammar will need to be able to recognize names of entities, such as "utah". There are hundreds of entities in Geobase, and we don't want to have to introduce a grammar rule for each entity. Instead, we'll define a new annotator, GeobaseAnnotator, which simply annotates phrases which exactly match names in Geobase.
```python
from annotator import Annotator, NumberAnnotator

class GeobaseAnnotator(Annotator):
    def __init__(self, geobase):
        self.geobase = geobase

    def annotate(self, tokens):
        phrase = ' '.join(tokens)
        places = self.geobase.binaries_rev['name'][phrase]
        return [('$Entity', place) for place in places]
```
Now a couple of rules that will enable us to parse inputs that simply name locations, such as "utah". (TODO: explain rationale for $Collection and $Query.)
```python
rules_collection_entity = [
    Rule('$Query', '$Collection', lambda sems: sems[0]),
    Rule('$Collection', '$Entity', lambda sems: sems[0]),
]

rules = rules_optionals + rules_collection_entity
```
Now let's make a grammar.
```python
annotators = [NumberAnnotator(), GeobaseAnnotator(geobase)]
grammar = Grammar(rules=rules, annotators=annotators)
```
Let's try to parse some inputs which just name locations.
```python
parses = grammar.parse_input('what is utah')
for parse in parses[:1]:
    print('\n'.join([str(parse.semantics), str(executor.execute(parse.semantics))]))
```
Great, it worked. Now let's run an evaluation on the Geo880 training examples.
```python
from experiment import sample_wins_and_losses
from geoquery import GeoQueryDomain
from metrics import DenotationOracleAccuracyMetric
from scoring import Model

domain = GeoQueryDomain()
model = Model(grammar=grammar, executor=executor.execute)
metric = DenotationOracleAccuracyMetric()

sample_wins_and_losses(domain=domain, model=model, metric=metric, seed=1)
```
We don't yet have a single win: denotation oracle accuracy remains stuck at zero. However, the average number of parses is slightly greater than zero, meaning that there are a few examples which our grammar can parse (though not correctly). It would be interesting to know which examples. There's a utility function in experiment.py which will give you the visibility you need. See if you can figure out what to do. <!-- 'where is san diego ?' is parsed as '/city/san_diego_ca' --> Types (TODO: the words in the training data include lots of words for types. Let's write down some lexical rules defining the category $Type, guided as usual by the words we actually see in the training data. We'll also make $Type a kind of $Collection.)
rules_types = [ Rule('$Collection', '$Type', lambda sems: sems[0]), Rule('$Type', 'state', 'state'), Rule('$Type', 'states', 'state'), Rule('$Type', 'city', 'city'), Rule('$Type', 'cities', 'city'), Rule('$Type', 'big cities', 'city'), Rule('$Type', 'towns', 'city'), Rule('$Type', 'river', 'river'), Rule('$Type', 'rivers', 'river'), Rule('$Type', 'mountain', 'mountain'), Rule('$Type', 'mountains', 'mountain'), Rule('$Type', 'mount', 'mountain'), Rule('$Type', 'peak', 'mountain'), Rule('$Type', 'road', 'road'), Rule('$Type', 'roads', 'road'), Rule('$Type', 'lake', 'lake'), Rule('$Type', 'lakes', 'lake'), Rule('$Type', 'country', 'country'), Rule('$Type', 'countries', 'country'), ]
We should now be able to parse inputs denoting types, such as "name the lakes":
```python
rules = rules_optionals + rules_collection_entity + rules_types
grammar = Grammar(rules=rules, annotators=annotators)

parses = grammar.parse_input('name the lakes')
for parse in parses[:1]:
    print('\n'.join([str(parse.semantics), str(executor.execute(parse.semantics))]))
```
It worked. Let's evaluate on the Geo880 training data again.
```python
model = Model(grammar=grammar, executor=executor.execute)
sample_wins_and_losses(domain=domain, model=model, metric=metric, seed=1)
```
Liftoff! We have two wins, and denotation oracle accuracy is greater than zero! Just barely.

Relations and joins

In order to really make this bird fly, we're going to have to handle relations. In particular, we'd like to be able to parse queries which combine a relation with an entity or collection, such as "what is the capital of vermont". As usual, we'll adopt a data-driven approach. The training examples include lots of words and phrases which refer to relations, both "forward" relations (like "traverses") and "reverse" relations (like "traversed by"). Guided by the training data, we'll write lexical rules which define the categories $FwdRelation and $RevRelation. Then we'll add rules that allow either a $FwdRelation or a $RevRelation to be promoted to a generic $Relation, with semantic functions which ensure that the semantics are constructed with the proper orientation. Finally, we'll define a rule for joining a $Relation (such as "capital of") with a $Collection (such as "vermont") to yield another $Collection (such as "capital of vermont"). <!-- (TODO: Give a fuller explanation of what's going on with the semantics.) -->
```python
rules_relations = [
    Rule('$Collection', '$Relation ?$Optionals $Collection', lambda sems: sems[0](sems[2])),
    Rule('$Relation', '$FwdRelation', lambda sems: (lambda arg: (sems[0], arg))),
    Rule('$Relation', '$RevRelation', lambda sems: (lambda arg: (arg, sems[0]))),

    Rule('$FwdRelation', '$FwdBordersRelation', 'borders'),
    Rule('$FwdBordersRelation', 'border'),
    Rule('$FwdBordersRelation', 'bordering'),
    Rule('$FwdBordersRelation', 'borders'),
    Rule('$FwdBordersRelation', 'neighbor'),
    Rule('$FwdBordersRelation', 'neighboring'),
    Rule('$FwdBordersRelation', 'surrounding'),
    Rule('$FwdBordersRelation', 'next to'),

    Rule('$FwdRelation', '$FwdTraversesRelation', 'traverses'),
    Rule('$FwdTraversesRelation', 'cross ?over'),
    Rule('$FwdTraversesRelation', 'flow through'),
    Rule('$FwdTraversesRelation', 'flowing through'),
    Rule('$FwdTraversesRelation', 'flows through'),
    Rule('$FwdTraversesRelation', 'go through'),
    Rule('$FwdTraversesRelation', 'goes through'),
    Rule('$FwdTraversesRelation', 'in'),
    Rule('$FwdTraversesRelation', 'pass through'),
    Rule('$FwdTraversesRelation', 'passes through'),
    Rule('$FwdTraversesRelation', 'run through'),
    Rule('$FwdTraversesRelation', 'running through'),
    Rule('$FwdTraversesRelation', 'runs through'),
    Rule('$FwdTraversesRelation', 'traverse'),
    Rule('$FwdTraversesRelation', 'traverses'),

    Rule('$RevRelation', '$RevTraversesRelation', 'traverses'),
    Rule('$RevTraversesRelation', 'has'),
    Rule('$RevTraversesRelation', 'have'),      # 'how many states have major rivers'
    Rule('$RevTraversesRelation', 'lie on'),
    Rule('$RevTraversesRelation', 'next to'),
    Rule('$RevTraversesRelation', 'traversed by'),
    Rule('$RevTraversesRelation', 'washed by'),

    Rule('$FwdRelation', '$FwdContainsRelation', 'contains'),
    # 'how many states have a city named springfield'
    Rule('$FwdContainsRelation', 'has'),
    Rule('$FwdContainsRelation', 'have'),

    Rule('$RevRelation', '$RevContainsRelation', 'contains'),
    Rule('$RevContainsRelation', 'contained by'),
    Rule('$RevContainsRelation', 'in'),
    Rule('$RevContainsRelation', 'found in'),
    Rule('$RevContainsRelation', 'located in'),
    Rule('$RevContainsRelation', 'of'),

    Rule('$RevRelation', '$RevCapitalRelation', 'capital'),
    Rule('$RevCapitalRelation', 'capital'),
    Rule('$RevCapitalRelation', 'capitals'),

    Rule('$RevRelation', '$RevHighestPointRelation', 'highest_point'),
    Rule('$RevHighestPointRelation', 'high point'),
    Rule('$RevHighestPointRelation', 'high points'),
    Rule('$RevHighestPointRelation', 'highest point'),
    Rule('$RevHighestPointRelation', 'highest points'),

    Rule('$RevRelation', '$RevLowestPointRelation', 'lowest_point'),
    Rule('$RevLowestPointRelation', 'low point'),
    Rule('$RevLowestPointRelation', 'low points'),
    Rule('$RevLowestPointRelation', 'lowest point'),
    Rule('$RevLowestPointRelation', 'lowest points'),
    Rule('$RevLowestPointRelation', 'lowest spot'),

    Rule('$RevRelation', '$RevHighestElevationRelation', 'highest_elevation'),
    Rule('$RevHighestElevationRelation', '?highest elevation'),

    Rule('$RevRelation', '$RevHeightRelation', 'height'),
    Rule('$RevHeightRelation', 'elevation'),
    Rule('$RevHeightRelation', 'height'),
    Rule('$RevHeightRelation', 'high'),
    Rule('$RevHeightRelation', 'tall'),

    Rule('$RevRelation', '$RevAreaRelation', 'area'),
    Rule('$RevAreaRelation', 'area'),
    Rule('$RevAreaRelation', 'big'),
    Rule('$RevAreaRelation', 'large'),
    Rule('$RevAreaRelation', 'size'),

    Rule('$RevRelation', '$RevPopulationRelation', 'population'),
    Rule('$RevPopulationRelation', 'big'),
    Rule('$RevPopulationRelation', 'large'),
    Rule('$RevPopulationRelation', 'populated'),
    Rule('$RevPopulationRelation', 'population'),
    Rule('$RevPopulationRelation', 'populations'),
    Rule('$RevPopulationRelation', 'populous'),
    Rule('$RevPopulationRelation', 'size'),

    Rule('$RevRelation', '$RevLengthRelation', 'length'),
    Rule('$RevLengthRelation', 'length'),
    Rule('$RevLengthRelation', 'long'),
]
```
sippycup-unit-3.ipynb
sloanesturz/cs224u-final-project
gpl-2.0
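The lambdas in the first three rules above are the heart of this fragment: a forward relation puts its argument second in the semantic pair, a reverse relation puts it first. A standalone sketch of that pairing in plain Python (no SippyCup classes involved; the entity names are illustrative):

```python
# Standalone sketch of forward vs. reverse relation semantics.
# A "relation semantics" is a function that takes an argument and
# builds a pair; the order of the pair encodes the join direction.

def fwd_relation(rel):
    # forward: "bordering california" -> ('borders', '/state/california')
    return lambda arg: (rel, arg)

def rev_relation(rel):
    # reverse: "capital of vermont" -> ('/state/vermont', 'capital')
    return lambda arg: (arg, rel)

borders = fwd_relation('borders')
capital = rev_relation('capital')

print(borders('/state/california'))
print(capital('/state/vermont'))
```

This is exactly what the `lambda sems: (lambda arg: (sems[0], arg))` and `lambda sems: (lambda arg: (arg, sems[0]))` rules compute, with `sems[0]` playing the role of `rel`.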
We should now be able to parse "what is the capital of vermont". Let's see:
rules = rules_optionals + rules_collection_entity + rules_types + rules_relations

grammar = Grammar(rules=rules, annotators=annotators)
parses = grammar.parse_input('what is the capital of vermont ?')
for parse in parses[:1]:
    print('\n'.join([str(parse.semantics), str(executor.execute(parse.semantics))]))
sippycup-unit-3.ipynb
sloanesturz/cs224u-final-project
gpl-2.0
Montpelier! I always forget that one. OK, let's evaluate our progress on the Geo880 training data.
model = Model(grammar=grammar, executor=executor.execute)
sample_wins_and_losses(domain=domain, model=model, metric=metric, seed=1)
sippycup-unit-3.ipynb
sloanesturz/cs224u-final-project
gpl-2.0
Hot diggity, it's working. Denotation oracle accuracy is over 12%, double digits. We have 75 wins, and they're what we expect: queries that simply combine a relation and an entity (or collection). Intersections
rules_intersection = [
    Rule('$Collection', '$Collection $Collection',
         lambda sems: ('.and', sems[0], sems[1])),
    Rule('$Collection', '$Collection $Optional $Collection',
         lambda sems: ('.and', sems[0], sems[2])),
    Rule('$Collection', '$Collection $Optional $Optional $Collection',
         lambda sems: ('.and', sems[0], sems[3])),
]

rules = rules_optionals + rules_collection_entity + rules_types + rules_relations + rules_intersection

grammar = Grammar(rules=rules, annotators=annotators)
parses = grammar.parse_input('states bordering california')
for parse in parses[:1]:
    print('\n'.join([str(parse.semantics), str(executor.execute(parse.semantics))]))
sippycup-unit-3.ipynb
sloanesturz/cs224u-final-project
gpl-2.0
Let's evaluate the impact on the Geo880 training examples.
model = Model(grammar=grammar, executor=executor.execute)
sample_wins_and_losses(domain=domain, model=model, metric=metric, seed=1)
sippycup-unit-3.ipynb
sloanesturz/cs224u-final-project
gpl-2.0
Great, denotation oracle accuracy has more than doubled, from 12% to 28%. And the wins now include intersections like "which states border new york". The losses, however, are clearly dominated by one category of error. Superlatives Many of the losses involve superlatives, such as "biggest" or "shortest". Let's remedy that. As usual, we let the training examples guide us in adding lexical rules.
rules_superlatives = [
    Rule('$Collection', '$Superlative ?$Optionals $Collection',
         lambda sems: sems[0] + (sems[2],)),
    Rule('$Collection', '$Collection ?$Optionals $Superlative',
         lambda sems: sems[2] + (sems[0],)),

    Rule('$Superlative', 'largest', ('.argmax', 'area')),
    Rule('$Superlative', 'largest', ('.argmax', 'population')),
    Rule('$Superlative', 'biggest', ('.argmax', 'area')),
    Rule('$Superlative', 'biggest', ('.argmax', 'population')),
    Rule('$Superlative', 'smallest', ('.argmin', 'area')),
    Rule('$Superlative', 'smallest', ('.argmin', 'population')),
    Rule('$Superlative', 'longest', ('.argmax', 'length')),
    Rule('$Superlative', 'shortest', ('.argmin', 'length')),
    Rule('$Superlative', 'tallest', ('.argmax', 'height')),
    Rule('$Superlative', 'highest', ('.argmax', 'height')),

    Rule('$Superlative', '$MostLeast $RevRelation',
         lambda sems: (sems[0], sems[1])),
    Rule('$MostLeast', 'most', '.argmax'),
    Rule('$MostLeast', 'least', '.argmin'),
    Rule('$MostLeast', 'lowest', '.argmin'),
    Rule('$MostLeast', 'greatest', '.argmax'),
    Rule('$MostLeast', 'highest', '.argmax'),
]
sippycup-unit-3.ipynb
sloanesturz/cs224u-final-project
gpl-2.0
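The superlative combination rules work by tuple concatenation: a $Superlative carries a partial semantics like ('.argmax', 'height'), and combining it with a collection appends the collection as the final element. A standalone sketch of just that composition:

```python
# Standalone sketch of the superlative composition above: the grammar rule
# `lambda sems: sems[0] + (sems[2],)` appends the collection's semantics
# to the superlative's partial tuple.

superlative = ('.argmax', 'height')  # semantics of 'tallest'
collection = 'mountain'              # semantics of 'mountain'

semantics = superlative + (collection,)
print(semantics)  # ('.argmax', 'height', 'mountain')
```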
Now we should be able to parse "tallest mountain":
rules = rules_optionals + rules_collection_entity + rules_types + rules_relations + rules_intersection + rules_superlatives

grammar = Grammar(rules=rules, annotators=annotators)
parses = grammar.parse_input('tallest mountain')
for parse in parses[:1]:
    print('\n'.join([str(parse.semantics), str(executor.execute(parse.semantics))]))
sippycup-unit-3.ipynb
sloanesturz/cs224u-final-project
gpl-2.0
Wow, superlatives make a big difference. Denotation oracle accuracy has surged from 28% to 42%. Reverse joins
def reverse(relation_sem):
    """Return a relation semantics like relation_sem, but with the pair order swapped."""
    # relation_sem is a lambda function which takes an arg and forms a pair,
    # either (rel, arg) or (arg, rel). We want to swap the order of the pair.
    def apply_and_swap(arg):
        pair = relation_sem(arg)
        return (pair[1], pair[0])
    return apply_and_swap

rules_reverse_joins = [
    Rule('$Collection', '$Collection ?$Optionals $Relation',
         lambda sems: reverse(sems[2])(sems[0])),
]

rules = rules_optionals + rules_collection_entity + rules_types + rules_relations + rules_intersection + rules_superlatives + rules_reverse_joins

grammar = Grammar(rules=rules, annotators=annotators)
parses = grammar.parse_input('which states does the rio grande cross')
for parse in parses[:1]:
    print('\n'.join([str(parse.semantics), str(executor.execute(parse.semantics))]))
sippycup-unit-3.ipynb
sloanesturz/cs224u-final-project
gpl-2.0
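To see reverse in isolation, here is a standalone sketch with a plain lambda standing in for the grammar's relation semantics (entity names are illustrative):

```python
# Standalone sketch of the reverse() combinator: it wraps a function that
# builds a semantic pair and swaps the pair's elements, turning a forward
# join into a reverse join.

def reverse(relation_sem):
    def apply_and_swap(arg):
        pair = relation_sem(arg)
        return (pair[1], pair[0])
    return apply_and_swap

traverses = lambda arg: ('traverses', arg)  # forward relation semantics
traversed_by = reverse(traverses)           # same relation, roles swapped

print(traverses('/river/rio_grande'))     # ('traverses', '/river/rio_grande')
print(traversed_by('/river/rio_grande'))  # ('/river/rio_grande', 'traverses')
```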
This time the gain in denotation oracle accuracy was more modest, from 42% to 47%. Still, we are making good progress. However, note that a substantial gap has opened between accuracy and oracle accuracy. This indicates that we could benefit from adding a scoring model. Feature engineering Through an iterative process of grammar engineering, we've managed to increase denotation oracle accuracy to 47%. But we've been ignoring denotation accuracy, which now lags far behind, at 25%. This represents an opportunity. In order to figure out how best to fix the problem, we need to do some error analysis. Let's look for some specific examples where denotation accuracy is 0, even though denotation oracle accuracy is 1. In other words, let's look for some examples where we have a correct parse, but it's not ranked at the top. We should be able to find some cases like that among the first ten examples of the Geo880 training data.
from experiment import evaluate_model
from metrics import denotation_match_metrics

evaluate_model(model=model,
               examples=geo880_train_examples[:10],
               metrics=denotation_match_metrics(),
               print_examples=True)
sippycup-unit-3.ipynb
sloanesturz/cs224u-final-project
gpl-2.0
Take a look through that output. Over the ten examples, we achieved denotation oracle accuracy of 60%, but denotation accuracy of just 40%. In other words, there were two examples where we generated a correct parse, but failed to rank it at the top. Take a closer look at those two cases. The first case is "what state has the shortest river ?". The top parse has semantics ('.and', 'state', ('.argmin', 'length', 'river')), which means something like "states that are the shortest river". That's not right. In fact, there's no such thing: the denotation is empty. The second case is "what is the highest mountain in alaska ?". The top parse has semantics ('.argmax', 'height', ('.and', 'mountain', '/state/alaska')), which means "the highest mountain which is alaska". Again, there's no such thing: the denotation is empty. So in both of the cases where we put the wrong parse at the top, the top parse had nonsensical semantics with an empty denotation. In fact, if you scroll through the output above, you will see that there are a lot of candidate parses with empty denotations. Seems like we could make a big improvement just by downweighting parses with empty denotations. This is easy to do.
from collections import defaultdict

def empty_denotation_feature(parse):
    features = defaultdict(float)
    if parse.denotation == ():
        features['empty_denotation'] += 1.0
    return features

weights = {'empty_denotation': -1.0}

model = Model(grammar=grammar,
              feature_fn=empty_denotation_feature,
              weights=weights,
              executor=executor.execute)
sippycup-unit-3.ipynb
sloanesturz/cs224u-final-project
gpl-2.0
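Under the hood, the model ranks candidate parses by the dot product of their feature vector with the weights, so with this single feature any parse whose denotation is empty is pushed down by 1.0. A standalone sketch of that scoring (FakeParse is a hypothetical stand-in for SippyCup's Parse objects, not the real class):

```python
from collections import defaultdict

# Sketch of feature-based parse scoring: score = dot product of the
# parse's feature counts with the weight vector.

class FakeParse:
    """Hypothetical stand-in for a parse object carrying a denotation."""
    def __init__(self, denotation):
        self.denotation = denotation

def empty_denotation_feature(parse):
    features = defaultdict(float)
    if parse.denotation == ():
        features['empty_denotation'] += 1.0
    return features

def score(parse, feature_fn, weights):
    return sum(weights.get(f, 0.0) * v for f, v in feature_fn(parse).items())

weights = {'empty_denotation': -1.0}

good = FakeParse(denotation=('/city/montpelier_vt',))
bad = FakeParse(denotation=())

print(score(good, empty_denotation_feature, weights))  # 0.0
print(score(bad, empty_denotation_feature, weights))   # -1.0
```

With these weights, any parse with a non-empty denotation outranks one with an empty denotation, which is exactly the reranking behavior we want.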
Let's evaluate the impact of using our new empty_denotation feature on the Geo880 training examples.
from experiment import evaluate_model
from metrics import denotation_match_metrics

evaluate_model(model=model,
               examples=geo880_train_examples,
               metrics=denotation_match_metrics(),
               print_examples=False)
sippycup-unit-3.ipynb
sloanesturz/cs224u-final-project
gpl-2.0
Vertex client library: AutoML text entity extraction model for batch prediction <table align="left"> <td> <a href="https://colab.research.google.com/github/GoogleCloudPlatform/vertex-ai-samples/blob/master/notebooks/community/gapic/automl/showcase_automl_text_entity_extraction_batch.ipynb"> <img src="https://cloud.google.com/ml-engine/images/colab-logo-32px.png" alt="Colab logo"> Run in Colab </a> </td> <td> <a href="https://github.com/GoogleCloudPlatform/vertex-ai-samples/blob/master/notebooks/community/gapic/automl/showcase_automl_text_entity_extraction_batch.ipynb"> <img src="https://cloud.google.com/ml-engine/images/github-logo-32px.png" alt="GitHub logo"> View on GitHub </a> </td> </table> <br/><br/><br/> Overview This tutorial demonstrates how to use the Vertex client library for Python to create text entity extraction models and do batch prediction using Google Cloud's AutoML. Dataset The dataset used for this tutorial is the NCBI Disease Research Abstracts dataset from National Center for Biotechnology Information. The version of the dataset you will use in this tutorial is stored in a public Cloud Storage bucket. Objective In this tutorial, you create an AutoML text entity extraction model from a Python script, and then do a batch prediction using the Vertex client library. You can alternatively create and deploy models using the gcloud command-line tool or online using the Google Cloud Console. The steps performed include: Create a Vertex Dataset resource. Train the model. View the model evaluation. Make a batch prediction. There is one key difference between using batch prediction and using online prediction: Prediction Service: Does an on-demand prediction for the entire set of instances (i.e., one or more data items) and returns the results in real-time. Batch Prediction Service: Does a queued (batch) prediction for the entire set of instances in the background and stores the results in a Cloud Storage bucket when ready. 
Costs This tutorial uses billable components of Google Cloud (GCP): Vertex AI Cloud Storage Learn about Vertex AI pricing and Cloud Storage pricing, and use the Pricing Calculator to generate a cost estimate based on your projected usage. Installation Install the latest version of Vertex client library.
import os
import sys

# Google Cloud Notebook
if os.path.exists("/opt/deeplearning/metadata/env_version"):
    USER_FLAG = "--user"
else:
    USER_FLAG = ""

! pip3 install -U google-cloud-aiplatform $USER_FLAG
notebooks/community/gapic/automl/showcase_automl_text_entity_extraction_batch.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Restart the kernel Once you've installed the Vertex client library and Google cloud-storage, you need to restart the notebook kernel so it can find the packages.
if not os.getenv("IS_TESTING"):
    # Automatically restart kernel after installs
    import IPython

    app = IPython.Application.instance()
    app.kernel.do_shutdown(True)
notebooks/community/gapic/automl/showcase_automl_text_entity_extraction_batch.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Before you begin GPU runtime Make sure you're running this notebook in a GPU runtime if you have that option. In Colab, select Runtime > Change Runtime Type > GPU Set up your Google Cloud project The following steps are required, regardless of your notebook environment. Select or create a Google Cloud project. When you first create an account, you get a $300 free credit towards your compute/storage costs. Make sure that billing is enabled for your project. Enable the Vertex APIs and Compute Engine APIs. The Google Cloud SDK is already installed in Google Cloud Notebook. Enter your project ID in the cell below. Then run the cell to make sure the Cloud SDK uses the right project for all the commands in this notebook. Note: Jupyter runs lines prefixed with ! as shell commands, and it interpolates Python variables prefixed with $ into these commands.
PROJECT_ID = "[your-project-id]"  # @param {type:"string"}

if PROJECT_ID == "" or PROJECT_ID is None or PROJECT_ID == "[your-project-id]":
    # Get your GCP project id from gcloud
    shell_output = !gcloud config list --format 'value(core.project)' 2>/dev/null
    PROJECT_ID = shell_output[0]
print("Project ID:", PROJECT_ID)

! gcloud config set project $PROJECT_ID
notebooks/community/gapic/automl/showcase_automl_text_entity_extraction_batch.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Region You can also change the REGION variable, which is used for operations throughout the rest of this notebook. Below are regions supported for Vertex. We recommend that you choose the region closest to you. Americas: us-central1 Europe: europe-west4 Asia Pacific: asia-east1 You may not use a multi-regional bucket for training with Vertex. Not all regions provide support for all Vertex services. For the latest support per region, see the Vertex locations documentation
REGION = "us-central1" # @param {type: "string"}
notebooks/community/gapic/automl/showcase_automl_text_entity_extraction_batch.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Authenticate your Google Cloud account If you are using Google Cloud Notebook, your environment is already authenticated. Skip this step. If you are using Colab, run the cell below and follow the instructions when prompted to authenticate your account via OAuth. Otherwise, follow these steps: In the Cloud Console, go to the Create service account key page. Click Create service account. In the Service account name field, enter a name, and click Create. In the Grant this service account access to project section, click the Role drop-down list. Type "Vertex" into the filter box, and select Vertex Administrator. Type "Storage Object Admin" into the filter box, and select Storage Object Admin. Click Create. A JSON file that contains your key downloads to your local environment. Enter the path to your service account key as the GOOGLE_APPLICATION_CREDENTIALS variable in the cell below and run the cell.
# If you are running this notebook in Colab, run this cell and follow the
# instructions to authenticate your GCP account. This provides access to your
# Cloud Storage bucket and lets you submit training jobs and prediction
# requests.

# If on Google Cloud Notebook, then don't execute this code
if not os.path.exists("/opt/deeplearning/metadata/env_version"):
    if "google.colab" in sys.modules:
        from google.colab import auth as google_auth

        google_auth.authenticate_user()

    # If you are running this notebook locally, replace the string below with the
    # path to your service account key and run this cell to authenticate your GCP
    # account.
    elif not os.getenv("IS_TESTING"):
        %env GOOGLE_APPLICATION_CREDENTIALS ''
notebooks/community/gapic/automl/showcase_automl_text_entity_extraction_batch.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Create a Cloud Storage bucket The following steps are required, regardless of your notebook environment. This tutorial is designed to use training data that is in a public Cloud Storage bucket and a local Cloud Storage bucket for your batch predictions. You may alternatively use your own training data that you have stored in a local Cloud Storage bucket. Set the name of your Cloud Storage bucket below. Bucket names must be globally unique across all Google Cloud projects, including those outside of your organization.
from datetime import datetime

# TIMESTAMP is assumed here: the full notebook defines it in an earlier cell
# to make resource names unique across runs.
TIMESTAMP = datetime.now().strftime("%Y%m%d%H%M%S")

BUCKET_NAME = "gs://[your-bucket-name]"  # @param {type:"string"}

if BUCKET_NAME == "" or BUCKET_NAME is None or BUCKET_NAME == "gs://[your-bucket-name]":
    BUCKET_NAME = "gs://" + PROJECT_ID + "aip-" + TIMESTAMP
notebooks/community/gapic/automl/showcase_automl_text_entity_extraction_batch.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Set up variables Next, set up some variables used throughout the tutorial. Import libraries and define constants Import Vertex client library Import the Vertex client library into our Python environment.
import time

from google.cloud.aiplatform import gapic as aip
from google.protobuf import json_format
from google.protobuf.json_format import MessageToJson, ParseDict
from google.protobuf.struct_pb2 import Struct, Value
notebooks/community/gapic/automl/showcase_automl_text_entity_extraction_batch.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Vertex constants Setup up the following constants for Vertex: API_ENDPOINT: The Vertex API service endpoint for dataset, model, job, pipeline and endpoint services. PARENT: The Vertex location root path for dataset, model, job, pipeline and endpoint resources.
# API service endpoint
API_ENDPOINT = "{}-aiplatform.googleapis.com".format(REGION)

# Vertex location root path for your dataset, model and endpoint resources
PARENT = "projects/" + PROJECT_ID + "/locations/" + REGION
notebooks/community/gapic/automl/showcase_automl_text_entity_extraction_batch.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
AutoML constants Set constants unique to AutoML datasets and training: Dataset Schemas: Tells the Dataset resource service which type of dataset it is. Data Labeling (Annotations) Schemas: Tells the Dataset resource service how the data is labeled (annotated). Dataset Training Schemas: Tells the Pipeline resource service the task (e.g., classification) to train the model for.
# Text Dataset type
DATA_SCHEMA = "gs://google-cloud-aiplatform/schema/dataset/metadata/text_1.0.0.yaml"
# Text Labeling type
LABEL_SCHEMA = "gs://google-cloud-aiplatform/schema/dataset/ioformat/text_extraction_io_format_1.0.0.yaml"
# Text Training task
TRAINING_SCHEMA = "gs://google-cloud-aiplatform/schema/trainingjob/definition/automl_text_extraction_1.0.0.yaml"
notebooks/community/gapic/automl/showcase_automl_text_entity_extraction_batch.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Hardware Accelerators Set the hardware accelerators (e.g., GPU), if any, for prediction. Set the variable DEPLOY_GPU/DEPLOY_NGPU to use a container image supporting a GPU and the number of GPUs allocated to the virtual machine (VM) instance. For example, to use a GPU container image with 4 Nvidia Tesla K80 GPUs allocated to each VM, you would specify: (aip.AcceleratorType.NVIDIA_TESLA_K80, 4) For GPU, available accelerators include: - aip.AcceleratorType.NVIDIA_TESLA_K80 - aip.AcceleratorType.NVIDIA_TESLA_P100 - aip.AcceleratorType.NVIDIA_TESLA_P4 - aip.AcceleratorType.NVIDIA_TESLA_T4 - aip.AcceleratorType.NVIDIA_TESLA_V100 Otherwise specify (None, None) to use a container image to run on a CPU.
if os.getenv("IS_TESTING_DEPOLY_GPU"):
    DEPLOY_GPU, DEPLOY_NGPU = (
        aip.AcceleratorType.NVIDIA_TESLA_K80,
        int(os.getenv("IS_TESTING_DEPOLY_GPU")),
    )
else:
    DEPLOY_GPU, DEPLOY_NGPU = (aip.AcceleratorType.NVIDIA_TESLA_K80, 1)
notebooks/community/gapic/automl/showcase_automl_text_entity_extraction_batch.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Container (Docker) image For AutoML batch prediction, the container image for the serving binary is pre-determined by the Vertex prediction service. More specifically, the service will pick the appropriate container for the model depending on the hardware accelerator you selected. Machine Type Next, set the machine type to use for prediction. Set the variable DEPLOY_COMPUTE to configure the compute resources for the VM you will use for prediction. machine type n1-standard: 3.75GB of memory per vCPU. n1-highmem: 6.5GB of memory per vCPU n1-highcpu: 0.9 GB of memory per vCPU vCPUs: number of [2, 4, 8, 16, 32, 64, 96 ] Note: You may also use n2 and e2 machine types for training and deployment, but they do not support GPUs
if os.getenv("IS_TESTING_DEPLOY_MACHINE"):
    MACHINE_TYPE = os.getenv("IS_TESTING_DEPLOY_MACHINE")
else:
    MACHINE_TYPE = "n1-standard"

VCPU = "4"
DEPLOY_COMPUTE = MACHINE_TYPE + "-" + VCPU
print("Deploy machine type", DEPLOY_COMPUTE)
notebooks/community/gapic/automl/showcase_automl_text_entity_extraction_batch.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
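The memory-per-vCPU figures quoted above can be turned into a quick sanity check on the machine type you picked. The multipliers below are taken from the text; verify them against current GCP documentation before relying on them:

```python
# Sketch: total memory implied by the machine family figures quoted above
# (GB per vCPU: n1-standard 3.75, n1-highmem 6.5, n1-highcpu 0.9).
# These multipliers come from the text, not from a live API.

GB_PER_VCPU = {"n1-standard": 3.75, "n1-highmem": 6.5, "n1-highcpu": 0.9}

def machine_memory_gb(machine_type):
    # "n1-standard-4" -> family "n1-standard", 4 vCPUs
    family, _, vcpus = machine_type.rpartition("-")
    return GB_PER_VCPU[family] * int(vcpus)

print(machine_memory_gb("n1-standard-4"))  # 15.0
```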
Tutorial Now you are ready to start creating your own AutoML text entity extraction model. Set up clients The Vertex client library works as a client/server model. On your side (the Python script) you will create a client that sends requests and receives responses from the Vertex server. You will use different clients in this tutorial for different steps in the workflow. So set them all up upfront. Dataset Service for Dataset resources. Model Service for Model resources. Pipeline Service for training. Job Service for batch prediction and custom training.
# client options same for all services
client_options = {"api_endpoint": API_ENDPOINT}


def create_dataset_client():
    client = aip.DatasetServiceClient(client_options=client_options)
    return client


def create_model_client():
    client = aip.ModelServiceClient(client_options=client_options)
    return client


def create_pipeline_client():
    client = aip.PipelineServiceClient(client_options=client_options)
    return client


def create_job_client():
    client = aip.JobServiceClient(client_options=client_options)
    return client


clients = {}
clients["dataset"] = create_dataset_client()
clients["model"] = create_model_client()
clients["pipeline"] = create_pipeline_client()
clients["job"] = create_job_client()

for client in clients.items():
    print(client)
notebooks/community/gapic/automl/showcase_automl_text_entity_extraction_batch.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Dataset Now that your clients are ready, your first step in training a model is to create a managed dataset instance, and then upload your labeled data to it. Create Dataset resource instance Use the helper function create_dataset to create the instance of a Dataset resource. This function does the following: Uses the dataset client service. Creates a Vertex Dataset resource (aip.Dataset), with the following parameters: display_name: The human-readable name you choose to give it. metadata_schema_uri: The schema for the dataset type. Calls the client dataset service method create_dataset, with the following parameters: parent: The Vertex location root path for your Dataset, Model and Endpoint resources. dataset: The Vertex dataset object instance you created. The method returns an operation object. An operation object is how Vertex handles asynchronous calls for long running operations. While this step usually goes fast, when you first use it in your project, there is a longer delay due to provisioning. You can use the operation object to get status on the operation (e.g., create Dataset resource) or to cancel the operation, by invoking an operation method:

| Method | Description |
| ----------- | ----------- |
| result() | Waits for the operation to complete and returns a result object in JSON format. |
| running() | Returns True/False on whether the operation is still running. |
| done() | Returns True/False on whether the operation is completed. |
| canceled() | Returns True/False on whether the operation was canceled. |
| cancel() | Cancels the operation (this may take up to 30 seconds). |
TIMEOUT = 90


def create_dataset(name, schema, labels=None, timeout=TIMEOUT):
    start_time = time.time()
    try:
        dataset = aip.Dataset(
            display_name=name, metadata_schema_uri=schema, labels=labels
        )

        operation = clients["dataset"].create_dataset(parent=PARENT, dataset=dataset)
        print("Long running operation:", operation.operation.name)
        result = operation.result(timeout=TIMEOUT)
        print("time:", time.time() - start_time)
        print("response")
        print(" name:", result.name)
        print(" display_name:", result.display_name)
        print(" metadata_schema_uri:", result.metadata_schema_uri)
        print(" metadata:", dict(result.metadata))
        print(" create_time:", result.create_time)
        print(" update_time:", result.update_time)
        print(" etag:", result.etag)
        print(" labels:", dict(result.labels))
        return result
    except Exception as e:
        print("exception:", e)
        return None


result = create_dataset("biomedical-" + TIMESTAMP, DATA_SCHEMA)
notebooks/community/gapic/automl/showcase_automl_text_entity_extraction_batch.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Now save the unique dataset identifier for the Dataset resource instance you created.
# The full unique ID for the dataset
dataset_id = result.name
# The short numeric ID for the dataset
dataset_short_id = dataset_id.split("/")[-1]

print(dataset_id)
notebooks/community/gapic/automl/showcase_automl_text_entity_extraction_batch.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Data preparation The Vertex Dataset resource for text has a couple of requirements for your text entity extraction data. Text examples must be stored in a JSONL file. Unlike text classification and sentiment analysis, a CSV index file is not supported. The examples must be either inline text or reference text files that are in Cloud Storage buckets. JSONL For text entity extraction, the JSONL file has a few requirements: Each data item is a separate JSON object, on a separate line. The key/value pair text_segment_annotations is a list of character start/end positions in the text per entity with the corresponding label. display_name: The label. start_offset/end_offset: The character offsets of the start/end of the entity. The key/value pair text_content is the text. {'text_segment_annotations': [{'end_offset': value, 'start_offset': value, 'display_name': label}, ...], 'text_content': text} Note: The dictionary key fields may alternatively be in camelCase. For example, 'display_name' can also be 'displayName'. Location of Cloud Storage training data. Now set the variable IMPORT_FILE to the location of the JSONL index file in Cloud Storage.
IMPORT_FILE = "gs://ucaip-test-us-central1/dataset/ucaip_ten_dataset.jsonl"
notebooks/community/gapic/automl/showcase_automl_text_entity_extraction_batch.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
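One line of the JSONL format described above can be sketched as follows. The text, label, and offsets here are invented for illustration, and the exact offset convention should be checked against the import schema before preparing real data:

```python
import json

# Sketch: building one line of the entity-extraction JSONL format.
# The annotation records character offsets of the entity inside text_content.

text = "Adenomatous polyposis coli is a hereditary disease."
entity = "Adenomatous polyposis coli"

item = {
    "text_content": text,
    "text_segment_annotations": [
        {
            "display_name": "DiseaseClass",  # illustrative label
            "start_offset": text.index(entity),
            "end_offset": text.index(entity) + len(entity),
        }
    ],
}

# One JSON object per line in the JSONL index file.
line = json.dumps(item)
print(line)
```

A quick way to validate such a file is to slice `text_content` with each annotation's offsets and confirm you get the entity string back.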
Quick peek at your data You will use a version of the NCBI Biomedical dataset that is stored in a public Cloud Storage bucket, using a JSONL index file. Start by doing a quick peek at the data. You count the number of examples by counting the number of objects in a JSONL index file (wc -l) and then peek at the first few rows.
if "IMPORT_FILES" in globals():
    FILE = IMPORT_FILES[0]
else:
    FILE = IMPORT_FILE

count = ! gsutil cat $FILE | wc -l
print("Number of Examples", int(count[0]))

print("First 10 rows")
! gsutil cat $FILE | head
notebooks/community/gapic/automl/showcase_automl_text_entity_extraction_batch.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Import data Now, import the data into your Vertex Dataset resource. Use this helper function import_data to import the data. The function does the following: Uses the Dataset client. Calls the client method import_data, with the following parameters: name: The human readable name you give to the Dataset resource (e.g., biomedical). import_configs: The import configuration. import_configs: A Python list containing a dictionary, with the key/value entries: gcs_sources: A list of URIs to the paths of the one or more index files. import_schema_uri: The schema identifying the labeling type. The import_data() method returns a long running operation object. This will take a few minutes to complete. If you are in a live tutorial, this would be a good time to ask questions, or take a personal break.
def import_data(dataset, gcs_sources, schema):
    config = [{"gcs_source": {"uris": gcs_sources}, "import_schema_uri": schema}]
    print("dataset:", dataset_id)
    start_time = time.time()
    try:
        operation = clients["dataset"].import_data(
            name=dataset_id, import_configs=config
        )
        print("Long running operation:", operation.operation.name)

        result = operation.result()
        print("result:", result)
        print("time:", int(time.time() - start_time), "secs")
        print("error:", operation.exception())
        print("meta :", operation.metadata)
        print(
            "after: running:",
            operation.running(),
            "done:",
            operation.done(),
            "cancelled:",
            operation.cancelled(),
        )
        return operation
    except Exception as e:
        print("exception:", e)
        return None


import_data(dataset_id, [IMPORT_FILE], LABEL_SCHEMA)
notebooks/community/gapic/automl/showcase_automl_text_entity_extraction_batch.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Train the model Now train an AutoML text entity extraction model using your Vertex Dataset resource. To train the model, do the following steps: Create a Vertex training pipeline for the Dataset resource. Execute the pipeline to start the training. Create a training pipeline You may ask, what do we use a pipeline for? You typically use pipelines when the job (such as training) has multiple steps, generally in sequential order: do step A, do step B, etc. By putting the steps into a pipeline, we gain the benefits of: Being reusable for subsequent training jobs. Can be containerized and run as a batch job. Can be distributed. All the steps are associated with the same pipeline job for tracking progress. Use this helper function create_pipeline, which takes the following parameters: pipeline_name: A human readable name for the pipeline job. model_name: A human readable name for the model. dataset: The Vertex fully qualified dataset identifier. schema: The dataset labeling (annotation) training schema. task: A dictionary describing the requirements for the training job. The helper function calls the Pipeline client service's method create_pipeline, which takes the following parameters: parent: The Vertex location root path for your Dataset, Model and Endpoint resources. training_pipeline: the full specification for the pipeline training job. Let's now look deeper into the minimal requirements for constructing a training_pipeline specification: display_name: A human readable name for the pipeline job. training_task_definition: The dataset labeling (annotation) training schema. training_task_inputs: A dictionary describing the requirements for the training job. model_to_upload: A human readable name for the model. input_data_config: The dataset specification. dataset_id: The Vertex dataset identifier only (non-fully qualified) -- this is the last part of the fully-qualified identifier.
fraction_split: If specified, the percentages of the dataset to use for training, test and validation. Otherwise, the percentages are automatically selected by AutoML.
def create_pipeline(pipeline_name, model_name, dataset, schema, task):

    dataset_id = dataset.split("/")[-1]

    input_config = {
        "dataset_id": dataset_id,
        "fraction_split": {
            "training_fraction": 0.8,
            "validation_fraction": 0.1,
            "test_fraction": 0.1,
        },
    }

    training_pipeline = {
        "display_name": pipeline_name,
        "training_task_definition": schema,
        "training_task_inputs": task,
        "input_data_config": input_config,
        "model_to_upload": {"display_name": model_name},
    }

    try:
        pipeline = clients["pipeline"].create_training_pipeline(
            parent=PARENT, training_pipeline=training_pipeline
        )
        print(pipeline)
    except Exception as e:
        print("exception:", e)
        return None
    return pipeline
notebooks/community/gapic/automl/showcase_automl_text_entity_extraction_batch.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Construct the task requirements Next, construct the task requirements. Unlike other parameters which take a Python (JSON-like) dictionary, the task field takes a Google protobuf Struct, which is very similar to a Python dictionary. Use the json_format.ParseDict method for the conversion. The minimal fields you need to specify are: multi_label: Whether True/False this is a multi-label (vs single) classification. budget_milli_node_hours: The maximum time to budget (billed) for training the model, where 1000 = 1 hour. model_type: The type of deployed model: CLOUD: For deploying to Google Cloud. disable_early_stopping: Whether True/False to let AutoML use its judgement to stop training early or train for the entire budget. Finally, you create the pipeline by calling the helper function create_pipeline, which returns an instance of a training pipeline object.
PIPE_NAME = "biomedical_pipe-" + TIMESTAMP
MODEL_NAME = "biomedical_model-" + TIMESTAMP

task = json_format.ParseDict(
    {
        "multi_label": False,
        "budget_milli_node_hours": 8000,
        "model_type": "CLOUD",
        "disable_early_stopping": False,
    },
    Value(),
)

response = create_pipeline(PIPE_NAME, MODEL_NAME, dataset_id, TRAINING_SCHEMA, task)
notebooks/community/gapic/automl/showcase_automl_text_entity_extraction_batch.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Now save the unique identifier of the training pipeline you created.
# The full unique ID for the pipeline
pipeline_id = response.name
# The short numeric ID for the pipeline
pipeline_short_id = pipeline_id.split("/")[-1]

print(pipeline_id)
notebooks/community/gapic/automl/showcase_automl_text_entity_extraction_batch.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Get information on a training pipeline Now get pipeline information for just this training pipeline instance. The helper function gets the job information for just this job by calling the pipeline client service's get_training_pipeline method, with the following parameter: name: The Vertex fully qualified pipeline identifier. When the model is done training, the pipeline state will be PIPELINE_STATE_SUCCEEDED.
def get_training_pipeline(name, silent=False):
    response = clients["pipeline"].get_training_pipeline(name=name)
    if silent:
        return response

    print("pipeline")
    print(" name:", response.name)
    print(" display_name:", response.display_name)
    print(" state:", response.state)
    print(" training_task_definition:", response.training_task_definition)
    print(" training_task_inputs:", dict(response.training_task_inputs))
    print(" create_time:", response.create_time)
    print(" start_time:", response.start_time)
    print(" end_time:", response.end_time)
    print(" update_time:", response.update_time)
    print(" labels:", dict(response.labels))
    return response


response = get_training_pipeline(pipeline_id)
notebooks/community/gapic/automl/showcase_automl_text_entity_extraction_batch.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Deployment Training the above model may take upwards of 120 minutes. Once your model is done training, you can calculate the actual time it took to train the model by subtracting end_time from start_time. For your model, you will need to know the fully qualified Vertex Model resource identifier, which the pipeline service assigned to it. You can get this from the returned pipeline instance as the field model_to_upload.name.
while True:
    response = get_training_pipeline(pipeline_id, True)
    if response.state != aip.PipelineState.PIPELINE_STATE_SUCCEEDED:
        print("Training job has not completed:", response.state)
        model_to_deploy_id = None
        if response.state == aip.PipelineState.PIPELINE_STATE_FAILED:
            raise Exception("Training Job Failed")
    else:
        model_to_deploy = response.model_to_upload
        model_to_deploy_id = model_to_deploy.name
        print("Training Time:", response.end_time - response.start_time)
        break
    time.sleep(60)

print("model to deploy:", model_to_deploy_id)
notebooks/community/gapic/automl/showcase_automl_text_entity_extraction_batch.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Model information Now that your model is trained, you can get some information on your model. Evaluate the Model resource Now find out how good the model service believes your model is. As part of training, some portion of the dataset was set aside as the test (holdout) data, which is used by the pipeline service to evaluate the model. List evaluations for all slices Use this helper function list_model_evaluations, which takes the following parameter: name: The Vertex fully qualified model identifier for the Model resource. This helper function uses the model client service's list_model_evaluations method, which takes the same parameter. The response object from the call is a list, where each element is an evaluation metric. For each evaluation -- you probably have only one -- we then print all the key names for each metric in the evaluation, and for a small set (confusionMatrix and confidenceMetrics) we print the result.
def list_model_evaluations(name):
    response = clients["model"].list_model_evaluations(parent=name)
    for evaluation in response:
        print("model_evaluation")
        print(" name:", evaluation.name)
        print(" metrics_schema_uri:", evaluation.metrics_schema_uri)
        metrics = json_format.MessageToDict(evaluation._pb.metrics)
        for metric in metrics.keys():
            print(metric)
        print("confusionMatrix", metrics["confusionMatrix"])
        print("confidenceMetrics", metrics["confidenceMetrics"])

    return evaluation.name


last_evaluation = list_model_evaluations(model_to_deploy_id)
notebooks/community/gapic/automl/showcase_automl_text_entity_extraction_batch.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Model deployment for batch prediction Now deploy the trained Vertex Model resource you created for batch prediction. This differs from deploying a Model resource for on-demand prediction. For online prediction, you: Create an Endpoint resource for deploying the Model resource to. Deploy the Model resource to the Endpoint resource. Make online prediction requests to the Endpoint resource. For batch prediction, you: Create a batch prediction job. The job service will provision resources for the batch prediction request. The results of the batch prediction request are returned to the caller. The job service will deprovision the resources for the batch prediction request. Make a batch prediction request Now do a batch prediction to your deployed model. Make test items You will use synthetic data as test data items. Don't be concerned that we are using synthetic data -- we just want to demonstrate how to make a prediction.
test_item_1 = 'Molecular basis of hexosaminidase A deficiency and pseudodeficiency in the Berks County Pennsylvania Dutch.\tFollowing the birth of two infants with Tay-Sachs disease ( TSD ) , a non-Jewish , Pennsylvania Dutch kindred was screened for TSD carriers using the biochemical assay . A high frequency of individuals who appeared to be TSD heterozygotes was detected ( Kelly et al . , 1975 ) . Clinical and biochemical evidence suggested that the increased carrier frequency was due to at least two altered alleles for the hexosaminidase A alpha-subunit . We now report two mutant alleles in this Pennsylvania Dutch kindred , and one polymorphism . One allele , reported originally in a French TSD patient ( Akli et al . , 1991 ) , is a GT-- > AT transition at the donor splice-site of intron 9 . The second , a C-- > T transition at nucleotide 739 ( Arg247Trp ) , has been shown by Triggs-Raine et al . ( 1992 ) to be a clinically benign " pseudodeficient " allele associated with reduced enzyme activity against artificial substrate . Finally , a polymorphism [ G-- > A ( 759 ) ] , which leaves valine at codon 253 unchanged , is described'

test_item_2 = "Analysis of alkaptonuria (AKU) mutations and polymorphisms reveals that the CCC sequence motif is a mutational hot spot in the homogentisate 1,2 dioxygenase gene (HGO). We recently showed that alkaptonuria ( AKU ) is caused by loss-of-function mutations in the homogentisate 1 , 2 dioxygenase gene ( HGO ) . Herein we describe haplotype and mutational analyses of HGO in seven new AKU pedigrees . These analyses identified two novel single-nucleotide polymorphisms ( INV4 + 31A-- > G and INV11 + 18A-- > G ) and six novel AKU mutations ( INV1-1G-- > A , W60G , Y62C , A122D , P230T , and D291E ) , which further illustrates the remarkable allelic heterogeneity found in AKU . Reexamination of all 29 mutations and polymorphisms thus far described in HGO shows that these nucleotide changes are not randomly distributed ; the CCC sequence motif and its inverted complement , GGG , are preferentially mutated . These analyses also demonstrated that the nucleotide substitutions in HGO do not involve CpG dinucleotides , which illustrates important differences between HGO and other genes for the occurrence of mutation at specific short-sequence motifs . Because the CCC sequence motifs comprise a significant proportion ( 34 . 5 % ) of all mutated bases that have been observed in HGO , we conclude that the CCC triplet is a mutational hot spot in HGO ."
notebooks/community/gapic/automl/showcase_automl_text_entity_extraction_batch.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Make the batch input file Now make a batch input file, which you will store in your local Cloud Storage bucket. The batch input file can only be in JSONL format. For a JSONL file, you make one dictionary entry per line for each data item (instance). The dictionary contains the key/value pairs: content: The Cloud Storage path to the file with the text item. mime_type: The content type. In our example, it is a text file. For example: {'content': '[your-bucket]/file1.txt', 'mime_type': 'text/plain'}
import json

import tensorflow as tf

gcs_test_item_1 = BUCKET_NAME + "/test1.txt"
with tf.io.gfile.GFile(gcs_test_item_1, "w") as f:
    f.write(test_item_1 + "\n")
gcs_test_item_2 = BUCKET_NAME + "/test2.txt"
with tf.io.gfile.GFile(gcs_test_item_2, "w") as f:
    f.write(test_item_2 + "\n")

gcs_input_uri = BUCKET_NAME + "/test.jsonl"
with tf.io.gfile.GFile(gcs_input_uri, "w") as f:
    data = {"content": gcs_test_item_1, "mime_type": "text/plain"}
    f.write(json.dumps(data) + "\n")
    data = {"content": gcs_test_item_2, "mime_type": "text/plain"}
    f.write(json.dumps(data) + "\n")

print(gcs_input_uri)
! gsutil cat $gcs_input_uri
notebooks/community/gapic/automl/showcase_automl_text_entity_extraction_batch.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Compute instance scaling You have several choices on scaling the compute instances for handling your batch prediction requests: Single Instance: The batch prediction requests are processed on a single compute instance. Set the minimum (MIN_NODES) and maximum (MAX_NODES) number of compute instances to one. Manual Scaling: The batch prediction requests are split across a fixed number of compute instances that you manually specify. Set the minimum (MIN_NODES) and maximum (MAX_NODES) number of compute instances to the same number of nodes. When a model is first deployed to the instance, the fixed number of compute instances are provisioned and batch prediction requests are evenly distributed across them. Auto Scaling: The batch prediction requests are split across a scalable number of compute instances. Set the minimum (MIN_NODES) number of compute instances to provision when a model is first deployed and to de-provision, and set the maximum (MAX_NODES) number of compute instances to provision, depending on load conditions. The minimum number of compute instances corresponds to the field min_replica_count and the maximum number of compute instances corresponds to the field max_replica_count, in your subsequent deployment request.
MIN_NODES = 1
MAX_NODES = 1
notebooks/community/gapic/automl/showcase_automl_text_entity_extraction_batch.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Make batch prediction request Now that your batch of two test items is ready, let's do the batch request. Use this helper function create_batch_prediction_job, with the following parameters: display_name: The human readable name for the prediction job. model_name: The Vertex fully qualified identifier for the Model resource. gcs_source_uri: The Cloud Storage path to the input file -- which you created above. gcs_destination_output_uri_prefix: The Cloud Storage path that the service will write the predictions to. parameters: Additional filtering parameters for serving prediction results. The helper function calls the job client service's create_batch_prediction_job method, with the following parameters: parent: The Vertex location root path for Dataset, Model and Pipeline resources. batch_prediction_job: The specification for the batch prediction job. Let's now dive into the specification for the batch_prediction_job: display_name: The human readable name for the prediction batch job. model: The Vertex fully qualified identifier for the Model resource. dedicated_resources: The compute resources to provision for the batch prediction job. machine_spec: The compute instance to provision. Use the variable you set earlier DEPLOY_GPU != None to use a GPU; otherwise only a CPU is allocated. starting_replica_count: The number of compute instances to initially provision, which you set earlier as the variable MIN_NODES. max_replica_count: The maximum number of compute instances to scale to, which you set earlier as the variable MAX_NODES. model_parameters: Additional filtering parameters for serving prediction results. Note, text models do not support additional parameters. input_config: The input source and format type for the instances to predict. instances_format: The format of the batch prediction request file: jsonl only supported. gcs_source: A list of one or more Cloud Storage paths to your batch prediction requests.
output_config: The output destination and format for the predictions. prediction_format: The format of the batch prediction response file: jsonl only supported. gcs_destination: The output destination for the predictions. dedicated_resources: The compute resources to provision for the batch prediction job. machine_spec: The compute instance to provision. Use the variable you set earlier DEPLOY_GPU != None to use a GPU; otherwise only a CPU is allocated. starting_replica_count: The number of compute instances to initially provision. max_replica_count: The maximum number of compute instances to scale to. In this tutorial, only one instance is provisioned. This call is an asynchronous operation. You will print from the response object a few select fields, including: name: The Vertex fully qualified identifier assigned to the batch prediction job. display_name: The human readable name for the prediction batch job. model: The Vertex fully qualified identifier for the Model resource. generate_explanations: Whether True/False explanations were provided with the predictions (explainability). state: The state of the prediction job (pending, running, etc). Since this call will take a few moments to execute, you will likely get JobState.JOB_STATE_PENDING for state.
BATCH_MODEL = "biomedical_batch-" + TIMESTAMP


def create_batch_prediction_job(
    display_name,
    model_name,
    gcs_source_uri,
    gcs_destination_output_uri_prefix,
    parameters=None,
):
    if DEPLOY_GPU:
        machine_spec = {
            "machine_type": DEPLOY_COMPUTE,
            "accelerator_type": DEPLOY_GPU,
            "accelerator_count": DEPLOY_NGPU,
        }
    else:
        machine_spec = {
            "machine_type": DEPLOY_COMPUTE,
            "accelerator_count": 0,
        }

    batch_prediction_job = {
        "display_name": display_name,
        # Format: 'projects/{project}/locations/{location}/models/{model_id}'
        "model": model_name,
        "model_parameters": json_format.ParseDict(parameters, Value()),
        "input_config": {
            "instances_format": IN_FORMAT,
            "gcs_source": {"uris": [gcs_source_uri]},
        },
        "output_config": {
            "predictions_format": OUT_FORMAT,
            "gcs_destination": {"output_uri_prefix": gcs_destination_output_uri_prefix},
        },
        "dedicated_resources": {
            "machine_spec": machine_spec,
            "starting_replica_count": MIN_NODES,
            "max_replica_count": MAX_NODES,
        },
    }
    response = clients["job"].create_batch_prediction_job(
        parent=PARENT, batch_prediction_job=batch_prediction_job
    )
    print("response")
    print(" name:", response.name)
    print(" display_name:", response.display_name)
    print(" model:", response.model)
    try:
        print(" generate_explanation:", response.generate_explanation)
    except:
        pass
    print(" state:", response.state)
    print(" create_time:", response.create_time)
    print(" start_time:", response.start_time)
    print(" end_time:", response.end_time)
    print(" update_time:", response.update_time)
    print(" labels:", response.labels)
    return response


IN_FORMAT = "jsonl"
OUT_FORMAT = "jsonl"  # [jsonl]

response = create_batch_prediction_job(
    BATCH_MODEL, model_to_deploy_id, gcs_input_uri, BUCKET_NAME, None
)
notebooks/community/gapic/automl/showcase_automl_text_entity_extraction_batch.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Now get the unique identifier for the batch prediction job you created.
# The full unique ID for the batch job
batch_job_id = response.name
# The short numeric ID for the batch job
batch_job_short_id = batch_job_id.split("/")[-1]

print(batch_job_id)
notebooks/community/gapic/automl/showcase_automl_text_entity_extraction_batch.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Get information on a batch prediction job Use this helper function get_batch_prediction_job, with the following parameter: job_name: The Vertex fully qualified identifier for the batch prediction job. The helper function calls the job client service's get_batch_prediction_job method, with the following parameter: name: The Vertex fully qualified identifier for the batch prediction job. In this tutorial, you will pass it the Vertex fully qualified identifier for your batch prediction job -- batch_job_id The helper function will return the Cloud Storage path to where the predictions are stored -- gcs_destination.
def get_batch_prediction_job(job_name, silent=False):
    response = clients["job"].get_batch_prediction_job(name=job_name)
    if silent:
        return response.output_config.gcs_destination.output_uri_prefix, response.state

    print("response")
    print(" name:", response.name)
    print(" display_name:", response.display_name)
    print(" model:", response.model)
    try:  # not all data types support explanations
        print(" generate_explanation:", response.generate_explanation)
    except:
        pass
    print(" state:", response.state)
    print(" error:", response.error)
    gcs_destination = response.output_config.gcs_destination
    print(" gcs_destination")
    print(" output_uri_prefix:", gcs_destination.output_uri_prefix)
    return gcs_destination.output_uri_prefix, response.state


predictions, state = get_batch_prediction_job(batch_job_id)
notebooks/community/gapic/automl/showcase_automl_text_entity_extraction_batch.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Get the predictions When the batch prediction is done processing, the job state will be JOB_STATE_SUCCEEDED. Finally, you view the predictions stored at the Cloud Storage path you set as output. The predictions will be in JSONL format, which you indicated at the time you made the batch prediction job, under a subfolder starting with the name prediction, and under that folder will be a file called predictions*.jsonl. Now display (cat) the contents. You will see multiple JSON objects, one for each prediction. The first field text_snippet is the text file you did the prediction on, and the second field annotations is the prediction, which is further broken down into: text_extraction: The extracted entity from the text. display_name: The predicted label for the extracted entity. score: The confidence level between 0 and 1 in the prediction. startOffset: The character offset in the text of the start of the extracted entity. endOffset: The character offset in the text of the end of the extracted entity.
def get_latest_predictions(gcs_out_dir):
    """Get the latest prediction subfolder using the timestamp in the subfolder name"""
    folders = !gsutil ls $gcs_out_dir
    latest = ""
    for folder in folders:
        subfolder = folder.split("/")[-2]
        if subfolder.startswith("prediction-"):
            if subfolder > latest:
                latest = folder[:-1]
    return latest


while True:
    predictions, state = get_batch_prediction_job(batch_job_id, True)
    if state != aip.JobState.JOB_STATE_SUCCEEDED:
        print("The job has not completed:", state)
        if state == aip.JobState.JOB_STATE_FAILED:
            raise Exception("Batch Job Failed")
    else:
        folder = get_latest_predictions(predictions)
        ! gsutil ls $folder/prediction*.jsonl
        ! gsutil cat $folder/prediction*.jsonl
        break
    time.sleep(60)
notebooks/community/gapic/automl/showcase_automl_text_entity_extraction_batch.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
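Rather than just cat-ing the raw file, the per-annotation fields described above can be pulled out with a few lines of Python. This is an illustrative sketch only: the sample JSONL line below is made up, and the real predictions*.jsonl schema may nest these fields differently than shown.

```python
import json

# A made-up prediction line shaped like the fields described above
# (text_snippet, annotations, display_name, score) -- not real service output.
sample_line = json.dumps({
    "text_snippet": {"content": "gs://my-bucket/test1.txt"},
    "annotations": [
        {
            "text_extraction": {"text_segment": {"startOffset": 18, "endOffset": 43}},
            "display_name": "DiseaseClass",
            "score": 0.97,
        }
    ],
})


def parse_prediction(line, min_score=0.5):
    """Return (display_name, score) pairs for annotations above a confidence threshold."""
    record = json.loads(line)
    return [
        (a["display_name"], a["score"])
        for a in record.get("annotations", [])
        if a["score"] >= min_score
    ]


print(parse_prediction(sample_line))
```

Thresholding on score this way is a common post-processing step, since low-confidence extractions are often noise.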
Cleaning up To clean up all GCP resources used in this project, you can delete the GCP project you used for the tutorial. Otherwise, you can delete the individual resources you created in this tutorial: Dataset Pipeline Model Endpoint Batch Job Custom Job Hyperparameter Tuning Job Cloud Storage Bucket
delete_dataset = True
delete_pipeline = True
delete_model = True
delete_endpoint = True
delete_batchjob = True
delete_customjob = True
delete_hptjob = True
delete_bucket = True

# Delete the dataset using the Vertex fully qualified identifier for the dataset
try:
    if delete_dataset and "dataset_id" in globals():
        clients["dataset"].delete_dataset(name=dataset_id)
except Exception as e:
    print(e)

# Delete the training pipeline using the Vertex fully qualified identifier for the pipeline
try:
    if delete_pipeline and "pipeline_id" in globals():
        clients["pipeline"].delete_training_pipeline(name=pipeline_id)
except Exception as e:
    print(e)

# Delete the model using the Vertex fully qualified identifier for the model
try:
    if delete_model and "model_to_deploy_id" in globals():
        clients["model"].delete_model(name=model_to_deploy_id)
except Exception as e:
    print(e)

# Delete the endpoint using the Vertex fully qualified identifier for the endpoint
try:
    if delete_endpoint and "endpoint_id" in globals():
        clients["endpoint"].delete_endpoint(name=endpoint_id)
except Exception as e:
    print(e)

# Delete the batch job using the Vertex fully qualified identifier for the batch job
try:
    if delete_batchjob and "batch_job_id" in globals():
        clients["job"].delete_batch_prediction_job(name=batch_job_id)
except Exception as e:
    print(e)

# Delete the custom job using the Vertex fully qualified identifier for the custom job
try:
    if delete_customjob and "job_id" in globals():
        clients["job"].delete_custom_job(name=job_id)
except Exception as e:
    print(e)

# Delete the hyperparameter tuning job using the Vertex fully qualified identifier for the hyperparameter tuning job
try:
    if delete_hptjob and "hpt_job_id" in globals():
        clients["job"].delete_hyperparameter_tuning_job(name=hpt_job_id)
except Exception as e:
    print(e)

if delete_bucket and "BUCKET_NAME" in globals():
    ! gsutil rm -r $BUCKET_NAME
notebooks/community/gapic/automl/showcase_automl_text_entity_extraction_batch.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Create a Growler application named NotebookServer
app = growler.App("NotebookServer")
examples/ExampleNotebook_1.ipynb
pyGrowler/Growler
apache-2.0
Add a general-purpose middleware function that prints the client's IP address and the USER-AGENT header
@app.use
def print_client_info(req, res):
    ip = req.ip
    reqpath = req.path
    print("[{ip}] {path}".format(ip=ip, path=reqpath))
    print(" >", req.headers['USER-AGENT'])
    print(flush=True)
examples/ExampleNotebook_1.ipynb
pyGrowler/Growler
apache-2.0
Next, add a route matching any GET requests for the root (/) of the site. This uses a simple global variable to count the number of times this page has been accessed, and returns text to the client
i = 0


@app.get("/")
def index(req, res):
    global i
    res.send_text("It Works! (%d)" % i)
    i += 1
examples/ExampleNotebook_1.ipynb
pyGrowler/Growler
apache-2.0
We can see the tree of middleware that all requests will pass through. Notice the router object that was implicitly created, which will match all requests.
app.print_middleware_tree()
examples/ExampleNotebook_1.ipynb
pyGrowler/Growler
apache-2.0
Use the helper method to create the asyncio server listening on port 9000.
app.create_server_and_run_forever(host='127.0.0.1', port=9000)
examples/ExampleNotebook_1.ipynb
pyGrowler/Growler
apache-2.0
First we will establish some general variables for our game, including the 'stake' of the game (how much money each play is worth), as well as a list representing the cards used in the game. To make things easier, we will just use a list of numbers 0-9 for the cards.
gameStake = 50
cards = range(10)
notebooks/week-2/04 - Lab 2 Assignment.ipynb
yuhao0531/dmc
apache-2.0
Next, let's define a new class to represent each player in the game. I have provided a rough framework of the class definition along with comments along the way to help you complete it. Places where you should write code are denoted by comments inside [] brackets and CAPITAL TEXT.
import random


class Player:

    # two local variables store a unique ID for each player and the player's current 'pot' of money
    PN = 0
    Pot = 0

    # in the __init__() function, the two input variables initialize the ID and starting pot of each player
    def __init__(self, inputID, startingPot):
        self.PN = inputID
        self.Pot = startingPot

    # a function for playing the game. This function takes an input for the dealer's card
    # and picks a random number from the 'cards' list for the player's card
    def play(self, dealerCard):
        # we use the random.choice() function to select a random item from a list
        playerCard = random.choice(cards)

        # test the player's card value against the dealer card, add or subtract the stake
        # from the player's pot so that 'Pot' tracks the player's money, and print a
        # statement saying whether the player won or lost the hand
        if playerCard < dealerCard:
            self.Pot = self.Pot - gameStake
            print('player ' + str(self.PN) + ' Lose, ' + str(playerCard) + ' vs ' + str(dealerCard))
        else:
            self.Pot = self.Pot + gameStake
            print('player ' + str(self.PN) + ' Win, ' + str(playerCard) + ' vs ' + str(dealerCard))

    # accessor function to return the current value of the player's pot
    def returnPot(self):
        return self.Pot

    # accessor function to return the player's ID
    def returnID(self):
        return self.PN
notebooks/week-2/04 - Lab 2 Assignment.ipynb
yuhao0531/dmc
apache-2.0
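As an aside, note that the conditional in play() treats a tie (playerCard equal to dealerCard) as a win for the player, so the game has a small built-in edge in the player's favor. A quick enumeration over all 100 equally likely (player, dealer) card pairs makes this concrete -- this check is illustrative, not part of the lab code:

```python
# Enumerate every equally likely (player, dealer) pair and average the payoff.
# A tie counts as a win, matching the `if playerCard < dealerCard` test in play().
stake = 50
payoffs = [stake if p >= d else -stake for p in range(10) for d in range(10)]
expected_gain = sum(payoffs) / len(payoffs)
print(expected_gain)  # 5.0: the player nets $5 per hand on average
```

With 55 winning pairs out of 100, each $50 hand is worth (0.55 - 0.45) x $50 = $5 to the player in expectation, which is why pots tend to drift upward over many hands.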
Next we will create some functions outside the class definition which will control the flow of the game. The first function will play one round. It will take as an input the collection of players, and iterate through each one, calling each player's play() function.
def playHand(players):
    for player in players:
        dealerCard = random.choice(cards)
        # execute the play() function for each player using the dealer card;
        # the results are printed inside play()
        player.play(dealerCard)
notebooks/week-2/04 - Lab 2 Assignment.ipynb
yuhao0531/dmc
apache-2.0
Next we will define a function that will check the balances of each player, and print out a message with the player's ID and their balance.
def checkBalances(players):
    for player in players:
        # print out each player's balance by using each player's accessor functions
        print('player ' + str(player.returnID()) + ' has $' + str(player.returnPot()) + ' left')
notebooks/week-2/04 - Lab 2 Assignment.ipynb
yuhao0531/dmc
apache-2.0
Now we are ready to start the game. First we create an empty list to store the collection of players in the game.
players = []
notebooks/week-2/04 - Lab 2 Assignment.ipynb
yuhao0531/dmc
apache-2.0
Then we create a loop that will run a certain number of times, each time creating a player with a unique ID and a starting balance. Each player should be appended to the empty list, which will store all the players. In this case we pass the 'i' iterator of the loop as the player ID, and set a constant value of 500 for the starting balance.
for i in range(5):
    players.append(Player(i, 500))
notebooks/week-2/04 - Lab 2 Assignment.ipynb
yuhao0531/dmc
apache-2.0
Once the players are created, we will create a loop to run the game a certain number of times. Each step of the loop should start with a print statement announcing the start of the game, and then call the playHand() function, passing as an input the list of players.
for i in range(10):
    print('')
    print('start game ' + str(i))
    playHand(players)
notebooks/week-2/04 - Lab 2 Assignment.ipynb
yuhao0531/dmc
apache-2.0
Finally, we will analyze the results of the game by running the 'checkBalances()' function and passing it our list of players.
print('')
print('game results:')
checkBalances(players)
notebooks/week-2/04 - Lab 2 Assignment.ipynb
yuhao0531/dmc
apache-2.0