repo
stringclasses 17
values | instance_id
stringlengths 15
32
| base_commit
stringlengths 40
40
| patch
stringlengths 1
101M
| test_patch
stringclasses 1
value | problem_statement
stringlengths 28
29.2k
| hints_text
stringlengths 0
30.7k
| created_at
stringdate 2011-11-02 23:26:02
2023-08-23 21:42:04
| version
stringclasses 1
value | FAIL_TO_PASS
stringclasses 1
value | PASS_TO_PASS
stringclasses 1
value | environment_setup_commit
stringclasses 1
value |
|---|---|---|---|---|---|---|---|---|---|---|---|
JohnSnowLabs/spark-nlp
|
JohnSnowLabs__spark-nlp-13798
|
e586d5647be4e574643d00f70fb0c3da194f5330
|
diff --git a/python/sparknlp/annotator/classifier_dl/bert_for_zero_shot_classification.py b/python/sparknlp/annotator/classifier_dl/bert_for_zero_shot_classification.py
--- a/python/sparknlp/annotator/classifier_dl/bert_for_zero_shot_classification.py
+++ b/python/sparknlp/annotator/classifier_dl/bert_for_zero_shot_classification.py
@@ -28,6 +28,9 @@ class BertForZeroShotClassification(AnnotatorModel,
number of potential classes, they can be chosen at runtime. It usually means it's slower but it is much more
flexible.
+ Note that the model will loop through all provided labels. So the more labels you have, the
+ longer this process will take.
+
Any combination of sequences and labels can be passed and each combination will be posed as a premise/hypothesis
pair and passed to the pretrained model.
diff --git a/python/sparknlp/annotator/classifier_dl/distil_bert_for_zero_shot_classification.py b/python/sparknlp/annotator/classifier_dl/distil_bert_for_zero_shot_classification.py
--- a/python/sparknlp/annotator/classifier_dl/distil_bert_for_zero_shot_classification.py
+++ b/python/sparknlp/annotator/classifier_dl/distil_bert_for_zero_shot_classification.py
@@ -28,6 +28,9 @@ class DistilBertForZeroShotClassification(AnnotatorModel,
number of potential classes, they can be chosen at runtime. It usually means it's slower but it is much more
flexible.
+ Note that the model will loop through all provided labels. So the more labels you have, the
+ longer this process will take.
+
Any combination of sequences and labels can be passed and each combination will be posed as a premise/hypothesis
pair and passed to the pretrained model.
diff --git a/python/sparknlp/annotator/classifier_dl/roberta_bert_for_zero_shot_classification.py b/python/sparknlp/annotator/classifier_dl/roberta_bert_for_zero_shot_classification.py
--- a/python/sparknlp/annotator/classifier_dl/roberta_bert_for_zero_shot_classification.py
+++ b/python/sparknlp/annotator/classifier_dl/roberta_bert_for_zero_shot_classification.py
@@ -27,6 +27,9 @@ class RoBertaForZeroShotClassification(AnnotatorModel,
number of potential classes, they can be chosen at runtime. It usually means it's slower but it is much more
flexible.
+ Note that the model will loop through all provided labels. So the more labels you have, the
+ longer this process will take.
+
Any combination of sequences and labels can be passed and each combination will be posed as a premise/hypothesis
pair and passed to the pretrained model.
diff --git a/python/sparknlp/base/document_assembler.py b/python/sparknlp/base/document_assembler.py
--- a/python/sparknlp/base/document_assembler.py
+++ b/python/sparknlp/base/document_assembler.py
@@ -24,10 +24,10 @@ class DocumentAssembler(AnnotatorTransformer):
"""Prepares data into a format that is processable by Spark NLP.
This is the entry point for every Spark NLP pipeline. The
- `DocumentAssembler` can read either a ``String`` column or an
- ``Array[String]``. Additionally, :meth:`.setCleanupMode` can be used to
- pre-process the text (Default: ``disabled``). For possible options please
- refer the parameters section.
+ `DocumentAssembler` reads ``String`` columns. Additionally,
+ :meth:`.setCleanupMode` can be used to pre-process the
+ text (Default: ``disabled``). For possible options please refer the
+ parameters section.
For more extended examples on document pre-processing see the
`Examples <https://github.com/JohnSnowLabs/spark-nlp/blob/master/examples/python/annotation/text/english/document-assembler/Loading_Documents_With_DocumentAssembler.ipynb>`__.
|
DocumentAssembler on array<string>
### Is there an existing issue for this?
- [X] I have searched the existing issues and did not find a match.
### Who can help?
_No response_
### What are you working on?
I don't know why but my question was deleted. Therefore, I will repeat again.
I am working with a dataframe that I need to lemmatize. There, the input is an array of strings. I am trying to use DocumentAssembler for array of strings. The documentation says: "The DocumentAssembler can read either a String column or an Array[String])". But it doesn't work like that for me. Can you explain what I'm doing wrong? Or is the documentation out of date?
### Current Behavior
I am getting an error
```
AnalysisException: [CANNOT_UP_CAST_DATATYPE] Cannot up cast input from "ARRAY<STRING>" to "STRING".
The type path of the target object is:
- root class: "java.lang.String"
You can either add an explicit cast to the input data or choose a higher precision type of the field in the target object
```
### Expected Behavior
-
### Steps To Reproduce
When I do a simple example:
```
data = spark.createDataFrame([[["Spark NLP is an open-source text processing library."]]]).toDF("text")
documentAssembler = DocumentAssembler().setInputCol("text").setOutputCol("document")
result = documentAssembler.transform(data)
result.select("document").show(truncate=False
```
### Spark NLP version and Apache Spark
sparknlp.version() == '4.4.0'
spark.version == '3.4.0'
### Type of Spark Application
_No response_
### Java Version
_No response_
### Java Home Directory
_No response_
### Setup and installation
_No response_
### Operating System and Version
_No response_
### Link to your project (if available)
_No response_
### Additional Information
_No response_
|
2023-05-16T14:39:56Z
|
[]
|
[]
| ||||
JohnSnowLabs/spark-nlp
|
JohnSnowLabs__spark-nlp-13873
|
d732eaaf323f3605701c5d1b2b1ec5f4aae39615
|
diff --git a/python/docs/conf.py b/python/docs/conf.py
--- a/python/docs/conf.py
+++ b/python/docs/conf.py
@@ -23,7 +23,7 @@
author = "John Snow Labs"
# The full version, including alpha/beta/rc tags
-release = "4.4.4"
+release = "5.0.0"
pyspark_version = "3.2.3"
# -- General configuration ---------------------------------------------------
diff --git a/python/setup.py b/python/setup.py
--- a/python/setup.py
+++ b/python/setup.py
@@ -41,7 +41,7 @@
# project code, see
# https://packaging.python.org/en/latest/single_source_version.html
- version='4.4.4', # Required
+ version='5.0.0', # Required
# This is a one-line description or tagline of what your project does. This
# corresponds to the 'Summary' metadata field:
diff --git a/python/sparknlp/__init__.py b/python/sparknlp/__init__.py
--- a/python/sparknlp/__init__.py
+++ b/python/sparknlp/__init__.py
@@ -128,7 +128,7 @@ def start(gpu=False,
The initiated Spark session.
"""
- current_version = "4.4.4"
+ current_version = "5.0.0"
if params is None:
params = {}
@@ -298,4 +298,4 @@ def version():
str
The current Spark NLP version.
"""
- return '4.4.4'
+ return '5.0.0'
diff --git a/python/sparknlp/annotator/embeddings/__init__.py b/python/sparknlp/annotator/embeddings/__init__.py
--- a/python/sparknlp/annotator/embeddings/__init__.py
+++ b/python/sparknlp/annotator/embeddings/__init__.py
@@ -22,6 +22,8 @@
from sparknlp.annotator.embeddings.distil_bert_embeddings import *
from sparknlp.annotator.embeddings.doc2vec import *
from sparknlp.annotator.embeddings.elmo_embeddings import *
+from sparknlp.annotator.embeddings.e5_embeddings import *
+from sparknlp.annotator.embeddings.instructor_embeddings import *
from sparknlp.annotator.embeddings.longformer_embeddings import *
from sparknlp.annotator.embeddings.roberta_embeddings import *
from sparknlp.annotator.embeddings.roberta_sentence_embeddings import *
diff --git a/python/sparknlp/annotator/embeddings/e5_embeddings.py b/python/sparknlp/annotator/embeddings/e5_embeddings.py
new file mode 100644
--- /dev/null
+++ b/python/sparknlp/annotator/embeddings/e5_embeddings.py
@@ -0,0 +1,191 @@
+# Copyright 2017-2022 John Snow Labs
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+"""Contains classes for E5Embeddings."""
+
+from sparknlp.common import *
+
+
+class E5Embeddings(AnnotatorModel,
+ HasEmbeddingsProperties,
+ HasCaseSensitiveProperties,
+ HasStorageRef,
+ HasBatchedAnnotate,
+ HasMaxSentenceLengthLimit):
+ """Sentence embeddings using E5.
+
+ E5, a weakly supervised text embedding model that can generate text embeddings tailored to any task (e.g., classification, retrieval, clustering, text evaluation, etc.)
+ Pretrained models can be loaded with :meth:`.pretrained` of the companion
+ object:
+
+ >>> embeddings = E5Embeddings.pretrained() \\
+ ... .setInputCols(["document"]) \\
+ ... .setOutputCol("e5_embeddings")
+
+
+ The default model is ``"e5_small"``, if no name is provided.
+
+ For available pretrained models please see the
+ `Models Hub <https://sparknlp.org/models?q=E5>`__.
+
+
+ ====================== ======================
+ Input Annotation types Output Annotation type
+ ====================== ======================
+ ``DOCUMENT`` ``SENTENCE_EMBEDDINGS``
+ ====================== ======================
+
+ Parameters
+ ----------
+ batchSize
+ Size of every batch , by default 8
+ dimension
+ Number of embedding dimensions, by default 768
+ caseSensitive
+ Whether to ignore case in tokens for embeddings matching, by default False
+ maxSentenceLength
+ Max sentence length to process, by default 512
+ configProtoBytes
+ ConfigProto from tensorflow, serialized into byte array.
+
+ References
+ ----------
+ `Text Embeddings by Weakly-Supervised Contrastive Pre-training <https://arxiv.org/pdf/2212.03533>`__
+
+ https://github.com/microsoft/unilm/tree/master/e5
+
+ **Paper abstract**
+
+ *This paper presents E5, a family of state-of-the-art text embeddings that transfer
+ well to a wide range of tasks. The model is trained in a contrastive manner with
+ weak supervision signals from our curated large-scale text pair dataset (called
+ CCPairs). E5 can be readily used as a general-purpose embedding model for any
+ tasks requiring a single-vector representation of texts such as retrieval, clustering,
+ and classification, achieving strong performance in both zero-shot and fine-tuned
+ settings. We conduct extensive evaluations on 56 datasets from the BEIR and
+ MTEB benchmarks. For zero-shot settings, E5 is the first model that outperforms
+ the strong BM25 baseline on the BEIR retrieval benchmark without using any
+ labeled data. When fine-tuned, E5 obtains the best results on the MTEB benchmark,
+ beating existing embedding models with 40× more parameters.*
+
+ Examples
+ --------
+ >>> import sparknlp
+ >>> from sparknlp.base import *
+ >>> from sparknlp.annotator import *
+ >>> from pyspark.ml import Pipeline
+ >>> documentAssembler = DocumentAssembler() \\
+ ... .setInputCol("text") \\
+ ... .setOutputCol("document")
+ >>> embeddings = E5Embeddings.pretrained() \\
+ ... .setInputCols(["document"]) \\
+ ... .setOutputCol("e5_embeddings")
+ >>> embeddingsFinisher = EmbeddingsFinisher() \\
+ ... .setInputCols(["e5_embeddings"]) \\
+ ... .setOutputCols("finished_embeddings") \\
+ ... .setOutputAsVector(True)
+ >>> pipeline = Pipeline().setStages([
+ ... documentAssembler,
+ ... embeddings,
+ ... embeddingsFinisher
+ ... ])
+ >>> data = spark.createDataFrame([["query: how much protein should a female eat",
+ ... "passage: As a general guideline, the CDC's average requirement of protein for women ages 19 to 70 is 46 grams per day." + \
+ ... "But, as you can see from this chart, you'll need to increase that if you're expecting or training for a" + \
+ ... "marathon. Check out the chart below to see how much protein you should be eating each day.",
+ ... ]]).toDF("text")
+ >>> result = pipeline.fit(data).transform(data)
+ >>> result.selectExpr("explode(finished_embeddings) as result").show(5, 80)
+ +--------------------------------------------------------------------------------+
+ | result|
+ +--------------------------------------------------------------------------------+
+ |[[8.0190285E-4, -0.005974853, -0.072875895, 0.007944068, 0.026059335, -0.0080...|
+ |[[0.050514214, 0.010061974, -0.04340176, -0.020937217, 0.05170225, 0.01157857...|
+ +--------------------------------------------------------------------------------+
+ """
+
+ name = "E5Embeddings"
+
+ inputAnnotatorTypes = [AnnotatorType.DOCUMENT]
+
+ outputAnnotatorType = AnnotatorType.SENTENCE_EMBEDDINGS
+ configProtoBytes = Param(Params._dummy(),
+ "configProtoBytes",
+ "ConfigProto from tensorflow, serialized into byte array. Get with config_proto.SerializeToString()",
+ TypeConverters.toListInt)
+
+
+ def setConfigProtoBytes(self, b):
+ """Sets configProto from tensorflow, serialized into byte array.
+
+ Parameters
+ ----------
+ b : List[int]
+ ConfigProto from tensorflow, serialized into byte array
+ """
+ return self._set(configProtoBytes=b)
+
+ @keyword_only
+ def __init__(self, classname="com.johnsnowlabs.nlp.embeddings.E5Embeddings", java_model=None):
+ super(E5Embeddings, self).__init__(
+ classname=classname,
+ java_model=java_model
+ )
+ self._setDefault(
+ dimension=768,
+ batchSize=8,
+ maxSentenceLength=512,
+ caseSensitive=False,
+ )
+
+ @staticmethod
+ def loadSavedModel(folder, spark_session):
+ """Loads a locally saved model.
+
+ Parameters
+ ----------
+ folder : str
+ Folder of the saved model
+ spark_session : pyspark.sql.SparkSession
+ The current SparkSession
+
+ Returns
+ -------
+ E5Embeddings
+ The restored model
+ """
+ from sparknlp.internal import _E5Loader
+ jModel = _E5Loader(folder, spark_session._jsparkSession)._java_obj
+ return E5Embeddings(java_model=jModel)
+
+ @staticmethod
+ def pretrained(name="e5_small", lang="en", remote_loc=None):
+ """Downloads and loads a pretrained model.
+
+ Parameters
+ ----------
+ name : str, optional
+ Name of the pretrained model, by default "e5_small"
+ lang : str, optional
+ Language of the pretrained model, by default "en"
+ remote_loc : str, optional
+ Optional remote address of the resource, by default None. Will use
+ Spark NLPs repositories otherwise.
+
+ Returns
+ -------
+ E5Embeddings
+ The restored model
+ """
+ from sparknlp.pretrained import ResourceDownloader
+ return ResourceDownloader.downloadModel(E5Embeddings, name, lang, remote_loc)
diff --git a/python/sparknlp/annotator/embeddings/instructor_embeddings.py b/python/sparknlp/annotator/embeddings/instructor_embeddings.py
new file mode 100755
--- /dev/null
+++ b/python/sparknlp/annotator/embeddings/instructor_embeddings.py
@@ -0,0 +1,204 @@
+# Copyright 2017-2022 John Snow Labs
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+"""Contains classes for BertEmbeddings."""
+
+from sparknlp.common import *
+
+
+class InstructorEmbeddings(AnnotatorModel,
+ HasEmbeddingsProperties,
+ HasCaseSensitiveProperties,
+ HasStorageRef,
+ HasBatchedAnnotate,
+ HasMaxSentenceLengthLimit):
+ """Sentence embeddings using INSTRUCTOR.
+
+ Instructor👨🏫, an instruction-finetuned text embedding model that can generate text embeddings tailored to any task (e.g., classification, retrieval, clustering, text evaluation, etc.) and domains (e.g., science, finance, etc.) by simply providing the task instruction, without any finetuning. Instructor👨 achieves sota on 70 diverse embedding tasks!
+
+ Pretrained models can be loaded with :meth:`.pretrained` of the companion
+ object:
+
+ >>> embeddings = InstructorEmbeddings.pretrained() \\
+ ... .setInputCols(["document"]) \\
+ ... .setInstruction("Represent the Medicine sentence for clustering: ") \\
+ ... .setOutputCol("instructor_embeddings")
+
+
+ The default model is ``"instructor_base"``, if no name is provided.
+
+ For available pretrained models please see the
+ `Models Hub <https://sparknlp.org/models?q=Instructor>`__.
+
+
+ ====================== ======================
+ Input Annotation types Output Annotation type
+ ====================== ======================
+ ``DOCUMENT`` ``SENTENCE_EMBEDDINGS``
+ ====================== ======================
+
+ Parameters
+ ----------
+ batchSize
+ Size of every batch , by default 8
+ dimension
+ Number of embedding dimensions, by default 768
+ caseSensitive
+ Whether to ignore case in tokens for embeddings matching, by default False
+ instruction
+ Set transformer instruction, e.g. 'summarize:'
+ maxSentenceLength
+ Max sentence length to process, by default 128
+ configProtoBytes
+ ConfigProto from tensorflow, serialized into byte array.
+
+ References
+ ----------
+ `One Embedder, Any Task: Instruction-Finetuned Text Embeddings <https://arxiv.org/abs/2212.09741>`__
+
+ https://github.com/HKUNLP/instructor-embedding/
+
+ **Paper abstract**
+
+ *We introduce INSTRUCTOR, a new method for computing text embeddings given task instructions:
+ every text input is embedded together with instructions explaining the use case (e.g., task and
+ domain descriptions). Unlike encoders from prior work that are more specialized, INSTRUCTOR is a
+ single embedder that can generate text embeddings tailored to different downstream tasks and domains,
+ without any further training. We first annotate instructions for 330 diverse tasks and train INSTRUCTOR
+ on this multitask mixture with a contrastive loss. We evaluate INSTRUCTOR on 70 embedding evaluation tasks
+ (66 of which are unseen during training), ranging from classification and information retrieval to semantic
+ textual similarity and text generation evaluation. INSTRUCTOR, while having an order of magnitude fewer
+ parameters than the previous best model, achieves state-of-the-art performance, with an average improvement
+ of 3.4% compared to the previous best results on the 70 diverse datasets. Our analysis suggests that
+ INSTRUCTOR is robust to changes in instructions, and that instruction finetuning mitigates the challenge of
+ training a single model on diverse datasets. Our model, code, and data are available at this https
+ URL <https://instructor-embedding.github.io/>.*
+
+ Examples
+ --------
+ >>> import sparknlp
+ >>> from sparknlp.base import *
+ >>> from sparknlp.annotator import *
+ >>> from pyspark.ml import Pipeline
+ >>> documentAssembler = DocumentAssembler() \\
+ ... .setInputCol("text") \\
+ ... .setOutputCol("document")
+ >>> embeddings = InstructorEmbeddings.pretrained() \\
+ ... .setInputCols(["document"]) \\
+ ... .setInstruction("Represent the Medicine sentence for clustering: ") \\
+ ... .setOutputCol("instructor_embeddings")
+ >>> embeddingsFinisher = EmbeddingsFinisher() \\
+ ... .setInputCols(["instructor_embeddings"]) \\
+ ... .setOutputCols("finished_embeddings") \\
+ ... .setOutputAsVector(True)
+ >>> pipeline = Pipeline().setStages([
+ ... documentAssembler,
+ ... embeddings,
+ ... embeddingsFinisher
+ ... ])
+ >>> data = spark.createDataFrame([["Dynamical Scalar Degree of Freedom in Horava-Lifshitz Gravity"]]).toDF("text")
+ >>> result = pipeline.fit(data).transform(data)
+ >>> result.selectExpr("explode(finished_embeddings) as result").show(5, 80)
+ +--------------------------------------------------------------------------------+
+ | result|
+ +--------------------------------------------------------------------------------+
+ |[-2.3497989177703857,0.480538547039032,-0.3238905668258667,-1.612930893898010...|
+ +--------------------------------------------------------------------------------+
+ """
+
+ name = "InstructorEmbeddings"
+
+ inputAnnotatorTypes = [AnnotatorType.DOCUMENT]
+
+ outputAnnotatorType = AnnotatorType.SENTENCE_EMBEDDINGS
+ instruction = Param(Params._dummy(), "instruction", "Set transformer instruction, e.g. 'summarize:'",
+ typeConverter=TypeConverters.toString)
+ configProtoBytes = Param(Params._dummy(),
+ "configProtoBytes",
+ "ConfigProto from tensorflow, serialized into byte array. Get with config_proto.SerializeToString()",
+ TypeConverters.toListInt)
+
+ def setInstruction(self, value):
+ """ Sets transformer instruction, e.g. 'summarize:'.
+
+ Parameters
+ ----------
+ value : str
+ """
+ return self._set(instruction=value)
+
+ def setConfigProtoBytes(self, b):
+ """Sets configProto from tensorflow, serialized into byte array.
+
+ Parameters
+ ----------
+ b : List[int]
+ ConfigProto from tensorflow, serialized into byte array
+ """
+ return self._set(configProtoBytes=b)
+
+ @keyword_only
+ def __init__(self, classname="com.johnsnowlabs.nlp.embeddings.InstructorEmbeddings", java_model=None):
+ super(InstructorEmbeddings, self).__init__(
+ classname=classname,
+ java_model=java_model
+ )
+ self._setDefault(
+ dimension=768,
+ batchSize=8,
+ maxSentenceLength=128,
+ caseSensitive=False,
+ instruction="",
+ )
+
+ @staticmethod
+ def loadSavedModel(folder, spark_session):
+ """Loads a locally saved model.
+
+ Parameters
+ ----------
+ folder : str
+ Folder of the saved model
+ spark_session : pyspark.sql.SparkSession
+ The current SparkSession
+
+ Returns
+ -------
+ InstructorEmbeddings
+ The restored model
+ """
+ from sparknlp.internal import _InstructorLoader
+ jModel = _InstructorLoader(folder, spark_session._jsparkSession)._java_obj
+ return InstructorEmbeddings(java_model=jModel)
+
+ @staticmethod
+ def pretrained(name="instructor_base", lang="en", remote_loc=None):
+ """Downloads and loads a pretrained model.
+
+ Parameters
+ ----------
+ name : str, optional
+ Name of the pretrained model, by default "instructor_base"
+ lang : str, optional
+ Language of the pretrained model, by default "en"
+ remote_loc : str, optional
+ Optional remote address of the resource, by default None. Will use
+ Spark NLPs repositories otherwise.
+
+ Returns
+ -------
+ InstructorEmbeddings
+ The restored model
+ """
+ from sparknlp.pretrained import ResourceDownloader
+ return ResourceDownloader.downloadModel(InstructorEmbeddings, name, lang, remote_loc)
diff --git a/python/sparknlp/annotator/similarity/__init__.py b/python/sparknlp/annotator/similarity/__init__.py
new file mode 100644
diff --git a/python/sparknlp/annotator/similarity/document_similarity_ranker.py b/python/sparknlp/annotator/similarity/document_similarity_ranker.py
new file mode 100644
--- /dev/null
+++ b/python/sparknlp/annotator/similarity/document_similarity_ranker.py
@@ -0,0 +1,232 @@
+# Copyright 2017-2023 John Snow Labs
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+"""Contains classes for DocumentSimilarityRanker."""
+
+from sparknlp.common import *
+from pyspark import keyword_only
+from pyspark.ml.param import TypeConverters, Params, Param
+from sparknlp.internal import AnnotatorTransformer
+
+
+class DocumentSimilarityRankerApproach(AnnotatorApproach, HasEnableCachingProperties):
+ inputAnnotatorTypes = [AnnotatorType.SENTENCE_EMBEDDINGS]
+
+ outputAnnotatorType = AnnotatorType.DOC_SIMILARITY_RANKINGS
+
+ similarityMethod = Param(Params._dummy(),
+ "similarityMethod",
+ "The similarity method used to calculate the neighbours. (Default: 'brp', "
+ "Bucketed Random Projection for Euclidean Distance)",
+ typeConverter=TypeConverters.toString)
+
+ numberOfNeighbours = Param(Params._dummy(),
+ "numberOfNeighbours",
+ "The number of neighbours the model will return (Default:`10`)",
+ typeConverter=TypeConverters.toInt)
+
+ bucketLength = Param(Params._dummy(),
+ "bucketLength",
+ "The bucket length that controls the average size of hash buckets. "
+ "A larger bucket length (i.e., fewer buckets) increases the probability of features "
+ "being hashed to the same bucket (increasing the numbers of true and false positives).",
+ typeConverter=TypeConverters.toFloat)
+
+ numHashTables = Param(Params._dummy(),
+ "numHashTables",
+ "number of hash tables, where increasing number of hash tables lowers the "
+ "false negative rate,and decreasing it improves the running performance.",
+ typeConverter=TypeConverters.toInt)
+
+ visibleDistances = Param(Params._dummy(),
+ "visibleDistances",
+ "Whether to set visibleDistances in ranking output (Default: `false`).",
+ typeConverter=TypeConverters.toBoolean)
+
+ identityRanking = Param(Params._dummy(),
+ "identityRanking",
+ "Whether to include identity in ranking result set. Useful for debug. (Default: `false`).",
+ typeConverter=TypeConverters.toBoolean)
+
+ def setSimilarityMethod(self, value):
+ """Sets the similarity method used to calculate the neighbours.
+ (Default: `"brp"`, Bucketed Random Projection for Euclidean Distance)
+
+ Parameters
+ ----------
+ value : str
+ the similarity method to calculate the neighbours.
+ """
+ return self._set(similarityMethod=value)
+
+ def setNumberOfNeighbours(self, value):
+ """Sets The number of neighbours the model will return for each document(Default:`"10"`).
+
+ Parameters
+ ----------
+ value : str
+ the number of neighbours the model will return for each document.
+ """
+ return self._set(numberOfNeighbours=value)
+
+ def setBucketLength(self, value):
+ """Sets the bucket length that controls the average size of hash buckets (Default:`"2.0"`).
+
+ Parameters
+ ----------
+ value : float
+ Sets the bucket length that controls the average size of hash buckets.
+ """
+ return self._set(bucketLength=value)
+
+ def setNumHashTables(self, value):
+ """Sets the number of hash tables.
+
+ Parameters
+ ----------
+ value : int
+ Sets the number of hash tables.
+ """
+ return self._set(numHashTables=value)
+
+ def setVisibleDistances(self, value):
+ """Sets the document distances visible in the result set.
+
+ Parameters
+ ----------
+ value : bool
+ Sets the document distances visible in the result set.
+ Default('False')
+ """
+ return self._set(visibleDistances=value)
+
+ def setIdentityRanking(self, value):
+ """Sets the document identity ranking inclusive in the result set.
+
+ Parameters
+ ----------
+ value : bool
+ Sets the document identity ranking inclusive in the result set.
+ Useful for debugging.
+ Default('False').
+ """
+ return self._set(identityRanking=value)
+
+ @keyword_only
+ def __init__(self):
+ super(DocumentSimilarityRankerApproach, self)\
+ .__init__(classname="com.johnsnowlabs.nlp.annotators.similarity.DocumentSimilarityRankerApproach")
+ self._setDefault(
+ similarityMethod="brp",
+ numberOfNeighbours=10,
+ bucketLength=2.0,
+ numHashTables=3,
+ visibleDistances=False,
+ identityRanking=False
+ )
+
+ def _create_model(self, java_model):
+ return DocumentSimilarityRankerModel(java_model=java_model)
+
+
+class DocumentSimilarityRankerModel(AnnotatorModel, HasEmbeddingsProperties):
+
+ name = "DocumentSimilarityRankerModel"
+ inputAnnotatorTypes = [AnnotatorType.SENTENCE_EMBEDDINGS]
+ outputAnnotatorType = AnnotatorType.DOC_SIMILARITY_RANKINGS
+
+ def __init__(self, classname="com.johnsnowlabs.nlp.annotators.similarity.DocumentSimilarityRankerModel",
+ java_model=None):
+ super(DocumentSimilarityRankerModel, self).__init__(
+ classname=classname,
+ java_model=java_model
+ )
+
+
+class DocumentSimilarityRankerFinisher(AnnotatorTransformer):
+
+ inputCols = Param(Params._dummy(),
+ "inputCols",
+ "name of input annotation cols containing document similarity ranker results",
+ typeConverter=TypeConverters.toListString)
+ outputCols = Param(Params._dummy(),
+ "outputCols",
+ "output DocumentSimilarityRankerFinisher output cols",
+ typeConverter=TypeConverters.toListString)
+ extractNearestNeighbor = Param(Params._dummy(), "extractNearestNeighbor",
+ "whether to extract the nearest neighbor document",
+ typeConverter=TypeConverters.toBoolean)
+
+ name = "DocumentSimilarityRankerFinisher"
+
+ @keyword_only
+ def __init__(self):
+ super(DocumentSimilarityRankerFinisher, self).__init__(classname="com.johnsnowlabs.nlp.finisher.DocumentSimilarityRankerFinisher")
+ self._setDefault(
+ extractNearestNeighbor=False
+ )
+
+ @keyword_only
+ def setParams(self):
+ kwargs = self._input_kwargs
+ return self._set(**kwargs)
+
+ def setInputCols(self, *value):
+ """Sets name of input annotation columns containing embeddings.
+
+ Parameters
+ ----------
+ *value : str
+ Input columns for the annotator
+ """
+
+ if len(value) == 1 and type(value[0]) == list:
+ return self._set(inputCols=value[0])
+ else:
+ return self._set(inputCols=list(value))
+
+ def setOutputCols(self, *value):
+ """Sets names of finished output columns.
+
+ Parameters
+ ----------
+ *value : List[str]
+ Input columns for the annotator
+ """
+
+ if len(value) == 1 and type(value[0]) == list:
+ return self._set(outputCols=value[0])
+ else:
+ return self._set(outputCols=list(value))
+
+ def setExtractNearestNeighbor(self, value):
+ """Sets whether to extract the nearest neighbor document, by default False.
+
+ Parameters
+ ----------
+ value : bool
+ Whether to extract the nearest neighbor document
+ """
+
+ return self._set(extractNearestNeighbor=value)
+
+ def getInputCols(self):
+ """Gets input columns name of annotations."""
+ return self.getOrDefault(self.inputCols)
+
+ def getOutputCols(self):
+ """Gets output columns name of annotations."""
+ if len(self.getOrDefault(self.outputCols)) == 0:
+ return ["finished_" + input_col for input_col in self.getInputCols()]
+ else:
+ return self.getOrDefault(self.outputCols)
\ No newline at end of file
diff --git a/python/sparknlp/common/annotator_type.py b/python/sparknlp/common/annotator_type.py
--- a/python/sparknlp/common/annotator_type.py
+++ b/python/sparknlp/common/annotator_type.py
@@ -35,3 +35,4 @@ class AnnotatorType(object):
NODE = "node"
TABLE = "table"
DUMMY = "dummy"
+ DOC_SIMILARITY_RANKINGS = "doc_similarity_rankings"
diff --git a/python/sparknlp/internal/__init__.py b/python/sparknlp/internal/__init__.py
--- a/python/sparknlp/internal/__init__.py
+++ b/python/sparknlp/internal/__init__.py
@@ -143,6 +143,11 @@ def __init__(self, path, jspark):
super(_ElmoLoader, self).__init__("com.johnsnowlabs.nlp.embeddings.ElmoEmbeddings.loadSavedModel", path, jspark)
+class _E5Loader(ExtendedJavaWrapper):
+ def __init__(self, path, jspark):
+ super(_E5Loader, self).__init__("com.johnsnowlabs.nlp.embeddings.E5Embeddings.loadSavedModel", path, jspark)
+
+
class _GPT2Loader(ExtendedJavaWrapper):
def __init__(self, path, jspark):
super(_GPT2Loader, self).__init__(
@@ -529,3 +534,8 @@ def __init__(self, path, jspark):
super(_RoBertaForZeroShotClassification, self).__init__(
"com.johnsnowlabs.nlp.annotators.classifier.dl.RoBertaForZeroShotClassification.loadSavedModel", path,
jspark)
+
+
+class _InstructorLoader(ExtendedJavaWrapper):
+ def __init__(self, path, jspark):
+ super(_InstructorLoader, self).__init__("com.johnsnowlabs.nlp.embeddings.InstructorEmbeddings.loadSavedModel", path, jspark)
\ No newline at end of file
|
BART Summarization max tokens?
### Is there an existing issue for this?
- [X] I have searched the existing issues and did not find a match.
### What are you working on?
I am trying to summarize potentially long texts with distilbart_xsum_12_6.
### Current Behavior
Currently I get an error on long texts:
```
TFInvalidArgumentException: {{function_node __inference_pruned_56247}} {{function_node __inference_pruned_56247}} indices[1054] = 1056 is not in [0, 1026)
```
(full error at the bottom)
### Expected Behavior
Maybe not *expected* behavior, but I always assumed something was going on under the hood to handle texts longer than a particular model's max context length. Does no such mechanism exist for BART, or in general?
### Steps To Reproduce
A rough example:
```
from sparknlp.base import *
from sparknlp.annotator import *
documentAssembler = DocumentAssembler() \
.setInputCol('text') \
.setOutputCol('document')
bart = BartTransformer.pretrained("distilbart_xsum_12_6") \
.setTask("summarize:") \
.setMaxOutputLength(250) \
.setInputCols(["document"]) \
.setOutputCol("summary")
nlp_pipeline = Pipeline(stages=[
documentAssembler,
bart
])
pipeline_model = nlp_pipeline.fit(spark.createDataFrame([['']]).toDF('text'))
sentences = [
[" ".join(["word"]*1027)]
for i in range(3)
]
data = spark.createDataFrame(sentences).toDF("text")
data.show()
display(pipeline_model.transform(data))
```
### Spark NLP version and Apache Spark
sparknlp 4.4.0
spark 3.3.2
****
```
org.apache.spark.SparkException: Job aborted due to stage failure: Task 1 in stage 104.0 failed 4 times, most recent failure: Lost task 1.3 in stage 104.0 (TID 1844) (10.139.64.7 executor 15): org.tensorflow.exceptions.TFInvalidArgumentException: {{function_node __inference_pruned_56247}} {{function_node __inference_pruned_56247}} indices[1054] = 1056 is not in [0, 1026)
[[{{node encoder/embed_positions/embedding_lookup}}]]
[[StatefulPartitionedCall_1/StatefulPartitionedCall/StatefulPartitionedCall]]
at org.tensorflow.internal.c_api.AbstractTF_Status.throwExceptionIfNotOK(AbstractTF_Status.java:87)
at org.tensorflow.Session.run(Session.java:850)
at org.tensorflow.Session.access$300(Session.java:82)
at org.tensorflow.Session$Runner.runHelper(Session.java:552)
at org.tensorflow.Session$Runner.runNoInit(Session.java:499)
at org.tensorflow.Session$Runner.run(Session.java:495)
at com.johnsnowlabs.ml.ai.Bart.tag(Bart.scala:169)
at com.johnsnowlabs.ml.ai.Bart.$anonfun$predict$1(Bart.scala:763)
at scala.collection.TraversableLike.$anonfun$flatMap$1(TraversableLike.scala:293)
at scala.collection.IndexedSeqOptimized.foreach(IndexedSeqOptimized.scala:36)
at scala.collection.IndexedSeqOptimized.foreach$(IndexedSeqOptimized.scala:33)
at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:198)
at scala.collection.TraversableLike.flatMap(TraversableLike.scala:293)
at scala.collection.TraversableLike.flatMap$(TraversableLike.scala:290)
at scala.collection.mutable.ArrayOps$ofRef.flatMap(ArrayOps.scala:198)
at com.johnsnowlabs.ml.ai.Bart.predict(Bart.scala:749)
at com.johnsnowlabs.nlp.annotators.seq2seq.BartTransformer.batchAnnotate(BartTransformer.scala:487)
at com.johnsnowlabs.nlp.HasBatchedAnnotate.$anonfun$batchProcess$1(HasBatchedAnnotate.scala:59)
at scala.collection.Iterator$$anon$11.nextCur(Iterator.scala:486)
at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:492)
at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:460)
at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:460)
at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage4.processNext(Unknown Source)
at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
at org.apache.spark.sql.execution.WholeStageCodegenExec$$anon$1.hasNext(WholeStageCodegenExec.scala:761)
at org.apache.spark.sql.execution.collect.UnsafeRowBatchUtils$.encodeUnsafeRows(UnsafeRowBatchUtils.scala:82)
at org.apache.spark.sql.execution.collect.Collector.$anonfun$processFunc$1(Collector.scala:208)
at org.apache.spark.scheduler.ResultTask.$anonfun$runTask$3(ResultTask.scala:75)
at com.databricks.spark.util.ExecutorFrameProfiler$.record(ExecutorFrameProfiler.scala:110)
at org.apache.spark.scheduler.ResultTask.$anonfun$runTask$1(ResultTask.scala:75)
at com.databricks.spark.util.ExecutorFrameProfiler$.record(ExecutorFrameProfiler.scala:110)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:55)
at org.apache.spark.scheduler.Task.doRunTask(Task.scala:174)
at org.apache.spark.scheduler.Task.$anonfun$run$5(Task.scala:142)
at com.databricks.unity.EmptyHandle$.runWithAndClose(UCSHandle.scala:125)
at org.apache.spark.scheduler.Task.$anonfun$run$1(Task.scala:142)
at com.databricks.spark.util.ExecutorFrameProfiler$.record(ExecutorFrameProfiler.scala:110)
at org.apache.spark.scheduler.Task.run(Task.scala:97)
at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$13(Executor.scala:904)
at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1713)
at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$4(Executor.scala:907)
at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
at com.databricks.spark.util.ExecutorFrameProfiler$.record(ExecutorFrameProfiler.scala:110)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:761)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:750)
Driver stacktrace:
at org.apache.spark.scheduler.DAGScheduler.failJobAndIndependentStages(DAGScheduler.scala:3377)
at org.apache.spark.scheduler.DAGScheduler.$anonfun$abortStage$2(DAGScheduler.scala:3309)
at org.apache.spark.scheduler.DAGScheduler.$anonfun$abortStage$2$adapted(DAGScheduler.scala:3300)
at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:62)
at scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:55)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:49)
at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:3300)
at org.apache.spark.scheduler.DAGScheduler.$anonfun$handleTaskSetFailed$1(DAGScheduler.scala:1429)
at org.apache.spark.scheduler.DAGScheduler.$anonfun$handleTaskSetFailed$1$adapted(DAGScheduler.scala:1429)
at scala.Option.foreach(Option.scala:407)
at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:1429)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:3589)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:3527)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:3515)
at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:51)
at org.apache.spark.scheduler.DAGScheduler.$anonfun$runJob$1(DAGScheduler.scala:1178)
at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
at com.databricks.spark.util.FrameProfiler$.record(FrameProfiler.scala:80)
at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:1166)
at org.apache.spark.SparkContext.runJobInternal(SparkContext.scala:2737)
at org.apache.spark.sql.execution.collect.Collector.$anonfun$runSparkJobs$1(Collector.scala:349)
at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
at com.databricks.spark.util.FrameProfiler$.record(FrameProfiler.scala:80)
at org.apache.spark.sql.execution.collect.Collector.runSparkJobs(Collector.scala:293)
at org.apache.spark.sql.execution.collect.Collector.collect(Collector.scala:377)
at org.apache.spark.sql.execution.collect.Collector$.collect(Collector.scala:128)
at org.apache.spark.sql.execution.collect.Collector$.collect(Collector.scala:135)
at org.apache.spark.sql.execution.qrc.InternalRowFormat$.collect(cachedSparkResults.scala:123)
at org.apache.spark.sql.execution.qrc.InternalRowFormat$.collect(cachedSparkResults.scala:111)
at org.apache.spark.sql.execution.qrc.InternalRowFormat$.collect(cachedSparkResults.scala:93)
at org.apache.spark.sql.execution.qrc.ResultCacheManager.$anonfun$computeResult$1(ResultCacheManager.scala:537)
at com.databricks.spark.util.FrameProfiler$.record(FrameProfiler.scala:80)
at org.apache.spark.sql.execution.qrc.ResultCacheManager.collectResult$1(ResultCacheManager.scala:529)
at org.apache.spark.sql.execution.qrc.ResultCacheManager.computeResult(ResultCacheManager.scala:549)
at org.apache.spark.sql.execution.qrc.ResultCacheManager.$anonfun$getOrComputeResultInternal$1(ResultCacheManager.scala:402)
at scala.Option.getOrElse(Option.scala:189)
at org.apache.spark.sql.execution.qrc.ResultCacheManager.getOrComputeResultInternal(ResultCacheManager.scala:395)
at org.apache.spark.sql.execution.qrc.ResultCacheManager.getOrComputeResult(ResultCacheManager.scala:289)
at org.apache.spark.sql.execution.SparkPlan.$anonfun$executeCollectResult$1(SparkPlan.scala:506)
at com.databricks.spark.util.FrameProfiler$.record(FrameProfiler.scala:80)
at org.apache.spark.sql.execution.SparkPlan.executeCollectResult(SparkPlan.scala:503)
at org.apache.spark.sql.Dataset.collectResult(Dataset.scala:3453)
at org.apache.spark.sql.Dataset.$anonfun$collectResult$1(Dataset.scala:3444)
at org.apache.spark.sql.Dataset.$anonfun$withAction$3(Dataset.scala:4368)
at org.apache.spark.sql.execution.QueryExecution$.withInternalError(QueryExecution.scala:809)
at org.apache.spark.sql.Dataset.$anonfun$withAction$2(Dataset.scala:4366)
at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withCustomExecutionEnv$8(SQLExecution.scala:227)
at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:410)
at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withCustomExecutionEnv$1(SQLExecution.scala:172)
at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:1038)
at org.apache.spark.sql.execution.SQLExecution$.withCustomExecutionEnv(SQLExecution.scala:122)
at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:360)
at org.apache.spark.sql.Dataset.withAction(Dataset.scala:4366)
at org.apache.spark.sql.Dataset.collectResult(Dataset.scala:3443)
at com.databricks.backend.daemon.driver.OutputAggregator$.withOutputAggregation0(OutputAggregator.scala:267)
at com.databricks.backend.daemon.driver.OutputAggregator$.withOutputAggregation(OutputAggregator.scala:101)
at com.databricks.backend.daemon.driver.PythonDriverLocalBase.generateTableResult(PythonDriverLocalBase.scala:723)
at com.databricks.backend.daemon.driver.JupyterDriverLocal.computeListResultsItem(JupyterDriverLocal.scala:839)
at com.databricks.backend.daemon.driver.JupyterDriverLocal$JupyterEntryPoint.addCustomDisplayData(JupyterDriverLocal.scala:258)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:380)
at py4j.Gateway.invoke(Gateway.java:306)
at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
at py4j.commands.CallCommand.execute(CallCommand.java:79)
at py4j.ClientServerConnection.waitForCommands(ClientServerConnection.java:195)
at py4j.ClientServerConnection.run(ClientServerConnection.java:115)
at java.lang.Thread.run(Thread.java:750)
Caused by: org.tensorflow.exceptions.TFInvalidArgumentException: {{function_node __inference_pruned_56247}} {{function_node __inference_pruned_56247}} indices[1054] = 1056 is not in [0, 1026)
[[{{node encoder/embed_positions/embedding_lookup}}]]
[[StatefulPartitionedCall_1/StatefulPartitionedCall/StatefulPartitionedCall]]
at org.tensorflow.internal.c_api.AbstractTF_Status.throwExceptionIfNotOK(AbstractTF_Status.java:87)
at org.tensorflow.Session.run(Session.java:850)
at org.tensorflow.Session.access$300(Session.java:82)
at org.tensorflow.Session$Runner.runHelper(Session.java:552)
at org.tensorflow.Session$Runner.runNoInit(Session.java:499)
at org.tensorflow.Session$Runner.run(Session.java:495)
at com.johnsnowlabs.ml.ai.Bart.tag(Bart.scala:169)
at com.johnsnowlabs.ml.ai.Bart.$anonfun$predict$1(Bart.scala:763)
at scala.collection.TraversableLike.$anonfun$flatMap$1(TraversableLike.scala:293)
at scala.collection.IndexedSeqOptimized.foreach(IndexedSeqOptimized.scala:36)
at scala.collection.IndexedSeqOptimized.foreach$(IndexedSeqOptimized.scala:33)
at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:198)
at scala.collection.TraversableLike.flatMap(TraversableLike.scala:293)
at scala.collection.TraversableLike.flatMap$(TraversableLike.scala:290)
at scala.collection.mutable.ArrayOps$ofRef.flatMap(ArrayOps.scala:198)
at com.johnsnowlabs.ml.ai.Bart.predict(Bart.scala:749)
at com.johnsnowlabs.nlp.annotators.seq2seq.BartTransformer.batchAnnotate(BartTransformer.scala:487)
at com.johnsnowlabs.nlp.HasBatchedAnnotate.$anonfun$batchProcess$1(HasBatchedAnnotate.scala:59)
at scala.collection.Iterator$$anon$11.nextCur(Iterator.scala:486)
at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:492)
at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:460)
at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:460)
at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage4.processNext(Unknown Source)
at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
at org.apache.spark.sql.execution.WholeStageCodegenExec$$anon$1.hasNext(WholeStageCodegenExec.scala:761)
at org.apache.spark.sql.execution.collect.UnsafeRowBatchUtils$.encodeUnsafeRows(UnsafeRowBatchUtils.scala:82)
at org.apache.spark.sql.execution.collect.Collector.$anonfun$processFunc$1(Collector.scala:208)
at org.apache.spark.scheduler.ResultTask.$anonfun$runTask$3(ResultTask.scala:75)
at com.databricks.spark.util.ExecutorFrameProfiler$.record(ExecutorFrameProfiler.scala:110)
at org.apache.spark.scheduler.ResultTask.$anonfun$runTask$1(ResultTask.scala:75)
at com.databricks.spark.util.ExecutorFrameProfiler$.record(ExecutorFrameProfiler.scala:110)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:55)
at org.apache.spark.scheduler.Task.doRunTask(Task.scala:174)
at org.apache.spark.scheduler.Task.$anonfun$run$5(Task.scala:142)
at com.databricks.unity.EmptyHandle$.runWithAndClose(UCSHandle.scala:125)
at org.apache.spark.scheduler.Task.$anonfun$run$1(Task.scala:142)
at com.databricks.spark.util.ExecutorFrameProfiler$.record(ExecutorFrameProfiler.scala:110)
at org.apache.spark.scheduler.Task.run(Task.scala:97)
at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$13(Executor.scala:904)
at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1713)
at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$4(Executor.scala:907)
at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
at com.databricks.spark.util.ExecutorFrameProfiler$.record(ExecutorFrameProfiler.scala:110)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:761)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
... 1 more
```
|
Hi @clabornd
Could you please update your Spark NLP to `spark-nlp==4.4.3`? We have introduced optimizations for both speed and memory with some code enhancements/bug-fixes:
https://colab.research.google.com/drive/1KucyhiPBc5Eivkiyx94_VFaba8K1bvoV?usp=sharing
Thanks for the fast response, I tried upgrading to `spark-nlp==4.4.3` and the issue persists. I'm running this on Databricks if that's relevant. I tested with runtimes 13.0 and 12.2 and also with a single node machine to no avail, error looks to be the same.
```
org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 7.0 failed 4 times, most recent failure: Lost task 0.3 in stage 7.0 (TID 27) (10.139.64.6 executor driver): org.tensorflow.exceptions.TFInvalidArgumentException: indices[1024] = 1026 is not in [0, 1026)
[[{{function_node __inference_encoder_serving_912071}}{{node encoder/embed_positions/embedding_lookup}}]]
at org.tensorflow.internal.c_api.AbstractTF_Status.throwExceptionIfNotOK(AbstractTF_Status.java:87)
...
```
You are welcome. I cannot reproduce this issue (on any platform). It seems you just updated the PyPI package (which is just empty APIs). The actual logic of the library is in the Maven package. Could you please follow this instruction and make sure the actual `spark-nlp` Maven dependency is also `4.4.3`?
https://github.com/JohnSnowLabs/spark-nlp#databricks-cluster
You can also share a screenshot from your `Library` tab in Cluster configuration in case everything is `4.4.3` and still not working. (mine is 4.4.3 and it works)
Sorry didn't mention I also updated the Maven package. My Libraries tab looks like:

Also tried uninstalling everything except the sparknlp PyPI/Maven packages.
Cluster config in case its useful:
```
{
"autoscale": {
"min_workers": 1,
"max_workers": 8
},
"cluster_name": "memory-odbc",
"spark_version": "13.0.x-scala2.12",
"spark_conf": {
"spark.serializer": "org.apache.spark.serializer.KryoSerializer",
"spark.kryoserializer.buffer.max": "2000M",
"spark.sql.broadcastTimeout": "40000",
"spark.databricks.delta.preview.enabled": "true"
},
"azure_attributes": {
"first_on_demand": 1,
"availability": "ON_DEMAND_AZURE",
"spot_bid_max_price": -1
},
"node_type_id": "Standard_DS13_v2",
"driver_node_type_id": "Standard_DS13_v2",
"ssh_public_keys": [],
"custom_tags": {},
"spark_env_vars": {},
"autotermination_minutes": 120,
"enable_elastic_disk": true,
"cluster_source": "UI",
"init_scripts": [],
"enable_local_disk_encryption": false,
"runtime_engine": "STANDARD",
"cluster_id": "0526-205224-3pve3hjz"
}
```
Ok I am seeing that error on the Colab notebook as well now, it should pop up if you change `display()` to an action: `pipeline_model.transform(data).show()` instead of `display(...)`. (The `display()` in Databricks is different than `IPython.display.display`)
The text length that triggers this is 1025 tokens, and lots of BART versions have max context length of 1024, is this not just an issue with max context length?
> The text length that triggers this is 1025 tokens, and lots of BART versions have max context length of 1024, is this not just an issue with max context length?
I see now. This is actually a bug and we should truncate anything longer than 1024 internally since there is no `setMaxInputLength` to throw an error to users like we do with BERT (the limit is 1024)
I thought we were doing that internally. This is a bug and will be fixed in the next release.
|
2023-07-01T13:08:34Z
|
[]
|
[]
| |||
JohnSnowLabs/spark-nlp
|
JohnSnowLabs__spark-nlp-13912
|
2b2f93c6922d822d208500ca412e3e5b524d53ad
|
diff --git a/python/docs/conf.py b/python/docs/conf.py
--- a/python/docs/conf.py
+++ b/python/docs/conf.py
@@ -23,7 +23,7 @@
author = "John Snow Labs"
# The full version, including alpha/beta/rc tags
-release = "5.0.1"
+release = "5.0.2"
pyspark_version = "3.2.3"
# -- General configuration ---------------------------------------------------
diff --git a/python/setup.py b/python/setup.py
--- a/python/setup.py
+++ b/python/setup.py
@@ -41,7 +41,7 @@
# project code, see
# https://packaging.python.org/en/latest/single_source_version.html
- version='5.0.1', # Required
+ version='5.0.2', # Required
# This is a one-line description or tagline of what your project does. This
# corresponds to the 'Summary' metadata field:
diff --git a/python/sparknlp/__init__.py b/python/sparknlp/__init__.py
--- a/python/sparknlp/__init__.py
+++ b/python/sparknlp/__init__.py
@@ -128,7 +128,7 @@ def start(gpu=False,
The initiated Spark session.
"""
- current_version = "5.0.1"
+ current_version = "5.0.2"
if params is None:
params = {}
@@ -309,4 +309,4 @@ def version():
str
The current Spark NLP version.
"""
- return '5.0.1'
+ return '5.0.2'
diff --git a/python/sparknlp/annotator/classifier_dl/__init__.py b/python/sparknlp/annotator/classifier_dl/__init__.py
--- a/python/sparknlp/annotator/classifier_dl/__init__.py
+++ b/python/sparknlp/annotator/classifier_dl/__init__.py
@@ -46,3 +46,4 @@
from sparknlp.annotator.classifier_dl.bert_for_zero_shot_classification import *
from sparknlp.annotator.classifier_dl.distil_bert_for_zero_shot_classification import *
from sparknlp.annotator.classifier_dl.roberta_bert_for_zero_shot_classification import *
+from sparknlp.annotator.classifier_dl.xlm_roberta_for_zero_shot_classification import *
\ No newline at end of file
diff --git a/python/sparknlp/annotator/classifier_dl/xlm_roberta_for_zero_shot_classification.py b/python/sparknlp/annotator/classifier_dl/xlm_roberta_for_zero_shot_classification.py
new file mode 100644
--- /dev/null
+++ b/python/sparknlp/annotator/classifier_dl/xlm_roberta_for_zero_shot_classification.py
@@ -0,0 +1,225 @@
+# Copyright 2017-2023 John Snow Labs
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+"""Contains classes for XlmRoBertaForZeroShotClassification."""
+
+from sparknlp.common import *
+
+
+class XlmRoBertaForZeroShotClassification(AnnotatorModel,
+ HasCaseSensitiveProperties,
+ HasBatchedAnnotate,
+ HasClassifierActivationProperties,
+ HasCandidateLabelsProperties,
+ HasEngine):
+ """XlmRoBertaForZeroShotClassification using a `ModelForSequenceClassification` trained on NLI (natural language
+ inference) tasks. Equivalent of `XlmRoBertaForSequenceClassification` models, but these models don't require a hardcoded
+ number of potential classes, they can be chosen at runtime. It usually means it's slower but it is much more
+ flexible.
+
+ Note that the model will loop through all provided labels. So the more labels you have, the
+ longer this process will take.
+
+ Any combination of sequences and labels can be passed and each combination will be posed as a premise/hypothesis
+ pair and passed to the pretrained model.
+
+ Pretrained models can be loaded with :meth:`.pretrained` of the companion
+ object:
+
+ >>> sequenceClassifier = XlmRoBertaForZeroShotClassification.pretrained() \\
+ ... .setInputCols(["token", "document"]) \\
+ ... .setOutputCol("label")
+
+ The default model is ``"xlm_roberta_large_zero_shot_classifier_xnli_anli"``, if no name is
+ provided.
+
+ For available pretrained models please see the `Models Hub
+ <https://sparknlp.orgtask=Text+Classification>`__.
+
+ To see which models are compatible and how to import them see
+ `Import Transformers into Spark NLP 🚀
+ <https://github.com/JohnSnowLabs/spark-nlp/discussions/5669>`_.
+
+ ====================== ======================
+ Input Annotation types Output Annotation type
+ ====================== ======================
+ ``DOCUMENT, TOKEN`` ``CATEGORY``
+ ====================== ======================
+
+ Parameters
+ ----------
+ batchSize
+ Batch size. Large values allows faster processing but requires more
+ memory, by default 8
+ caseSensitive
+ Whether to ignore case in tokens for embeddings matching, by default
+ True
+ configProtoBytes
+ ConfigProto from tensorflow, serialized into byte array.
+ maxSentenceLength
+ Max sentence length to process, by default 128
+ coalesceSentences
+ Instead of 1 class per sentence (if inputCols is `sentence`) output 1
+ class per document by averaging probabilities in all sentences, by
+ default False
+ activation
+ Whether to calculate logits via Softmax or Sigmoid, by default
+ `"softmax"`.
+
+ Examples
+ --------
+ >>> import sparknlp
+ >>> from sparknlp.base import *
+ >>> from sparknlp.annotator import *
+ >>> from pyspark.ml import Pipeline
+ >>> documentAssembler = DocumentAssembler() \\
+ ... .setInputCol("text") \\
+ ... .setOutputCol("document")
+ >>> tokenizer = Tokenizer() \\
+ ... .setInputCols(["document"]) \\
+ ... .setOutputCol("token")
+ >>> sequenceClassifier = XlmRoBertaForZeroShotClassification.pretrained() \\
+ ... .setInputCols(["token", "document"]) \\
+ ... .setOutputCol("label") \\
+ ... .setCaseSensitive(True)
+ >>> pipeline = Pipeline().setStages([
+ ... documentAssembler,
+ ... tokenizer,
+ ... sequenceClassifier
+ ... ])
+ >>> data = spark.createDataFrame([["I loved this movie when I was a child.", "It was pretty boring."]]).toDF("text")
+ >>> result = pipeline.fit(data).transform(data)
+ >>> result.select("label.result").show(truncate=False)
+ +------+
+ |result|
+ +------+
+ |[pos] |
+ |[neg] |
+ +------+
+ """
+ name = "XlmRoBertaForZeroShotClassification"
+
+ inputAnnotatorTypes = [AnnotatorType.DOCUMENT, AnnotatorType.TOKEN]
+
+ outputAnnotatorType = AnnotatorType.CATEGORY
+
+ maxSentenceLength = Param(Params._dummy(),
+ "maxSentenceLength",
+ "Max sentence length to process",
+ typeConverter=TypeConverters.toInt)
+
+ configProtoBytes = Param(Params._dummy(),
+ "configProtoBytes",
+ "ConfigProto from tensorflow, serialized into byte array. Get with config_proto.SerializeToString()",
+ TypeConverters.toListInt)
+
+ coalesceSentences = Param(Params._dummy(), "coalesceSentences",
+ "Instead of 1 class per sentence (if inputCols is '''sentence''') output 1 class per document by averaging probabilities in all sentences.",
+ TypeConverters.toBoolean)
+
+ def getClasses(self):
+ """
+ Returns labels used to train this model
+ """
+ return self._call_java("getClasses")
+
+ def setConfigProtoBytes(self, b):
+ """Sets configProto from tensorflow, serialized into byte array.
+
+ Parameters
+ ----------
+ b : List[int]
+ ConfigProto from tensorflow, serialized into byte array
+ """
+ return self._set(configProtoBytes=b)
+
+ def setMaxSentenceLength(self, value):
+ """Sets max sentence length to process, by default 128.
+
+ Parameters
+ ----------
+ value : int
+ Max sentence length to process
+ """
+ return self._set(maxSentenceLength=value)
+
+ def setCoalesceSentences(self, value):
+ """Instead of 1 class per sentence (if inputCols is '''sentence''') output 1 class per document by averaging
+ probabilities in all sentences. Due to max sequence length limit in almost all transformer models such as XlmRoBerta
+ (512 tokens), this parameter helps to feed all the sentences into the model and averaging all the probabilities
+ for the entire document instead of probabilities per sentence. (Default: true)
+
+ Parameters
+ ----------
+ value : bool
+ If the output of all sentences will be averaged to one output
+ """
+ return self._set(coalesceSentences=value)
+
+ @keyword_only
+ def __init__(self, classname="com.johnsnowlabs.nlp.annotators.classifier.dl.XlmRoBertaForZeroShotClassification",
+ java_model=None):
+ super(XlmRoBertaForZeroShotClassification, self).__init__(
+ classname=classname,
+ java_model=java_model
+ )
+ self._setDefault(
+ batchSize=8,
+ maxSentenceLength=128,
+ caseSensitive=True,
+ coalesceSentences=False,
+ activation="softmax"
+ )
+
+ @staticmethod
+ def loadSavedModel(folder, spark_session):
+ """Loads a locally saved model.
+
+ Parameters
+ ----------
+ folder : str
+ Folder of the saved model
+ spark_session : pyspark.sql.SparkSession
+ The current SparkSession
+
+ Returns
+ -------
+ XlmRoBertaForZeroShotClassification
+ The restored model
+ """
+ from sparknlp.internal import _XlmRoBertaForZeroShotClassification
+ jModel = _XlmRoBertaForZeroShotClassification(folder, spark_session._jsparkSession)._java_obj
+ return XlmRoBertaForZeroShotClassification(java_model=jModel)
+
+ @staticmethod
+ def pretrained(name="xlm_roberta_large_zero_shot_classifier_xnli_anli", lang="xx", remote_loc=None):
+ """Downloads and loads a pretrained model.
+
+ Parameters
+ ----------
+ name : str, optional
+ Name of the pretrained model, by default
+ "xlm_roberta_large_zero_shot_classifier_xnli_anli"
+ lang : str, optional
+ Language of the pretrained model, by default "en"
+ remote_loc : str, optional
+ Optional remote address of the resource, by default None. Will use
+ Spark NLPs repositories otherwise.
+
+ Returns
+ -------
+ XlmRoBertaForZeroShotClassification
+ The restored model
+ """
+ from sparknlp.pretrained import ResourceDownloader
+ return ResourceDownloader.downloadModel(XlmRoBertaForZeroShotClassification, name, lang, remote_loc)
diff --git a/python/sparknlp/internal/__init__.py b/python/sparknlp/internal/__init__.py
--- a/python/sparknlp/internal/__init__.py
+++ b/python/sparknlp/internal/__init__.py
@@ -536,6 +536,12 @@ def __init__(self, path, jspark):
jspark)
+class _XlmRoBertaForZeroShotClassification(ExtendedJavaWrapper):
+ def __init__(self, path, jspark):
+ super(_XlmRoBertaForZeroShotClassification, self).__init__(
+ "com.johnsnowlabs.nlp.annotators.classifier.dl.XlmRoBertaForZeroShotClassification.loadSavedModel", path,
+ jspark)
+
class _InstructorLoader(ExtendedJavaWrapper):
def __init__(self, path, jspark):
super(_InstructorLoader, self).__init__("com.johnsnowlabs.nlp.embeddings.InstructorEmbeddings.loadSavedModel", path, jspark)
\ No newline at end of file
|
The summarization model(s) are not giving any result
Hello,
Sorry for posting this under the "documentation" label, but I thought it is more appropriate than the "bug". The problem I am facing is currently of **two** types.
1️⃣ Model is **not downloading** at all
2️⃣ Model can be downloaded but **cannot do the inference**.
Let me brief you about them.
___
Before going through any of the problems, I would provide some code which I use as a starter to drive the context:
```python
# Using the colab for doing these testing
!wget http://setup.johnsnowlabs.com/colab.sh -O - | bash
# Start the server
import sparknlp
spark = sparknlp.start(gpu=True)
print("Spark NLP version: {}".format(sparknlp.version()))
print("Apache Spark version: {}".format(spark.version))
>>>Spark NLP version: 5.0.1
>>> Apache Spark version: 3.2.3
from sparknlp.pretrained import PretrainedPipeline
from sparknlp.base import *
from sparknlp.annotator import *
```
Now, let me show the error-prone code.
# 1️⃣ Model is not downloading at all
📝 The model page here: [Page](https://sparknlp.org/2023/05/09/bart_large_cnn_en.html)
**Model Name**: `bart_large_cnn`
```python
documentAssembler = DocumentAssembler() \
.setInputCol("text") \
.setOutputCol("documents")
bart = BartTransformer.pretrained("bart_large_cnn") \
.setTask("summarize:") \
.setMaxOutputLength(200) \
.setInputCols(["documents"]) \
.setOutputCol("summaries")
pipeline = Pipeline().setStages([documentAssembler, bart])
```
### **Throws** this error:
*(error in short)*
```
raise Py4JNetworkError(
py4j.protocol.Py4JNetworkError: Error while sending or receiving
[OK!]
```
*(error in long)*
```
bart_large_cnn download started this may take some time.
Approximate size to download 1 GB
[ — ]----------------------------------------
Exception occurred during processing of request from ('127.0.0.1', 45808)
Traceback (most recent call last):
File "/usr/lib/python3.10/socketserver.py", line 316, in _handle_request_noblock
self.process_request(request, client_address)
File "/usr/lib/python3.10/socketserver.py", line 347, in process_request
self.finish_request(request, client_address)
File "/usr/lib/python3.10/socketserver.py", line 360, in finish_request
self.RequestHandlerClass(request, client_address, self)
File "/usr/lib/python3.10/socketserver.py", line 747, in __init__
self.handle()
File "/usr/local/lib/python3.10/dist-packages/pyspark/accumulators.py", line 262, in handle
poll(accum_updates)
File "/usr/local/lib/python3.10/dist-packages/pyspark/accumulators.py", line 235, in poll
if func():
File "/usr/local/lib/python3.10/dist-packages/pyspark/accumulators.py", line 239, in accum_updates
num_updates = read_int(self.rfile)
File "/usr/local/lib/python3.10/dist-packages/pyspark/serializers.py", line 564, in read_int
raise EOFError
EOFError
----------------------------------------
ERROR:root:Exception while sending command.
Traceback (most recent call last):
File "/usr/local/lib/python3.10/dist-packages/py4j/clientserver.py", line 516, in send_command
raise Py4JNetworkError("Answer from Java side is empty")
py4j.protocol.Py4JNetworkError: Answer from Java side is empty
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/local/lib/python3.10/dist-packages/py4j/java_gateway.py", line 1038, in send_command
response = connection.send_command(command)
File "/usr/local/lib/python3.10/dist-packages/py4j/clientserver.py", line 539, in send_command
raise Py4JNetworkError(
py4j.protocol.Py4JNetworkError: Error while sending or receiving
[OK!]
---------------------------------------------------------------------------
Py4JError Traceback (most recent call last)
[<ipython-input-12-6cb90d39d472>](https://localhost:8080/#) in <cell line: 1>()
----> 1 bart = BartTransformer.pretrained("bart_large_cnn") \
2 .setTask("summarize:") \
3 .setMaxOutputLength(200) \
4 .setInputCols(["documents"]) \
5 .setOutputCol("summaries")
8 frames
[/usr/local/lib/python3.10/dist-packages/py4j/protocol.py](https://localhost:8080/#) in get_return_value(answer, gateway_client, target_id, name)
332 format(target_id, ".", name, value))
333 else:
--> 334 raise Py4JError(
335 "An error occurred while calling {0}{1}{2}".
336 format(target_id, ".", name))
Py4JError: An error occurred while calling z:com.johnsnowlabs.nlp.pretrained.PythonResourceDownloader.downloadModel
```
# 2️⃣ Model is downloaded but can't run inference
I have seen this problem in **these two** models.
📝 **Model-1** page here: [Page-1](https://sparknlp.org/2023/04/09/distilbart_cnn_6_6_en.html)
📝 **Model-2** page here: [Page-2](https://sparknlp.org/2023/05/11/distilbart_xsum_12_6_en.html)
The code:
```python
documentAssembler = DocumentAssembler() \
.setInputCol("text") \
.setOutputCol("documents")
bart = BartTransformer.pretrained("distilbart_cnn_6_6") \
.setTask("summarize:") \
.setInputCols(["documents"]) \
.setOutputCol("summaries") \
.setMaxOutputLength(128) \
.setTemperature(.2) \
.setDoSample(True)
pipeline = Pipeline().setStages([documentAssembler, bart])
```
***After the successful*** download I run the inference like below:
```python
data = spark.createDataFrame([["A LONG PARAGRAPH"]]).toDF("text")
result = pipeline.fit(data).transform(data)
summary = []
for row in result.select("summaries").collect():
summary.append(row["summaries"][0]["result"])
```
And it gives this error:
```
Py4JJavaError Traceback (most recent call last)
[<ipython-input-11-c4bd6196dafa>](https://localhost:8080/#) in <cell line: 5>()
3 result = pipeline.fit(data).transform(data)
4 summary = []
----> 5 for row in result.select("summaries").collect():
6 summary.append(row["summaries"][0]["result"])
7
3 frames
[/usr/local/lib/python3.10/dist-packages/py4j/protocol.py](https://localhost:8080/#) in get_return_value(answer, gateway_client, target_id, name)
324 value = OUTPUT_CONVERTER[type](answer[2:], gateway_client)
325 if answer[1] == REFERENCE_TYPE:
--> 326 raise Py4JJavaError(
327 "An error occurred while calling {0}{1}{2}.\n".
328 format(target_id, ".", name), value)
Py4JJavaError: An error occurred while calling o313.collectToPython.
: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 5.0 failed 1 times, most recent failure: Lost task 0.0 in stage 5.0 (TID 8) (6fce83427708 executor driver): org.tensorflow.exceptions.TFInvalidArgumentException: 2 root error(s) found.
(0) INVALID_ARGUMENT: required broadcastable shapes
[[{{function_node __inference_decoder_cached_serving_808693}}{{node decoder/layers.0/encoder_attn/add}}]]
[[StatefulPartitionedCall/_2389]]
(1) INVALID_ARGUMENT: required broadcastable shapes
[[{{function_node __inference_decoder_cached_serving_808693}}{{node decoder/layers.0/encoder_attn/add}}]]
0 successful operations.
0 derived errors ignored.
```
It's a long error... but it will give the context. Seems like a problem in `.collect()` and also I tried using the `.select("summaries").first()` and still gives **some** other related error.
___
Thank you 🙏🏻
|
Hi, thanks for reporting! @prabod I think this might be related to your feature?
Hie @prabod ! Any update on this?
I have found other models too which are causing this issue 😢
Thanks!
@AayushSameerShah I tried to reproduce this issue, but I cannot reproduce it on Colab and locally via the GPU.
You can check this notebook:
- it uses the latest PySpark and Spark NLP (so no issue here)
- uses one of the models you mentioned (so no issue here)
- it downloads it (so no issue here)
- and it uses the GPU (just in case, so no issue here)
https://colab.research.google.com/drive/1XyZ6ibezz275QCo9zRlbHeHd7fLqfuWU?usp=sharing
I am afraid we might need the actual `issue template` with all the details and required versions so we can try to reproduce it. (you chose a `documentation` template which does not really ask for extra information)
issue template: https://github.com/JohnSnowLabs/spark-nlp/issues/new?assignees=maziyarpanahi&labels=question&projects=&template=bug_report.yml
Hie @maziyarpanahi 👋🏻
Thanks for the code and solution... I have tried the notebook that you've provided.
There are some problems still, on the first run **the inference** can be done, but when we change the parameters like `DoSample` it gives the same `collectionError`.
I know I should have used the "issue" template, I apologize for this.
Here I have created a colab that is linearly runnable and **is able to reproduce the error**. It is annotated so that should be easy to follow around.
Link: https://colab.research.google.com/drive/1xsYGjHcPxnh4e5UZxYj-hLT7BCFYP_0H?usp=sharing
Please let me know if it doesn't work.
Thank you so much! 🤗
@maziyarpanahi Hope you were able to catch that error 🤗
Hi @AayushSameerShah
yes, your notebook was very helpful and we are debugging it currently.
|
2023-08-02T13:04:43Z
|
[]
|
[]
| |||
apache/mxnet
|
apache__mxnet-1318
|
800b4eb4e36e61e19cffbdc1f6e2e440911cc615
|
diff --git a/python/mxnet/optimizer.py b/python/mxnet/optimizer.py
--- a/python/mxnet/optimizer.py
+++ b/python/mxnet/optimizer.py
@@ -345,8 +345,8 @@ def __init__(self, learning_rate=0.002,
decay_factor=(1 - 1e-8),
wd=0.,
rescale_grad=1, clip_gradient=None,
- lr_scheduler=None):
- super(Adam, self).__init__(rescale_grad)
+ lr_scheduler=None, arg_names=None):
+ super(Adam, self).__init__(rescale_grad, arg_names)
self.lr = learning_rate
self.beta1 = beta1
self.beta2 = beta2
|
Running Adam optimizer error
Hi,
When I change the optimizer to Adam. For example,
``` python
model = mx.model.FeedForward(
ctx = [mx.gpu(0)],
num_epoch = 60,
symbol = network,
optimizer = 'adam',
initializer = mx.init.Xavier(factor_type="in", magnitude=2.34))
model.fit(X= data_train)
```
I get the following error message:
``` python
File "<ipython-input-6-3732cc3a7cd0>", line 2, in <module>
X = data_train)
File "/home/rick/anaconda/lib/python2.7/site-packages/mxnet-0.5.0-py2.7.egg/mxnet/model.py", line 641, in fit
**(self.kwargs))
File "/home/rick/anaconda/lib/python2.7/site-packages/mxnet-0.5.0-py2.7.egg/mxnet/optimizer.py", line 52, in create_optimizer
**kwargs)
TypeError: __init__() got an unexpected keyword argument 'arg_names'
```
Isn't this the correct way of setting the optimizer?
|
Seems the `__init__` function of the Adam optimizer has no `arg_names` keyword.
Does the following revision help? In https://github.com/dmlc/mxnet/blob/master/python/mxnet/optimizer.py#L348, add `arg_names = None` after `lr_schedule = None` and change https://github.com/dmlc/mxnet/blob/master/python/mxnet/optimizer.py#L349 to `super(Adam, self).__init__(rescale_grad, arg_names)`
Now I get the following error instead:
``` python
File "<ipython-input-3-9b6e62614107>", line 2, in <module>
X = data_train)
File "/home/rick/anaconda/lib/python2.7/site-packages/mxnet-0.5.0-py2.7.egg/mxnet/model.py", line 641, in fit
**(self.kwargs))
File "/home/rick/anaconda/lib/python2.7/site-packages/mxnet-0.5.0-py2.7.egg/mxnet/optimizer.py", line 52, in create_optimizer
**kwargs)
File "/home/rick/anaconda/lib/python2.7/site-packages/mxnet-0.5.0-py2.7.egg/mxnet/optimizer.py", line 342, in __init__
super(Adam, self).__init__(rescale_grad, arg_names)
TypeError: __init__() takes at most 2 arguments (3 given)
```
@rfarouni In my computer this revision solves the problem. Is it possible that you need to re-install the python package? Or update the rest part of MXNet to the latest version?
```
lr_scheduler=None, arg_names=None):
super(Adam, self).__init__(rescale_grad, arg_names)
```
This is definitely an issue that should be fixed.
But you don't have to hotfix the mxnet code to use the Adam optimizer.
It's also possible to provide an optimizer object instead of a string:
``` python
model = mx.model.FeedForward(
ctx = [mx.gpu(0)],
num_epoch = 60,
symbol = network,
optimizer = mx.optimizer.Adam(),
initializer = mx.init.Xavier(factor_type="in", magnitude=2.34))
model.fit(X=data_train)
```
After fixing the code and re-installing the python package, it worked. I think I tried providing an optimizer object but that didn't work.
@rfarouni I think it's worthwhile to make a pr for this issue. Would you like to pr?
|
2016-01-19T18:52:00Z
|
[]
|
[]
| |||
apache/mxnet
|
apache__mxnet-1661
|
771a2d6a0db5b770bb086f5cd570356df49b810d
|
diff --git a/amalgamation/amalgamation.py b/amalgamation/amalgamation.py
--- a/amalgamation/amalgamation.py
+++ b/amalgamation/amalgamation.py
@@ -7,7 +7,7 @@
'glog/logging.h', 'io/azure_filesys.h', 'io/hdfs_filesys.h', 'io/s3_filesys.h',
'kvstore_dist.h', 'mach/clock.h', 'mach/mach.h',
'malloc.h', 'mkl.h', 'mkl_cblas.h', 'mkl_vsl.h', 'mkl_vsl_functions.h',
- 'nvml.h', 'opencv2/opencv.hpp', 'sys/stat.h', 'sys/types.h'
+ 'nvml.h', 'opencv2/opencv.hpp', 'sys/stat.h', 'sys/types.h', 'cuda.h', 'cuda_fp16.h'
]
if len(sys.argv) < 4:
|
Issue in amalgamation
~/mxnet/amalgamation$ make
g++ -std=c++11 -Wno-unknown-pragmas -Wall -I/opt/OpenBLAS -fPIC -o mxnet_predict-all.o -c mxnet_predict-all.cc
mxnet_predict-all.cc:31:18: fatal error: cuda.h: No such file or directory
#include <cuda.h>
^
compilation terminated.
make: **\* [mxnet_predict-all.o] Error 1
I have built OpenBlas 0.2.16 . OpenBLAS root: /opt/OpenBLAS
|
2016-03-18T13:14:11Z
|
[]
|
[]
| ||||
apache/mxnet
|
apache__mxnet-613
|
52793a151fba87291cb5db1bd98769365d994917
|
diff --git a/python/mxnet/kvstore.py b/python/mxnet/kvstore.py
--- a/python/mxnet/kvstore.py
+++ b/python/mxnet/kvstore.py
@@ -310,7 +310,7 @@ def _set_updater(self, updater):
>>> def update(key, input, stored):
... print "update on key: %d" % key
... stored += input * 2
- >>> kv.set_updater(update)
+ >>> kv._set_updater(update)
>>> kv.pull(3, out=a)
>>> print a.asnumpy()
[[ 4. 4. 4.]
|
python doc error in KVstore example
in the KVstore example:
``` python
>>> def update(key, input, stored):
>>> print "update on key: %d" % key
>>> stored += input * 2
>>> kv.set_updater(update)
```
where `kv.set_updater` should be
`kv._set_updater(update)`
|
yeah, we try to replace it by `set_optimizer`, which is probably more straightforward for optimization algorothms. can you please push a PR to fix the example. thanks
|
2015-11-18T03:05:10Z
|
[]
|
[]
| |||
apache/mxnet
|
apache__mxnet-627
|
e304fc05e9e8ef429bfd2d87a8aa029caba2bdb6
|
diff --git a/example/cifar10/cifar10.py b/example/cifar10/cifar10.py
deleted file mode 100644
--- a/example/cifar10/cifar10.py
+++ /dev/null
@@ -1,97 +0,0 @@
-# pylint: skip-file
-import sys, os
-# code to automatically download dataset
-curr_path = os.path.dirname(os.path.abspath(os.path.expanduser(__file__)))
-sys.path.append(os.path.join(curr_path, "../../tests/python/common"))
-import get_data
-import mxnet as mx
-import numpy as np
-import logging
-
-logger = logging.getLogger()
-logger.setLevel(logging.DEBUG)
-
-# Basic Conv + BN + ReLU factory
-def ConvFactory(data, num_filter, kernel, stride=(1,1), pad=(0, 0), act_type="relu"):
- conv = mx.symbol.Convolution(data=data, num_filter=num_filter, kernel=kernel, stride=stride, pad=pad)
- bn = mx.symbol.BatchNorm(data=conv)
- act = mx.symbol.Activation(data = bn, act_type=act_type)
- return act
-
-# A Simple Downsampling Factory
-def DownsampleFactory(data, ch_3x3):
- # conv 3x3
- conv = ConvFactory(data=data, kernel=(3, 3), stride=(2, 2), num_filter=ch_3x3, pad=(1, 1))
- # pool
- pool = mx.symbol.Pooling(data=data, kernel=(3, 3), stride=(2, 2), pool_type='max')
- # concat
- concat = mx.symbol.Concat(*[conv, pool])
- return concat
-
-# A Simple module
-def SimpleFactory(data, ch_1x1, ch_3x3):
- # 1x1
- conv1x1 = ConvFactory(data=data, kernel=(1, 1), pad=(0, 0), num_filter=ch_1x1)
- # 3x3
- conv3x3 = ConvFactory(data=data, kernel=(3, 3), pad=(1, 1), num_filter=ch_3x3)
- #concat
- concat = mx.symbol.Concat(*[conv1x1, conv3x3])
- return concat
-
-
-
-data = mx.symbol.Variable(name="data")
-conv1 = ConvFactory(data=data, kernel=(3,3), pad=(1,1), num_filter=96, act_type="relu")
-in3a = SimpleFactory(conv1, 32, 32)
-in3b = SimpleFactory(in3a, 32, 48)
-in3c = DownsampleFactory(in3b, 80)
-in4a = SimpleFactory(in3c, 112, 48)
-in4b = SimpleFactory(in4a, 96, 64)
-in4c = SimpleFactory(in4b, 80, 80)
-in4d = SimpleFactory(in4c, 48, 96)
-in4e = DownsampleFactory(in4d, 96)
-in5a = SimpleFactory(in4e, 176, 160)
-in5b = SimpleFactory(in5a, 176, 160)
-pool = mx.symbol.Pooling(data=in5b, pool_type="avg", kernel=(7,7), name="global_pool")
-flatten = mx.symbol.Flatten(data=pool, name="flatten1")
-fc = mx.symbol.FullyConnected(data=flatten, num_hidden=10, name="fc1")
-softmax = mx.symbol.SoftmaxOutput(data=fc, name="loss")
-
-#########################################################
-
-get_data.GetCifar10()
-batch_size = 128
-num_epoch = 10
-num_gpus = 1
-
-train_dataiter = mx.io.ImageRecordIter(
- path_imgrec="data/cifar/train.rec",
- mean_img="data/cifar/cifar_mean.bin",
- rand_crop=True,
- rand_mirror=True,
- data_shape=(3,28,28),
- batch_size=batch_size,
- preprocess_threads=1,
- label_name='loss_label')
-test_dataiter = mx.io.ImageRecordIter(
- path_imgrec="data/cifar/test.rec",
- mean_img="data/cifar/cifar_mean.bin",
- rand_crop=False,
- rand_mirror=False,
- data_shape=(3,28,28),
- batch_size=batch_size,
- preprocess_threads=1,
- label_name='loss_label')
-
-def test_cifar():
- logging.basicConfig(level=logging.DEBUG)
- gpus = [mx.gpu(i) for i in range(num_gpus)]
- model = mx.model.FeedForward(ctx=gpus, symbol=softmax, num_epoch=num_epoch,
- learning_rate=0.05, momentum=0.9, wd=0.0001,
- initializer=mx.init.Uniform(0.07))
-
- model.fit(X=train_dataiter, eval_data=test_dataiter,
- batch_end_callback=mx.callback.Speedometer(batch_size))
-
-if __name__ == "__main__":
- test_cifar()
diff --git a/example/image-classification/alexnet.py b/example/image-classification/symbol_alexnet.py
similarity index 100%
rename from example/image-classification/alexnet.py
rename to example/image-classification/symbol_alexnet.py
diff --git a/example/image-classification/googlenet.py b/example/image-classification/symbol_googlenet.py
similarity index 100%
rename from example/image-classification/googlenet.py
rename to example/image-classification/symbol_googlenet.py
diff --git a/example/image-classification/inception-bn-28-small.py b/example/image-classification/symbol_inception-bn-28-small.py
similarity index 100%
rename from example/image-classification/inception-bn-28-small.py
rename to example/image-classification/symbol_inception-bn-28-small.py
diff --git a/example/image-classification/inception-bn-full.py b/example/image-classification/symbol_inception-bn-full.py
similarity index 100%
rename from example/image-classification/inception-bn-full.py
rename to example/image-classification/symbol_inception-bn-full.py
diff --git a/example/image-classification/inception-bn.py b/example/image-classification/symbol_inception-bn.py
similarity index 100%
rename from example/image-classification/inception-bn.py
rename to example/image-classification/symbol_inception-bn.py
diff --git a/example/image-classification/vgg.py b/example/image-classification/symbol_vgg.py
similarity index 100%
rename from example/image-classification/vgg.py
rename to example/image-classification/symbol_vgg.py
diff --git a/example/image-classification/train_cifar10.py b/example/image-classification/train_cifar10.py
--- a/example/image-classification/train_cifar10.py
+++ b/example/image-classification/train_cifar10.py
@@ -17,13 +17,17 @@
help='the batch size')
parser.add_argument('--lr', type=float, default=.05,
help='the initial learning rate')
+parser.add_argument('--lr-factor', type=float, default=1,
+ help='times the lr with a factor for every lr-factor-epoch epoch')
+parser.add_argument('--lr-factor-epoch', type=float, default=1,
+ help='the number of epoch to factor the lr, could be .5')
parser.add_argument('--model-prefix', type=str,
help='the prefix of the model to load/save')
parser.add_argument('--num-epochs', type=int, default=20,
help='the number of training epochs')
parser.add_argument('--load-epoch', type=int,
help="load the model on an epoch using the model-prefix")
-parser.add_argument('--kv-type', type=str, default='local',
+parser.add_argument('--kv-store', type=str, default='local',
help='the kvstore type')
args = parser.parse_args()
@@ -41,7 +45,7 @@ def _download(data_dir):
# network
import importlib
-net = importlib.import_module(args.network).get_symbol(10)
+net = importlib.import_module('symbol_' + args.network).get_symbol(10)
# data
def get_iterator(args, kv):
diff --git a/example/image-classification/train_imagenet.py b/example/image-classification/train_imagenet.py
--- a/example/image-classification/train_imagenet.py
+++ b/example/image-classification/train_imagenet.py
@@ -23,7 +23,7 @@
help='the batch size')
parser.add_argument('--gpus', type=str, default='0',
help='the gpus will be used, e.g "0,1,2,3"')
-parser.add_argument('--kv-type', type=str, default='local',
+parser.add_argument('--kv-store', type=str, default='local',
help='the kvstore type')
parser.add_argument('--num-examples', type=int, default=1281167,
help='the number of training examples')
@@ -33,7 +33,7 @@
# network
import importlib
-net = importlib.import_module(args.network).get_symbol(args.num_classes)
+net = importlib.import_module('symbol_' + args.network).get_symbol(args.num_classes)
# data
def get_iterator(args, kv):
diff --git a/example/image-classification/train_mnist.py b/example/image-classification/train_mnist.py
--- a/example/image-classification/train_mnist.py
+++ b/example/image-classification/train_mnist.py
@@ -24,8 +24,12 @@
help='the number of training epochs')
parser.add_argument('--load-epoch', type=int,
help="load the model on an epoch using the model-prefix")
-parser.add_argument('--kv-type', type=str, default='local',
+parser.add_argument('--kv-store', type=str, default='local',
help='the kvstore type')
+parser.add_argument('--lr-factor', type=float, default=1,
+ help='times the lr with a factor for every lr-factor-epoch epoch')
+parser.add_argument('--lr-factor-epoch', type=float, default=1,
+ help='the number of epoch to factor the lr, could be .5')
args = parser.parse_args()
def _download(data_dir):
diff --git a/example/image-classification/train_model.py b/example/image-classification/train_model.py
--- a/example/image-classification/train_model.py
+++ b/example/image-classification/train_model.py
@@ -4,7 +4,7 @@
def fit(args, network, data_loader):
# kvstore
- kv = mx.kvstore.create(args.kv_type)
+ kv = mx.kvstore.create(args.kv_store)
# logging
head = '%(asctime)-15s Node[' + str(kv.rank) + '] %(message)s'
@@ -15,11 +15,11 @@ def fit(args, network, data_loader):
model_prefix = args.model_prefix
if model_prefix is not None:
model_prefix += "-%d" % (kv.rank)
- load_model = {}
+ model_args = {}
if args.load_epoch is not None:
assert model_prefix is not None
tmp = mx.model.FeedForward.load(model_prefix, args.load_epoch)
- load_model = {'arg_params' : tmp.arg_params,
+ model_args = {'arg_params' : tmp.arg_params,
'aux_params' : tmp.aux_params,
'begin_epoch' : args.load_epoch}
# save model?
@@ -29,9 +29,20 @@ def fit(args, network, data_loader):
(train, val) = data_loader(args, kv)
# train
- devs = mx.cpu()
- if args.gpus is not None:
- devs = [mx.gpu(int(i)) for i in args.gpus.split(',')]
+ devs = mx.cpu() if args.gpus is None else [
+ mx.gpu(int(i)) for i in args.gpus.split(',')]
+
+ epoch_size = args.num_examples / args.batch_size
+
+ if args.kv_store == 'dist_sync':
+ epoch_size /= kv.num_workers
+ model_args['epoch_size'] = epoch_size
+
+ if 'lr_factor' in args and args.lr_factor < 1:
+ model_args['lr_scheduler'] = mx.lr_scheduler.FactorScheduler(
+ step = max(int(epoch_size * args.lr_factor_epoch), 1),
+ factor = args.lr_factor)
+
model = mx.model.FeedForward(
ctx = devs,
symbol = network,
@@ -39,7 +50,8 @@ def fit(args, network, data_loader):
learning_rate = args.lr,
momentum = 0.9,
wd = 0.00001,
- **load_model)
+ **model_args)
+
model.fit(
X = train,
eval_data = val,
diff --git a/example/imagenet/alexnet.py b/example/imagenet/alexnet.py
deleted file mode 100644
--- a/example/imagenet/alexnet.py
+++ /dev/null
@@ -1,62 +0,0 @@
-# pylint: skip-file
-from data import ilsvrc12_iterator
-import mxnet as mx
-import logging
-
-## define alexnet
-input_data = mx.symbol.Variable(name="data")
-# stage 1
-conv1 = mx.symbol.Convolution(
- data=input_data, kernel=(11, 11), stride=(4, 4), num_filter=96)
-relu1 = mx.symbol.Activation(data=conv1, act_type="relu")
-pool1 = mx.symbol.Pooling(
- data=relu1, pool_type="max", kernel=(3, 3), stride=(2,2))
-lrn1 = mx.symbol.LRN(data=pool1, alpha=0.0001, beta=0.75, knorm=1, nsize=5)
-# stage 2
-conv2 = mx.symbol.Convolution(
- data=lrn1, kernel=(5, 5), pad=(2, 2), num_filter=256)
-relu2 = mx.symbol.Activation(data=conv2, act_type="relu")
-pool2 = mx.symbol.Pooling(data=relu2, kernel=(3, 3), stride=(2, 2), pool_type="max")
-lrn2 = mx.symbol.LRN(data=pool2, alpha=0.0001, beta=0.75, knorm=1, nsize=5)
-# stage 3
-conv3 = mx.symbol.Convolution(
- data=lrn2, kernel=(3, 3), pad=(1, 1), num_filter=384)
-relu3 = mx.symbol.Activation(data=conv3, act_type="relu")
-conv4 = mx.symbol.Convolution(
- data=relu3, kernel=(3, 3), pad=(1, 1), num_filter=384)
-relu4 = mx.symbol.Activation(data=conv4, act_type="relu")
-conv5 = mx.symbol.Convolution(
- data=relu4, kernel=(3, 3), pad=(1, 1), num_filter=256)
-relu5 = mx.symbol.Activation(data=conv5, act_type="relu")
-pool3 = mx.symbol.Pooling(data=relu5, kernel=(3, 3), stride=(2, 2), pool_type="max")
-# stage 4
-flatten = mx.symbol.Flatten(data=pool3)
-fc1 = mx.symbol.FullyConnected(data=flatten, num_hidden=4096)
-relu6 = mx.symbol.Activation(data=fc1, act_type="relu")
-dropout1 = mx.symbol.Dropout(data=relu6, p=0.5)
-# stage 5
-fc2 = mx.symbol.FullyConnected(data=dropout1, num_hidden=4096)
-relu7 = mx.symbol.Activation(data=fc2, act_type="relu")
-dropout2 = mx.symbol.Dropout(data=relu7, p=0.5)
-# stage 6
-fc3 = mx.symbol.FullyConnected(data=dropout2, num_hidden=1000)
-softmax = mx.symbol.SoftmaxOutput(data=fc3, name='softmax')
-
-
-## data
-batch_size = 256
-train, val = ilsvrc12_iterator(batch_size=batch_size, input_shape=(3,224,224))
-
-## train
-num_gpus = 4
-gpus = [mx.gpu(i) for i in range(num_gpus)]
-model = mx.model.FeedForward(
- ctx = gpus,
- symbol = softmax,
- num_epoch = 20,
- learning_rate = 0.01,
- momentum = 0.9,
- wd = 0.00001)
-logging.basicConfig(level = logging.DEBUG)
-model.fit(X = train, eval_data = val,
- batch_end_callback = mx.callback.Speedometer(batch_size=batch_size))
diff --git a/example/imagenet/data.py b/example/imagenet/data.py
deleted file mode 100644
--- a/example/imagenet/data.py
+++ /dev/null
@@ -1,28 +0,0 @@
-# pylint: skip-file
-""" data iterator for imagnet"""
-import sys
-sys.path.insert(0, "../../python/")
-import mxnet as mx
-
-def ilsvrc12_iterator(batch_size, input_shape):
- """return train and val iterators for imagenet"""
- train_dataiter = mx.io.ImageRecordIter(
- path_imgrec = "data/train.rec",
- mean_img = "data/mean.bin",
- rand_crop = True,
- rand_mirror = True,
- prefetch_buffer = 4,
- preprocess_threads = 4,
- data_shape = input_shape,
- batch_size = batch_size)
- val_dataiter = mx.io.ImageRecordIter(
- path_imgrec = "data/val.rec",
- mean_img = "data/mean.bin",
- rand_crop = False,
- rand_mirror = False,
- prefetch_buffer = 4,
- preprocess_threads = 4,
- data_shape = input_shape,
- batch_size = batch_size)
-
- return (train_dataiter, val_dataiter)
diff --git a/example/imagenet/inception-full.py b/example/imagenet/inception-full.py
deleted file mode 100644
--- a/example/imagenet/inception-full.py
+++ /dev/null
@@ -1,101 +0,0 @@
-# pylint: skip-file
-import sys
-sys.path.insert(0, "../mxnet/python")
-import mxnet as mx
-import logging
-from data import ilsvrc12_iterator
-
-
-logger = logging.getLogger()
-logger.setLevel(logging.DEBUG)
-
-def ConvFactory(data, num_filter, kernel, stride=(1,1), pad=(0, 0), name=None, suffix=''):
- conv = mx.symbol.Convolution(data=data, workspace=512, num_filter=num_filter, kernel=kernel, stride=stride, pad=pad, name='conv_%s%s' %(name, suffix))
- bn = mx.symbol.BatchNorm(data=conv, name='bn_%s%s' %(name, suffix))
- act = mx.symbol.Activation(data=bn, act_type='relu', name='relu_%s%s' %(name, suffix))
- return act
-
-def InceptionFactoryA(data, num_1x1, num_3x3red, num_3x3, num_d3x3red, num_d3x3, pool, proj, name):
- # 1x1
- c1x1 = ConvFactory(data=data, num_filter=num_1x1, kernel=(1, 1), name=('%s_1x1' % name))
- # 3x3 reduce + 3x3
- c3x3r = ConvFactory(data=data, num_filter=num_3x3red, kernel=(1, 1), name=('%s_3x3' % name), suffix='_reduce')
- c3x3 = ConvFactory(data=c3x3r, num_filter=num_3x3, kernel=(3, 3), pad=(1, 1), name=('%s_3x3' % name))
- # double 3x3 reduce + double 3x3
- cd3x3r = ConvFactory(data=data, num_filter=num_d3x3red, kernel=(1, 1), name=('%s_double_3x3' % name), suffix='_reduce')
- cd3x3 = ConvFactory(data=cd3x3r, num_filter=num_d3x3, kernel=(3, 3), pad=(1, 1), name=('%s_double_3x3_0' % name))
- cd3x3 = ConvFactory(data=cd3x3, num_filter=num_d3x3, kernel=(3, 3), pad=(1, 1), name=('%s_double_3x3_1' % name))
- # pool + proj
- pooling = mx.symbol.Pooling(data=data, kernel=(3, 3), stride=(1, 1), pad=(1, 1), pool_type=pool, name=('%s_pool_%s_pool' % (pool, name)))
- cproj = ConvFactory(data=pooling, num_filter=proj, kernel=(1, 1), name=('%s_proj' % name))
- # concat
- concat = mx.symbol.Concat(*[c1x1, c3x3, cd3x3, cproj], name='ch_concat_%s_chconcat' % name)
- return concat
-
-def InceptionFactoryB(data, num_3x3red, num_3x3, num_d3x3red, num_d3x3, name):
- # 3x3 reduce + 3x3
- c3x3r = ConvFactory(data=data, num_filter=num_3x3red, kernel=(1, 1), name=('%s_3x3' % name), suffix='_reduce')
- c3x3 = ConvFactory(data=c3x3r, num_filter=num_3x3, kernel=(3, 3), pad=(1, 1), stride=(2, 2), name=('%s_3x3' % name))
- # double 3x3 reduce + double 3x3
- cd3x3r = ConvFactory(data=data, num_filter=num_d3x3red, kernel=(1, 1), name=('%s_double_3x3' % name), suffix='_reduce')
- cd3x3 = ConvFactory(data=cd3x3r, num_filter=num_d3x3, kernel=(3, 3), pad=(1, 1), stride=(1, 1), name=('%s_double_3x3_0' % name))
- cd3x3 = ConvFactory(data=cd3x3, num_filter=num_d3x3, kernel=(3, 3), pad=(1, 1), stride=(2, 2), name=('%s_double_3x3_1' % name))
- # pool + proj
- pooling = mx.symbol.Pooling(data=data, kernel=(3, 3), stride=(2, 2), pool_type="max", name=('max_pool_%s_pool' % name))
- # concat
- concat = mx.symbol.Concat(*[c3x3, cd3x3, pooling], name='ch_concat_%s_chconcat' % name)
- return concat
-
-def inception(nhidden, grad_scale):
- # data
- data = mx.symbol.Variable(name="data")
- # stage 1
- conv1 = ConvFactory(data=data, num_filter=96, kernel=(7, 7), stride=(2, 2), pad=(3, 3), name='conv1')
- pool1 = mx.symbol.Pooling(data=conv1, kernel=(3, 3), stride=(2, 2), name='pool1', pool_type='max')
- # stage 2
- conv2red = ConvFactory(data=pool1, num_filter=128, kernel=(1, 1), stride=(1, 1), name='conv2red')
- conv2 = ConvFactory(data=conv2red, num_filter=288, kernel=(3, 3), stride=(1, 1), pad=(1, 1), name='conv2')
- pool2 = mx.symbol.Pooling(data=conv2, kernel=(3, 3), stride=(2, 2), name='pool2', pool_type='max')
- # stage 2
- in3a = InceptionFactoryA(pool2, 96, 96, 96, 96, 144, "avg", 48, '3a')
- in3b = InceptionFactoryA(in3a, 96, 96, 144, 96, 144, "avg", 96, '3b')
- in3c = InceptionFactoryB(in3b, 192, 240, 96, 144, '3c')
- # stage 3
- in4a = InceptionFactoryA(in3c, 224, 64, 96, 96, 128, "avg", 128, '4a')
- in4b = InceptionFactoryA(in4a, 192, 96, 128, 96, 128, "avg", 128, '4b')
- in4c = InceptionFactoryA(in4b, 160, 128, 160, 128, 160, "avg", 128, '4c')
- in4d = InceptionFactoryA(in4c, 96, 128, 192, 160, 96, "avg", 128, '4d')
- in4e = InceptionFactoryB(in4d, 128, 192, 192, 256, '4e')
- # stage 4
- in5a = InceptionFactoryA(in4e, 352, 192, 320, 160, 224, "avg", 128, '5a')
- in5b = InceptionFactoryA(in5a, 352, 192, 320, 192, 224, "max", 128, '5b')
- # global avg pooling
- avg = mx.symbol.Pooling(data=in5b, kernel=(7, 7), stride=(1, 1), name="global_pool", pool_type='avg')
- # linear classifier
- flatten = mx.symbol.Flatten(data=avg, name='flatten')
- fc1 = mx.symbol.FullyConnected(data=flatten, num_hidden=nhidden, name='fc1')
- softmax = mx.symbol.SoftmaxOutput(data=fc1, name='softmax')
- return softmax
-
-softmax = inception(21841, 1.0)
-
-batch_size = 64
-num_gpu = 4
-gpus = [mx.gpu(i) for i in range(num_gpu)]
-input_shape = (3, 224, 224)
-
-train = ilsvrc12_iterator(batch_size=batch_size, input_shape=(3,224,224))
-
-model_prefix = "model/Inception-Full"
-num_round = 10
-
-logging.info("This script is used to train ImageNet fullset over 21841 classes.")
-logging.info("For noraml 1000 classes problem, please use inception.py")
-
-model = mx.model.FeedForward(ctx=gpus, symbol=softmax, num_epoch=num_round,
- learning_rate=0.05, momentum=0.9, wd=0.00001)
-
-model.fit(X=train,
- eval_metric="acc",
- batch_end_callback=[mx.callback.Speedometer(batch_size), mx.callback.log_train_metric(100)],
- epoch_end_callback=mx.callback.do_checkpoint(model_prefix))
diff --git a/example/imagenet/inception.py b/example/imagenet/inception.py
deleted file mode 100644
--- a/example/imagenet/inception.py
+++ /dev/null
@@ -1,98 +0,0 @@
-# pylint: skip-file
-import sys
-import mxnet as mx
-import logging
-from data import ilsvrc12_iterator
-
-
-logger = logging.getLogger()
-logger.setLevel(logging.DEBUG)
-
-def ConvFactory(data, num_filter, kernel, stride=(1,1), pad=(0, 0), name=None, suffix=''):
- conv = mx.symbol.Convolution(data=data, num_filter=num_filter, kernel=kernel, stride=stride, pad=pad, name='conv_%s%s' %(name, suffix))
- bn = mx.symbol.BatchNorm(data=conv, name='bn_%s%s' %(name, suffix))
- act = mx.symbol.Activation(data=bn, act_type='relu', name='relu_%s%s' %(name, suffix))
- return act
-
-def InceptionFactoryA(data, num_1x1, num_3x3red, num_3x3, num_d3x3red, num_d3x3, pool, proj, name):
- # 1x1
- c1x1 = ConvFactory(data=data, num_filter=num_1x1, kernel=(1, 1), name=('%s_1x1' % name))
- # 3x3 reduce + 3x3
- c3x3r = ConvFactory(data=data, num_filter=num_3x3red, kernel=(1, 1), name=('%s_3x3' % name), suffix='_reduce')
- c3x3 = ConvFactory(data=c3x3r, num_filter=num_3x3, kernel=(3, 3), pad=(1, 1), name=('%s_3x3' % name))
- # double 3x3 reduce + double 3x3
- cd3x3r = ConvFactory(data=data, num_filter=num_d3x3red, kernel=(1, 1), name=('%s_double_3x3' % name), suffix='_reduce')
- cd3x3 = ConvFactory(data=cd3x3r, num_filter=num_d3x3, kernel=(3, 3), pad=(1, 1), name=('%s_double_3x3_0' % name))
- cd3x3 = ConvFactory(data=cd3x3, num_filter=num_d3x3, kernel=(3, 3), pad=(1, 1), name=('%s_double_3x3_1' % name))
- # pool + proj
- pooling = mx.symbol.Pooling(data=data, kernel=(3, 3), stride=(1, 1), pad=(1, 1), pool_type=pool, name=('%s_pool_%s_pool' % (pool, name)))
- cproj = ConvFactory(data=pooling, num_filter=proj, kernel=(1, 1), name=('%s_proj' % name))
- # concat
- concat = mx.symbol.Concat(*[c1x1, c3x3, cd3x3, cproj], name='ch_concat_%s_chconcat' % name)
- return concat
-
-def InceptionFactoryB(data, num_3x3red, num_3x3, num_d3x3red, num_d3x3, name):
- # 3x3 reduce + 3x3
- c3x3r = ConvFactory(data=data, num_filter=num_3x3red, kernel=(1, 1), name=('%s_3x3' % name), suffix='_reduce')
- c3x3 = ConvFactory(data=c3x3r, num_filter=num_3x3, kernel=(3, 3), pad=(1, 1), stride=(2, 2), name=('%s_3x3' % name))
- # double 3x3 reduce + double 3x3
- cd3x3r = ConvFactory(data=data, num_filter=num_d3x3red, kernel=(1, 1), name=('%s_double_3x3' % name), suffix='_reduce')
- cd3x3 = ConvFactory(data=cd3x3r, num_filter=num_d3x3, kernel=(3, 3), pad=(1, 1), stride=(1, 1), name=('%s_double_3x3_0' % name))
- cd3x3 = ConvFactory(data=cd3x3, num_filter=num_d3x3, kernel=(3, 3), pad=(1, 1), stride=(2, 2), name=('%s_double_3x3_1' % name))
- # pool + proj
- pooling = mx.symbol.Pooling(data=data, kernel=(3, 3), stride=(2, 2), pool_type="max", name=('max_pool_%s_pool' % name))
- # concat
- concat = mx.symbol.Concat(*[c3x3, cd3x3, pooling], name='ch_concat_%s_chconcat' % name)
- return concat
-
-def inception(nhidden, grad_scale):
- # data
- data = mx.symbol.Variable(name="data")
- # stage 1
- conv1 = ConvFactory(data=data, num_filter=64, kernel=(7, 7), stride=(2, 2), pad=(3, 3), name='conv1')
- pool1 = mx.symbol.Pooling(data=conv1, kernel=(3, 3), stride=(2, 2), name='pool1', pool_type='max')
- # stage 2
- conv2red = ConvFactory(data=pool1, num_filter=64, kernel=(1, 1), stride=(1, 1), name='conv2red')
- conv2 = ConvFactory(data=conv2red, num_filter=192, kernel=(3, 3), stride=(1, 1), pad=(1, 1), name='conv2')
- pool2 = mx.symbol.Pooling(data=conv2, kernel=(3, 3), stride=(2, 2), name='pool2', pool_type='max')
- # stage 2
- in3a = InceptionFactoryA(pool2, 64, 64, 64, 64, 96, "avg", 32, '3a')
- in3b = InceptionFactoryA(in3a, 64, 64, 96, 64, 96, "avg", 64, '3b')
- in3c = InceptionFactoryB(in3b, 128, 160, 64, 96, '3c')
- # stage 3
- in4a = InceptionFactoryA(in3c, 224, 64, 96, 96, 128, "avg", 128, '4a')
- in4b = InceptionFactoryA(in4a, 192, 96, 128, 96, 128, "avg", 128, '4b')
- in4c = InceptionFactoryA(in4b, 160, 128, 160, 128, 160, "avg", 128, '4c')
- in4d = InceptionFactoryA(in4c, 96, 128, 192, 160, 192, "avg", 128, '4d')
- in4e = InceptionFactoryB(in4d, 128, 192, 192, 256, '4e')
- # stage 4
- in5a = InceptionFactoryA(in4e, 352, 192, 320, 160, 224, "avg", 128, '5a')
- in5b = InceptionFactoryA(in5a, 352, 192, 320, 192, 224, "max", 128, '5b')
- # global avg pooling
- avg = mx.symbol.Pooling(data=in5b, kernel=(7, 7), stride=(1, 1), name="global_pool", pool_type='avg')
- # linear classifier
- flatten = mx.symbol.Flatten(data=avg, name='flatten')
- fc1 = mx.symbol.FullyConnected(data=flatten, num_hidden=nhidden, name='fc1')
- softmax = mx.symbol.SoftmaxOutput(data=fc1, name='softmax')
- return softmax
-
-softmax = inception(1000, 1.0)
-
-batch_size = 128
-num_gpu = 4
-gpus = [mx.gpu(i) for i in range(num_gpu)]
-input_shape = (3, 224, 224)
-softmax = inception(1000, 1.0)
-
-train, val = ilsvrc12_iterator(batch_size=batch_size, input_shape=(3,224,224))
-
-model_prefix = "model/Inception"
-num_round = 40
-
-
-model = mx.model.FeedForward(ctx=gpus, symbol=softmax, num_epoch=num_round,
- learning_rate=0.05, momentum=0.9, wd=0.00001)
-
-model.fit(X=train, eval_data=val,
- eval_metric="acc",
- batch_end_callback=mx.callback.Speedometer(batch_size))
diff --git a/example/mnist/data.py b/example/mnist/data.py
deleted file mode 100644
--- a/example/mnist/data.py
+++ /dev/null
@@ -1,32 +0,0 @@
-# pylint: skip-file
-""" data iterator for mnist """
-import sys
-import os
-# code to automatically download dataset
-curr_path = os.path.dirname(os.path.abspath(os.path.expanduser(__file__)))
-sys.path.append(os.path.join(curr_path, "../../tests/python/common"))
-import get_data
-import mxnet as mx
-
-def mnist_iterator(batch_size, input_shape):
- """return train and val iterators for mnist"""
- # download data
- get_data.GetMNIST_ubyte()
- flat = False if len(input_shape) == 3 else True
-
- train_dataiter = mx.io.MNISTIter(
- image="data/train-images-idx3-ubyte",
- label="data/train-labels-idx1-ubyte",
- input_shape=input_shape,
- batch_size=batch_size,
- shuffle=True,
- flat=flat)
-
- val_dataiter = mx.io.MNISTIter(
- image="data/t10k-images-idx3-ubyte",
- label="data/t10k-labels-idx1-ubyte",
- input_shape=input_shape,
- batch_size=batch_size,
- flat=flat)
-
- return (train_dataiter, val_dataiter)
diff --git a/example/mnist/lenet.py b/example/mnist/lenet.py
deleted file mode 100644
--- a/example/mnist/lenet.py
+++ /dev/null
@@ -1,39 +0,0 @@
-# pylint: skip-file
-from data import mnist_iterator
-import mxnet as mx
-import logging
-
-## define lenet
-# input
-data = mx.symbol.Variable('data')
-# first conv
-conv1 = mx.symbol.Convolution(data=data, kernel=(5,5), num_filter=20)
-tanh1 = mx.symbol.Activation(data=conv1, act_type="tanh")
-pool1 = mx.symbol.Pooling(data=tanh1, pool_type="max",
- kernel=(2,2), stride=(2,2))
-# second conv
-conv2 = mx.symbol.Convolution(data=pool1, kernel=(5,5), num_filter=50)
-tanh2 = mx.symbol.Activation(data=conv2, act_type="tanh")
-pool2 = mx.symbol.Pooling(data=tanh2, pool_type="max",
- kernel=(2,2), stride=(2,2))
-# first fullc
-flatten = mx.symbol.Flatten(data=pool2)
-fc1 = mx.symbol.FullyConnected(data=flatten, num_hidden=500)
-tanh3 = mx.symbol.Activation(data=fc1, act_type="tanh")
-# second fullc
-fc2 = mx.symbol.FullyConnected(data=tanh3, num_hidden=10)
-# loss
-lenet = mx.symbol.SoftmaxOutput(data=fc2, name='softmax')
-
-## data
-train, val = mnist_iterator(batch_size=100, input_shape=(1,28,28))
-
-## train
-logging.basicConfig(level=logging.DEBUG)
-# dev = [mx.gpu(i) for i in range(2)]
-dev = mx.gpu()
-model = mx.model.FeedForward(
- ctx = dev, symbol = lenet, num_epoch = 20,
- learning_rate = 0.05, momentum = 0.9, wd = 0.00001)
-model.fit(X=train, eval_data=val,
- batch_end_callback=mx.callback.Speedometer(100))
diff --git a/example/mnist/mlp.py b/example/mnist/mlp.py
deleted file mode 100644
--- a/example/mnist/mlp.py
+++ /dev/null
@@ -1,30 +0,0 @@
-# pylint: skip-file
-from data import mnist_iterator
-import mxnet as mx
-import logging
-
-# define mlp
-
-data = mx.symbol.Variable('data')
-fc1 = mx.symbol.FullyConnected(data = data, name='fc1', num_hidden=128)
-act1 = mx.symbol.Activation(data = fc1, name='relu1', act_type="relu")
-fc2 = mx.symbol.FullyConnected(data = act1, name = 'fc2', num_hidden = 64)
-act2 = mx.symbol.Activation(data = fc2, name='relu2', act_type="relu")
-fc3 = mx.symbol.FullyConnected(data = act2, name='fc3', num_hidden=10)
-mlp = mx.symbol.SoftmaxOutput(data = fc3, name = 'softmax')
-
-# data
-
-train, val = mnist_iterator(batch_size=100, input_shape = (784,))
-
-# train
-
-logging.basicConfig(level=logging.DEBUG)
-
-model = mx.model.FeedForward(
- ctx = mx.cpu(), symbol = mlp, num_epoch = 20,
- learning_rate = 0.1, momentum = 0.9, wd = 0.00001)
-
-model.fit(train, eval_data=val)
-
-probs = model.predict(val)
diff --git a/example/mnist/mlp_numpy.py b/example/mnist/mlp_numpy.py
deleted file mode 100644
--- a/example/mnist/mlp_numpy.py
+++ /dev/null
@@ -1,52 +0,0 @@
-# pylint: skip-file
-import mxnet as mx
-import logging
-
-
-# define mlp
-
-data = mx.symbol.Variable('data')
-fc1 = mx.symbol.FullyConnected(data = data, name='fc1', num_hidden=128)
-act1 = mx.symbol.Activation(data = fc1, name='relu1', act_type="relu")
-fc2 = mx.symbol.FullyConnected(data = act1, name = 'fc2', num_hidden = 64)
-act2 = mx.symbol.Activation(data = fc2, name='relu2', act_type="relu")
-fc3 = mx.symbol.FullyConnected(data = act2, name='fc3', num_hidden=10)
-mlp = mx.symbol.SoftmaxOutput(data = fc3, name = 'softmax')
-
-# data
-
-from sklearn.datasets import fetch_mldata
-from sklearn.utils import shuffle
-mnist = fetch_mldata('MNIST original', data_home="./data")
-# shuffle data
-X, y = shuffle(mnist.data, mnist.target)
-# split dataset
-train_data = X[:50000, :].astype('float32')
-train_label = y[:50000]
-val_data = X[50000: 60000, :].astype('float32')
-val_label = y[50000:60000]
-# Normalize data
-train_data[:] /= 256.0
-val_data[:] /= 256.0
-
-
-batch_size = 100
-# or you can use numpy iterator, which make using model easier
-train_iter = mx.io.NDArrayIter(train_data, train_label, batch_size=batch_size, shuffle=True)
-val_iter = mx.io.NDArrayIter(val_data, val_label, batch_size=batch_size)
-
-
-logging.basicConfig(level=logging.DEBUG)
-
-model = mx.model.FeedForward(
- ctx = mx.cpu(), symbol = mlp, num_epoch = 20,
- learning_rate = 0.1, momentum = 0.9, wd = 0.00001)
-
-# train by using Numpy ndarray direcly
-model.fit(X=train_data, y=train_label)
-
-# train by using Numpy Iterator
-#model.fit(train_iter, eval_data=val_iter)
-
-probs = model.predict(val_data)
-print(probs.shape)
diff --git a/python/mxnet/__init__.py b/python/mxnet/__init__.py
--- a/python/mxnet/__init__.py
+++ b/python/mxnet/__init__.py
@@ -28,7 +28,8 @@
# use viz as short for mx.ndarray
from . import visualization as viz
from . import callback
-from . import misc
+# from . import misc
+from . import lr_scheduler
# use mx.kv as short for kvstore
from . import kvstore as kv
from . import kvstore_server
diff --git a/python/mxnet/kvstore_server.py b/python/mxnet/kvstore_server.py
--- a/python/mxnet/kvstore_server.py
+++ b/python/mxnet/kvstore_server.py
@@ -4,6 +4,7 @@
import ctypes
import sys
import pickle
+import logging
from .base import _LIB, check_call
from .kvstore import create
@@ -18,11 +19,19 @@ def __init__(self, kvstore):
"""
self.kvstore = kvstore
self.handle = kvstore.handle
-
+ self.init_logginig = False
def _controller(self):
"""return the server controller"""
def server_controller(cmd_id, cmd_body):
"""server controler"""
+ if self.init_logginig == False:
+ # the reason put the codes here is because we cannot get
+ # kvstore.rank earlier
+ head = '%(asctime)-15s Server[' + str(
+ self.kvstore.rank) + '] %(message)s'
+ logging.basicConfig(level=logging.DEBUG, format=head)
+ self.init_logginig = True
+
if cmd_id == 0:
try:
optimizer = pickle.loads(cmd_body)
diff --git a/python/mxnet/lr_scheduler.py b/python/mxnet/lr_scheduler.py
new file mode 100644
--- /dev/null
+++ b/python/mxnet/lr_scheduler.py
@@ -0,0 +1,76 @@
+"""
+learning rate scheduler, which adaptive changes the learning rate based on the
+progress
+"""
+import logging
+
+class LRScheduler(object):
+ """Base class of a learning rate scheduler"""
+ def __init__(self):
+ """
+ base_lr : float
+ the initial learning rate
+ """
+ self.base_lr = 0.01
+
+ def __call__(self, num_update):
+ """
+ Call to schedule current learning rate
+
+ The training progress is presented by `num_update`, which can be roughly
+ viewed as the number of minibatches executed so far. Its value is
+ non-decreasing, and increases at most by one.
+
+ The exact value is the upper bound of the number of updates applied to
+ a weight/index
+
+ See more details in https://github.com/dmlc/mxnet/issues/625
+
+ Parameters
+ ----------
+ num_update: int
+ the maximal number of updates applied to a weight.
+ """
+ raise NotImplementedError("must override this")
+
+class FactorScheduler(LRScheduler):
+ """Reduce learning rate in factor
+
+ Assume the weight has been updated by n times, then the learning rate will
+ be
+
+ base_lr * factor^(floor(n/step))
+
+ Parameters
+ ----------
+ step: int
+ schedule learning rate after n updates
+ factor: float
+ the factor for reducing the learning rate
+ """
+ def __init__(self, step, factor=1):
+ super(FactorScheduler, self).__init__()
+ if step < 1:
+ raise ValueError("Schedule step must be greater or equal than 1 round")
+ if factor >= 1.0:
+ raise ValueError("Factor must be less than 1 to make lr reduce")
+ self.step = step
+ self.factor = factor
+ self.count = 0
+
+ def __call__(self, num_update):
+ """
+ Call to schedule current learning rate
+
+ Parameters
+ ----------
+ num_update: int
+ the maximal number of updates applied to a weight.
+ """
+
+ if num_update > self.count + self.step:
+ self.count += self.step
+ self.base_lr *= self.factor
+ logging.info("Update[%d]: Change learning rate to %.5f",
+ num_update, self.base_lr)
+ return self.base_lr
diff --git a/python/mxnet/model.py b/python/mxnet/model.py
--- a/python/mxnet/model.py
+++ b/python/mxnet/model.py
@@ -275,9 +275,6 @@ def _train_multi_device(symbol, ctx, arg_names, param_names, aux_names,
# Now start training
for epoch in range(begin_epoch, end_epoch):
- # init optmizer
- optimizer.begin_epoch(epoch)
-
# Training phase
tic = time.time()
eval_metric.reset()
diff --git a/python/mxnet/optimizer.py b/python/mxnet/optimizer.py
--- a/python/mxnet/optimizer.py
+++ b/python/mxnet/optimizer.py
@@ -49,19 +49,10 @@ def create_optimizer(name, rescale_grad=1, **kwargs):
raise ValueError('Cannot find optimizer %s' % name)
def __init__(self, rescale_grad=1):
- self.epoch = 0
self.rescale_grad = rescale_grad
self.lr_scale = {}
-
- def begin_epoch(self, epoch):
- """Function called to notify beginning of epoch.
-
- Parameters
- ----------
- epoch : int
- The epoch number.
- """
- self.epoch = epoch
+ self.num_update = 0
+ self._index_update_count = {}
def create_state(self, index, weight):
"""Create additional optimizer state such as momentum.
@@ -80,6 +71,20 @@ def set_lr_scale(self, args_lrscale):
"""
self.lr_scale = args_lrscale.copy()
+ def _update_count(self, index):
+ """
+ update num_update
+
+ Parameters:
+ index : int
+ The index will be updated
+ """
+ if index not in self._index_update_count:
+ self._index_update_count[index] = 0
+ self._index_update_count[index] += 1
+ self.num_update = max(self._index_update_count[index], self.num_update)
+
+
#convenience wrapper for Optimizer.Register
register = Optimizer.register
@@ -115,7 +120,6 @@ def __init__(self, learning_rate=0.01, momentum=0.0,
self.lr_scheduler = lr_scheduler
if lr_scheduler != None:
self.lr_scheduler.base_lr = learning_rate
- self.momentums = {}
def create_state(self, index, weight):
"""Create additional optimizer state such as momentum.
@@ -152,7 +156,8 @@ def update(self, index, weight, grad, state):
assert(isinstance(weight, NDArray))
assert(isinstance(grad, NDArray))
if self.lr_scheduler != None:
- lr = self.lr_scheduler(self.epoch)
+ lr = self.lr_scheduler(self.num_update)
+ self._update_count(index)
else:
lr = self.lr
lr *= self.lr_scale.get(index, 1.0)
diff --git a/tools/launch.py b/tools/launch.py
new file mode 100755
--- /dev/null
+++ b/tools/launch.py
@@ -0,0 +1,54 @@
+#!/usr/bin/env python
+"""
+Launch a distributed job
+"""
+import argparse
+import os, sys
+import signal
+import logging
+
+def main():
+ parser = argparse.ArgumentParser(description='Launch a distributed job')
+ parser.add_argument('-n', '--num-workers', required=True, type=int,
+ help = 'number of worker nodes to be launched')
+ parser.add_argument('-s', '--num-servers', type=int,
+ help = 'number of server nodes to be launched, \
+ in default it is equal to NUM_WORKERS')
+ parser.add_argument('-H', '--hostfile', type=str,
+ help = 'the hostfile of slave machines which will run the job')
+ parser.add_argument('--sync-dir', type=str,
+ help = 'if specificed, it will sync the current \
+ directory into slave machines\'s SYNC_DIR')
+ parser.add_argument('--launcher', type=str, default='ssh',
+ choices = ['ssh', 'mpirun'],
+ help = 'the lancher to use')
+ parser.add_argument('command', nargs='+',
+ help = 'command for launching the program')
+ args, unknown = parser.parse_known_args()
+
+ if args.num_servers is None:
+ args.num_servers = args.num_workers
+
+ curr_path = os.path.abspath(os.path.dirname(__file__))
+ sys.path.append(os.path.join(curr_path, "../ps-lite/tracker"))
+
+ if args.hostfile is None:
+ from dmlc_local import LocalLauncher
+ launcher = LocalLauncher(args, unknown)
+ elif args.launcher == 'ssh':
+ from dmlc_ssh import SSHLauncher
+ launcher = SSHLauncher(args, unknown)
+ else:
+ return
+
+ launcher.run()
+
+def signal_handler(signal, frame):
+ logging.info('Stop luancher')
+ sys.exit(0)
+
+if __name__ == '__main__':
+
+ signal.signal(signal.SIGINT, signal_handler)
+
+ main()
|
improve the sematic of lr_scheduler
the current lr_scheduler is based on epoch, it is not flexible, and it has problem with the distributed training, where the optimizer runs on the server nodes. a server node neither can get the current epoch number, nor there is a clear definition of epoch for async sgd.
i suggest to turn to a `batch` based lr scheduler implemented in python/mxnet, while provide an epoch like interface to users, say
```
new_lr = .8 * old_lr for every .5 epoch
```
we then set a correct batch value for lr_scheduler.
but for a correct implementation, `batch` is not a good term. it means reading a data batch, while not necessary a weight updating:
1. we can sum the gradient for several batches and then apply one weight updating. often useful if the model is huge, while gpu mem is little, especially on the distributed setting.
2. on distributed training, assume n machines and b batches, there are n*b weight updatings for async_sgd while only b updatings for sync_sgd
so i suggest to pass `num_updates` to the lr_scheduler. but there there are still two issues
1. we may update layer 1 (conv layer) for each batch, while layer 2 (big fullc layer) for every 2 batches.
2. assume there are two workers, on async_sgd, the possible updating order on the server node could be
```
layer_1_worker_1, layer_2_worker_2, layer_1_worker_2, layer_2_worker_1
```
to simplify the thing, we pass "the maximal number of updates applied to a weight" to the lr_scheduler, for the above example, the values are
```
1, 1, 2, 2
```
|
Can we put learning rate in kvstore and synchronize it?
currently it only syncs once when call `kvstore.set_optimizer(optm)`. syncing later is nontrivial in a full async updating, some nodes may fast while others may slow, especially, any worker may die at any moment if running on the ec2 spot instance
This will result in different learning rate for each layer right?
With async sgd, each thread push their own grad update on their own weight, I think they should each increment num_updates by one.
no if set num_update = "the maximal number of updates applied to a weight", so every layer share the same upper bound. we can support defining a scheduler for each layer, but it seems over-engineering.
|
2015-11-19T03:50:25Z
|
[]
|
[]
| |||
apache/mxnet
|
apache__mxnet-724
|
c5c8a20556480fb716f003aecb26b548f131c4d3
|
diff --git a/example/image-classification/train_model.py b/example/image-classification/train_model.py
--- a/example/image-classification/train_model.py
+++ b/example/image-classification/train_model.py
@@ -43,12 +43,14 @@ def fit(args, network, data_loader):
step = max(int(epoch_size * args.lr_factor_epoch), 1),
factor = args.lr_factor)
+ if 'clip_gradient' in args and args.clip_gradient is not None:
+ model_args['clip_gradient'] = args.clip_gradient
+
model = mx.model.FeedForward(
ctx = devs,
symbol = network,
num_epoch = args.num_epochs,
learning_rate = args.lr,
- clip_gradient = args.clip_gradient,
momentum = 0.9,
wd = 0.00001,
initializer = mx.init.Xavier(factor_type="in", magnitude=2.34),
|
[example] train_model.py AttributeError: 'Namespace' object has no attribute 'clip_gradient'
# ../../tools/launch.py -n 2 python train_mnist.py
2015-11-26 10:03:21,152 Node[0] start with arguments Namespace(batch_size=128, data_dir='mnist/', gpus=None, kv_store='local', load_epoch=None, lr=0.1, lr_factor=1, lr_factor_epoch=1, model_prefix=None, network='mlp', num_epochs=10, num_examples=60000)
2015-11-26 10:03:21,153 Node[0] start with arguments Namespace(batch_size=128, data_dir='mnist/', gpus=None, kv_store='local', load_epoch=None, lr=0.1, lr_factor=1, lr_factor_epoch=1, model_prefix=None, network='mlp', num_epochs=10, num_examples=60000)
[10:03:22] src/io/iter_mnist.cc:91: MNISTIter: load 60000 images, shuffle=1, shape=(128,784)
[10:03:22] src/io/iter_mnist.cc:91: MNISTIter: load 60000 images, shuffle=1, shape=(128,784)
[10:03:22] src/io/iter_mnist.cc:91: MNISTIter: load 10000 images, shuffle=1, shape=(128,784)
Traceback (most recent call last):
File "train_mnist.py", line 122, in <module>
train_model.fit(args, net, get_iterator)
File "/search/data/user/qhj/github/mxnet/example/image-classification/train_model.py", line 51, in fit
clip_gradient = args.clip_gradient,
AttributeError: 'Namespace' object has no attribute 'clip_gradient'
[10:03:22] src/io/iter_mnist.cc:91: MNISTIter: load 10000 images, shuffle=1, shape=(128,784)
Traceback (most recent call last):
File "train_mnist.py", line 122, in <module>
train_model.fit(args, net, get_iterator)
File "/search/data/user/qhj/github/mxnet/example/image-classification/train_model.py", line 51, in fit
clip_gradient = args.clip_gradient,
AttributeError: 'Namespace' object has no attribute 'clip_gradient'
2015-11-26 10:03:22,423 DEBUG Thread %d exit with 0
|
env:
1. g++ 4.8
2. cuda 7.0
execution:
1. git pull
2. git submodule update
|
2015-11-26T02:30:22Z
|
[]
|
[]
| |||
tiangolo/fastapi
|
tiangolo__fastapi-1122
|
025b38df40a18f94322fab445aee13024fe783da
|
diff --git a/fastapi/routing.py b/fastapi/routing.py
--- a/fastapi/routing.py
+++ b/fastapi/routing.py
@@ -480,7 +480,12 @@ def decorator(func: Callable) -> Callable:
def add_api_websocket_route(
self, path: str, endpoint: Callable, name: str = None
) -> None:
- route = APIWebSocketRoute(path, endpoint=endpoint, name=name)
+ route = APIWebSocketRoute(
+ path,
+ endpoint=endpoint,
+ name=name,
+ dependency_overrides_provider=self.dependency_overrides_provider,
+ )
self.routes.append(route)
def websocket(self, path: str, name: str = None) -> Callable:
|
Dependency override websocket broken
### Describe the bug
Dependency override does not work for websockets.
The function `add_api_websocket_route` does not add `dependency_overrides_provider` to `APIWebSocketRoute`.
### To Reproduce
Create a simple app with websockets and test it with override.
### Expected behavior
The overrides should be taken into account, but the test uses the original dependency.
### Environment
- OS: Windows
- FastAPI version: 0.49.0
- Python version: 3.6.8
|
`APIRouter.add_api_websocket_route` should be modified to construct the `APIWebSocketRoute` like this:
```
route = APIWebSocketRoute(
path,
endpoint=endpoint,
name=name,
dependency_overrides_provider=self.dependency_overrides_provider,
)
```
|
2020-03-16T17:12:49Z
|
[]
|
[]
| |||
tiangolo/fastapi
|
tiangolo__fastapi-1524
|
8cfe254400a92c1184c354a92541b401932d24a3
|
diff --git a/fastapi/encoders.py b/fastapi/encoders.py
--- a/fastapi/encoders.py
+++ b/fastapi/encoders.py
@@ -71,6 +71,8 @@ def jsonable_encoder(
by_alias=by_alias,
skip_defaults=bool(exclude_unset or skip_defaults),
)
+ if "__root__" in obj_dict:
+ obj_dict = obj_dict["__root__"]
return jsonable_encoder(
obj_dict,
exclude_none=exclude_none,
|
Pydantic __root__ model - incorrect handling
### Describe the bug
https://pydantic-docs.helpmanual.io/usage/models/#custom-root-types
Pydantic allows to create models with only `__root__` field. In such scenario the model behaves as transparent wrapper for this single type.
When such model is used in response (request also?) fastapi does not treat it correctly and renders it as object with `__root__` field.
Object is treated correctly by pydantic itself.
### To Reproduce
```
from typing import List
from fastapi import FastAPI
from pydantic.main import BaseModel
app = FastAPI()
class RootTestClass(BaseModel):
__root__: List[str]
@app.get("/")
async def root():
response = RootTestClass(__root__=['a', 'b', 'c'])
print(response.json()) # ["a", "b", "c"] so it's OK
print(RootTestClass.schema()) # {'title': 'RootTestClass', 'type': 'array', 'items': {'type': 'string'}} this is also OK
return response # Wrong value in http response
```
### Expected behavior
The response should be:
```
["a", "b", "c"]
```
but at the moment is:
```
{"__root__":["a","b","c"]}
```
### Screenshots
N/A
### Environment
- OS: Linux
- FastAPI Version: 0.47.1
- Python version: Python 3.7.5
### Additional context
N/A
|
If anyone wants to submit a PR to fix this I'd be happy to review it. (I think it's worth handling this properly.)
For now created issue for `pydantic` (https://github.com/samuelcolvin/pydantic/issues/1193) as it looks like it is more broken there than here.
I wouldn't recommend using `__root__` in FastAPI. `__root__` allows using other types in Pydantic apart from things with key values, like lists.
But in FastAPI, everywhere you can use a Pydantic model you can also use what would be the (arguably?) most "Pythonic" way, using `typing`. So you can do `List[SomeModel]`. Instead of having to create a `SomeModelWrapper` that users `__root__`.
`__root__` is valid and useful in Pydantic standalone as there's no other way to achieve what it does. But in FastAPI the preferred way is to use standard types that have Pydantic models as type parameters (the thing inside `List[]`).
Given that, as it's still valid Pydantic, I would be happy to support it if someone wants to add a PR with it (as @dmontagu says).
@tiangolo I understands that `response_model=Dict[str, str]` instead of a wrapped model with `__root__` is viable, however is there a way to include an `example`, perhaps similar to the `schema_extra` section that can be attach to `response_model` ?
@tiangolo Supporting pydantic root types would allow a single validator (defined in the wrapper class) to be run on all objects of a certain type- otherwise, the validator must be specified in each object that has a child of that type (as far as I can tell- I'm new to fastAPI, please let me know if there's a better way).
as @sidmani also mentioned, I'm running into wanting the ability to be able to say:
```python
pydantic_list_as_root.dict()
```
and the above output a dict. Rather than having to manually loop through my `List[pydantic_entity]` and call `dict()` on each one.
However, I do appreciate what @tiangolo is trying to achieve by keeping things as pythonic as possible, but I would imagine that many if not all FastAPI implementations heavily rely on Pydantic for defining schemas. Therefore, I think it would be a great idea to embrace all/most of its capabilities.
Yeah, I would be happy to support it if someone wants to add a PR with it.
|
2020-06-06T03:48:18Z
|
[]
|
[]
| |||
tiangolo/fastapi
|
tiangolo__fastapi-1534
|
543ef7753aff639ad3aed7c153e42f719e361d38
|
diff --git a/fastapi/routing.py b/fastapi/routing.py
--- a/fastapi/routing.py
+++ b/fastapi/routing.py
@@ -1,4 +1,5 @@
import asyncio
+import enum
import inspect
from typing import Any, Callable, Dict, List, Optional, Sequence, Set, Type, Union
@@ -295,6 +296,9 @@ def __init__(
dependency_overrides_provider: Any = None,
callbacks: Optional[List["APIRoute"]] = None,
) -> None:
+ # normalise enums e.g. http.HTTPStatus
+ if isinstance(status_code, enum.IntEnum):
+ status_code = int(status_code)
self.path = path
self.endpoint = endpoint
self.name = get_name(endpoint) if name is None else name
|
Support HTTPStatus
### Is your feature request related to a problem
We typically use [HTTPStatus](https://docs.python.org/3/library/http.html#http.HTTPStatus) in our code. When used as a value for `status_code` in path functions this results in the string literal in the documentation and when "trying" the response is shown as undocumented because it doesn't match the actual response code
### The solution you would like
I want to use HTTPStatus enum values and have it work exactly like the `starlette.status` pseudo enum values
|
For now you can create your own HTTPStatus class (you can't subclass Enums) and add a `__str__` method to convert it to the string representation of the actual status code, for example:
```python
class MyHTTPStatus(IntEnum):
OK = 200
...
def __str__(self):
return str(int(self))
```
or use `int(HTTPStatus)` in the endpoint decorator.
Thanks for the help @retnikt ! :bow:
Yeah, @hmvp you can probably use that for your use case.
I am sorry to disagree...
It is indeed quite easy to workaround this. For example by just importing status from starlette and using that in the decorators.
However `HTTPStatus` is part of the standard lib and since we use it extensively in the rest of our code I would like to use that instead of the starlette status. The second reason is that I expected it to work (since its part of stdlib) and it somewhat did but gave unexpected results.. This is a papercut/pitfall/surpising behavior of fastapi and given the high standard of the rest of the library this should not be there
I am not sure if @retnikt understood what I am trying to do here.. Adding another class is a weird suggestion given that both `HTTPStatus` and `starlette.status` already exist
The ASGI spec uses `int`s for status codes, not enums. That's why it takes `int`s.
Also, @retnikt has been helping a lot here, answering a lot of questions, trying to help others like you, for free. Just out of being a cool person. Please try to be more respectful to the community that is here trying to help.
If you really want to use `HTTPStatus` enums you can use the `int` value, it's quite simple, e.g. `HTTPStatus.OK.value`.
I did not want to be disrespectful to @retnikt and I value his contribution for other people that might have a similar but different issue. I just don't see how it is relevant for my issue. I am sorry if my wording was to strong... As a non-native English speaker I might miss some subtleties.
With regards to the ASGI spec, I did not know that, but I also was not aware that the `status_code` code argument followed the ASGI spec in that regard, especially since putting in a HTTPStatus code enum value just works, but will give a weird result in the docs. It should not be to difficult to add some code along the line of:
```
if isinstance(HTTPStatus, status_code):
status_code = status_code.value
```
or just `status_code = int(status_code)` to the path functions, which would solve a papercut and would still be valid ASGI. Otherwise I would expect the path functions to be noisy about wrong input..
On the other hand, this is indeed not a big issue and if you don't want to change anything, thats fine with me. I just wanted to signal that this is a [papercut](https://en.wikipedia.org/wiki/Paper_cut_bug) in fastapi
Cool, thanks!
Yeah, I would accept a PR checking if a status code is an enum to get its value first. :nerd_face: :heavy_check_mark:
Working on it!
|
2020-06-08T12:29:39Z
|
[]
|
[]
| |||
tiangolo/fastapi
|
tiangolo__fastapi-1540
|
543ef7753aff639ad3aed7c153e42f719e361d38
|
diff --git a/docs_src/websockets/tutorial002.py b/docs_src/websockets/tutorial002.py
--- a/docs_src/websockets/tutorial002.py
+++ b/docs_src/websockets/tutorial002.py
@@ -1,4 +1,4 @@
-from fastapi import Cookie, Depends, FastAPI, Header, WebSocket, status
+from fastapi import Cookie, Depends, FastAPI, Query, WebSocket, status
from fastapi.responses import HTMLResponse
app = FastAPI()
@@ -13,8 +13,9 @@
<h1>WebSocket Chat</h1>
<form action="" onsubmit="sendMessage(event)">
<label>Item ID: <input type="text" id="itemId" autocomplete="off" value="foo"/></label>
+ <label>Token: <input type="text" id="token" autocomplete="off" value="some-key-token"/></label>
<button onclick="connect(event)">Connect</button>
- <br>
+ <hr>
<label>Message: <input type="text" id="messageText" autocomplete="off"/></label>
<button>Send</button>
</form>
@@ -23,8 +24,9 @@
<script>
var ws = null;
function connect(event) {
- var input = document.getElementById("itemId")
- ws = new WebSocket("ws://localhost:8000/items/" + input.value + "/ws");
+ var itemId = document.getElementById("itemId")
+ var token = document.getElementById("token")
+ ws = new WebSocket("ws://localhost:8000/items/" + itemId.value + "/ws?token=" + token.value);
ws.onmessage = function(event) {
var messages = document.getElementById('messages')
var message = document.createElement('li')
@@ -32,6 +34,7 @@
message.appendChild(content)
messages.appendChild(message)
};
+ event.preventDefault()
}
function sendMessage(event) {
var input = document.getElementById("messageText")
@@ -50,26 +53,26 @@ async def get():
return HTMLResponse(html)
-async def get_cookie_or_client(
- websocket: WebSocket, session: str = Cookie(None), x_client: str = Header(None)
+async def get_cookie_or_token(
+ websocket: WebSocket, session: str = Cookie(None), token: str = Query(None)
):
- if session is None and x_client is None:
+ if session is None and token is None:
await websocket.close(code=status.WS_1008_POLICY_VIOLATION)
- return session or x_client
+ return session or token
@app.websocket("/items/{item_id}/ws")
async def websocket_endpoint(
websocket: WebSocket,
- item_id: int,
- q: str = None,
- cookie_or_client: str = Depends(get_cookie_or_client),
+ item_id: str,
+ q: int = None,
+ cookie_or_token: str = Depends(get_cookie_or_token),
):
await websocket.accept()
while True:
data = await websocket.receive_text()
await websocket.send_text(
- f"Session Cookie or X-Client Header value is: {cookie_or_client}"
+ f"Session cookie or query token value is: {cookie_or_token}"
)
if q is not None:
await websocket.send_text(f"Query parameter q is: {q}")
|
Tutorial websocket doc example
**Describe the bug**
Hi,
On the docs of websocket the last example doesn't work.
**To Reproduce**
Steps to reproduce the behavior:
1. Create a file main.py with the last example on the bottom of the file
>https://fastapi.tiangolo.com/tutorial/websockets/#create-a-websocket
```python
from fastapi import Cookie, Depends, FastAPI, Header
from starlette.responses import HTMLResponse
from starlette.status import WS_1008_POLICY_VIOLATION
from starlette.websockets import WebSocket
app = FastAPI()
html = """
<!DOCTYPE html>
<html>
<head>
<title>Chat</title>
</head>
<body>
<h1>WebSocket Chat</h1>
<form action="" onsubmit="sendMessage(event)">
<label>Item ID: <input type="text" id="itemId" autocomplete="off" value="foo"/></label>
<button onclick="connect(event)">Connect</button>
<br>
<label>Message: <input type="text" id="messageText" autocomplete="off"/></label>
<button>Send</button>
</form>
<ul id='messages'>
</ul>
<script>
var ws = null;
function connect(event) {
var input = document.getElementById("itemId")
ws = new WebSocket("ws://localhost:8000/items/" + input.value + "/ws");
ws.onmessage = function(event) {
var messages = document.getElementById('messages')
var message = document.createElement('li')
var content = document.createTextNode(event.data)
message.appendChild(content)
messages.appendChild(message)
};
}
function sendMessage(event) {
var input = document.getElementById("messageText")
ws.send(input.value)
input.value = ''
event.preventDefault()
}
</script>
</body>
</html>
"""
@app.get("/")
async def get():
return HTMLResponse(html)
async def get_cookie_or_client(
websocket: WebSocket, session: str = Cookie(None), x_client: str = Header(None)
):
if session is None and x_client is None:
await websocket.close(code=WS_1008_POLICY_VIOLATION)
return session or x_client
@app.websocket("/items/{item_id}/ws")
async def websocket_endpoint(
websocket: WebSocket,
item_id: int,
q: str = None,
cookie_or_client: str = Depends(get_cookie_or_client),
):
await websocket.accept()
while True:
data = await websocket.receive_text()
await websocket.send_text(
f"Session Cookie or X-Client Header value is: {cookie_or_client}"
)
if q is not None:
await websocket.send_text(f"Query parameter q is: {q}")
await websocket.send_text(f"Message text was: {data}, for item ID: {item_id}")
```
2. Run the application with the cmd:
```
uvicorn main:app --log-level debug --reload
```
3. Open the browser 127.0.0.01
- the first time i connect with ItemID foo , press the button connect
- send message hi with ItemID foo and press the button send.
it's look like the connect fail but the second ,but the send have return code 200
but nothing happen on the web side.

4. See error
```python
INFO: Uvicorn running on http://127.0.0.1:8000 (Press CTRL+C to quit)
INFO: Started reloader process [366952]
email-validator not installed, email fields will be treated as str.
To install, run: pip install email-validator
INFO: Started server process [366957]
INFO: Waiting for application startup.
DEBUG: None - ASGI [1] Started
DEBUG: None - ASGI [1] Sent {'type': 'lifespan.startup'}
DEBUG: None - ASGI [1] Received {'type': 'lifespan.startup.complete'}
DEBUG: ('127.0.0.1', 50056) - Connected
DEBUG: server - state = CONNECTING
DEBUG: server - event = connection_made(<TCPTransport closed=False reading=True 0x1819178>)
DEBUG: ('127.0.0.1', 50056) - ASGI [2] Started
DEBUG: ('127.0.0.1', 50056) - ASGI [2] Received {'type': 'websocket.close', 'code': 1008}
INFO: ('127.0.0.1', 50056) - "WebSocket /items/foo/ws" 403
DEBUG: ('127.0.0.1', 50056) - ASGI [2] Raised exception
ERROR: Exception in ASGI application
Traceback (most recent call last):
File "/data/experiments/realtime_web_socket/lib/python3.7/site-packages/uvicorn/protocols/websockets/websockets_impl.py", line 147, in run_asgi
result = await self.app(self.scope, self.asgi_receive, self.asgi_send)
File "/data/experiments/realtime_web_socket/lib/python3.7/site-packages/uvicorn/middleware/message_logger.py", line 58, in __call__
raise exc from None
File "/data/experiments/realtime_web_socket/lib/python3.7/site-packages/uvicorn/middleware/message_logger.py", line 54, in __call__
await self.app(scope, inner_receive, inner_send)
File "/data/experiments/realtime_web_socket/lib/python3.7/site-packages/starlette/applications.py", line 133, in __call__
await self.error_middleware(scope, receive, send)
File "/data/experiments/realtime_web_socket/lib/python3.7/site-packages/starlette/middleware/errors.py", line 87, in __call__
await self.app(scope, receive, send)
File "/data/experiments/realtime_web_socket/lib/python3.7/site-packages/starlette/exceptions.py", line 49, in __call__
await self.app(scope, receive, send)
File "/data/experiments/realtime_web_socket/lib/python3.7/site-packages/starlette/routing.py", line 585, in __call__
await route(scope, receive, send)
File "/data/experiments/realtime_web_socket/lib/python3.7/site-packages/starlette/routing.py", line 265, in __call__
await self.app(scope, receive, send)
File "/data/experiments/realtime_web_socket/lib/python3.7/site-packages/starlette/routing.py", line 56, in app
await func(session)
File "/data/experiments/realtime_web_socket/lib/python3.7/site-packages/fastapi/routing.py", line 148, in app
await websocket.close(code=WS_1008_POLICY_VIOLATION)
File "/data/experiments/realtime_web_socket/lib/python3.7/site-packages/starlette/websockets.py", line 121, in close
await self.send({"type": "websocket.close", "code": code})
File "/data/experiments/realtime_web_socket/lib/python3.7/site-packages/starlette/websockets.py", line 70, in send
raise RuntimeError('Cannot call "send" once a close message has been sent.')
RuntimeError: Cannot call "send" once a close message has been sent.
DEBUG: server ! failing WebSocket connection in the CONNECTING state: 1006 [no reason]
DEBUG: ('127.0.0.1', 50058) - Connected
DEBUG: server x half-closing TCP connection
DEBUG: ('127.0.0.1', 50058) - ASGI [3] Started
DEBUG: ('127.0.0.1', 50058) - ASGI [3] Received {'type': 'http.response.start', 'status': 200, 'headers': '<...>'}
INFO: ('127.0.0.1', 50058) - "GET / HTTP/1.1" 200
DEBUG: ('127.0.0.1', 50058) - ASGI [3] Received {'type': 'http.response.body', 'body': '<1419 bytes>'}
DEBUG: ('127.0.0.1', 50058) - ASGI [3] Completed
DEBUG: server - event = eof_received()
DEBUG: server - event = connection_lost(None)
DEBUG: server - state = CLOSED
DEBUG: server x code = 1006, reason = [no reason]
DEBUG: ('127.0.0.1', 50058) - Disconnected
DEBUG: ('127.0.0.1', 50060) - Connected
DEBUG: ('127.0.0.1', 50060) - ASGI [4] Started
DEBUG: ('127.0.0.1', 50060) - ASGI [4] Received {'type': 'http.response.start', 'status': 200, 'headers': '<...>'}
INFO: ('127.0.0.1', 50060) - "GET / HTTP/1.1" 200
DEBUG: ('127.0.0.1', 50060) - ASGI [4] Received {'type': 'http.response.body', 'body': '<1419 bytes>'}
DEBUG: ('127.0.0.1', 50060) - ASGI [4] Completed
DEBUG: ('127.0.0.1', 50060) - Disconnected
```
**Expected behavior**
expected to appear the send bold message on the web page.
**Environment:**
- OS: centos 7
- FastAPI Version [e.g. 0.3.0], get it with: fastapi==0.31.0
```Python
import fastapi
print(fastapi.__version__)
0.31.0
```
- Python version, get it with:
```bash
python --version
Python 3.7.3
```
|
@BenjPy ,
Just add `event.preventDefault()` in the beginning of `connect` js function.
The problem here is when you are trying to make websocket connection, browser refreshes page and closes websocket connection.
So `connect` function should looks like this:
```
function connect(event) {
event.preventDefault()
var input = document.getElementById("itemId")
ws = new WebSocket("ws://localhost:8000/items/" + input.value + "/ws");
ws.onmessage = function(event) {
var messages = document.getElementById('messages')
var message = document.createElement('li')
var content = document.createTextNode(event.data)
message.appendChild(content)
messages.appendChild(message)
};
}
```
@alj06ka ,
still nothing appear , when added the line on the web page
it's look like the first time the Websocket fail to connect
see on below the js code
```js
html = """
<!DOCTYPE html>
<html>
<head>
<title>Chat</title>
</head>
<body>
<h1>WebSocket Chat</h1>
<form action="" onsubmit="sendMessage(event)">
<label>Item ID: <input type="text" id="itemId" autocomplete="off" value="foo"/></label>
<button onclick="connect(event)">Connect</button>
<br>
<label>Message: <input type="text" id="messageText" autocomplete="off"/></label>
<button>Send</button>
</form>
<ul id='messages'>
</ul>
<script>
var ws = null;
function connect(event) {
event.preventDefault()
var input = document.getElementById("itemId")
ws = new WebSocket("ws://127.0.0.1:8000/items/" + input.value + "/ws");
ws.onmessage = function(event) {
var messages = document.getElementById('messages')
var message = document.createElement('li')
var content = document.createTextNode(event.data)
message.appendChild(content)
messages.appendChild(message)
};
}
function sendMessage(event) {
var input = document.getElementById("messageText")
ws.send(input.value)
input.value = ''
event.preventDefault()
}
</script>
</body>
</html>
"""
```
### see the log on below
```bash
INFO: ('127.0.0.1', 59388) - "WebSocket /items/foo/ws" 403
DEBUG: ('127.0.0.1', 59388) - ASGI [13] Raised exception
ERROR: Exception in ASGI application
Traceback (most recent call last):
File "/data/experiments/realtime_web_socket/lib/python3.7/site-packages/uvicorn/protocols/websockets/websockets_impl.py ", line 147, in run_asgi
result = await self.app(self.scope, self.asgi_receive, self.asgi_send)
File "/data/experiments/realtime_web_socket/lib/python3.7/site-packages/uvicorn/middleware/message_logger.py", line 58, in __call__
raise exc from None
File "/data/experiments/realtime_web_socket/lib/python3.7/site-packages/uvicorn/middleware/message_logger.py", line 54, in __call__
await self.app(scope, inner_receive, inner_send)
File "/data/experiments/realtime_web_socket/lib/python3.7/site-packages/starlette/applications.py", line 133, in __call __
await self.error_middleware(scope, receive, send)
File "/data/experiments/realtime_web_socket/lib/python3.7/site-packages/starlette/middleware/errors.py", line 87, in __ call__
await self.app(scope, receive, send)
File "/data/experiments/realtime_web_socket/lib/python3.7/site-packages/starlette/exceptions.py", line 49, in __call__
await self.app(scope, receive, send)
File "/data/experiments/realtime_web_socket/lib/python3.7/site-packages/starlette/routing.py", line 585, in __call__
await route(scope, receive, send)
File "/data/experiments/realtime_web_socket/lib/python3.7/site-packages/starlette/routing.py", line 265, in __call__
await self.app(scope, receive, send)
File "/data/experiments/realtime_web_socket/lib/python3.7/site-packages/starlette/routing.py", line 56, in app
await func(session)
File "/data/experiments/realtime_web_socket/lib/python3.7/site-packages/fastapi/routing.py", line 148, in app
await websocket.close(code=WS_1008_POLICY_VIOLATION)
File "/data/experiments/realtime_web_socket/lib/python3.7/site-packages/starlette/websockets.py", line 121, in close
await self.send({"type": "websocket.close", "code": code})
File "/data/experiments/realtime_web_socket/lib/python3.7/site-packages/starlette/websockets.py", line 70, in send
raise RuntimeError('Cannot call "send" once a close message has been sent.')
RuntimeError: Cannot call "send" once a close message has been sent.
DEBUG: server ! failing WebSocket connection in the CONNECTING state: 1006 [no reason]
DEBUG: server x half-closing TCP connection
DEBUG: server - event = eof_received()
DEBUG: server - event = connection_lost(None)
DEBUG: server - state = CLOSED
DEBUG: server x code = 1006, reason = [no reason]
DEBUG: ('127.0.0.1', 59390) - Connected
DEBUG: ('127.0.0.1', 59390) - ASGI [14] Started
DEBUG: ('127.0.0.1', 59390) - ASGI [14] Received {'type': 'http.response.start', 'status': 200, 'headers': '<...>'}
INFO: ('127.0.0.1', 59390) - "GET / HTTP/1.1" 200
DEBUG: ('127.0.0.1', 59390) - ASGI [14] Received {'type': 'http.response.body', 'body': '<1458 bytes>'}
DEBUG: ('127.0.0.1', 59390) - ASGI [14] Completed
DEBUG: ('127.0.0.1', 59390) - ASGI [15] Started
DEBUG: ('127.0.0.1', 59390) - ASGI [15] Received {'type': 'http.response.start', 'status': 200, 'headers': '<...>'}
INFO: ('127.0.0.1', 59390) - "GET / HTTP/1.1" 200
DEBUG: ('127.0.0.1', 59390) - ASGI [15] Received {'type': 'http.response.body', 'body': '<1458 bytes>'}
DEBUG: ('127.0.0.1', 59390) - ASGI [15] Completed
DEBUG: ('127.0.0.1', 59390) - Disconnected
DEBUG: ('127.0.0.1', 59448) - Connected
DEBUG: ('127.0.0.1', 59448) - ASGI [16] Started
DEBUG: ('127.0.0.1', 59448) - ASGI [16] Received {'type': 'http.response.start', 'status': 200, 'headers': '<...>'}
INFO: ('127.0.0.1', 59448) - "GET / HTTP/1.1" 200
DEBUG: ('127.0.0.1', 59448) - ASGI [16] Received {'type': 'http.response.body', 'body': '<1458 bytes>'}
DEBUG: ('127.0.0.1', 59448) - ASGI [16] Completed
```
@BenjPy ,
Looks like this window is still reloading...
Actually, I think, that separation onto two forms will help you:
```
<form action="" onsubmit="connect(event)">
<label>Item ID: <input type="text" id="itemId" autocomplete="off" value="foo"/></label>
<button>Connect</button>
</form>
<form action="" onsubmit="sendMessage(event)">
<label>Message: <input type="text" id="messageText" autocomplete="off"/></label>
<button>Send</button>
</form>
```
It's not a good way, but it's okay to try out websockets.
@alj06ka ,
Hi, still nothing appear on the web page.
@BenjPy ,
Hi, actually, problem was not in page reloading. I find out, that this example shows how to pass cookie or header params as well. So, you can see dependency `cookie_or_client`. It means, that you must pass `session` param in `Cookie`, or `x-client` param in `Header` on websocket connection request. So if you pass it, everything works correctly.
Here is my code of this example:
```
import uvicorn
from fastapi import Cookie, Depends, FastAPI, Header
from starlette.responses import HTMLResponse
from starlette.status import WS_1008_POLICY_VIOLATION
from starlette.websockets import WebSocket
app = FastAPI()
html = """
<!DOCTYPE html>
<html>
<head>
<title>Chat</title>
</head>
<body>
<h1>WebSocket Chat</h1>
<form action="" onsubmit="sendMessage(event)">
<label>Item ID: <input type="text" id="itemId" autocomplete="off" value="foo"/></label>
<button onclick="connect(event)">Connect</button>
<br>
<label>Message: <input type="text" id="messageText" autocomplete="off"/></label>
<button>Send</button>
</form>
<ul id='messages'>
</ul>
<script>
var ws = null;
function connect(event) {
event.preventDefault()
var input = document.getElementById("itemId")
document.cookie = "session=Test;path=/"
ws = new WebSocket("ws://localhost:8000/items/" + input.value + "/ws");
ws.onmessage = function(event) {
var messages = document.getElementById('messages')
var message = document.createElement('li')
var content = document.createTextNode(event.data)
message.appendChild(content)
messages.appendChild(message)
};
}
function sendMessage(event) {
var input = document.getElementById("messageText")
ws.send(input.value)
input.value = ''
event.preventDefault()
}
</script>
</body>
</html>
"""
@app.get("/")
async def get():
return HTMLResponse(html)
async def get_cookie_or_client(
websocket: WebSocket, session: str = Cookie(None), x_client: str = Header(None)
):
if session is None and x_client is None:
await websocket.close(code=WS_1008_POLICY_VIOLATION)
return session or x_client
@app.websocket("/items/{item_id}/ws")
async def websocket_endpoint(
websocket: WebSocket,
item_id: int,
q: str = None,
cookie_or_client: str = Depends(get_cookie_or_client),
):
await websocket.accept()
while True:
data = await websocket.receive_text()
await websocket.send_text(
f"Session Cookie or X-Client Header value is: {cookie_or_client}"
)
if q is not None:
await websocket.send_text(f"Query parameter q is: {q}")
await websocket.send_text(f"Message text was: {data}, for item ID: {item_id}")
if __name__ == '__main__':
uvicorn.run(app, host='localhost', port=8000)
```
@alj06ka
work, thank you
need to change item_id to str
```python
@app.websocket("/items/{item_id}/ws")
async def websocket_endpoint(
websocket: WebSocket,
item_id: str,
q: str = None,
cookie_or_client: str = Depends(get_cookie_or_client),
):
```
> it's possible to update the doc ?
I just had the same problem, and looks like the doc hasn't been edited yet as of Mar. 3rd 2020.
The code above seems like a decent fix, which has worked for me too.
I still have this problem:
```
from fastapi import Cookie, Depends, FastAPI, Header, WebSocket, status
app = FastAPI()
async def get_cookie_or_client(
websocket: WebSocket, session: str = Cookie(None), x_client: str = Header(None)
):
if session is None and x_client is None:
await websocket.close(code=status.WS_1008_POLICY_VIOLATION)
return session or x_client
@app.websocket("/ws")
async def websocket_endpoint(
websocket: WebSocket, cookie_or_client: str = Depends(get_cookie_or_client),
):
await websocket.accept()
while True:
data = await websocket.receive_text()
await websocket.send_text(f"Message text was: {data}")
```
|
2020-06-09T15:37:27Z
|
[]
|
[]
| |||
tiangolo/fastapi
|
tiangolo__fastapi-1547
|
34c857b7cb493fa41f296c001234bc6b2ed6a083
|
diff --git a/fastapi/applications.py b/fastapi/applications.py
--- a/fastapi/applications.py
+++ b/fastapi/applications.py
@@ -38,6 +38,7 @@ def __init__(
version: str = "0.1.0",
openapi_url: Optional[str] = "/openapi.json",
openapi_tags: Optional[List[Dict[str, Any]]] = None,
+ servers: Optional[List[Dict[str, Union[str, Any]]]] = None,
default_response_class: Type[Response] = JSONResponse,
docs_url: Optional[str] = "/docs",
redoc_url: Optional[str] = "/redoc",
@@ -70,6 +71,7 @@ def __init__(
self.title = title
self.description = description
self.version = version
+ self.servers = servers
self.openapi_url = openapi_url
self.openapi_tags = openapi_tags
# TODO: remove when discarding the openapi_prefix parameter
@@ -106,6 +108,7 @@ def openapi(self, openapi_prefix: str = "") -> Dict:
routes=self.routes,
openapi_prefix=openapi_prefix,
tags=self.openapi_tags,
+ servers=self.servers,
)
return self.openapi_schema
diff --git a/fastapi/openapi/models.py b/fastapi/openapi/models.py
--- a/fastapi/openapi/models.py
+++ b/fastapi/openapi/models.py
@@ -63,7 +63,7 @@ class ServerVariable(BaseModel):
class Server(BaseModel):
- url: AnyUrl
+ url: Union[AnyUrl, str]
description: Optional[str] = None
variables: Optional[Dict[str, ServerVariable]] = None
diff --git a/fastapi/openapi/utils.py b/fastapi/openapi/utils.py
--- a/fastapi/openapi/utils.py
+++ b/fastapi/openapi/utils.py
@@ -86,7 +86,7 @@ def get_openapi_security_definitions(flat_dependant: Dependant) -> Tuple[Dict, L
def get_openapi_operation_parameters(
*,
all_route_params: Sequence[ModelField],
- model_name_map: Dict[Union[Type[BaseModel], Type[Enum]], str]
+ model_name_map: Dict[Union[Type[BaseModel], Type[Enum]], str],
) -> List[Dict[str, Any]]:
parameters = []
for param in all_route_params:
@@ -112,7 +112,7 @@ def get_openapi_operation_parameters(
def get_openapi_operation_request_body(
*,
body_field: Optional[ModelField],
- model_name_map: Dict[Union[Type[BaseModel], Type[Enum]], str]
+ model_name_map: Dict[Union[Type[BaseModel], Type[Enum]], str],
) -> Optional[Dict]:
if not body_field:
return None
@@ -318,12 +318,15 @@ def get_openapi(
description: str = None,
routes: Sequence[BaseRoute],
openapi_prefix: str = "",
- tags: Optional[List[Dict[str, Any]]] = None
+ tags: Optional[List[Dict[str, Any]]] = None,
+ servers: Optional[List[Dict[str, Union[str, Any]]]] = None,
) -> Dict:
info = {"title": title, "version": version}
if description:
info["description"] = description
output: Dict[str, Any] = {"openapi": openapi_version, "info": info}
+ if servers:
+ output["servers"] = servers
components: Dict[str, Dict] = {}
paths: Dict[str, Dict] = {}
flat_models = get_flat_models_from_routes(routes)
|
I need a way to specify servers in the openapi spec
### Is your feature request related to a problem
I want to be able to use the generated openapi.json doc as it is and hook it up with a document publishing flow, but i'm not able to because I have to add in information about `servers` manually.
### The solution you would like
Someway to specify at a global level what the base server url should be.
### Describe alternatives you've considered
Currently I'm doing this manually in the generated openapi.json by adding in something like -
```
"servers": [
{
"url": "http://example.com"
}
]
```
I don't mind submitting a PR which enables this if someone can guide me about the changes that need to be made. One thing I saw was that the `get_openapi` method in `fastapi.openapi.utils`, doesn't expose a parameter for setting a value for the `servers` key.
|
It's @tiangolo's decision to make, but given this *is* part of the OpenAPI spec, I personally would be in favor of adding this as a keyword argument to `FastAPI`, and as an argument to `get_openapi`, making it easier to set this.
I think those should be the only changes you need to make (just make sure the value also gets passed to the `get_openapi` call, and added to the returned value in the `get_openapi` call).
It should be a quick PR if you want to open it.
I think eventually we should group the arguments to `FastAPI` into more nested chunks to make it a little easier to parse, but I would be fine with the approach described above for now.
Hey there it's needed also here
|
2020-06-10T19:32:26Z
|
[]
|
[]
| |||
tiangolo/fastapi
|
tiangolo__fastapi-1549
|
543ef7753aff639ad3aed7c153e42f719e361d38
|
diff --git a/fastapi/dependencies/utils.py b/fastapi/dependencies/utils.py
--- a/fastapi/dependencies/utils.py
+++ b/fastapi/dependencies/utils.py
@@ -478,6 +478,7 @@ async def solve_dependencies(
name=sub_dependant.name,
security_scopes=sub_dependant.security_scopes,
)
+ use_sub_dependant.security_scopes = sub_dependant.security_scopes
solved_result = await solve_dependencies(
request=request,
|
dependency_overrides does not play well with scopes
**Describe the bug**
When working with `Security()` dependencies, the scopes disappear when `app.dependency_overrides` is executed. The callable dealing with the scopes gets an empty list instead of the scopes.
**To Reproduce**
```python
from fastapi import FastAPI, Header, Security, Depends
from fastapi.security import SecurityScopes
from starlette.testclient import TestClient
app = FastAPI()
def get_user(required_scopes: SecurityScopes):
print(required_scopes.scopes)
return "John Doe"
def data():
return [1,2,3]
def other_data():
return [3,4,5]
@app.get("/test")
def test(user: str = Security(get_user, scopes=["foo", "bar"]), data = Depends(data)):
return data
client = TestClient(app)
response = client.get("/test")
app.dependency_overrides[data] = other_data
response = client.get("/test")
# prints: ["foo", "bar"] and [] instead of ["foo", "bar"] and ["foo", "bar"]
```
**Expected behavior**
In the above example I expect `get_user()` to print the same scopes twice. Instead, before the `dependency_overrides` it prints the correct scpoes, but an empty list afterwards.
**Environment:**
- OS: Linux
- FastAPI Version 0.43.0
- Python 3.7.4
|
Hello,
I was reading your [comment](https://github.com/tiangolo/fastapi/issues/738#issuecomment-558795651) in the other thread. In my case, I am using `dependency_overrides` to mock the connection to database.
```python
class TransactionTestCaseMixin:
db_session: Session
@pytest.fixture(autouse=True)
def receive_db_session(self, dbsession: Session):
self.db_session = dbsession
app.dependency_overrides[get_db] = lambda: self.db_session
```
That's causing us an issue using `SecurityScopes` when we are testing our service endpoint where we include a `Dependant` (Security) to manage the permissions of our endpoints.
|
2020-06-11T01:14:25Z
|
[]
|
[]
| |||
tiangolo/fastapi
|
tiangolo__fastapi-1553
|
543ef7753aff639ad3aed7c153e42f719e361d38
|
diff --git a/fastapi/dependencies/utils.py b/fastapi/dependencies/utils.py
--- a/fastapi/dependencies/utils.py
+++ b/fastapi/dependencies/utils.py
@@ -623,9 +623,17 @@ async def request_body_to_args(
field = required_params[0]
field_info = get_field_info(field)
embed = getattr(field_info, "embed", None)
- if len(required_params) == 1 and not embed:
+ field_alias_omitted = len(required_params) == 1 and not embed
+ if field_alias_omitted:
received_body = {field.alias: received_body}
+
for field in required_params:
+ loc: Tuple[str, ...]
+ if field_alias_omitted:
+ loc = ("body",)
+ else:
+ loc = ("body", field.alias)
+
value: Any = None
if received_body is not None:
if (
@@ -636,7 +644,7 @@ async def request_body_to_args(
try:
value = received_body.get(field.alias)
except AttributeError:
- errors.append(get_missing_field_error(field.alias))
+ errors.append(get_missing_field_error(loc))
continue
if (
value is None
@@ -648,7 +656,7 @@ async def request_body_to_args(
)
):
if field.required:
- errors.append(get_missing_field_error(field.alias))
+ errors.append(get_missing_field_error(loc))
else:
values[field.name] = deepcopy(field.default)
continue
@@ -667,7 +675,9 @@ async def request_body_to_args(
awaitables = [sub_value.read() for sub_value in value]
contents = await asyncio.gather(*awaitables)
value = sequence_shape_to_type[field.shape](contents)
- v_, errors_ = field.validate(value, values, loc=("body", field.alias))
+
+ v_, errors_ = field.validate(value, values, loc=loc)
+
if isinstance(errors_, ErrorWrapper):
errors.append(errors_)
elif isinstance(errors_, list):
@@ -677,12 +687,12 @@ async def request_body_to_args(
return values, errors
-def get_missing_field_error(field_alias: str) -> ErrorWrapper:
+def get_missing_field_error(loc: Tuple[str, ...]) -> ErrorWrapper:
if PYDANTIC_1:
- missing_field_error = ErrorWrapper(MissingError(), loc=("body", field_alias))
+ missing_field_error = ErrorWrapper(MissingError(), loc=loc)
else: # pragma: no cover
missing_field_error = ErrorWrapper( # type: ignore
- MissingError(), loc=("body", field_alias), config=BaseConfig,
+ MissingError(), loc=loc, config=BaseConfig,
)
return missing_field_error
|
Bad `loc` on validation error, if payload represended by one model
### Describe the bug
Really like your framework, but there is, indeed, an annoying issue with `loc` on validation error with one object as payload.
### To Reproduce
Code sample
```Python
from typing import List
from fastapi import FastAPI, Body
from pydantic import BaseModel
app = FastAPI()
class NameModel(BaseModel):
name: str
@app.post("/test", response_model=NameModel)
def test(obj: NameModel, ): # bad
return obj
@app.post("/test_embed", response_model=NameModel)
def test(obj: NameModel = Body(..., embed=True)): # ok
return obj
@app.post("/test_multiple", response_model=List[NameModel])
def test(obj1: NameModel, obj2: NameModel): # ok
return obj1, obj2
```
When you make a request to endpoint (`/test`) with the wrong payload (e.g.: `{}`), it always includes the variable name into error location, despite it has no relation to request.
It makes no sense, moreover, it complicates the logic for error printing on fronted, because they just don't know and not required to know the name of the backend`s internal variable.
```json
{
"detail": [
{
"loc": [
"body",
"obj",
"name"
],
"msg": "field required",
"type": "value_error.missing"
}
]
}
```
it should be
```json
{
"detail": [
{
"loc": [
"body",
"name"
],
"msg": "field required",
"type": "value_error.missing"
}
]
}
```
With the embedded object (`/test_embed`) or multiple objects (`/test_multiple`), it works as expected, putting the variable name into location, because it should be in the payload.
### Expected behavior
Don't include the variable name into location error, if it is not reflected in schema / not embedded / not expected to be in payload.
### Environment
- OS: macOS
- FastAPI 0.52.0
- Python 3.6.8
|
We can also observe this behaviour, it caused a bit of head-scratching when trying to diagnose the source of an error between frontend and backend services.
Also observed this behaviour when fiddling around with the framework for the first time.
|
2020-06-11T17:24:14Z
|
[]
|
[]
| |||
tiangolo/fastapi
|
tiangolo__fastapi-17
|
014c7df142baf0e5cade2c452edfc0c138fda398
|
diff --git a/fastapi/routing.py b/fastapi/routing.py
--- a/fastapi/routing.py
+++ b/fastapi/routing.py
@@ -18,7 +18,7 @@
from starlette.formparsers import UploadFile
from starlette.requests import Request
from starlette.responses import JSONResponse, Response
-from starlette.routing import get_name, request_response
+from starlette.routing import compile_path, get_name, request_response
from starlette.status import HTTP_422_UNPROCESSABLE_ENTITY
@@ -149,9 +149,7 @@ def __init__(
self.include_in_schema = include_in_schema
self.content_type = content_type
- self.path_regex, self.path_format, self.param_convertors = self.compile_path(
- path
- )
+ self.path_regex, self.path_format, self.param_convertors = compile_path(path)
assert inspect.isfunction(endpoint) or inspect.ismethod(
endpoint
), f"An endpoint must be a function or method"
|
starlette update breaks routing
[starlette 0.9.11](https://pypi.org/project/starlette/0.9.11/) breaks fastapi routing
I'm currently working around this by enforcing starlette==0.9.10
|
Thanks for the report! I'll check it as soon as I get to my laptop.
It should now be fixed in the latest version `0.1.18`. The change is only pinning the dependencies so FastAPI is not broken, it should work now.
The next step will be to actually update FastAPI's code to be compatible with the latest changes in Starlette.
|
2019-01-30T20:16:49Z
|
[]
|
[]
| |||
tiangolo/fastapi
|
tiangolo__fastapi-241
|
3cf92a156ce36c3127366edac3b09c89fdb3a195
|
diff --git a/fastapi/applications.py b/fastapi/applications.py
--- a/fastapi/applications.py
+++ b/fastapi/applications.py
@@ -9,7 +9,7 @@
from starlette.exceptions import ExceptionMiddleware, HTTPException
from starlette.middleware.errors import ServerErrorMiddleware
from starlette.requests import Request
-from starlette.responses import JSONResponse, Response
+from starlette.responses import HTMLResponse, JSONResponse, Response
from starlette.routing import BaseRoute
@@ -79,29 +79,28 @@ def openapi(self) -> Dict:
def setup(self) -> None:
if self.openapi_url:
- self.add_route(
- self.openapi_url,
- lambda req: JSONResponse(self.openapi()),
- include_in_schema=False,
- )
+
+ async def openapi(req: Request) -> JSONResponse:
+ return JSONResponse(self.openapi())
+
+ self.add_route(self.openapi_url, openapi, include_in_schema=False)
+ openapi_url = self.openapi_prefix + self.openapi_url
if self.openapi_url and self.docs_url:
- self.add_route(
- self.docs_url,
- lambda r: get_swagger_ui_html(
- openapi_url=self.openapi_prefix + self.openapi_url,
- title=self.title + " - Swagger UI",
- ),
- include_in_schema=False,
- )
+
+ async def swagger_ui_html(req: Request) -> HTMLResponse:
+ return get_swagger_ui_html(
+ openapi_url=openapi_url, title=self.title + " - Swagger UI"
+ )
+
+ self.add_route(self.docs_url, swagger_ui_html, include_in_schema=False)
if self.openapi_url and self.redoc_url:
- self.add_route(
- self.redoc_url,
- lambda r: get_redoc_html(
- openapi_url=self.openapi_prefix + self.openapi_url,
- title=self.title + " - ReDoc",
- ),
- include_in_schema=False,
- )
+
+ async def redoc_html(req: Request) -> HTMLResponse:
+ return get_redoc_html(
+ openapi_url=openapi_url, title=self.title + " - ReDoc"
+ )
+
+ self.add_route(self.redoc_url, redoc_html, include_in_schema=False)
self.add_exception_handler(HTTPException, http_exception)
def add_api_route(
|
make swagger_ui_html, redoc_html and openapi.json handled by async function?
**Is your feature request related to a problem? Please describe.**
I add a loop on all my router of my app to check if the handler are all coroutine function to get better performance, but I fount that swagger_ui_html, redoc_html and openapi.json are handled by normal function.
https://github.com/tiangolo/fastapi/blob/56ab106bbbf8054af437821c6683491ca7952c3b/fastapi/applications.py#L80-L86
**Describe the solution you'd like**
I'm wondering if its possible to handle these 3 router with async function, as its only a simple `getattr` operator or concat string.
https://github.com/tiangolo/fastapi/blob/56ab106bbbf8054af437821c6683491ca7952c3b/fastapi/applications.py#L78
```python
def setup(self) -> None:
if self.openapi_url:
async def openapi_handler(req):
return JSONResponse(self.openapi())
self.add_route(
self.openapi_url,
openapi_handler,
include_in_schema=False,
)
```
**Describe alternatives you've considered**
Disable default handlers and I write async handlers by my self to replace inner handlers.
**Additional context**
In `Starlette`, if a handler is not coroutine function, it will be executed by `loop.run_in_executor` . So this could improve performance a litte bit.(In my test, about 2800 r/s to 3200r/s)
|
Nice catch @Trim21 ! I'll fix it.
If you don't mind, could I give a PR to fix it?
Of course! PRs are very welcome.
|
2019-05-20T09:39:22Z
|
[]
|
[]
|
End of preview. Expand
in Data Studio
README.md exists but content is empty.
- Downloads last month
- 6