INSTRUCTION: string, 1 to 46.3k characters
RESPONSE: string, 75 to 80.2k characters
Group the values for each key in the RDD into a single sequence. Hash-partitions the resulting RDD with numPartitions partitions. .. note:: If you are grouping in order to perform an aggregation (such as a sum or average) over each key, using reduceByKey or aggregateByKey will p...
def groupByKey(self, numPartitions=None, partitionFunc=portable_hash): """ Group the values for each key in the RDD into a single sequence. Hash-partitions the resulting RDD with numPartitions partitions. .. note:: If you are grouping in order to perform an aggregation (such as a ...
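The note above recommends reduceByKey or aggregateByKey over groupByKey for per-key aggregations. A minimal sketch, assuming an active SparkContext `sc`, contrasting the two for a per-key sum:

rdd = sc.parallelize([("a", 1), ("b", 1), ("a", 1)])

# groupByKey shuffles every value, then sum is applied to each grouped iterable
grouped = rdd.groupByKey().mapValues(sum)          # [('a', 2), ('b', 1)]

# reduceByKey combines values on the map side before the shuffle, which is usually cheaper
reduced = rdd.reduceByKey(lambda x, y: x + y)      # [('a', 2), ('b', 1)]

print(sorted(grouped.collect()), sorted(reduced.collect()))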
Pass each value in the key-value pair RDD through a flatMap function without changing the keys; this also retains the original RDD's partitioning. >>> x = sc.parallelize([("a", ["x", "y", "z"]), ("b", ["p", "r"])]) >>> def f(x): return x >>> x.flatMapValues(f).collect() ...
def flatMapValues(self, f): """ Pass each value in the key-value pair RDD through a flatMap function without changing the keys; this also retains the original RDD's partitioning. >>> x = sc.parallelize([("a", ["x", "y", "z"]), ("b", ["p", "r"])]) >>> def f(x): return x ...
Pass each value in the key-value pair RDD through a map function without changing the keys; this also retains the original RDD's partitioning. >>> x = sc.parallelize([("a", ["apple", "banana", "lemon"]), ("b", ["grapes"])]) >>> def f(x): return len(x) >>> x.mapValues(f).collect(...
def mapValues(self, f): """ Pass each value in the key-value pair RDD through a map function without changing the keys; this also retains the original RDD's partitioning. >>> x = sc.parallelize([("a", ["apple", "banana", "lemon"]), ("b", ["grapes"])]) >>> def f(x): retur...
Return a subset of this RDD sampled by key (via stratified sampling). Create a sample of this RDD using variable sampling rates for different keys as specified by fractions, a key to sampling rate map. >>> fractions = {"a": 0.2, "b": 0.1} >>> rdd = sc.parallelize(fractions.keys()).carte...
def sampleByKey(self, withReplacement, fractions, seed=None): """ Return a subset of this RDD sampled by key (via stratified sampling). Create a sample of this RDD using variable sampling rates for different keys as specified by fractions, a key to sampling rate map. >>> fractio...
Return each (key, value) pair in C{self} that has no pair with matching key in C{other}. >>> x = sc.parallelize([("a", 1), ("b", 4), ("b", 5), ("a", 2)]) >>> y = sc.parallelize([("a", 3), ("c", None)]) >>> sorted(x.subtractByKey(y).collect()) [('b', 4), ('b', 5)]
def subtractByKey(self, other, numPartitions=None): """ Return each (key, value) pair in C{self} that has no pair with matching key in C{other}. >>> x = sc.parallelize([("a", 1), ("b", 4), ("b", 5), ("a", 2)]) >>> y = sc.parallelize([("a", 3), ("c", None)]) >>> sorted(x....
Return each value in C{self} that is not contained in C{other}. >>> x = sc.parallelize([("a", 1), ("b", 4), ("b", 5), ("a", 3)]) >>> y = sc.parallelize([("a", 3), ("c", None)]) >>> sorted(x.subtract(y).collect()) [('a', 1), ('b', 4), ('b', 5)]
def subtract(self, other, numPartitions=None): """ Return each value in C{self} that is not contained in C{other}. >>> x = sc.parallelize([("a", 1), ("b", 4), ("b", 5), ("a", 3)]) >>> y = sc.parallelize([("a", 3), ("c", None)]) >>> sorted(x.subtract(y).collect()) [('a', ...
Return a new RDD that is reduced into `numPartitions` partitions. >>> sc.parallelize([1, 2, 3, 4, 5], 3).glom().collect() [[1], [2, 3], [4, 5]] >>> sc.parallelize([1, 2, 3, 4, 5], 3).coalesce(1).glom().collect() [[1, 2, 3, 4, 5]]
def coalesce(self, numPartitions, shuffle=False): """ Return a new RDD that is reduced into `numPartitions` partitions. >>> sc.parallelize([1, 2, 3, 4, 5], 3).glom().collect() [[1], [2, 3], [4, 5]] >>> sc.parallelize([1, 2, 3, 4, 5], 3).coalesce(1).glom().collect() [[1, ...
Zips this RDD with another one, returning key-value pairs with the first element in each RDD, second element in each RDD, etc. Assumes that the two RDDs have the same number of partitions and the same number of elements in each partition (e.g. one was made through a map on the other). ...
def zip(self, other): """ Zips this RDD with another one, returning key-value pairs with the first element in each RDD second element in each RDD, etc. Assumes that the two RDDs have the same number of partitions and the same number of elements in each partition (e.g. one was mad...
Zips this RDD with its element indices. The ordering is first based on the partition index and then the ordering of items within each partition. So the first item in the first partition gets index 0, and the last item in the last partition receives the largest index. This metho...
def zipWithIndex(self): """ Zips this RDD with its element indices. The ordering is first based on the partition index and then the ordering of items within each partition. So the first item in the first partition gets index 0, and the last item in the last partition rec...
Zips this RDD with generated unique Long ids. Items in the kth partition will get ids k, n+k, 2*n+k, ..., where n is the number of partitions. So there may exist gaps, but this method won't trigger a spark job, which is different from L{zipWithIndex} >>> sc.parallelize(["a", "b...
def zipWithUniqueId(self): """ Zips this RDD with generated unique Long ids. Items in the kth partition will get ids k, n+k, 2*n+k, ..., where n is the number of partitions. So there may exist gaps, but this method won't trigger a spark job, which is different from L{zip...
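A worked example of the id scheme, assuming an active SparkContext `sc`: with n = 3 partitions, items in partition k get ids k, n+k, 2*n+k, ...

rdd = sc.parallelize(["a", "b", "c", "d", "e"], 3)   # typically split as [a], [b, c], [d, e]
# partition 0: "a" -> 0; partition 1: "b" -> 1, "c" -> 4; partition 2: "d" -> 2, "e" -> 5
print(rdd.zipWithUniqueId().collect())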
Get the RDD's current storage level. >>> rdd1 = sc.parallelize([1,2]) >>> rdd1.getStorageLevel() StorageLevel(False, False, False, False, 1) >>> print(rdd1.getStorageLevel()) Serialized 1x Replicated
def getStorageLevel(self): """ Get the RDD's current storage level. >>> rdd1 = sc.parallelize([1,2]) >>> rdd1.getStorageLevel() StorageLevel(False, False, False, False, 1) >>> print(rdd1.getStorageLevel()) Serialized 1x Replicated """ java_storage...
Returns the default number of partitions to use during reduce tasks (e.g., groupBy). If spark.default.parallelism is set, then we'll use the value from SparkContext defaultParallelism, otherwise we'll use the number of partitions in this RDD. This mirrors the behavior of the Scala Partitioner#d...
def _defaultReducePartitions(self): """ Returns the default number of partitions to use during reduce tasks (e.g., groupBy). If spark.default.parallelism is set, then we'll use the value from SparkContext defaultParallelism, otherwise we'll use the number of partitions in this RDD. ...
Return the list of values in the RDD for key `key`. This operation is done efficiently if the RDD has a known partitioner by only searching the partition that the key maps to. >>> l = range(1000) >>> rdd = sc.parallelize(zip(l, l), 10) >>> rdd.lookup(42) # slow [42] ...
def lookup(self, key): """ Return the list of values in the RDD for key `key`. This operation is done efficiently if the RDD has a known partitioner by only searching the partition that the key maps to. >>> l = range(1000) >>> rdd = sc.parallelize(zip(l, l), 10) ...
Return a JavaRDD of Object by unpickling. It will convert each Python object into a Java object via Pyrolite, whether or not the RDD is serialized in batches.
def _to_java_object_rdd(self): """ Return a JavaRDD of Object by unpickling It will convert each Python object into Java object by Pyrolite, whenever the RDD is serialized in batch or not. """ rdd = self._pickled() return self.ctx._jvm.SerDeUtil.pythonToJava(rdd._jrdd, T...
.. note:: Experimental Approximate version of count() that returns a potentially incomplete result within a timeout, even if not all tasks have finished. >>> rdd = sc.parallelize(range(1000), 10) >>> rdd.countApprox(1000, 1.0) 1000
def countApprox(self, timeout, confidence=0.95): """ .. note:: Experimental Approximate version of count() that returns a potentially incomplete result within a timeout, even if not all tasks have finished. >>> rdd = sc.parallelize(range(1000), 10) >>> rdd.countApprox(1...
.. note:: Experimental Approximate operation to return the sum within a timeout or meet the confidence. >>> rdd = sc.parallelize(range(1000), 10) >>> r = sum(range(1000)) >>> abs(rdd.sumApprox(1000) - r) / r < 0.05 True
def sumApprox(self, timeout, confidence=0.95): """ .. note:: Experimental Approximate operation to return the sum within a timeout or meet the confidence. >>> rdd = sc.parallelize(range(1000), 10) >>> r = sum(range(1000)) >>> abs(rdd.sumApprox(1000) - r) / r < 0...
.. note:: Experimental Approximate operation to return the mean within a timeout or meet the confidence. >>> rdd = sc.parallelize(range(1000), 10) >>> r = sum(range(1000)) / 1000.0 >>> abs(rdd.meanApprox(1000) - r) / r < 0.05 True
def meanApprox(self, timeout, confidence=0.95): """ .. note:: Experimental Approximate operation to return the mean within a timeout or meet the confidence. >>> rdd = sc.parallelize(range(1000), 10) >>> r = sum(range(1000)) / 1000.0 >>> abs(rdd.meanApprox(1000) ...
.. note:: Experimental Return approximate number of distinct elements in the RDD. The algorithm used is based on streamlib's implementation of `"HyperLogLog in Practice: Algorithmic Engineering of a State of The Art Cardinality Estimation Algorithm", available here <https://doi...
def countApproxDistinct(self, relativeSD=0.05): """ .. note:: Experimental Return approximate number of distinct elements in the RDD. The algorithm used is based on streamlib's implementation of `"HyperLogLog in Practice: Algorithmic Engineering of a State of The Art Ca...
Return an iterator that contains all of the elements in this RDD. The iterator will consume as much memory as the largest partition in this RDD. >>> rdd = sc.parallelize(range(10)) >>> [x for x in rdd.toLocalIterator()] [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
def toLocalIterator(self): """ Return an iterator that contains all of the elements in this RDD. The iterator will consume as much memory as the largest partition in this RDD. >>> rdd = sc.parallelize(range(10)) >>> [x for x in rdd.toLocalIterator()] [0, 1, 2, 3, 4, 5, 6...
.. note:: Experimental Returns a new RDD by applying a function to each partition of the wrapped RDD, where tasks are launched together in a barrier stage. The interface is the same as :func:`RDD.mapPartitions`. Please see the API doc there. .. versionadded:: 2.4.0
def mapPartitions(self, f, preservesPartitioning=False): """ .. note:: Experimental Returns a new RDD by applying a function to each partition of the wrapped RDD, where tasks are launched together in a barrier stage. The interface is the same as :func:`RDD.mapPartitions`. ...
Convert a list of Column (or names) into a JVM Seq of Column. An optional `converter` could be used to convert items in `cols` into JVM Column objects.
def _to_seq(sc, cols, converter=None): """ Convert a list of Column (or names) into a JVM Seq of Column. An optional `converter` could be used to convert items in `cols` into JVM Column objects. """ if converter: cols = [converter(c) for c in cols] return sc._jvm.PythonUtils.toSeq(c...
Convert a list of Column (or names) into a JVM (Scala) List of Column. An optional `converter` could be used to convert items in `cols` into JVM Column objects.
def _to_list(sc, cols, converter=None): """ Convert a list of Column (or names) into a JVM (Scala) List of Column. An optional `converter` could be used to convert items in `cols` into JVM Column objects. """ if converter: cols = [converter(c) for c in cols] return sc._jvm.PythonUti...
Create a method for given unary operator
def _unary_op(name, doc="unary operator"): """ Create a method for given unary operator """ def _(self): jc = getattr(self._jc, name)() return Column(jc) _.__doc__ = doc return _
Create a method for given binary operator
def _bin_op(name, doc="binary operator"): """ Create a method for given binary operator """ def _(self, other): jc = other._jc if isinstance(other, Column) else other njc = getattr(self._jc, name)(jc) return Column(njc) _.__doc__ = doc return _
Create a method for binary operator (this object is on right side)
def _reverse_op(name, doc="binary operator"): """ Create a method for binary operator (this object is on right side) """ def _(self, other): jother = _create_column_from_literal(other) jc = getattr(jother, name)(self._jc) return Column(jc) _.__doc__ = doc return _
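These three factories generate Column operator methods from the name of a JVM Column method. A sketch of how they are typically attached inside the Column class; the JVM method names "plus" and "minus" mirror pyspark.sql.column, but treat the exact wiring as illustrative:

class Column(object):
    # arithmetic operators built from the factories above
    __add__ = _bin_op("plus")
    __sub__ = _bin_op("minus")
    __rsub__ = _reverse_op("minus")   # right-hand-side case, e.g. 1 - df.age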
Return a :class:`Column` which is a substring of the column. :param startPos: start position (int or Column) :param length: length of the substring (int or Column) >>> df.select(df.name.substr(1, 3).alias("col")).collect() [Row(col=u'Ali'), Row(col=u'Bob')]
def substr(self, startPos, length): """ Return a :class:`Column` which is a substring of the column. :param startPos: start position (int or Column) :param length: length of the substring (int or Column) >>> df.select(df.name.substr(1, 3).alias("col")).collect() [Row(c...
A boolean expression that is evaluated to true if the value of this expression is contained by the evaluated values of the arguments. >>> df[df.name.isin("Bob", "Mike")].collect() [Row(age=5, name=u'Bob')] >>> df[df.age.isin([1, 2, 3])].collect() [Row(age=2, name=u'Alice')]
def isin(self, *cols): """ A boolean expression that is evaluated to true if the value of this expression is contained by the evaluated values of the arguments. >>> df[df.name.isin("Bob", "Mike")].collect() [Row(age=5, name=u'Bob')] >>> df[df.age.isin([1, 2, 3])].collect...
Returns this column aliased with a new name or names (in the case of expressions that return more than one column, such as explode). :param alias: strings of desired column names (collects all positional arguments passed) :param metadata: a dict of information to be stored in ``metadata`` attri...
def alias(self, *alias, **kwargs): """ Returns this column aliased with a new name or names (in the case of expressions that return more than one column, such as explode). :param alias: strings of desired column names (collects all positional arguments passed) :param metadata: a...
Convert the column into type ``dataType``. >>> df.select(df.age.cast("string").alias('ages')).collect() [Row(ages=u'2'), Row(ages=u'5')] >>> df.select(df.age.cast(StringType()).alias('ages')).collect() [Row(ages=u'2'), Row(ages=u'5')]
def cast(self, dataType): """ Convert the column into type ``dataType``. >>> df.select(df.age.cast("string").alias('ages')).collect() [Row(ages=u'2'), Row(ages=u'5')] >>> df.select(df.age.cast(StringType()).alias('ages')).collect() [Row(ages=u'2'), Row(ages=u'5')] """ ...
Evaluates a list of conditions and returns one of multiple possible result expressions. If :func:`Column.otherwise` is not invoked, None is returned for unmatched conditions. See :func:`pyspark.sql.functions.when` for example usage. :param condition: a boolean :class:`Column` expression. ...
def when(self, condition, value): """ Evaluates a list of conditions and returns one of multiple possible result expressions. If :func:`Column.otherwise` is not invoked, None is returned for unmatched conditions. See :func:`pyspark.sql.functions.when` for example usage. :param ...
Evaluates a list of conditions and returns one of multiple possible result expressions. If :func:`Column.otherwise` is not invoked, None is returned for unmatched conditions. See :func:`pyspark.sql.functions.when` for example usage. :param value: a literal value, or a :class:`Column` expressio...
def otherwise(self, value): """ Evaluates a list of conditions and returns one of multiple possible result expressions. If :func:`Column.otherwise` is not invoked, None is returned for unmatched conditions. See :func:`pyspark.sql.functions.when` for example usage. :param value:...
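A short usage sketch of the when/otherwise pair, assuming `df` is the DataFrame with `name` and `age` columns used in the surrounding doctests:

from pyspark.sql import functions as F

# label rows whose age exceeds 3; unmatched rows fall through to otherwise(0)
df.select(df.name, F.when(df.age > 3, 1).otherwise(0).alias("is_old")).show()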
Define a windowing column. :param window: a :class:`WindowSpec` :return: a Column >>> from pyspark.sql import Window >>> window = Window.partitionBy("name").orderBy("age").rowsBetween(-1, 1) >>> from pyspark.sql.functions import rank, min >>> # df.select(rank().over(win...
def over(self, window): """ Define a windowing column. :param window: a :class:`WindowSpec` :return: a Column >>> from pyspark.sql import Window >>> window = Window.partitionBy("name").orderBy("age").rowsBetween(-1, 1) >>> from pyspark.sql.functions import rank,...
Applies transformation on a vector or an RDD[Vector]. .. note:: In Python, transform cannot currently be used within an RDD transformation or action. Call transform directly on the RDD instead. :param vector: Vector or RDD of Vector to be transformed.
def transform(self, vector): """ Applies transformation on a vector or an RDD[Vector]. .. note:: In Python, transform cannot currently be used within an RDD transformation or action. Call transform directly on the RDD instead. :param vector: Vector or RDD of Vec...
Computes the mean and variance and stores as a model to be used for later scaling. :param dataset: The data used to compute the mean and variance to build the transformation model. :return: a StandardScalerModel
def fit(self, dataset): """ Computes the mean and variance and stores as a model to be used for later scaling. :param dataset: The data used to compute the mean and variance to build the transformation model. :return: a StandardScalerModel """ ...
Returns a ChiSquared feature selector. :param data: an `RDD[LabeledPoint]` containing the labeled dataset with categorical features. Real-valued features will be treated as categorical for each distinct value. Apply feature discretizer before using...
def fit(self, data): """ Returns a ChiSquared feature selector. :param data: an `RDD[LabeledPoint]` containing the labeled dataset with categorical features. Real-valued features will be treated as categorical for each distinct value. ...
Computes a [[PCAModel]] that contains the principal components of the input vectors. :param data: source vectors
def fit(self, data): """ Computes a [[PCAModel]] that contains the principal components of the input vectors. :param data: source vectors """ jmodel = callMLlibFunc("fitPCA", self.k, data) return PCAModel(jmodel)
Transforms the input document (list of terms) to a term frequency vector, or transforms an RDD of documents to an RDD of term frequency vectors.
def transform(self, document): """ Transforms the input document (list of terms) to term frequency vectors, or transform the RDD of document to RDD of term frequency vectors. """ if isinstance(document, RDD): return document.map(self.transform) freq =...
Computes the inverse document frequency. :param dataset: an RDD of term frequency vectors
def fit(self, dataset): """ Computes the inverse document frequency. :param dataset: an RDD of term frequency vectors """ if not isinstance(dataset, RDD): raise TypeError("dataset should be an RDD of term frequency vectors") jmodel = callMLlibFunc("fitIDF", s...
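fit expects an RDD of term-frequency vectors, which is typically produced with HashingTF. A minimal TF-IDF sketch, assuming an active SparkContext `sc`:

from pyspark.mllib.feature import HashingTF, IDF

docs = sc.parallelize([["spark", "rdd", "spark"], ["python", "rdd"]])
tf = HashingTF(numFeatures=16).transform(docs)   # RDD of term-frequency vectors
tf.cache()                                       # both fit() and transform() traverse it
idf_model = IDF().fit(tf)                        # computes inverse document frequencies
tfidf = idf_model.transform(tf)                  # RDD of TF-IDF vectors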
Find synonyms of a word :param word: a word or a vector representation of word :param num: number of synonyms to find :return: array of (word, cosineSimilarity) .. note:: Local use only
def findSynonyms(self, word, num): """ Find synonyms of a word :param word: a word or a vector representation of word :param num: number of synonyms to find :return: array of (word, cosineSimilarity) .. note:: Local use only """ if not isinstance(word, b...
Load a model from the given path.
def load(cls, sc, path): """ Load a model from the given path. """ jmodel = sc._jvm.org.apache.spark.mllib.feature \ .Word2VecModel.load(sc._jsc.sc(), path) model = sc._jvm.org.apache.spark.mllib.api.python.Word2VecModelWrapper(jmodel) return Word2VecModel(mod...
Computes the Hadamard product of the vector.
def transform(self, vector): """ Computes the Hadamard product of the vector. """ if isinstance(vector, RDD): vector = vector.map(_convert_to_vector) else: vector = _convert_to_vector(vector) return callMLlibFunc("elementwiseProductVector", self.s...
Predict values for a single data point or an RDD of points using the model trained. .. note:: In Python, predict cannot currently be used within an RDD transformation or action. Call predict directly on the RDD instead.
def predict(self, x): """ Predict values for a single data point or an RDD of points using the model trained. .. note:: In Python, predict cannot currently be used within an RDD transformation or action. Call predict directly on the RDD instead. """ ...
Train a decision tree model for classification. :param data: Training data: RDD of LabeledPoint. Labels should take values {0, 1, ..., numClasses-1}. :param numClasses: Number of classes for classification. :param categoricalFeaturesInfo: Map storing arit...
def trainClassifier(cls, data, numClasses, categoricalFeaturesInfo, impurity="gini", maxDepth=5, maxBins=32, minInstancesPerNode=1, minInfoGain=0.0): """ Train a decision tree model for classification. :param data: Training data: RDD of ...
Train a decision tree model for regression. :param data: Training data: RDD of LabeledPoint. Labels are real numbers. :param categoricalFeaturesInfo: Map storing arity of categorical features. An entry (n -> k) indicates that feature n is categorical with k categories ...
def trainRegressor(cls, data, categoricalFeaturesInfo, impurity="variance", maxDepth=5, maxBins=32, minInstancesPerNode=1, minInfoGain=0.0): """ Train a decision tree model for regression. :param data: Training data: RDD of LabeledPoint. L...
Train a random forest model for regression. :param data: Training dataset: RDD of LabeledPoint. Labels are real numbers. :param categoricalFeaturesInfo: Map storing arity of categorical features. An entry (n -> k) indicates that feature n is categorical with k categories ...
def trainRegressor(cls, data, categoricalFeaturesInfo, numTrees, featureSubsetStrategy="auto", impurity="variance", maxDepth=4, maxBins=32, seed=None): """ Train a random forest model for regression. :param data: Training dataset: RDD of LabeledPoint. Labels are...
Train a random forest model for binary or multiclass classification. :param data: Training dataset: RDD of LabeledPoint. Labels should take values {0, 1, ..., numClasses-1}. :param numClasses: Number of classes for classification. :param categoricalFeatures...
def trainClassifier(cls, data, numClasses, categoricalFeaturesInfo, numTrees, featureSubsetStrategy="auto", impurity="gini", maxDepth=4, maxBins=32, seed=None): """ Train a random forest model for binary or multiclass classification. :para...
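A minimal classification sketch for trainClassifier, assuming an active SparkContext `sc`; the tiny two-point dataset is illustrative only:

from pyspark.mllib.regression import LabeledPoint
from pyspark.mllib.tree import RandomForest

data = sc.parallelize([
    LabeledPoint(0.0, [0.0, 1.0]),
    LabeledPoint(1.0, [1.0, 0.0]),
])
model = RandomForest.trainClassifier(data, numClasses=2, categoricalFeaturesInfo={},
                                     numTrees=3, seed=42)
print(model.predict([1.0, 0.0]))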
Train a gradient-boosted trees model for classification. :param data: Training dataset: RDD of LabeledPoint. Labels should take values {0, 1}. :param categoricalFeaturesInfo: Map storing arity of categorical features. An entry (n -> k) indicates that feature n is...
def trainClassifier(cls, data, categoricalFeaturesInfo, loss="logLoss", numIterations=100, learningRate=0.1, maxDepth=3, maxBins=32): """ Train a gradient-boosted trees model for classification. :param data: Training dataset: RDD of Labe...
Set a configuration property.
def set(self, key, value): """Set a configuration property.""" # Try to set self._jconf first if JVM is created, set self._conf if JVM is not created yet. if self._jconf is not None: self._jconf.set(key, unicode(value)) else: self._conf[key] = unicode(value) ...
Set a configuration property, if not already set.
def setIfMissing(self, key, value): """Set a configuration property, if not already set.""" if self.get(key) is None: self.set(key, value) return self
Set an environment variable to be passed to executors.
def setExecutorEnv(self, key=None, value=None, pairs=None): """Set an environment variable to be passed to executors.""" if (key is not None and pairs is not None) or (key is None and pairs is None): raise Exception("Either pass one key-value pair or a list of pairs") elif key is not...
Set multiple parameters, passed as a list of key-value pairs. :param pairs: list of key-value pairs to set
def setAll(self, pairs): """ Set multiple parameters, passed as a list of key-value pairs. :param pairs: list of key-value pairs to set """ for (k, v) in pairs: self.set(k, v) return self
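A short SparkConf sketch combining setAppName, setAll and toDebugString; the configuration keys are illustrative:

from pyspark import SparkConf

conf = (SparkConf()
        .setAppName("example")
        .setAll([("spark.executor.memory", "2g"),
                 ("spark.ui.showConsoleProgress", "false")]))
print(conf.toDebugString())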
Get the configured value for some key, or return a default otherwise.
def get(self, key, defaultValue=None): """Get the configured value for some key, or return a default otherwise.""" if defaultValue is None: # Py4J doesn't call the right get() if we pass None if self._jconf is not None: if not self._jconf.contains(key): ...
Get all values as a list of key-value pairs.
def getAll(self): """Get all values as a list of key-value pairs.""" if self._jconf is not None: return [(elem._1(), elem._2()) for elem in self._jconf.getAll()] else: return self._conf.items()
Does this configuration contain a given key?
def contains(self, key): """Does this configuration contain a given key?""" if self._jconf is not None: return self._jconf.contains(key) else: return key in self._conf
Returns a printable version of the configuration, as a list of key=value pairs, one per line.
def toDebugString(self): """ Returns a printable version of the configuration, as a list of key=value pairs, one per line. """ if self._jconf is not None: return self._jconf.toDebugString() else: return '\n'.join('%s=%s' % (k, v) for k, v in self._...
Returns a list of databases available across all sessions.
def listDatabases(self): """Returns a list of databases available across all sessions.""" iter = self._jcatalog.listDatabases().toLocalIterator() databases = [] while iter.hasNext(): jdb = iter.next() databases.append(Database( name=jdb.name(), ...
Returns a list of tables/views in the specified database. If no database is specified, the current database is used. This includes all temporary views.
def listTables(self, dbName=None): """Returns a list of tables/views in the specified database. If no database is specified, the current database is used. This includes all temporary views. """ if dbName is None: dbName = self.currentDatabase() iter = self._j...
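A usage sketch for the catalog listing methods, assuming an active SparkSession named `spark`:

spark.range(3).createOrReplaceTempView("numbers")   # register a temporary view
for table in spark.catalog.listTables():            # current database plus temporary views
    print(table.name, table.tableType, table.isTemporary)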
Returns a list of functions registered in the specified database. If no database is specified, the current database is used. This includes all temporary functions.
def listFunctions(self, dbName=None): """Returns a list of functions registered in the specified database. If no database is specified, the current database is used. This includes all temporary functions. """ if dbName is None: dbName = self.currentDatabase() ...
Returns a list of columns for the given table/view in the specified database. If no database is specified, the current database is used. Note: the order of arguments here is different from that of its JVM counterpart because Python does not support method overloading.
def listColumns(self, tableName, dbName=None): """Returns a list of columns for the given table/view in the specified database. If no database is specified, the current database is used. Note: the order of arguments here is different from that of its JVM counterpart because Python does...
Creates a table based on the dataset in a data source. It returns the DataFrame associated with the external table. The data source is specified by the ``source`` and a set of ``options``. If ``source`` is not specified, the default data source configured by ``spark.sql.sources.default...
def createExternalTable(self, tableName, path=None, source=None, schema=None, **options): """Creates a table based on the dataset in a data source. It returns the DataFrame associated with the external table. The data source is specified by the ``source`` and a set of ``options``. If `...
Creates a table based on the dataset in a data source. It returns the DataFrame associated with the table. The data source is specified by the ``source`` and a set of ``options``. If ``source`` is not specified, the default data source configured by ``spark.sql.sources.default`` will b...
def createTable(self, tableName, path=None, source=None, schema=None, **options): """Creates a table based on the dataset in a data source. It returns the DataFrame associated with the table. The data source is specified by the ``source`` and a set of ``options``. If ``source`` is not ...
Load data from a given socket; this is a blocking method, so it only returns when the socket connection has been closed.
def _load_from_socket(port, auth_secret): """ Load data from a given socket, this is a blocking method thus only return when the socket connection has been closed. """ (sockfile, sock) = local_connect_and_auth(port, auth_secret) # The barrier() call may block forever, so no timeout sock.sett...
Internal function to get or create global BarrierTaskContext. We need to make sure BarrierTaskContext is returned from here because it is needed in python worker reuse scenario, see SPARK-25921 for more details.
def _getOrCreate(cls): """ Internal function to get or create global BarrierTaskContext. We need to make sure BarrierTaskContext is returned from here because it is needed in python worker reuse scenario, see SPARK-25921 for more details. """ if not isinstance(cls._taskCo...
Initialize BarrierTaskContext; other methods within BarrierTaskContext can only be called after it is initialized.
def _initialize(cls, port, secret): """ Initialize BarrierTaskContext, other methods within BarrierTaskContext can only be called after BarrierTaskContext is initialized. """ cls._port = port cls._secret = secret
.. note:: Experimental Sets a global barrier and waits until all tasks in this stage hit this barrier. Similar to the `MPI_Barrier` function in MPI, this function blocks until all tasks in the same stage have reached this routine. .. warning:: In a barrier stage, each task must have the sa...
def barrier(self): """ .. note:: Experimental Sets a global barrier and waits until all tasks in this stage hit this barrier. Similar to `MPI_Barrier` function in MPI, this function blocks until all tasks in the same stage have reached this routine. .. warning:: In a ba...
.. note:: Experimental Returns :class:`BarrierTaskInfo` for all tasks in this barrier stage, ordered by partition ID. .. versionadded:: 2.4.0
def getTaskInfos(self): """ .. note:: Experimental Returns :class:`BarrierTaskInfo` for all tasks in this barrier stage, ordered by partition ID. .. versionadded:: 2.4.0 """ if self._port is None or self._secret is None: raise Exception("Not supporte...
A decorator that annotates a function to append the version of Spark in which the function was added.
def since(version): """ A decorator that annotates a function to append the version of Spark the function was added. """ import re indent_p = re.compile(r'\n( +)') def deco(f): indents = indent_p.findall(f.__doc__) indent = ' ' * (min(len(m) for m in indents) if indents else 0) ...
Returns a function with the same code, globals, defaults, closure, and name (or a new name, if provided).
def copy_func(f, name=None, sinceversion=None, doc=None): """ Returns a function with same code, globals, defaults, closure, and name (or provide a new name). """ # See # http://stackoverflow.com/questions/6527633/how-can-i-make-a-deepcopy-of-a-function-in-python fn = types.FunctionType(f.__...
A decorator that forces keyword arguments in the wrapped method and saves actual input keyword arguments in `_input_kwargs`. .. note:: Should only be used to wrap a method where first arg is `self`
def keyword_only(func): """ A decorator that forces keyword arguments in the wrapped method and saves actual input keyword arguments in `_input_kwargs`. .. note:: Should only be used to wrap a method where first arg is `self` """ @wraps(func) def wrapper(self, *args, **kwargs): if l...
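A sketch of how the decorator is used: the wrapped __init__ rejects positional arguments and can read exactly the keyword arguments the caller supplied from `_input_kwargs`. The `Scaler` class below is illustrative, not a real PySpark API:

class Scaler(object):                      # illustrative class
    @keyword_only
    def __init__(self, inputCol=None, withMean=False):
        kwargs = self._input_kwargs        # only the keyword args the caller actually passed
        self._params = dict(kwargs)

s = Scaler(inputCol="features")            # ok: keyword-only call
# Scaler("features")                       # would raise: positional arguments are rejected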
Generates the header part for shared variables :param name: param name :param doc: param doc
def _gen_param_header(name, doc, defaultValueStr, typeConverter): """ Generates the header part for shared variables :param name: param name :param doc: param doc """ template = '''class Has$Name(Params): """ Mixin for param $name: $doc """ $name = Param(Params._dummy(), "$name...
Generates Python code for a shared param class. :param name: param name :param doc: param doc :param defaultValueStr: string representation of the default value :return: code string
def _gen_param_code(name, doc, defaultValueStr): """ Generates Python code for a shared param class. :param name: param name :param doc: param doc :param defaultValueStr: string representation of the default value :return: code string """ # TODO: How to correctly inherit instance attrib...
Runs the bisecting k-means algorithm and returns the model. :param rdd: Training points as an `RDD` of `Vector` or convertible sequence types. :param k: The desired number of leaf clusters. The actual number could be smaller if there are no divisible leaf clusters. ...
def train(self, rdd, k=4, maxIterations=20, minDivisibleClusterSize=1.0, seed=-1888008604): """ Runs the bisecting k-means algorithm return the model. :param rdd: Training points as an `RDD` of `Vector` or convertible sequence types. :param k: The desired n...
Train a k-means clustering model. :param rdd: Training points as an `RDD` of `Vector` or convertible sequence types. :param k: Number of clusters to create. :param maxIterations: Maximum number of iterations allowed. (default: 100) :para...
def train(cls, rdd, k, maxIterations=100, runs=1, initializationMode="k-means||", seed=None, initializationSteps=2, epsilon=1e-4, initialModel=None): """ Train a k-means clustering model. :param rdd: Training points as an `RDD` of `Vector` or convertible sequen...
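A minimal k-means sketch, assuming an active SparkContext `sc`, with two well-separated clusters:

from numpy import array
from pyspark.mllib.clustering import KMeans

points = sc.parallelize([array([0.0, 0.0]), array([1.0, 1.0]),
                         array([9.0, 8.0]), array([8.0, 9.0])])
model = KMeans.train(points, k=2, maxIterations=10, initializationMode="random")
print(model.clusterCenters)
print(model.predict(array([0.5, 0.5])))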
Train a Gaussian Mixture clustering model. :param rdd: Training points as an `RDD` of `Vector` or convertible sequence types. :param k: Number of independent Gaussians in the mixture model. :param convergenceTol: Maximum change in log-likelihood at which ...
def train(cls, rdd, k, convergenceTol=1e-3, maxIterations=100, seed=None, initialModel=None): """ Train a Gaussian Mixture clustering model. :param rdd: Training points as an `RDD` of `Vector` or convertible sequence types. :param k: Number of independent G...
Load a model from the given path.
def load(cls, sc, path): """ Load a model from the given path. """ model = cls._load_java(sc, path) wrapper =\ sc._jvm.org.apache.spark.mllib.api.python.PowerIterationClusteringModelWrapper(model) return PowerIterationClusteringModel(wrapper)
r""" :param rdd: An RDD of (i, j, s\ :sub:`ij`\) tuples representing the affinity matrix, which is the matrix A in the PIC paper. The similarity s\ :sub:`ij`\ must be nonnegative. This is a symmetric matrix and hence s\ :sub:`ij`\ = s\ :sub:`ji`\ For any (i, j) with ...
def train(cls, rdd, k, maxIterations=100, initMode="random"): r""" :param rdd: An RDD of (i, j, s\ :sub:`ij`\) tuples representing the affinity matrix, which is the matrix A in the PIC paper. The similarity s\ :sub:`ij`\ must be nonnegative. This is a symmetric ...
Update the centroids, according to data :param data: RDD with new data for the model update. :param decayFactor: Forgetfulness of the previous centroids. :param timeUnit: Can be "batches" or "points". If points, then the decay factor is raised to the powe...
def update(self, data, decayFactor, timeUnit): """Update the centroids, according to data :param data: RDD with new data for the model update. :param decayFactor: Forgetfulness of the previous centroids. :param timeUnit: Can be "batches" or "points". If poi...
Set the number of batches after which the centroids of a particular batch carry half their original weight.
def setHalfLife(self, halfLife, timeUnit): """ Set number of batches after which the centroids of that particular batch has half the weightage. """ self._timeUnit = timeUnit self._decayFactor = exp(log(0.5) / halfLife) return self
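A quick numeric check of the half-life relationship used above: with decayFactor = exp(log(0.5) / halfLife), a batch's weight decays to one half after halfLife batches.

from math import exp, log

halfLife = 5.0
decayFactor = exp(log(0.5) / halfLife)   # same formula as setHalfLife
print(decayFactor ** halfLife)           # ~0.5: halved after halfLife batches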
Set initial centers. Should be set before calling trainOn.
def setInitialCenters(self, centers, weights): """ Set initial centers. Should be set before calling trainOn. """ self._model = StreamingKMeansModel(centers, weights) return self
Set the initial centres to be random samples from a Gaussian population with constant weights.
def setRandomCenters(self, dim, weight, seed): """ Set the initial centres to be random samples from a gaussian population with constant weights. """ rng = random.RandomState(seed) clusterCenters = rng.randn(self._k, dim) clusterWeights = tile(weight, self._k) ...
Train the model on the incoming dstream.
def trainOn(self, dstream): """Train the model on the incoming dstream.""" self._validate(dstream) def update(rdd): self._model.update(rdd, self._decayFactor, self._timeUnit) dstream.foreachRDD(update)
Make predictions on a dstream. Returns a transformed dstream object
def predictOn(self, dstream): """ Make predictions on a dstream. Returns a transformed dstream object """ self._validate(dstream) return dstream.map(lambda x: self._model.predict(x))
Make predictions on a keyed dstream. Returns a transformed dstream object.
def predictOnValues(self, dstream): """ Make predictions on a keyed dstream. Returns a transformed dstream object. """ self._validate(dstream) return dstream.mapValues(lambda x: self._model.predict(x))
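A streaming k-means sketch tying trainOn and predictOnValues together; `trainingStream` (a DStream of Vectors) and `labeledStream` (a DStream of (key, Vector) pairs) are assumed to be defined on an existing StreamingContext:

from pyspark.mllib.clustering import StreamingKMeans

model = (StreamingKMeans(k=2, decayFactor=1.0)
         .setRandomCenters(dim=3, weight=0.0, seed=42))
model.trainOn(trainingStream)                        # update centroids on each batch
predictions = model.predictOnValues(labeledStream)   # keeps the keys, predicts on the values
predictions.pprint()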
Return the topics described by weighted terms. WARNING: If vocabSize and k are large, this can return a large object! :param maxTermsPerTopic: Maximum number of terms to collect for each topic. (default: vocabulary size) :return: Array over topics. Each topic is r...
def describeTopics(self, maxTermsPerTopic=None): """Return the topics described by weighted terms. WARNING: If vocabSize and k are large, this can return a large object! :param maxTermsPerTopic: Maximum number of terms to collect for each topic. (default: vocabulary size) ...
Load the LDAModel from disk. :param sc: SparkContext. :param path: Path to where the model is stored.
def load(cls, sc, path): """Load the LDAModel from disk. :param sc: SparkContext. :param path: Path to where the model is stored. """ if not isinstance(sc, SparkContext): raise TypeError("sc should be a SparkContext, got type %s" % type(sc)) ...
Train an LDA model. :param rdd: RDD of documents, which are tuples of document IDs and term (word) count vectors. The term count vectors are "bags of words" with a fixed-size vocabulary (where the vocabulary size is the length of the vector). Document IDs must be unique ...
def train(cls, rdd, k=10, maxIterations=20, docConcentration=-1.0, topicConcentration=-1.0, seed=None, checkpointInterval=10, optimizer="em"): """Train a LDA model. :param rdd: RDD of documents, which are tuples of document IDs and term (word) count vectors. The term c...
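A minimal LDA sketch, assuming an active SparkContext `sc`; each document is an (id, term-count vector) pair with unique ids:

from pyspark.mllib.linalg import Vectors
from pyspark.mllib.clustering import LDA

corpus = sc.parallelize([
    [1, Vectors.dense([1.0, 0.0, 2.0])],
    [2, Vectors.dense([0.0, 3.0, 1.0])],
])
model = LDA.train(corpus, k=2, seed=1)
print(model.vocabSize())
print(model.describeTopics(maxTermsPerTopic=2))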
Return a JavaRDD of Object by unpickling. It will convert each Python object into a Java object via Pyrolite, whether or not the RDD is serialized in batches.
def _to_java_object_rdd(rdd): """ Return a JavaRDD of Object by unpickling It will convert each Python object into Java object by Pyrolite, whenever the RDD is serialized in batch or not. """ rdd = rdd._reserialize(AutoBatchedSerializer(PickleSerializer())) return rdd.ctx._jvm.org.apache.spark....
Convert a Python object into a Java object.
def _py2java(sc, obj): """ Convert Python object into Java """ if isinstance(obj, RDD): obj = _to_java_object_rdd(obj) elif isinstance(obj, DataFrame): obj = obj._jdf elif isinstance(obj, SparkContext): obj = obj._jsc elif isinstance(obj, list): obj = [_py2java(sc, x)...
Call Java Function
def callJavaFunc(sc, func, *args): """ Call Java Function """ args = [_py2java(sc, a) for a in args] return _java2py(sc, func(*args))
Call API in PythonMLLibAPI
def callMLlibFunc(name, *args): """ Call API in PythonMLLibAPI """ sc = SparkContext.getOrCreate() api = getattr(sc._jvm.PythonMLLibAPI(), name) return callJavaFunc(sc, api, *args)
A decorator that makes a class inherit documentation from its parents.
def inherit_doc(cls): """ A decorator that makes a class inherit documentation from its parents. """ for name, func in vars(cls).items(): # only inherit docstring for public functions if name.startswith("_"): continue if not func.__doc__: for parent in cls...
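A small self-contained sketch of what the decorator does: a public method overridden without a docstring picks up the parent's docstring.

class Parent(object):
    def transform(self, x):
        """Apply the transformation to x."""
        return x

@inherit_doc
class Child(Parent):
    def transform(self, x):          # overridden without a docstring
        return x * 2

print(Child.transform.__doc__)       # copied from Parent.transform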
Call method of java_model
def call(self, name, *a): """Call method of java_model""" return callJavaFunc(self._sc, getattr(self._java_model, name), *a)
Return a new DStream in which each RDD has a single element generated by counting each RDD of this DStream.
def count(self): """ Return a new DStream in which each RDD has a single element generated by counting each RDD of this DStream. """ return self.mapPartitions(lambda i: [sum(1 for _ in i)]).reduce(operator.add)
Return a new DStream containing only the elements that satisfy predicate.
def filter(self, f): """ Return a new DStream containing only the elements that satisfy predicate. """ def func(iterator): return filter(f, iterator) return self.mapPartitions(func, True)
Return a new DStream by applying a function to each element of DStream.
def map(self, f, preservesPartitioning=False): """ Return a new DStream by applying a function to each element of DStream. """ def func(iterator): return map(f, iterator) return self.mapPartitions(func, preservesPartitioning)
Return a new DStream in which each RDD is generated by applying mapPartitionsWithIndex() to each RDD of this DStream.
def mapPartitionsWithIndex(self, f, preservesPartitioning=False): """ Return a new DStream in which each RDD is generated by applying mapPartitionsWithIndex() to each RDD of this DStream. """ return self.transform(lambda rdd: rdd.mapPartitionsWithIndex(f, preservesPartitioning))
Return a new DStream in which each RDD has a single element generated by reducing each RDD of this DStream.
def reduce(self, func): """ Return a new DStream in which each RDD has a single element generated by reducing each RDD of this DStream. """ return self.map(lambda x: (None, x)).reduceByKey(func, 1).map(lambda x: x[1])
Return a new DStream by applying reduceByKey to each RDD.
def reduceByKey(self, func, numPartitions=None): """ Return a new DStream by applying reduceByKey to each RDD. """ if numPartitions is None: numPartitions = self._sc.defaultParallelism return self.combineByKey(lambda x: x, func, func, numPartitions)
Return a new DStream by applying combineByKey to each RDD.
def combineByKey(self, createCombiner, mergeValue, mergeCombiners, numPartitions=None): """ Return a new DStream by applying combineByKey to each RDD. """ if numPartitions is None: numPartitions = self._sc.defaultParallelism def func(rdd): ...
Return a copy of the DStream in which each RDD is partitioned using the specified partitioner.
def partitionBy(self, numPartitions, partitionFunc=portable_hash): """ Return a copy of the DStream in which each RDD is partitioned using the specified partitioner. """ return self.transform(lambda rdd: rdd.partitionBy(numPartitions, partitionFunc))