body_hash stringlengths 64 64 | body stringlengths 23 109k | docstring stringlengths 1 57k | path stringlengths 4 198 | name stringlengths 1 115 | repository_name stringlengths 7 111 | repository_stars float64 0 191k | lang stringclasses 1 value | body_without_docstring stringlengths 14 108k | unified stringlengths 45 133k |
|---|---|---|---|---|---|---|---|---|---|
d1940d7e371dd7d43444fc68de485870bda807ec6ee4c109a7281f65e6c959d9 | @property
def topology_simplification_activity(self) -> bool:
'\n .. note::\n :class: toggle\n\n CAA V5 Visual Basic Help (2020-07-06 14:02:20.222384)\n | o Property TopologySimplificationActivity() As boolean\n | \n | Returns or sets the TopologySimplificationActivity.\n | \n | Example: This example retrieves the TopologySimplificationActivity of the\n | hybShpCurveSmooth in TopSimplifyAct.\n | \n | Dim TopSimplifyAct as boolean \n | TopSimplifyAct = hybShpCurvePar.TogologySimplificationActivity \n | \n | \n | Methods\n | \n | o Sub AddFrozenCurveSegment(Reference iCurve)\n | \n | Adds a frozen curve to the hybrid shape curve smooth feature\n | object.\n | \n | Parameters:\n | \n | iCurve\n | The curve to be added to the hybrid shape curve smooth feature\n | object. \n | \n | Example:\n | The following example adds the iCurve curve to the hybShpCurveSmooth\n | object.\n | \n | hybShpCurveSmooth.AddFrozenCurveSegment iCurve\n | \n | \n | o Sub AddFrozenPoint(Reference iPoint)\n | \n | Adds a frozen points to the hybrid shape curve smooth feature\n | object.\n | \n | Parameters:\n | \n | iPoint\n | The frozen point to be added to the hybrid shape curve smooth\n | feature object. 
\n | \n | Example:\n | The following example adds the iPoint frozen point to the\n | hybShpCurveSmooth object.\n | \n | hybShpCurveSmooth.AddFrozenPoint iPoint\n | \n | \n | o Func GetFrozenCurveSegment(long iPos) As Reference\n | \n | Retrieves the Frozen Curve Segment at specified position in the hybrid\n | shape curve smooth object.\n | \n | Parameters:\n | \n | iPos\n | The position of the Frozen Curve Segment to retrieve.\n | \n | \n | Example:\n | The following example gets the oCurve Frozen Curve Segment of the\n | hybShpCurveSmooth object at the position iPos.\n | \n | Dim oCurve As Reference\n | Set oCurve = hybShpCurveSmooth.GetFrozenCurveSegment (iPos).\n | \n | \n | o Func GetFrozenCurveSegmentsSize() As long\n | \n | Returns the number of frozen curve segments in the curve smooth\n | object.\n | \n | Parameters:\n | \n | oSize\n | Number of frozen curve segments in the curve\n | smooth.\n | \n | Example:\n | This example retrieves the number of frozen curve segments. in\n | the hybShpCurveSmooth hybrid shape curve\n | smooth.\n | \n | Dim oSize As long\n | oSize = hybShpCurveSmooth.GetFrozenCurveSegmentsSize\n | \n | \n | o Func GetFrozenPoint(long iPos) As Reference\n | \n | Retrieves the Frozen Point at specified position in the hybrid shape curve\n | smooth object.\n | \n | Parameters:\n | \n | iPos\n | The position of the Frozen Point to retrieve. \n | \n | Example:\n | The following example gets the oPoint Frozen Point of the\n | hybShpCurveSmooth object at the position iPos.\n | \n | Dim oPoint As Reference\n | Set oPoint = hybShpCurveSmooth.GetFrozenPoint (iPos).\n | \n | \n | o Func GetFrozenPointsSize() As long\n | \n | Returns the number of Frozen Points in the curve smooth\n | object.\n | \n | Parameters:\n | \n | oSize\n | Number of Frozen Points in the curve smooth.\n | \n | Example:\n | This example retrieves the number of Frozen Points. 
in the\n | hybShpCurveSmooth hybrid shape curve smooth.\n | \n | Dim oSize As long\n | oSize = hybShpCurveSmooth.GetFrozenPointsSize\n | \n | \n | o Sub RemoveAllFrozenCurveSegments()\n | \n | Removes all Frozen Curve Segment of the hybrid shape curve smooth object.\n | \n | \n | Example:\n | The following example removes all Frozen Curve Segments of the\n | hybShpCurveSmooth object.\n | \n | hybShpCurveSmooth.RemoveAllFrozenCurveSegments\n | \n | \n | o Sub RemoveAllFrozenPoints()\n | \n | Removes all Frozen Points of the hybrid shape curve smooth object.\n | \n | \n | Example:\n | The following example removes all Frozen Points of the hybShpCurveSmooth\n | object.\n | \n | hybShpCurveSmooth.RemoveAllFrozenPoints\n | \n | \n | o Sub RemoveFrozenCurveSegment(Reference iCurve)\n | \n | Removes Frozen Curve Segment from the list of Forzen curves in hybrid shape\n | curve smooth object.\n | \n | Parameters:\n | \n | iCurve\n | The Frozen Curve Segment to remove. \n | \n | Example:\n | The following example removes the Frozen Curve Segment from the\n | hybShpCurveSmooth object.\n | \n | hybShpCurveSmooth.RemoveFrozenCurveSegment iCurve.\n | \n | \n | o Sub RemoveFrozenPoint(Reference iPoint)\n | \n | Removes Frozen Point from the list of frozen points in hybrid shape curve\n | smooth object.\n | \n | Parameters:\n | \n | iPoint\n | The Frozen Point to remove. \n | \n | Example:\n | The following example removes the Frozen Point from the\n | hybShpCurveSmooth object.\n | \n | hybShpCurveSmooth.RemoveFrozenPoint iPoint.\n | \n | \n | o Sub SetMaximumDeviation(double iMaxDeviation)\n | \n | Sets the maximum deviation.\n | \n | Parameters:\n | \n | iMaxDeviation\n | The maximium deviation\n | \n | o Sub SetTangencyThreshold(double iTangencyThreshold)\n | \n | Sets the tangency threshold.\n | \n | Parameters:\n | \n | iTangencyThreshold\n | The tangency threshold\n\n :return: bool\n :rtype: bool\n '
return self.hybrid_shape_curve_smooth.TopologySimplificationActivity | .. note::
:class: toggle
CAA V5 Visual Basic Help (2020-07-06 14:02:20.222384)
| o Property TopologySimplificationActivity() As boolean
|
| Returns or sets the TopologySimplificationActivity.
|
| Example: This example retrieves the TopologySimplificationActivity of the
| hybShpCurveSmooth in TopSimplifyAct.
|
| Dim TopSimplifyAct as boolean
|         TopSimplifyAct = hybShpCurveSmooth.TopologySimplificationActivity
|
|
| Methods
|
| o Sub AddFrozenCurveSegment(Reference iCurve)
|
| Adds a frozen curve to the hybrid shape curve smooth feature
| object.
|
| Parameters:
|
| iCurve
| The curve to be added to the hybrid shape curve smooth feature
| object.
|
| Example:
| The following example adds the iCurve curve to the hybShpCurveSmooth
| object.
|
| hybShpCurveSmooth.AddFrozenCurveSegment iCurve
|
|
| o Sub AddFrozenPoint(Reference iPoint)
|
|     Adds a frozen point to the hybrid shape curve smooth feature
| object.
|
| Parameters:
|
| iPoint
| The frozen point to be added to the hybrid shape curve smooth
| feature object.
|
| Example:
| The following example adds the iPoint frozen point to the
| hybShpCurveSmooth object.
|
| hybShpCurveSmooth.AddFrozenPoint iPoint
|
|
| o Func GetFrozenCurveSegment(long iPos) As Reference
|
| Retrieves the Frozen Curve Segment at specified position in the hybrid
| shape curve smooth object.
|
| Parameters:
|
| iPos
| The position of the Frozen Curve Segment to retrieve.
|
|
| Example:
| The following example gets the oCurve Frozen Curve Segment of the
| hybShpCurveSmooth object at the position iPos.
|
| Dim oCurve As Reference
|                 Set oCurve = hybShpCurveSmooth.GetFrozenCurveSegment(iPos)
|
|
| o Func GetFrozenCurveSegmentsSize() As long
|
| Returns the number of frozen curve segments in the curve smooth
| object.
|
| Parameters:
|
| oSize
| Number of frozen curve segments in the curve
| smooth.
|
| Example:
|                 This example retrieves the number of frozen curve segments in
| the hybShpCurveSmooth hybrid shape curve
| smooth.
|
| Dim oSize As long
| oSize = hybShpCurveSmooth.GetFrozenCurveSegmentsSize
|
|
| o Func GetFrozenPoint(long iPos) As Reference
|
| Retrieves the Frozen Point at specified position in the hybrid shape curve
| smooth object.
|
| Parameters:
|
| iPos
| The position of the Frozen Point to retrieve.
|
| Example:
| The following example gets the oPoint Frozen Point of the
| hybShpCurveSmooth object at the position iPos.
|
| Dim oPoint As Reference
|                 Set oPoint = hybShpCurveSmooth.GetFrozenPoint(iPos)
|
|
| o Func GetFrozenPointsSize() As long
|
| Returns the number of Frozen Points in the curve smooth
| object.
|
| Parameters:
|
| oSize
| Number of Frozen Points in the curve smooth.
|
| Example:
|                 This example retrieves the number of Frozen Points in the
| hybShpCurveSmooth hybrid shape curve smooth.
|
| Dim oSize As long
| oSize = hybShpCurveSmooth.GetFrozenPointsSize
|
|
| o Sub RemoveAllFrozenCurveSegments()
|
|     Removes all Frozen Curve Segments of the hybrid shape curve smooth object.
|
|
| Example:
| The following example removes all Frozen Curve Segments of the
| hybShpCurveSmooth object.
|
| hybShpCurveSmooth.RemoveAllFrozenCurveSegments
|
|
| o Sub RemoveAllFrozenPoints()
|
| Removes all Frozen Points of the hybrid shape curve smooth object.
|
|
| Example:
| The following example removes all Frozen Points of the hybShpCurveSmooth
| object.
|
| hybShpCurveSmooth.RemoveAllFrozenPoints
|
|
| o Sub RemoveFrozenCurveSegment(Reference iCurve)
|
|     Removes Frozen Curve Segment from the list of Frozen curves in hybrid shape
| curve smooth object.
|
| Parameters:
|
| iCurve
| The Frozen Curve Segment to remove.
|
| Example:
| The following example removes the Frozen Curve Segment from the
| hybShpCurveSmooth object.
|
|             hybShpCurveSmooth.RemoveFrozenCurveSegment iCurve
|
|
| o Sub RemoveFrozenPoint(Reference iPoint)
|
| Removes Frozen Point from the list of frozen points in hybrid shape curve
| smooth object.
|
| Parameters:
|
| iPoint
| The Frozen Point to remove.
|
| Example:
| The following example removes the Frozen Point from the
| hybShpCurveSmooth object.
|
|             hybShpCurveSmooth.RemoveFrozenPoint iPoint
|
|
| o Sub SetMaximumDeviation(double iMaxDeviation)
|
| Sets the maximum deviation.
|
| Parameters:
|
| iMaxDeviation
|                 The maximum deviation
|
| o Sub SetTangencyThreshold(double iTangencyThreshold)
|
| Sets the tangency threshold.
|
| Parameters:
|
| iTangencyThreshold
| The tangency threshold
:return: bool
:rtype: bool | pycatia/hybrid_shape_interfaces/hybrid_shape_curve_smooth.py | topology_simplification_activity | Luanee/pycatia | 1 | python | | |
fe7c3fb7b75e9fd085876d6f293ca59551a7477b7d6763cb6900c690ca5d0ae4 | @topology_simplification_activity.setter
def topology_simplification_activity(self, value: bool):
'\n :param bool value:\n '
self.hybrid_shape_curve_smooth.TopologySimplificationActivity = value | :param bool value: | pycatia/hybrid_shape_interfaces/hybrid_shape_curve_smooth.py | topology_simplification_activity | Luanee/pycatia | 1 | python | | |
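The getter/setter pair above simply delegates to the underlying CATIA COM object. A minimal runnable sketch of the same delegation pattern, using a plain stand-in object in place of the real COM handle (the `_FakeComObject` name and its single attribute are illustrative only, not part of pycatia):

```python
class _FakeComObject:
    # Stand-in for the CATIA COM handle; the real pycatia code reads and writes
    # self.hybrid_shape_curve_smooth.TopologySimplificationActivity instead.
    def __init__(self):
        self.TopologySimplificationActivity = False


class HybridShapeCurveSmooth:
    def __init__(self, com_object):
        self.hybrid_shape_curve_smooth = com_object

    @property
    def topology_simplification_activity(self) -> bool:
        # Getter: forward the read to the wrapped object.
        return self.hybrid_shape_curve_smooth.TopologySimplificationActivity

    @topology_simplification_activity.setter
    def topology_simplification_activity(self, value: bool):
        # Setter: forward the write to the wrapped object.
        self.hybrid_shape_curve_smooth.TopologySimplificationActivity = value


smooth = HybridShapeCurveSmooth(_FakeComObject())
smooth.topology_simplification_activity = True
print(smooth.topology_simplification_activity)  # True
```

The Pythonic property gives attribute-style access (`smooth.topology_simplification_activity`) while keeping the COM round-trip hidden behind the getter and setter.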
a03218b13d56eb4791258c725db8c1bfca7190f3acd2199af0f8f7f12c452d89 | def setUp(self):
'\n This setUp() method allows us to define instructions that will be\n executed before each test method.\n\n So below we are going to instruct our method to create a new instance\n of the Passwords class before each test method is declared.\n\n\n We then store it as an instance variable in the test class as:\n self.new_password\n '
self.new_profile = Passwords('CIA', 'myowncreativepass', '17') | This setUp() method allows us to define instructions that will be
executed before each test method.
So below we are going to instruct our method to create a new instance
of the Passwords class before each test method is declared.
We then store it as an instance variable in the test class as:
self.new_password | test_p-m.py | setUp | SamNgigi/Password-Locker | 0 | python | | |
70f351b033700038d2fa946811283d568d8828820db5ea656756a9e6c1c74740 | def tearDown(self):
'\n This tearDown function cleans up after every test case.\n\n For example in this case...what we want is to return our password_list\n array to default even after multiple saves.\n '
Passwords.password_list = [] | This tearDown function cleans up after every test case.
For example in this case...what we want is to return our password_list
array to default even after multiple saves. | test_p-m.py | tearDown | SamNgigi/Password-Locker | 0 | python | | |
dd56ca2f7cd312b8e16db62d61cf3f08801bd396cd971e13bfdf282dec9fef2c | def test_instance(self):
'\n test_instance tests if a the object created in setUp is initialized/\n instanciated properly.\n '
self.assertEqual(self.new_profile.account_name, 'CIA')
self.assertEqual(self.new_profile.account_password, 'myowncreativepass')
self.assertEqual(self.new_profile.password_length, '17') | test_instance tests if the object created in setUp is initialized/
instantiated properly. | test_p-m.py | test_instance | SamNgigi/Password-Locker | 0 | python | | |
194c71f938d44bd5c975238bf36355b9fbcff05ac158228d97016a4ebe4cf98d | def test_save_profile(self):
'\n Test Case to test if the contact object is saved.\n\n So here it seems like we save try save our profile using a function on\n locker that we have not built.\n This is what causes our test to fail and will only work when we build\n it and then import the working one\n '
self.new_profile.save_profile()
self.assertEqual(len(Passwords.password_list), 1) | Test Case to test if the contact object is saved.
So here we try to save our profile using a function on
locker that we have not yet built.
This is what causes our test to fail and will only work when we build
it and then import the working one | test_p-m.py | test_save_profile | SamNgigi/Password-Locker | 0 | python | | |
9b1cd7de7e6c02c129f991b7d7fdaf03bdb511e9b9fecb7d1d65a478452fb696 | def test_save_multiple_profiles(self):
'\n Test to see if our function can save multiple contacts.\n '
test_profile = Passwords('Twitter', 'newtwitteruser', '14')
'\n test_profile does not need "self". Its a local variable\n '
test_profile.save_profile()
self.new_profile.save_profile()
self.assertEqual(len(Passwords.password_list), 2) | Test to see if our function can save multiple contacts. | test_p-m.py | test_save_multiple_profiles | SamNgigi/Password-Locker | 0 | python | def test_save_multiple_profiles(self):
'\n \n '
test_profile = Passwords('Twitter', 'newtwitteruser', '14')
'\n test_profile does not need "self". Its a local variable\n '
test_profile.save_profile()
self.new_profile.save_profile()
self.assertEqual(len(Passwords.password_list), 2) | def test_save_multiple_profiles(self):
'\n \n '
test_profile = Passwords('Twitter', 'newtwitteruser', '14')
'\n test_profile does not need "self". Its a local variable\n '
test_profile.save_profile()
self.new_profile.save_profile()
self.assertEqual(len(Passwords.password_list), 2)<|docstring|>Test to see if our function can save multiple contacts.<|endoftext|> |
902c0063cdb71502437f2819ef7304f3c6dec63f81b26b3e5127ad83167feb53 | def test_find_by_account(self):
'\n Test to check if we can find our passwords by account and display.\n '
test_profile = Passwords('Twitter', 'newtwitteruser', '14')
test_profile.save_profile()
'\n Below we equate a new variable:\n found_profile to the profile we\n want to find using the function we want to create.\n We pass in the test account.\n '
found_profile = Passwords.find_by_account('Twitter')
self.new_profile.save_profile()
self.assertEqual(found_profile.account_password, test_profile.account_password) | Test to check if we can find our passwords by account and display. | test_p-m.py | test_find_by_account | SamNgigi/Password-Locker | 0 | python | def test_find_by_account(self):
'\n \n '
test_profile = Passwords('Twitter', 'newtwitteruser', '14')
test_profile.save_profile()
'\n Below we equate a new variable:\n found_profile to the profile we\n want to find using the function we want to create.\n We pass in the test account.\n '
found_profile = Passwords.find_by_account('Twitter')
self.new_profile.save_profile()
self.assertEqual(found_profile.account_password, test_profile.account_password) | def test_find_by_account(self):
'\n \n '
test_profile = Passwords('Twitter', 'newtwitteruser', '14')
test_profile.save_profile()
'\n Below we equate a new variable:\n found_profile to the profile we\n want to find using the function we want to create.\n We pass in the test account.\n '
found_profile = Passwords.find_by_account('Twitter')
self.new_profile.save_profile()
self.assertEqual(found_profile.account_password, test_profile.account_password)<|docstring|>Test to check if we can find our passwords by account and display.<|endoftext|> |
23bdc7c6eb11edcb7e1411f598abe9cd10cb2b70c139d8afee6838ea039fe04c | def test_profile_exists(self):
'\n Test to check that a boolean is returned indicating whether a profile exists\n '
self.new_profile.save_profile()
test_profile = Passwords('Twitter', 'newtwitteruser', '14')
test_profile.save_profile()
test_exists = Passwords.profile_exists('Twitter')
self.assertTrue(test_exists) | Test to check that a boolean is returned indicating whether a profile exists | test_p-m.py | test_profile_exists | SamNgigi/Password-Locker | 0 | python | def test_profile_exists(self):
'\n \n '
self.new_profile.save_profile()
test_profile = Passwords('Twitter', 'newtwitteruser', '14')
test_profile.save_profile()
test_exists = Passwords.profile_exists('Twitter')
self.assertTrue(test_exists) | def test_profile_exists(self):
'\n \n '
self.new_profile.save_profile()
test_profile = Passwords('Twitter', 'newtwitteruser', '14')
test_profile.save_profile()
test_exists = Passwords.profile_exists('Twitter')
self.assertTrue(test_exists)<|docstring|>Test to check that a boolean is returned indicating whether a profile exists<|endoftext|>
f1d898d021c09c6f6a2723ffaafa0e9e3975720a67ed94751f76132eec4b5245 | def test_display_profiles(self):
'\n Method that displays the list of all the profiles saved\n '
self.assertEqual(Passwords.display_profiles(), Passwords.password_list) | Method that displays the list of all the profiles saved | test_p-m.py | test_display_profiles | SamNgigi/Password-Locker | 0 | python | def test_display_profiles(self):
'\n \n '
self.assertEqual(Passwords.display_profiles(), Passwords.password_list) | def test_display_profiles(self):
'\n \n '
self.assertEqual(Passwords.display_profiles(), Passwords.password_list)<|docstring|>Method that displays the list of all the profiles saved<|endoftext|> |
9bc3d11ddfbdbf0a4ca9661c76a1b6bb0a5ed777a5664e6d435fd65d43f59213 | def test_copy_password(self):
'\n Method that confirms we are copying the password from a profile\n '
self.new_profile.save_profile()
Passwords.copy_password('CIA')
self.assertEqual(self.new_profile.account_password, pyperclip.paste()) | Method that confirms we are copying the password from a profile | test_p-m.py | test_copy_password | SamNgigi/Password-Locker | 0 | python | def test_copy_password(self):
'\n \n '
self.new_profile.save_profile()
Passwords.copy_password('CIA')
self.assertEqual(self.new_profile.account_password, pyperclip.paste()) | def test_copy_password(self):
'\n \n '
self.new_profile.save_profile()
Passwords.copy_password('CIA')
self.assertEqual(self.new_profile.account_password, pyperclip.paste())<|docstring|>Method that confirms we are copying the password from a profile<|endoftext|> |
45e2ed886f26cacccceb4409fff0e7d1471d870789730b224d172125c24833e3 | def test_password_gen(self):
'\n We want to test if our password generator will work.\n '
self.new_profile.save_profile()
random_password = self.new_profile.password_gen('17')
self.assertNotEqual(random_password, self.new_profile.account_password) | We want to test if our password generator will work. | test_p-m.py | test_password_gen | SamNgigi/Password-Locker | 0 | python | def test_password_gen(self):
'\n \n '
self.new_profile.save_profile()
random_password = self.new_profile.password_gen('17')
self.assertNotEqual(random_password, self.new_profile.account_password) | def test_password_gen(self):
'\n \n '
self.new_profile.save_profile()
random_password = self.new_profile.password_gen('17')
self.assertNotEqual(random_password, self.new_profile.account_password)<|docstring|>We want to test if our password generator will work.<|endoftext|> |
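The test rows above imply a `Passwords` API on the locker side: a class-level `password_list`, plus `save_profile`, `find_by_account`, `profile_exists`, `display_profiles`, and `password_gen`. The actual implementation is not part of this file, so the following is only a minimal sketch inferred from the assertions (the clipboard-based `copy_password` is omitted because it needs `pyperclip`; all names and behaviour here are assumptions):

```python
import random
import string


class Passwords:
    """Minimal profile store inferred from the test assertions above."""

    password_list = []  # class-level store; the tests index it on the class

    def __init__(self, account, account_username, password_length):
        self.account = account
        self.account_username = account_username
        self.password_length = password_length
        # The tests compare account_password against generated passwords,
        # so an initial one is generated here (an assumption).
        self.account_password = self.password_gen(password_length)

    def save_profile(self):
        """Append this profile to the shared list."""
        Passwords.password_list.append(self)

    @classmethod
    def find_by_account(cls, account):
        """Return the first profile whose account matches, else None."""
        for profile in cls.password_list:
            if profile.account == account:
                return profile
        return None

    @classmethod
    def profile_exists(cls, account):
        """Boolean wrapper around find_by_account."""
        return cls.find_by_account(account) is not None

    @classmethod
    def display_profiles(cls):
        return cls.password_list

    @staticmethod
    def password_gen(length):
        """Random alphanumeric password; length arrives as a string in the tests."""
        chars = string.ascii_letters + string.digits
        return ''.join(random.choice(chars) for _ in range(int(length)))
```

With this sketch, the setUp/teardown pattern in the tests would also want `Passwords.password_list` cleared between tests, since the list is shared across instances.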
70468849f8a257c346e3274b7998f3b16dd3ed8d218b8e0ba9e76699e7905d55 | def query(query_string, select=None):
' Returns the results of executing a SPARQL query on the Data Commons graph.\n\n Args:\n query_string (:obj:`str`): The SPARQL query string.\n select (:obj:`func` accepting a row in the query result): A function that\n selects rows to be returned by :code:`query`. This function accepts a row\n in the results of executing :code:`query_string` and return True if and\n only if the row is to be returned by :code:`query`. The row passed in as\n an argument is represented as a :obj:`dict` that maps a query variable in\n :code:`query_string` to its value in the given row.\n\n Returns:\n A table, represented as a :obj:`list` of rows, resulting from executing the\n given SPARQL query. Each row is a :obj:`dict` mapping query variable to its\n value in the row. If `select` is not `None`, then a row is included in the\n returned :obj:`list` if and only if `select` returns :obj:`True` for that\n row.\n\n Raises:\n ValueError: If the payload returned by the Data Commons REST API is\n malformed.\n\n Examples:\n We would like to query for the name associated with three states identified\n by their dcids\n `California <https://browser.datacommons.org/kg?dcid=geoId/06>`_,\n `Kentucky <https://browser.datacommons.org/kg?dcid=geoId/21>`_, and\n `Maryland <https://browser.datacommons.org/kg?dcid=geoId/24>`_.\n\n >>> query_str = \'\'\'\n ... SELECT ?name ?dcid\n ... WHERE {\n ... ?a typeOf Place .\n ... ?a name ?name .\n ... ?a dcid ("geoId/06" "geoId/21" "geoId/24") .\n ... ?a dcid ?dcid\n ... }\n ... \'\'\'\n >>> result = query(query_str)\n >>> for r in result:\n ... print(r)\n {"?name": "Maryland", "?dcid": "geoId/24"}\n {"?name": "Kentucky", "?dcid": "geoId/21"}\n {"?name": "California", "?dcid": "geoId/06"}\n\n Optionally, we can specify which rows are returned by setting :code:`select`\n like so. 
The following returns all rows where the name is "Maryland".\n\n >>> selector = lambda row: row[\'?name\'] == \'Maryland\'\n >>> result = query(query_str, select=selector)\n >>> for r in result:\n ... print(r)\n {"?name": "Maryland", "?dcid": "geoId/24"}\n '
if (not os.environ.get(_ENV_VAR_API_KEY, None)):
raise ValueError('Request error: Must set an API key before using the API!')
url = (_API_ROOT + _API_ENDPOINTS['query'])
res = requests.post(url, json={'sparql': query_string}, headers={'x-api-key': os.environ[_ENV_VAR_API_KEY]})
if (res.status_code != 200):
raise ValueError('Response error: An HTTP {} code was returned by the mixer. Printing response\n\n{}'.format(res.status_code, res.text))
res_json = res.json()
header = res_json['header']
result_rows = []
for row in res_json['rows']:
row_map = {}
for (idx, cell) in enumerate(row['cells']):
if (idx >= len(header)):
raise ValueError('Query error: unexpected cell {}'.format(cell))
if ('value' not in cell):
raise ValueError('Query error: cell missing value {}'.format(cell))
cell_var = header[idx]
row_map[cell_var] = cell['value']
if ((select is None) or select(row_map)):
result_rows.append(row_map)
return result_rows | Returns the results of executing a SPARQL query on the Data Commons graph.
Args:
query_string (:obj:`str`): The SPARQL query string.
select (:obj:`func` accepting a row in the query result): A function that
selects rows to be returned by :code:`query`. This function accepts a row
in the results of executing :code:`query_string` and return True if and
only if the row is to be returned by :code:`query`. The row passed in as
an argument is represented as a :obj:`dict` that maps a query variable in
:code:`query_string` to its value in the given row.
Returns:
A table, represented as a :obj:`list` of rows, resulting from executing the
given SPARQL query. Each row is a :obj:`dict` mapping query variable to its
value in the row. If `select` is not `None`, then a row is included in the
returned :obj:`list` if and only if `select` returns :obj:`True` for that
row.
Raises:
ValueError: If the payload returned by the Data Commons REST API is
malformed.
Examples:
We would like to query for the name associated with three states identified
by their dcids
`California <https://browser.datacommons.org/kg?dcid=geoId/06>`_,
`Kentucky <https://browser.datacommons.org/kg?dcid=geoId/21>`_, and
`Maryland <https://browser.datacommons.org/kg?dcid=geoId/24>`_.
>>> query_str = '''
... SELECT ?name ?dcid
... WHERE {
... ?a typeOf Place .
... ?a name ?name .
... ?a dcid ("geoId/06" "geoId/21" "geoId/24") .
... ?a dcid ?dcid
... }
... '''
>>> result = query(query_str)
>>> for r in result:
... print(r)
{"?name": "Maryland", "?dcid": "geoId/24"}
{"?name": "Kentucky", "?dcid": "geoId/21"}
{"?name": "California", "?dcid": "geoId/06"}
Optionally, we can specify which rows are returned by setting :code:`select`
like so. The following returns all rows where the name is "Maryland".
>>> selector = lambda row: row['?name'] == 'Maryland'
>>> result = query(query_str, select=selector)
>>> for r in result:
... print(r)
{"?name": "Maryland", "?dcid": "geoId/24"} | datacommons/query.py | query | ACscooter/datacommons | 1 | python | def query(query_string, select=None):
' Returns the results of executing a SPARQL query on the Data Commons graph.\n\n Args:\n query_string (:obj:`str`): The SPARQL query string.\n select (:obj:`func` accepting a row in the query result): A function that\n selects rows to be returned by :code:`query`. This function accepts a row\n in the results of executing :code:`query_string` and return True if and\n only if the row is to be returned by :code:`query`. The row passed in as\n an argument is represented as a :obj:`dict` that maps a query variable in\n :code:`query_string` to its value in the given row.\n\n Returns:\n A table, represented as a :obj:`list` of rows, resulting from executing the\n given SPARQL query. Each row is a :obj:`dict` mapping query variable to its\n value in the row. If `select` is not `None`, then a row is included in the\n returned :obj:`list` if and only if `select` returns :obj:`True` for that\n row.\n\n Raises:\n ValueError: If the payload returned by the Data Commons REST API is\n malformed.\n\n Examples:\n We would like to query for the name associated with three states identified\n by their dcids\n `California <https://browser.datacommons.org/kg?dcid=geoId/06>`_,\n `Kentucky <https://browser.datacommons.org/kg?dcid=geoId/21>`_, and\n `Maryland <https://browser.datacommons.org/kg?dcid=geoId/24>`_.\n\n >>> query_str = \'\'\'\n ... SELECT ?name ?dcid\n ... WHERE {\n ... ?a typeOf Place .\n ... ?a name ?name .\n ... ?a dcid ("geoId/06" "geoId/21" "geoId/24") .\n ... ?a dcid ?dcid\n ... }\n ... \'\'\'\n >>> result = query(query_str)\n >>> for r in result:\n ... print(r)\n {"?name": "Maryland", "?dcid": "geoId/24"}\n {"?name": "Kentucky", "?dcid": "geoId/21"}\n {"?name": "California", "?dcid": "geoId/06"}\n\n Optionally, we can specify which rows are returned by setting :code:`select`\n like so. 
The following returns all rows where the name is "Maryland".\n\n >>> selector = lambda row: row[\'?name\'] == \'Maryland\'\n >>> result = query(query_str, select=selector)\n >>> for r in result:\n ... print(r)\n {"?name": "Maryland", "?dcid": "geoId/24"}\n '
if (not os.environ.get(_ENV_VAR_API_KEY, None)):
raise ValueError('Request error: Must set an API key before using the API!')
url = (_API_ROOT + _API_ENDPOINTS['query'])
res = requests.post(url, json={'sparql': query_string}, headers={'x-api-key': os.environ[_ENV_VAR_API_KEY]})
if (res.status_code != 200):
raise ValueError('Response error: An HTTP {} code was returned by the mixer. Printing response\n\n{}'.format(res.status_code, res.text))
res_json = res.json()
header = res_json['header']
result_rows = []
for row in res_json['rows']:
row_map = {}
for (idx, cell) in enumerate(row['cells']):
if (idx >= len(header)):
raise ValueError('Query error: unexpected cell {}'.format(cell))
if ('value' not in cell):
raise ValueError('Query error: cell missing value {}'.format(cell))
cell_var = header[idx]
row_map[cell_var] = cell['value']
if ((select is None) or select(row_map)):
result_rows.append(row_map)
return result_rows | def query(query_string, select=None):
' Returns the results of executing a SPARQL query on the Data Commons graph.\n\n Args:\n query_string (:obj:`str`): The SPARQL query string.\n select (:obj:`func` accepting a row in the query result): A function that\n selects rows to be returned by :code:`query`. This function accepts a row\n in the results of executing :code:`query_string` and return True if and\n only if the row is to be returned by :code:`query`. The row passed in as\n an argument is represented as a :obj:`dict` that maps a query variable in\n :code:`query_string` to its value in the given row.\n\n Returns:\n A table, represented as a :obj:`list` of rows, resulting from executing the\n given SPARQL query. Each row is a :obj:`dict` mapping query variable to its\n value in the row. If `select` is not `None`, then a row is included in the\n returned :obj:`list` if and only if `select` returns :obj:`True` for that\n row.\n\n Raises:\n ValueError: If the payload returned by the Data Commons REST API is\n malformed.\n\n Examples:\n We would like to query for the name associated with three states identified\n by their dcids\n `California <https://browser.datacommons.org/kg?dcid=geoId/06>`_,\n `Kentucky <https://browser.datacommons.org/kg?dcid=geoId/21>`_, and\n `Maryland <https://browser.datacommons.org/kg?dcid=geoId/24>`_.\n\n >>> query_str = \'\'\'\n ... SELECT ?name ?dcid\n ... WHERE {\n ... ?a typeOf Place .\n ... ?a name ?name .\n ... ?a dcid ("geoId/06" "geoId/21" "geoId/24") .\n ... ?a dcid ?dcid\n ... }\n ... \'\'\'\n >>> result = query(query_str)\n >>> for r in result:\n ... print(r)\n {"?name": "Maryland", "?dcid": "geoId/24"}\n {"?name": "Kentucky", "?dcid": "geoId/21"}\n {"?name": "California", "?dcid": "geoId/06"}\n\n Optionally, we can specify which rows are returned by setting :code:`select`\n like so. 
The following returns all rows where the name is "Maryland".\n\n >>> selector = lambda row: row[\'?name\'] == \'Maryland\'\n >>> result = query(query_str, select=selector)\n >>> for r in result:\n ... print(r)\n {"?name": "Maryland", "?dcid": "geoId/24"}\n '
if (not os.environ.get(_ENV_VAR_API_KEY, None)):
raise ValueError('Request error: Must set an API key before using the API!')
url = (_API_ROOT + _API_ENDPOINTS['query'])
res = requests.post(url, json={'sparql': query_string}, headers={'x-api-key': os.environ[_ENV_VAR_API_KEY]})
if (res.status_code != 200):
raise ValueError('Response error: An HTTP {} code was returned by the mixer. Printing response\n\n{}'.format(res.status_code, res.text))
res_json = res.json()
header = res_json['header']
result_rows = []
for row in res_json['rows']:
row_map = {}
for (idx, cell) in enumerate(row['cells']):
if (idx >= len(header)):
raise ValueError('Query error: unexpected cell {}'.format(cell))
if ('value' not in cell):
raise ValueError('Query error: cell missing value {}'.format(cell))
cell_var = header[idx]
row_map[cell_var] = cell['value']
if ((select is None) or select(row_map)):
result_rows.append(row_map)
return result_rows<|docstring|>Returns the results of executing a SPARQL query on the Data Commons graph.
Args:
query_string (:obj:`str`): The SPARQL query string.
select (:obj:`func` accepting a row in the query result): A function that
selects rows to be returned by :code:`query`. This function accepts a row
in the results of executing :code:`query_string` and return True if and
only if the row is to be returned by :code:`query`. The row passed in as
an argument is represented as a :obj:`dict` that maps a query variable in
:code:`query_string` to its value in the given row.
Returns:
A table, represented as a :obj:`list` of rows, resulting from executing the
given SPARQL query. Each row is a :obj:`dict` mapping query variable to its
value in the row. If `select` is not `None`, then a row is included in the
returned :obj:`list` if and only if `select` returns :obj:`True` for that
row.
Raises:
ValueError: If the payload returned by the Data Commons REST API is
malformed.
Examples:
We would like to query for the name associated with three states identified
by their dcids
`California <https://browser.datacommons.org/kg?dcid=geoId/06>`_,
`Kentucky <https://browser.datacommons.org/kg?dcid=geoId/21>`_, and
`Maryland <https://browser.datacommons.org/kg?dcid=geoId/24>`_.
>>> query_str = '''
... SELECT ?name ?dcid
... WHERE {
... ?a typeOf Place .
... ?a name ?name .
... ?a dcid ("geoId/06" "geoId/21" "geoId/24") .
... ?a dcid ?dcid
... }
... '''
>>> result = query(query_str)
>>> for r in result:
... print(r)
{"?name": "Maryland", "?dcid": "geoId/24"}
{"?name": "Kentucky", "?dcid": "geoId/21"}
{"?name": "California", "?dcid": "geoId/06"}
Optionally, we can specify which rows are returned by setting :code:`select`
like so. The following returns all rows where the name is "Maryland".
>>> selector = lambda row: row['?name'] == 'Maryland'
>>> result = query(query_str, select=selector)
>>> for r in result:
... print(r)
{"?name": "Maryland", "?dcid": "geoId/24"}<|endoftext|> |
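The row-assembly loop in `query` can be exercised without the REST endpoint: it pairs each cell in a row with the query variable at the same position in `header`, then applies `select`. A standalone sketch of that logic, fed a payload shaped like the mixer response (the values are copied from the docstring's example; the real endpoint additionally needs an API key and a POST request):

```python
def rows_from_payload(res_json, select=None):
    """Mirror of query()'s row-assembly: pair each cell with its header variable."""
    header = res_json['header']
    result_rows = []
    for row in res_json['rows']:
        row_map = {}
        for idx, cell in enumerate(row['cells']):
            if idx >= len(header):
                raise ValueError('Query error: unexpected cell {}'.format(cell))
            if 'value' not in cell:
                raise ValueError('Query error: cell missing value {}'.format(cell))
            row_map[header[idx]] = cell['value']
        if select is None or select(row_map):
            result_rows.append(row_map)
    return result_rows


# Payload shaped like the mixer response described in the docstring above.
payload = {
    'header': ['?name', '?dcid'],
    'rows': [
        {'cells': [{'value': 'Maryland'}, {'value': 'geoId/24'}]},
        {'cells': [{'value': 'Kentucky'}, {'value': 'geoId/21'}]},
        {'cells': [{'value': 'California'}, {'value': 'geoId/06'}]},
    ],
}
```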
dbbc9cb2f6e577967fb18531f30001a280916724551bd772b08a6278573cbecb | def setUp(self):
'Sets up before each test'
logging.debug('setting up TestES') | Sets up before each test | tests/test_es_client.py | setUp | fanxchange/fanx-service-clients | 1 | python | def setUp(self):
logging.debug('setting up TestES') | def setUp(self):
logging.debug('setting up TestES')<|docstring|>Sets up before each test<|endoftext|> |
448b8e86c9941a47c28e10c73f12d947cadee5edcf346dde753ededa20ebc977 | def tearDown(self):
'Tears down after each test'
logging.debug('tearing down TestES') | Tears down after each test | tests/test_es_client.py | tearDown | fanxchange/fanx-service-clients | 1 | python | def tearDown(self):
logging.debug('tearing down TestES') | def tearDown(self):
logging.debug('tearing down TestES')<|docstring|>Tears down after each test<|endoftext|> |
3f52e09bdf8c6b52d305e20e12e98d79e855cb07fe1c31c3983728cf9caa50c0 | @classmethod
def setup_class(cls):
'setup_class() before any methods in this class, init class'
cls.es_client = ESClient(ES_CONN_PARAMS) | setup_class() before any methods in this class, init class | tests/test_es_client.py | setup_class | fanxchange/fanx-service-clients | 1 | python | @classmethod
def setup_class(cls):
cls.es_client = ESClient(ES_CONN_PARAMS) | @classmethod
def setup_class(cls):
cls.es_client = ESClient(ES_CONN_PARAMS)<|docstring|>setup_class() before any methods in this class, init class<|endoftext|> |
9e17d1ff6541f59f1b67778dabe2686e9e89a4fe62f0c591377dae123bb79fd7 | def test_search1(self):
'\n Single query search\n '
self.es_client.create_index(TEST_ES_INDEX)
q = '\n {\n "min_score": 2.0,\n "track_scores": true,\n "query": {\n "bool": {\n "must": [\n {\n "match": {\n "venue_name": {\n "query": "dodger stadium",\n "operator": "and"\n }\n }\n },\n {\n "bool": {\n "should": [\n {\n "match": {\n "name": {\n "query": "ironman",\n "minimum_should_match": "33%",\n "fuzziness": "AUTO"\n }\n }\n }\n ]\n }\n }\n ]\n }\n }\n }\n '
assert isinstance(self.es_client.search(q, index_name=TEST_ES_INDEX), list) | Single query search | tests/test_es_client.py | test_search1 | fanxchange/fanx-service-clients | 1 | python | def test_search1(self):
'\n \n '
self.es_client.create_index(TEST_ES_INDEX)
q = '\n {\n "min_score": 2.0,\n "track_scores": true,\n "query": {\n "bool": {\n "must": [\n {\n "match": {\n "venue_name": {\n "query": "dodger stadium",\n "operator": "and"\n }\n }\n },\n {\n "bool": {\n "should": [\n {\n "match": {\n "name": {\n "query": "ironman",\n "minimum_should_match": "33%",\n "fuzziness": "AUTO"\n }\n }\n }\n ]\n }\n }\n ]\n }\n }\n }\n '
assert isinstance(self.es_client.search(q, index_name=TEST_ES_INDEX), list) | def test_search1(self):
'\n \n '
self.es_client.create_index(TEST_ES_INDEX)
q = '\n {\n "min_score": 2.0,\n "track_scores": true,\n "query": {\n "bool": {\n "must": [\n {\n "match": {\n "venue_name": {\n "query": "dodger stadium",\n "operator": "and"\n }\n }\n },\n {\n "bool": {\n "should": [\n {\n "match": {\n "name": {\n "query": "ironman",\n "minimum_should_match": "33%",\n "fuzziness": "AUTO"\n }\n }\n }\n ]\n }\n }\n ]\n }\n }\n }\n '
assert isinstance(self.es_client.search(q, index_name=TEST_ES_INDEX), list)<|docstring|>Single query search<|endoftext|> |
98f865a68565ae777334215e5452b5bee754d5feb493545ce855e51ad727a863 | def test_msearch1(self):
'\n Test multi-search\n '
self.es_client.create_index(TEST_ES_INDEX)
queries = []
queries.append({'min_score': 2.0, 'query': {'bool': {'should': [{'match': {'name': {'query': 'batman'}}}]}}})
queries.append({'min_score': 1.0, 'query': {'bool': {'should': [{'match': {'name': {'query': 'ironman'}}}]}}})
queries.append({'track_scores': True, 'min_score': 9.0, 'query': {'bool': {'should': [{'match': {'name': {'query': 'not-findable'}}}]}}})
q_results = self.es_client.msearch(queries, index_name=TEST_ES_INDEX, doc_type='event')
assert (len(q_results) == 3) | Test multi-search | tests/test_es_client.py | test_msearch1 | fanxchange/fanx-service-clients | 1 | python | def test_msearch1(self):
'\n \n '
self.es_client.create_index(TEST_ES_INDEX)
queries = []
queries.append({'min_score': 2.0, 'query': {'bool': {'should': [{'match': {'name': {'query': 'batman'}}}]}}})
queries.append({'min_score': 1.0, 'query': {'bool': {'should': [{'match': {'name': {'query': 'ironman'}}}]}}})
queries.append({'track_scores': True, 'min_score': 9.0, 'query': {'bool': {'should': [{'match': {'name': {'query': 'not-findable'}}}]}}})
q_results = self.es_client.msearch(queries, index_name=TEST_ES_INDEX, doc_type='event')
assert (len(q_results) == 3) | def test_msearch1(self):
'\n \n '
self.es_client.create_index(TEST_ES_INDEX)
queries = []
queries.append({'min_score': 2.0, 'query': {'bool': {'should': [{'match': {'name': {'query': 'batman'}}}]}}})
queries.append({'min_score': 1.0, 'query': {'bool': {'should': [{'match': {'name': {'query': 'ironman'}}}]}}})
queries.append({'track_scores': True, 'min_score': 9.0, 'query': {'bool': {'should': [{'match': {'name': {'query': 'not-findable'}}}]}}})
q_results = self.es_client.msearch(queries, index_name=TEST_ES_INDEX, doc_type='event')
assert (len(q_results) == 3)<|docstring|>Test multi-search<|endoftext|> |
3e3e6b08c93be41eb0fb7b71ea04252862ec10242bea8906c510a78c8f0c489f | def test_upsert1(self):
'\n Test inserting doc, then updating it\n '
doc_id = 1
event = {'event_id': doc_id, 'event_name': 'Ryder Cup Golf', 'event_alt_names': '', 'event_date': '2017-12-12 22:00:00', 'event_time': '10:00 pm', 'venue_name': 'Hazeltine National Golf Club'}
self.es_client.create_index(TEST_ES_INDEX)
update = self.es_client.upsert_doc(doc_id, event, index_name=TEST_ES_INDEX)
assert isinstance(update, dict)
assert (int(update['_id']) == doc_id), 'Got {}'.format(doc_id)
event['event_name'] = 'Ryder Cup Golf Test'
update = self.es_client.upsert_doc(doc_id, event, index_name=TEST_ES_INDEX)
assert isinstance(update, dict)
assert (int(update['_id']) == doc_id), 'Got {}'.format(doc_id) | Test inserting doc, then updating it | tests/test_es_client.py | test_upsert1 | fanxchange/fanx-service-clients | 1 | python | def test_upsert1(self):
'\n \n '
doc_id = 1
event = {'event_id': doc_id, 'event_name': 'Ryder Cup Golf', 'event_alt_names': '', 'event_date': '2017-12-12 22:00:00', 'event_time': '10:00 pm', 'venue_name': 'Hazeltine National Golf Club'}
self.es_client.create_index(TEST_ES_INDEX)
update = self.es_client.upsert_doc(doc_id, event, index_name=TEST_ES_INDEX)
assert isinstance(update, dict)
assert (int(update['_id']) == doc_id), 'Got {}'.format(doc_id)
event['event_name'] = 'Ryder Cup Golf Test'
update = self.es_client.upsert_doc(doc_id, event, index_name=TEST_ES_INDEX)
assert isinstance(update, dict)
assert (int(update['_id']) == doc_id), 'Got {}'.format(doc_id) | def test_upsert1(self):
'\n \n '
doc_id = 1
event = {'event_id': doc_id, 'event_name': 'Ryder Cup Golf', 'event_alt_names': '', 'event_date': '2017-12-12 22:00:00', 'event_time': '10:00 pm', 'venue_name': 'Hazeltine National Golf Club'}
self.es_client.create_index(TEST_ES_INDEX)
update = self.es_client.upsert_doc(doc_id, event, index_name=TEST_ES_INDEX)
assert isinstance(update, dict)
assert (int(update['_id']) == doc_id), 'Got {}'.format(doc_id)
event['event_name'] = 'Ryder Cup Golf Test'
update = self.es_client.upsert_doc(doc_id, event, index_name=TEST_ES_INDEX)
assert isinstance(update, dict)
assert (int(update['_id']) == doc_id), 'Got {}'.format(doc_id)<|docstring|>Test inserting doc, then updating it<|endoftext|> |
b34d618fe1e886c3af407d26a1f5bfbd7d81afd5dc547d742370666e674a0e2b | def test_add_remove_alias1(self):
'\n Add as a list, multi-indexes\n '
self.es_client.create_index(TEST_ES_INDEX)
alias = 'test_alias1'
self.es_client.delete_alias(index_name=TEST_ES_INDEX, alias_name=alias)
result = self.es_client.add_alias(indexes=[TEST_ES_INDEX], alias_name=alias)
assert result['acknowledged']
result = self.es_client.delete_alias(index_name=TEST_ES_INDEX, alias_name=alias)
assert result['acknowledged'] | Add as a list, multi-indexes | tests/test_es_client.py | test_add_remove_alias1 | fanxchange/fanx-service-clients | 1 | python | def test_add_remove_alias1(self):
'\n \n '
self.es_client.create_index(TEST_ES_INDEX)
alias = 'test_alias1'
self.es_client.delete_alias(index_name=TEST_ES_INDEX, alias_name=alias)
result = self.es_client.add_alias(indexes=[TEST_ES_INDEX], alias_name=alias)
assert result['acknowledged']
result = self.es_client.delete_alias(index_name=TEST_ES_INDEX, alias_name=alias)
assert result['acknowledged'] | def test_add_remove_alias1(self):
'\n \n '
self.es_client.create_index(TEST_ES_INDEX)
alias = 'test_alias1'
self.es_client.delete_alias(index_name=TEST_ES_INDEX, alias_name=alias)
result = self.es_client.add_alias(indexes=[TEST_ES_INDEX], alias_name=alias)
assert result['acknowledged']
result = self.es_client.delete_alias(index_name=TEST_ES_INDEX, alias_name=alias)
assert result['acknowledged']<|docstring|>Add as a list, multi-indexes<|endoftext|> |
4548e1e1a5f0416a4c5e3cea2e5973996dff71648ba715ecb7077ca4e2274258 | def test_add_remove_alias2(self):
'\n Add as an alias for single index\n '
alias = 'test_alias1'
self.es_client.delete_alias(index_name=TEST_ES_INDEX, alias_name=alias)
result = self.es_client.add_alias(indexes=TEST_ES_INDEX, alias_name=alias)
assert result['acknowledged']
result = self.es_client.delete_alias(index_name=TEST_ES_INDEX, alias_name=alias)
assert result['acknowledged'] | Add as an alias for single index | tests/test_es_client.py | test_add_remove_alias2 | fanxchange/fanx-service-clients | 1 | python | def test_add_remove_alias2(self):
'\n \n '
alias = 'test_alias1'
self.es_client.delete_alias(index_name=TEST_ES_INDEX, alias_name=alias)
result = self.es_client.add_alias(indexes=TEST_ES_INDEX, alias_name=alias)
assert result['acknowledged']
result = self.es_client.delete_alias(index_name=TEST_ES_INDEX, alias_name=alias)
assert result['acknowledged'] | def test_add_remove_alias2(self):
'\n \n '
alias = 'test_alias1'
self.es_client.delete_alias(index_name=TEST_ES_INDEX, alias_name=alias)
result = self.es_client.add_alias(indexes=TEST_ES_INDEX, alias_name=alias)
assert result['acknowledged']
result = self.es_client.delete_alias(index_name=TEST_ES_INDEX, alias_name=alias)
assert result['acknowledged']<|docstring|>Add as an alias for single index<|endoftext|> |
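The `msearch` test above passes a plain list of query dicts to the client. Under the hood, Elasticsearch's `_msearch` endpoint takes newline-delimited JSON: one header line (index, and doc type on older versions) followed by one query line per search, with a trailing newline. The `ESClient` internals are not shown in this file, so this is only an assumption about how such a client might serialize the body:

```python
import json


def build_msearch_body(queries, index_name, doc_type=None):
    """Serialize a list of query dicts into the NDJSON body _msearch expects."""
    lines = []
    for query in queries:
        header = {'index': index_name}
        if doc_type:
            # Doc types are only meaningful on older Elasticsearch versions.
            header['type'] = doc_type
        lines.append(json.dumps(header))
        lines.append(json.dumps(query))
    # _msearch requires the body to end with a newline.
    return '\n'.join(lines) + '\n'
```

Each query in the list produces one entry in the `responses` array of the result, which is why the test asserts `len(q_results) == 3` for three queries.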
f71a95570848d01315786e81e12d92af31772f675c00c7a81b49f8f70761cfce | def __getattr__(self, key):
'Converts all method calls to use the schedule method'
return functools.partial(self._schedule, key) | Converts all method calls to use the schedule method | nova/scheduler/manager.py | __getattr__ | tqrg-bot/nova | 0 | python | def __getattr__(self, key):
return functools.partial(self._schedule, key) | def __getattr__(self, key):
return functools.partial(self._schedule, key)<|docstring|>Converts all method calls to use the schedule method<|endoftext|> |
e6fd1c9c9335da1a15e5d125fe3f4c7e918b3a065f4757cf41725179fe9e58fc | def _schedule(self, method, context, topic, *args, **kwargs):
"Tries to call schedule_* method on the driver to retrieve host.\n\n Falls back to schedule(context, topic) if method doesn't exist.\n "
driver_method = ('schedule_%s' % method)
elevated = context.elevated()
try:
host = getattr(self.driver, driver_method)(elevated, *args, **kwargs)
except AttributeError:
host = self.driver.schedule(elevated, topic, *args, **kwargs)
rpc.cast(context, db.queue_get_for(context, topic, host), {'method': method, 'args': kwargs})
LOG.debug((_('Casting to %(topic)s %(host)s for %(method)s') % locals())) | Tries to call schedule_* method on the driver to retrieve host.
Falls back to schedule(context, topic) if method doesn't exist. | nova/scheduler/manager.py | _schedule | tqrg-bot/nova | 0 | python | def _schedule(self, method, context, topic, *args, **kwargs):
"Tries to call schedule_* method on the driver to retrieve host.\n\n Falls back to schedule(context, topic) if method doesn't exist.\n "
driver_method = ('schedule_%s' % method)
elevated = context.elevated()
try:
host = getattr(self.driver, driver_method)(elevated, *args, **kwargs)
except AttributeError:
host = self.driver.schedule(elevated, topic, *args, **kwargs)
rpc.cast(context, db.queue_get_for(context, topic, host), {'method': method, 'args': kwargs})
LOG.debug((_('Casting to %(topic)s %(host)s for %(method)s') % locals())) | def _schedule(self, method, context, topic, *args, **kwargs):
"Tries to call schedule_* method on the driver to retrieve host.\n\n Falls back to schedule(context, topic) if method doesn't exist.\n "
driver_method = ('schedule_%s' % method)
elevated = context.elevated()
try:
host = getattr(self.driver, driver_method)(elevated, *args, **kwargs)
except AttributeError:
host = self.driver.schedule(elevated, topic, *args, **kwargs)
rpc.cast(context, db.queue_get_for(context, topic, host), {'method': method, 'args': kwargs})
LOG.debug((_('Casting to %(topic)s %(host)s for %(method)s') % locals()))<|docstring|>Tries to call schedule_* method on the driver to retrieve host.
Falls back to schedule(context, topic) if method doesn't exist.<|endoftext|> |
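The fallback logic in `_schedule` — try a specialized `schedule_<method>` on the driver, and fall back to a generic `schedule()` when it does not exist — can be sketched in isolation like this (a toy driver, not the real scheduler interface):

```python
class Driver:
    def schedule(self, topic):
        # Generic fallback used when no specialized method exists.
        return "generic:" + topic

    def schedule_run_instance(self, topic):
        return "specific:" + topic

def dispatch(driver, method, topic):
    # Prefer schedule_<method>; getattr raises AttributeError when the
    # driver does not implement it, which routes us to the fallback.
    try:
        return getattr(driver, "schedule_" + method)(topic)
    except AttributeError:
        return driver.schedule(topic)

d = Driver()
a = dispatch(d, "run_instance", "compute")
b = dispatch(d, "start_volume", "volume")
```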
1413353caf3153a4ba12dbc7fb84ec424eba69d003fa42c3b1ac6774a3a163eb | def base58_encode(num, alphabet=ALPHABET):
'Encode a number in Base X\n\n `num`: The number to encode\n `alphabet`: The alphabet to use for encoding\n '
if (num == 0):
return alphabet[0]
arr = []
base = len(alphabet)
while num:
rem = (num % base)
num = (num // base)
arr.append(alphabet[rem])
arr.reverse()
return ''.join(arr) | Encode a number in Base X
`num`: The number to encode
`alphabet`: The alphabet to use for encoding | cola/core/utils.py | base58_encode | leafgray/cola | 1,061 | python | def base58_encode(num, alphabet=ALPHABET):
'Encode a number in Base X\n\n `num`: The number to encode\n `alphabet`: The alphabet to use for encoding\n '
if (num == 0):
return alphabet[0]
arr = []
base = len(alphabet)
while num:
rem = (num % base)
num = (num // base)
arr.append(alphabet[rem])
arr.reverse()
return ''.join(arr)
'Encode a number in Base X\n\n `num`: The number to encode\n `alphabet`: The alphabet to use for encoding\n '
if (num == 0):
return alphabet[0]
arr = []
base = len(alphabet)
while num:
rem = (num % base)
num = (num // base)
arr.append(alphabet[rem])
arr.reverse()
return ''.join(arr)<|docstring|>Encode a number in Base X
`num`: The number to encode
`alphabet`: The alphabet to use for encoding<|endoftext|> |
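The record above shows the encoder but references an `ALPHABET` constant it does not define. A runnable sketch with a decoder counterpart, assuming the common Bitcoin-style base58 alphabet (the actual cola alphabet may differ):

```python
# Assumed alphabet: digits and letters minus the ambiguous 0, O, I, l.
ALPHABET = "123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz"

def base58_encode(num, alphabet=ALPHABET):
    if num == 0:
        return alphabet[0]
    arr = []
    base = len(alphabet)
    while num:
        num, rem = divmod(num, base)
        arr.append(alphabet[rem])
    return "".join(reversed(arr))

def base58_decode(s, alphabet=ALPHABET):
    # Inverse of the encoder: accumulate digit values, most significant first.
    base = len(alphabet)
    num = 0
    for ch in s:
        num = num * base + alphabet.index(ch)
    return num

roundtrip = base58_decode(base58_encode(2 ** 40))
```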
6f67244e2022669bb27b27c8734a7267522ae564388ef923d9874a767916b3de | def change_gpu(val):
'\n Args:\n val: an integer, the index of the GPU or -1 to disable GPU.\n\n Returns:\n a context where ``CUDA_VISIBLE_DEVICES=val``.\n '
val = str(val)
if (val == '-1'):
val = ''
return change_env('CUDA_VISIBLE_DEVICES', val) | Args:
val: an integer, the index of the GPU or -1 to disable GPU.
Returns:
a context where ``CUDA_VISIBLE_DEVICES=val``. | tensorpack/utils/gpu.py | change_gpu | gopalakrishna-r/tensorpack | 4,404 | python | def change_gpu(val):
'\n Args:\n val: an integer, the index of the GPU or -1 to disable GPU.\n\n Returns:\n a context where ``CUDA_VISIBLE_DEVICES=val``.\n '
val = str(val)
if (val == '-1'):
val = ''
return change_env('CUDA_VISIBLE_DEVICES', val) | def change_gpu(val):
'\n Args:\n val: an integer, the index of the GPU or -1 to disable GPU.\n\n Returns:\n a context where ``CUDA_VISIBLE_DEVICES=val``.\n '
val = str(val)
if (val == '-1'):
val = ''
return change_env('CUDA_VISIBLE_DEVICES', val)<|docstring|>Args:
val: an integer, the index of the GPU or -1 to disable GPU.
Returns:
a context where ``CUDA_VISIBLE_DEVICES=val``.<|endoftext|> |
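`change_gpu` delegates to a `change_env` helper that is not shown in this record. A plausible sketch of such a helper: a context manager that sets an environment variable on entry and restores the previous value on exit (the variable name below is arbitrary, chosen to avoid touching a real CUDA setting):

```python
import contextlib
import os

@contextlib.contextmanager
def change_env(name, val):
    # Save the old value (None if unset), apply the new one, and
    # restore the original state even if the body raises.
    old = os.environ.get(name)
    os.environ[name] = val
    try:
        yield
    finally:
        if old is None:
            os.environ.pop(name, None)
        else:
            os.environ[name] = old

with change_env("GPU_SKETCH_VAR", "0"):
    inside = os.environ["GPU_SKETCH_VAR"]
after = os.environ.get("GPU_SKETCH_VAR")
```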
6d37eb74c6f43dc6d416555a81a6a705a356b72bfb9d7253cc12f9eebe9fdd26 | def get_num_gpu():
'\n Returns:\n int: #available GPUs in CUDA_VISIBLE_DEVICES, or in the system.\n '
def warn_return(ret, message):
try:
import tensorflow as tf
except ImportError:
return ret
built_with_cuda = tf.test.is_built_with_cuda()
if ((not built_with_cuda) and (ret > 0)):
logger.warn((message + 'But TensorFlow was not built with CUDA support and could not use GPUs!'))
return ret
try:
with NVMLContext() as ctx:
nvml_num_dev = ctx.num_devices()
except Exception:
nvml_num_dev = None
env = os.environ.get('CUDA_VISIBLE_DEVICES', None)
if env:
num_dev = len(env.split(','))
assert (num_dev <= nvml_num_dev), 'Only {} GPU(s) available, but CUDA_VISIBLE_DEVICES is set to {}'.format(nvml_num_dev, env)
return warn_return(num_dev, 'Found non-empty CUDA_VISIBLE_DEVICES. ')
(output, code) = subproc_call('nvidia-smi -L', timeout=5)
if (code == 0):
output = output.decode('utf-8')
return warn_return(len(output.strip().split('\n')), 'Found nvidia-smi. ')
if (nvml_num_dev is not None):
return warn_return(nvml_num_dev, 'NVML found nvidia devices. ')
logger.info('Loading local devices by TensorFlow ...')
try:
import tensorflow as tf
gpu_devices = tf.config.experimental.list_physical_devices('GPU')
except AttributeError:
from tensorflow.python.client import device_lib
local_device_protos = device_lib.list_local_devices()
gpu_devices = [x.name for x in local_device_protos if (x.device_type == 'GPU')]
return len(gpu_devices) | Returns:
int: #available GPUs in CUDA_VISIBLE_DEVICES, or in the system. | tensorpack/utils/gpu.py | get_num_gpu | gopalakrishna-r/tensorpack | 4,404 | python | def get_num_gpu():
'\n Returns:\n int: #available GPUs in CUDA_VISIBLE_DEVICES, or in the system.\n '
def warn_return(ret, message):
try:
import tensorflow as tf
except ImportError:
return ret
built_with_cuda = tf.test.is_built_with_cuda()
if ((not built_with_cuda) and (ret > 0)):
logger.warn((message + 'But TensorFlow was not built with CUDA support and could not use GPUs!'))
return ret
try:
with NVMLContext() as ctx:
nvml_num_dev = ctx.num_devices()
except Exception:
nvml_num_dev = None
env = os.environ.get('CUDA_VISIBLE_DEVICES', None)
if env:
num_dev = len(env.split(','))
assert (num_dev <= nvml_num_dev), 'Only {} GPU(s) available, but CUDA_VISIBLE_DEVICES is set to {}'.format(nvml_num_dev, env)
return warn_return(num_dev, 'Found non-empty CUDA_VISIBLE_DEVICES. ')
(output, code) = subproc_call('nvidia-smi -L', timeout=5)
if (code == 0):
output = output.decode('utf-8')
return warn_return(len(output.strip().split('\n')), 'Found nvidia-smi. ')
if (nvml_num_dev is not None):
return warn_return(nvml_num_dev, 'NVML found nvidia devices. ')
logger.info('Loading local devices by TensorFlow ...')
try:
import tensorflow as tf
gpu_devices = tf.config.experimental.list_physical_devices('GPU')
except AttributeError:
from tensorflow.python.client import device_lib
local_device_protos = device_lib.list_local_devices()
gpu_devices = [x.name for x in local_device_protos if (x.device_type == 'GPU')]
return len(gpu_devices) | def get_num_gpu():
'\n Returns:\n int: #available GPUs in CUDA_VISIBLE_DEVICES, or in the system.\n '
def warn_return(ret, message):
try:
import tensorflow as tf
except ImportError:
return ret
built_with_cuda = tf.test.is_built_with_cuda()
if ((not built_with_cuda) and (ret > 0)):
logger.warn((message + 'But TensorFlow was not built with CUDA support and could not use GPUs!'))
return ret
try:
with NVMLContext() as ctx:
nvml_num_dev = ctx.num_devices()
except Exception:
nvml_num_dev = None
env = os.environ.get('CUDA_VISIBLE_DEVICES', None)
if env:
num_dev = len(env.split(','))
assert (num_dev <= nvml_num_dev), 'Only {} GPU(s) available, but CUDA_VISIBLE_DEVICES is set to {}'.format(nvml_num_dev, env)
return warn_return(num_dev, 'Found non-empty CUDA_VISIBLE_DEVICES. ')
(output, code) = subproc_call('nvidia-smi -L', timeout=5)
if (code == 0):
output = output.decode('utf-8')
return warn_return(len(output.strip().split('\n')), 'Found nvidia-smi. ')
if (nvml_num_dev is not None):
return warn_return(nvml_num_dev, 'NVML found nvidia devices. ')
logger.info('Loading local devices by TensorFlow ...')
try:
import tensorflow as tf
gpu_devices = tf.config.experimental.list_physical_devices('GPU')
except AttributeError:
from tensorflow.python.client import device_lib
local_device_protos = device_lib.list_local_devices()
gpu_devices = [x.name for x in local_device_protos if (x.device_type == 'GPU')]
return len(gpu_devices)<|docstring|>Returns:
int: #available GPUs in CUDA_VISIBLE_DEVICES, or in the system.<|endoftext|> |
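The first branch of `get_num_gpu` — counting devices from `CUDA_VISIBLE_DEVICES` when it is set — is easy to isolate. A small sketch of just that parsing step (the real function additionally cross-checks against NVML):

```python
def visible_gpu_count(env_value):
    # An unset or empty variable means "no restriction" here (None);
    # otherwise the value is a comma-separated list of device indices.
    if not env_value:
        return None
    return len(env_value.split(","))
```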
a2a417f3cc83efc6b1d7e429dfaf63609c16da148069764eeabc34aee05b8dc9 | @command(outgoing=True, pattern='^.alive$')
async def amireallyalive(alive):
' For .alive command, check if the bot is running. '
(await alive.edit(f'''`I am on, my peru master!` **ψ(`∇´)ψ**
`Telethon version: 6.9.0
Python: 3.7.3
``Bot created by:` [SnapDragon](tg://user?id=1239527009), @surajit1
`My peru owner`: {DEFAULTUSER}
https://t.me/surajit_1''')) | For .alive command, check if the bot is running. | userbot/plugins/alive.py | amireallyalive | surajit1212/X-tra-Telegram | 0 | python | @command(outgoing=True, pattern='^.alive$')
async def amireallyalive(alive):
' '
(await alive.edit(f'''`I am on, my peru master!` **ψ(`∇´)ψ**
`Telethon version: 6.9.0
Python: 3.7.3
``Bot created by:` [SnapDragon](tg://user?id=1239527009), @surajit1
`My peru owner`: {DEFAULTUSER}
https://t.me/surajit_1')) | @command(outgoing=True, pattern='^.alive$')
async def amireallyalive(alive):
' '
(await alive.edit(f'''`I am on, my peru master!` **ψ(`∇´)ψ**
`Telethon version: 6.9.0
Python: 3.7.3
``Bot created by:` [SnapDragon](tg://user?id=1239527009), @surajit1
`My peru owner`: {DEFAULTUSER}
https://t.me/surajit_1'))<|docstring|>For .alive command, check if the bot is running.<|endoftext|> |
1c6e387efc2569ba65341afdda061b84b35d408dd19d9f717b7d68e24d0e8b07 | @blueprint.route('/')
def index() -> dict:
'Return index.'
return {'path': 'add_data/', 'name': 'add_data'} | Return index. | app/bp/api/v1/__init__.py | index | ScholCommLab/fhe-collector | 2 | python | @blueprint.route('/')
def index() -> dict:
return {'path': 'add_data/', 'name': 'add_data'} | @blueprint.route('/')
def index() -> dict:
return {'path': 'add_data/', 'name': 'add_data'}<|docstring|>Return index.<|endoftext|> |
185a7db389db9c4f96ff7f3502f6699cea5b2875404cfac3e21e5d243fbdafe5 | @blueprint.route('/add_data', methods=['POST'])
def add_data() -> str:
'Add data via an API endpoint to the database.\n\n Required: doi\n Optional: url, date\n '
response_status = 'error'
url_type_list = ['ojs', 'doi_new', 'doi_old', 'doi_new_landingpage', 'unpaywall', 'pubmed', 'pubmedcentral']
json_data = request.get_json()
if (request.method == 'POST'):
try:
if ('X-API-Key' in request.headers):
if (current_app.config['API_TOKEN'] == request.headers['X-API-Key']):
if (request.headers['Content-Type'] == 'application/json'):
json_data = request.get_json()
if isinstance(json_data, list):
is_data_valid = True
for entry in json_data:
if ('doi' in entry):
if (not isinstance(entry['doi'], str)):
response = 'DOI {} is no string.'.format(entry['doi'])
is_data_valid = False
if ('url' in entry):
if (not isinstance(entry['url'], str)):
response = 'URL {} is no string.'.format(entry['url'])
is_data_valid = False
else:
print('URL is missing')
is_data_valid = False
if ('url_type' in entry):
if (not isinstance(entry['url_type'], str)):
response = 'URL type {} is no string.'.format(entry['url_type'])
is_data_valid = False
if (entry['url_type'] not in url_type_list):
response = 'URL type {} is not one of the allowed types.'.format(entry['url_type'])
is_data_valid = False
else:
response = 'URL type is missing.'
is_data_valid = False
if ('date' in entry):
if (not isinstance(entry['date'], str)):
response = 'Date {} is no string.'.format(entry['date'])
is_data_valid = False
else:
response = 'Date is missing.'
is_data_valid = False
else:
is_data_valid = False
response = 'DOI is missing in {}.'.format(entry)
if is_data_valid:
resp_func = import_dois_from_api(json_data)
if resp_func:
response = resp_func
response_status = 'ok'
else:
response = 'Error: JSON from API could not be stored in database.'
else:
response = 'No list of data in JSON.'
else:
response = 'No JSON delivered.'
else:
response = 'Authentication token not right.'
else:
response = 'Authentication token not passed.'
except:
raise
return jsonify({'status': response_status, 'content': response}) | Add data via an API endpoint to the database.
Required: doi
Optional: url, date | app/bp/api/v1/__init__.py | add_data | ScholCommLab/fhe-collector | 2 | python | @blueprint.route('/add_data', methods=['POST'])
def add_data() -> str:
'Add data via an API endpoint to the database.\n\n Required: doi\n Optional: url, date\n '
response_status = 'error'
url_type_list = ['ojs', 'doi_new', 'doi_old', 'doi_new_landingpage', 'unpaywall', 'pubmed', 'pubmedcentral']
json_data = request.get_json()
if (request.method == 'POST'):
try:
if ('X-API-Key' in request.headers):
if (current_app.config['API_TOKEN'] == request.headers['X-API-Key']):
if (request.headers['Content-Type'] == 'application/json'):
json_data = request.get_json()
if isinstance(json_data, list):
is_data_valid = True
for entry in json_data:
if ('doi' in entry):
if (not isinstance(entry['doi'], str)):
response = 'DOI {} is no string.'.format(entry['doi'])
is_data_valid = False
if ('url' in entry):
if (not isinstance(entry['url'], str)):
response = 'URL {} is no string.'.format(entry['url'])
is_data_valid = False
else:
print('URL is missing')
is_data_valid = False
if ('url_type' in entry):
if (not isinstance(entry['url_type'], str)):
response = 'URL type {} is no string.'.format(entry['url_type'])
is_data_valid = False
if (entry['url_type'] not in url_type_list):
response = 'URL type {} is not one of the allowed types.'.format(entry['url_type'])
is_data_valid = False
else:
response = 'URL type is missing.'
is_data_valid = False
if ('date' in entry):
if (not isinstance(entry['date'], str)):
response = 'Date {} is no string.'.format(entry['date'])
is_data_valid = False
else:
response = 'Date is missing.'
is_data_valid = False
else:
is_data_valid = False
response = 'DOI is missing in {}.'.format(entry)
if is_data_valid:
resp_func = import_dois_from_api(json_data)
if resp_func:
response = resp_func
response_status = 'ok'
else:
response = 'Error: JSON from API could not be stored in database.'
else:
response = 'No list of data in JSON.'
else:
response = 'No JSON delivered.'
else:
response = 'Authentication token not right.'
else:
response = 'Authentication token not passed.'
except:
raise
return jsonify({'status': response_status, 'content': response}) | @blueprint.route('/add_data', methods=['POST'])
def add_data() -> str:
'Add data via an API endpoint to the database.\n\n Required: doi\n Optional: url, date\n '
response_status = 'error'
url_type_list = ['ojs', 'doi_new', 'doi_old', 'doi_new_landingpage', 'unpaywall', 'pubmed', 'pubmedcentral']
json_data = request.get_json()
if (request.method == 'POST'):
try:
if ('X-API-Key' in request.headers):
if (current_app.config['API_TOKEN'] == request.headers['X-API-Key']):
if (request.headers['Content-Type'] == 'application/json'):
json_data = request.get_json()
if isinstance(json_data, list):
is_data_valid = True
for entry in json_data:
if ('doi' in entry):
if (not isinstance(entry['doi'], str)):
response = 'DOI {} is no string.'.format(entry['doi'])
is_data_valid = False
if ('url' in entry):
if (not isinstance(entry['url'], str)):
response = 'URL {} is no string.'.format(entry['url'])
is_data_valid = False
else:
print('URL is missing')
is_data_valid = False
if ('url_type' in entry):
if (not isinstance(entry['url_type'], str)):
response = 'URL type {} is no string.'.format(entry['url_type'])
is_data_valid = False
if (entry['url_type'] not in url_type_list):
response = 'URL type {} is not one of the allowed types.'.format(entry['url_type'])
is_data_valid = False
else:
response = 'URL type is missing.'
is_data_valid = False
if ('date' in entry):
if (not isinstance(entry['date'], str)):
response = 'Date {} is no string.'.format(entry['date'])
is_data_valid = False
else:
response = 'Date is missing.'
is_data_valid = False
else:
is_data_valid = False
response = 'DOI is missing in {}.'.format(entry)
if is_data_valid:
resp_func = import_dois_from_api(json_data)
if resp_func:
response = resp_func
response_status = 'ok'
else:
response = 'Error: JSON from API could not be stored in database.'
else:
response = 'No list of data in JSON.'
else:
response = 'No JSON delivered.'
else:
response = 'Authentication token not right.'
else:
response = 'Authentication token not passed.'
except:
raise
return jsonify({'status': response_status, 'content': response})<|docstring|>Add data via an API endpoint to the database.
Required: doi
Optional: url, date<|endoftext|> |
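The deeply nested per-entry validation in `add_data` can be condensed into a small helper that collects all problems instead of stopping at the first. A sketch under the same rules (required string fields, whitelisted `url_type`); `validate_entry` is an illustrative name, not part of the app's API:

```python
URL_TYPES = {"ojs", "doi_new", "doi_old", "doi_new_landingpage",
             "unpaywall", "pubmed", "pubmedcentral"}

def validate_entry(entry):
    # Every field must be present and a string; url_type must also
    # belong to the whitelist. Returns a list of error messages.
    errors = []
    for field in ("doi", "url", "url_type", "date"):
        if field not in entry:
            errors.append(f"{field} is missing")
        elif not isinstance(entry[field], str):
            errors.append(f"{field} is no string")
    if isinstance(entry.get("url_type"), str) and entry["url_type"] not in URL_TYPES:
        errors.append("url_type is not one of the allowed types")
    return errors

ok = validate_entry({"doi": "10.1/x", "url": "http://e.x",
                     "url_type": "ojs", "date": "2020-01-01"})
bad = validate_entry({"doi": "10.1/x", "url_type": "foo"})
```

Collecting errors this way also makes the endpoint's JSON response more useful than reporting a single failure at a time.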
7ce8ef35d640ccc488ef3739cd36909045f327f943b6ed53e94d2131216ee899 | def create_Server(user, password):
' \n establish the SMTP server for accessing Gmail from terminal; \n args: Gmail username, Gmail password;\n returns: server\n '
try:
server = smtplib.SMTP_SSL('smtp.gmail.com', 465)
server.ehlo()
server.login(user, password)
print('Log in successful!')
return server
except:
print('Could not log in...') | establish the SMTP server for accessing Gmail from terminal;
args: Gmail username, Gmail password;
returns: server | methods.py | create_Server | SamLeBlanc/George-Floyd-Email-Bot | 0 | python | def create_Server(user, password):
' \n establish the SMTP server for accessing Gmail from terminal; \n args: Gmail username, Gmail password;\n returns: server\n '
try:
server = smtplib.SMTP_SSL('smtp.gmail.com', 465)
server.ehlo()
server.login(user, password)
print('Log in successful!')
return server
except:
print('Could not log in...') | def create_Server(user, password):
' \n establish the SMTP server for accessing Gmail from terminal; \n args: Gmail username, Gmail password;\n returns: server\n '
try:
server = smtplib.SMTP_SSL('smtp.gmail.com', 465)
server.ehlo()
server.login(user, password)
print('Log in successful!')
return server
except:
print('Could not log in...')<|docstring|>establish the SMTP server for accessing Gmail from terminal;
args: Gmail username, Gmail password;
returns: server<|endoftext|> |
9c1e0d61cff608ed6f346768ac815d0c696785d77dcc48a1d008755f6ffd2ada | def create_Email(frum, tos, sub, bod):
'\n create the text email object to be read by the server;\n args: \n frum: email sender (string)\n tos: email recipients (list)\n sub: email subject (string)\n bod: email body (string)\n returns:\n text: full email info to be read by server\n '
msg = MIMEMultipart()
msg['From'] = frum
msg['To'] = ', '.join(tos)
msg['Subject'] = sub
body = bod
msg.attach(MIMEText(body, 'plain'))
text = msg.as_string()
return text | create the text email object to be read by the server;
args:
frum: email sender (string)
tos: email recipients (list)
sub: email subject (string)
bod: email body (string)
returns:
text: full email info to be read by server | methods.py | create_Email | SamLeBlanc/George-Floyd-Email-Bot | 0 | python | def create_Email(frum, tos, sub, bod):
'\n create the text email object to be read by the server;\n args: \n frum: email sender (string)\n tos: email recipients (list)\n sub: email subject (string)\n bod: email body (string)\n returns:\n text: full email info to be read by server\n '
msg = MIMEMultipart()
msg['From'] = frum
msg['To'] = ', '.join(tos)
msg['Subject'] = sub
body = bod
msg.attach(MIMEText(body, 'plain'))
text = msg.as_string()
return text | def create_Email(frum, tos, sub, bod):
'\n create the text email object to be read by the server;\n args: \n frum: email sender (string)\n tos: email recipients (list)\n sub: email subject (string)\n bod: email body (string)\n returns:\n text: full email info to be read by server\n '
msg = MIMEMultipart()
msg['From'] = frum
msg['To'] = ', '.join(tos)
msg['Subject'] = sub
body = bod
msg.attach(MIMEText(body, 'plain'))
text = msg.as_string()
return text<|docstring|>create the text email object to be read by the server;
args:
frum: email sender (string)
tos: email recipients (list)
sub: email subject (string)
bod: email body (string)
returns:
text: full email info to be read by server<|endoftext|> |
5abcc087c1428c3618682de2fabdaae257cb2b97a4b790c08d3e5981d7c047c7 | def send_Email(serv, email, frum, tos, sub, bod):
'\n send the email using the SMTP server;\n args: \n serv: SMTP server (server)\n frum: email sender (string)\n tos: email recipients (list)\n sub: email subject (string)\n bod: email body (string)\n returns:\n None\n '
try:
serv.sendmail(frum, tos, email)
print('Email sent!')
print(('From: ' + frum))
print(('To: ' + ', '.join(tos)))
print(('Subject: ' + sub))
print(('Body: ' + bod))
except:
print('Email did not send...') | send the email using the SMTP server;
args:
serv: SMTP server (server)
frum: email sender (string)
tos: email recipients (list)
sub: email subject (string)
bod: email body (string)
returns:
None | methods.py | send_Email | SamLeBlanc/George-Floyd-Email-Bot | 0 | python | def send_Email(serv, email, frum, tos, sub, bod):
'\n send the email using the SMTP server;\n args: \n serv: SMTP server (server)\n frum: email sender (string)\n tos: email recipients (list)\n sub: email subject (string)\n bod: email body (string)\n returns:\n None\n '
try:
serv.sendmail(frum, tos, email)
print('Email sent!')
print(('From: ' + frum))
print(('To: ' + ', '.join(tos)))
print(('Subject: ' + sub))
print(('Body: ' + bod))
except:
print('Email did not send...') | def send_Email(serv, email, frum, tos, sub, bod):
'\n send the email using the SMTP server;\n args: \n serv: SMTP server (server)\n frum: email sender (string)\n tos: email recipients (list)\n sub: email subject (string)\n bod: email body (string)\n returns:\n None\n '
try:
serv.sendmail(frum, tos, email)
print('Email sent!')
print(('From: ' + frum))
print(('To: ' + ', '.join(tos)))
print(('Subject: ' + sub))
print(('Body: ' + bod))
except:
print('Email did not send...')<|docstring|>send the email using the SMTP server;
args:
serv: SMTP server (server)
frum: email sender (string)
tos: email recipients (list)
sub: email subject (string)
bod: email body (string)
returns:
None<|endoftext|> |
8baf655e67bc3e65c667a0f6924c0cc3856214ee2f3bcb2985c2ea71dbba9ae8 | def get_String_Time():
' return current time as string '
now = datetime.now()
current_time = now.strftime('%I:%M %p')
return str(current_time) | return current time as string | methods.py | get_String_Time | SamLeBlanc/George-Floyd-Email-Bot | 0 | python | def get_String_Time():
' '
now = datetime.now()
current_time = now.strftime('%I:%M %p')
return str(current_time) | def get_String_Time():
' '
now = datetime.now()
current_time = now.strftime('%I:%M %p')
return str(current_time)<|docstring|>return current time as string<|endoftext|> |
7bca38c01f807b3a7ba127985062d41eb86fa613746b0309f98adfdef46b0e71 | def readCSV(fil):
' \n read csv file of email information and convert to nested list;\n args: fil (string, csv file path)\n returns: data (csv file as nested list)\n '
data = []
with open(fil) as csvDataFile:
csvReader = csv.reader(csvDataFile)
for row in csvReader:
data.append(row)
return data | read csv file of email information and convert to nested list;
args: fil (string, csv file path)
returns: data (csv file as nested list) | methods.py | readCSV | SamLeBlanc/George-Floyd-Email-Bot | 0 | python | def readCSV(fil):
' \n read csv file of email information and convert to nested list;\n args: fil (string, csv file path)\n returns: data (csv file as nested list)\n '
data = []
with open(fil) as csvDataFile:
csvReader = csv.reader(csvDataFile)
for row in csvReader:
data.append(row)
return data | def readCSV(fil):
' \n read csv file of email information and convert to nested list;\n args: fil (string, csv file path)\n returns: data (csv file as nested list)\n '
data = []
with open(fil) as csvDataFile:
csvReader = csv.reader(csvDataFile)
for row in csvReader:
data.append(row)
return data<|docstring|>read csv file of email information and convert to nested list;
args: fil (string, csv file path)
returns: data (csv file as nested list)<|endoftext|> |
b1352e02dbcb9fcb7eb7bdf621e6d1e86ba825df3374ffcf6728135b4c66089e | def subdivideData(d):
'\n subdivide data into categories (names, emails, passwords, subjects, bodies, recipients)\n '
nm, em, pw, sb, bd, rcp = [], [], [], [], [], []
for x in range(len(d)):
nm.append(d[x][0])
em.append(d[x][1])
pw.append(d[x][2])
sb.append(d[x][3])
bd.append(d[x][4])
r = []
y = 5
while (y < len(d[x])):
r.append(d[x][y])
y = (y + 1)
rcp.append(r)
return (nm, em, pw, sb, bd, rcp) | subdivide data into categories (names, emails, passwords, subjects, bodies, recipients) | methods.py | subdivideData | SamLeBlanc/George-Floyd-Email-Bot | 0 | python | def subdivideData(d):
'\n \n '
nm, em, pw, sb, bd, rcp = [], [], [], [], [], []
for x in range(len(d)):
nm.append(d[x][0])
em.append(d[x][1])
pw.append(d[x][2])
sb.append(d[x][3])
bd.append(d[x][4])
r = []
y = 5
while (y < len(d[x])):
r.append(d[x][y])
y = (y + 1)
rcp.append(r)
return (nm, em, pw, sb, bd, rcp) | def subdivideData(d):
'\n \n '
nm, em, pw, sb, bd, rcp = [], [], [], [], [], []
for x in range(len(d)):
nm.append(d[x][0])
em.append(d[x][1])
pw.append(d[x][2])
sb.append(d[x][3])
bd.append(d[x][4])
r = []
y = 5
while (y < len(d[x])):
r.append(d[x][y])
y = (y + 1)
rcp.append(r)
return (nm, em, pw, sb, bd, rcp)<|docstring|>subdivide data into categories (names, emails, passwords, subjects, bodies, recipients)<|endoftext|> |
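The row-splitting above can be sketched compactly with slicing for the variable-length recipient tail. Note that chained assignment (`a = b = []`) binds every name to one shared list, so per-field lists must be created separately:

```python
def subdivide(rows):
    # Each row: [name, email, password, subject, body, recipient, ...]
    names, emails, pws, subjects, bodies, recips = [], [], [], [], [], []
    for row in rows:
        names.append(row[0])
        emails.append(row[1])
        pws.append(row[2])
        subjects.append(row[3])
        bodies.append(row[4])
        # Everything from index 5 onward is the recipient list.
        recips.append(row[5:])
    return names, emails, pws, subjects, bodies, recips

data = [["Sam", "s@x.com", "pw", "Hi", "Body", "a@x.com", "b@x.com"]]
names, emails, pws, subjects, bodies, recips = subdivide(data)
```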
98e14ccddec4ad172c66452b02e302ba5ebce14ad23d6cd6c2346981eacec668 | @classmethod
def setUpClass(cls):
'\n Initialize test data.\n '
cls.db = SQLite({})
cls.db.initialize()
cls.sql = SQL(cls.db) | Initialize test data. | test/python/testdatabase/testsql.py | setUpClass | neuml/txtai | 1,893 | python | @classmethod
def setUpClass(cls):
'\n \n '
cls.db = SQLite({})
cls.db.initialize()
cls.sql = SQL(cls.db) | @classmethod
def setUpClass(cls):
'\n \n '
cls.db = SQLite({})
cls.db.initialize()
cls.sql = SQL(cls.db)<|docstring|>Initialize test data.<|endoftext|> |
1a25ce13957da802b5d8b623a8b9190ed29c238a10d3c5140d6bb0723e7adbe2 | def testAlias(self):
'\n Test alias clauses\n '
self.assertSql('select', 'select a as a1 from txtai', 'json_extract(data, "$.a") as a1')
self.assertSql('select', "select a 'a1' from txtai", 'json_extract(data, "$.a") \'a1\'')
self.assertSql('select', 'select a "a1" from txtai', 'json_extract(data, "$.a") "a1"')
self.assertSql('select', 'select a a1 from txtai', 'json_extract(data, "$.a") a1')
self.assertSql('select', "select a, b as b1, c, d + 1 as 'd1' from txtai", ('json_extract(data, "$.a") as "a", json_extract(data, "$.b") as b1, ' + 'json_extract(data, "$.c") as "c", json_extract(data, "$.d") + 1 as \'d1\''))
self.assertSql('select', 'select id as myid from txtai', 's.id as myid')
self.assertSql('select', 'select length(a) t from txtai', 'length(json_extract(data, "$.a")) t')
self.assertSql('where', 'select id as myid from txtai where myid != 3 and a != 1', 'myid != 3 and json_extract(data, "$.a") != 1')
self.assertSql('where', "select txt T from txtai where t LIKE '%abc%'", "t LIKE '%abc%'")
self.assertSql('where', "select txt 'T' from txtai where t LIKE '%abc%'", "t LIKE '%abc%'")
self.assertSql('where', 'select txt "T" from txtai where t LIKE \'%abc%\'', "t LIKE '%abc%'")
self.assertSql('where', "select txt as T from txtai where t LIKE '%abc%'", "t LIKE '%abc%'")
self.assertSql('where', "select txt as 'T' from txtai where t LIKE '%abc%'", "t LIKE '%abc%'")
self.assertSql('where', 'select txt as "T" from txtai where t LIKE \'%abc%\'', "t LIKE '%abc%'")
self.assertSql('groupby', 'select id as myid, count(*) from txtai group by myid, a', 'myid, json_extract(data, "$.a")')
self.assertSql('orderby', 'select id as myid from txtai order by myid, a', 'myid, json_extract(data, "$.a")') | Test alias clauses | test/python/testdatabase/testsql.py | testAlias | neuml/txtai | 1,893 | python | def testAlias(self):
'\n \n '
self.assertSql('select', 'select a as a1 from txtai', 'json_extract(data, "$.a") as a1')
self.assertSql('select', "select a 'a1' from txtai", 'json_extract(data, "$.a") \'a1\'')
self.assertSql('select', 'select a "a1" from txtai', 'json_extract(data, "$.a") "a1"')
self.assertSql('select', 'select a a1 from txtai', 'json_extract(data, "$.a") a1')
self.assertSql('select', "select a, b as b1, c, d + 1 as 'd1' from txtai", ('json_extract(data, "$.a") as "a", json_extract(data, "$.b") as b1, ' + 'json_extract(data, "$.c") as "c", json_extract(data, "$.d") + 1 as \'d1\''))
self.assertSql('select', 'select id as myid from txtai', 's.id as myid')
self.assertSql('select', 'select length(a) t from txtai', 'length(json_extract(data, "$.a")) t')
self.assertSql('where', 'select id as myid from txtai where myid != 3 and a != 1', 'myid != 3 and json_extract(data, "$.a") != 1')
self.assertSql('where', "select txt T from txtai where t LIKE '%abc%'", "t LIKE '%abc%'")
self.assertSql('where', "select txt 'T' from txtai where t LIKE '%abc%'", "t LIKE '%abc%'")
self.assertSql('where', 'select txt "T" from txtai where t LIKE \'%abc%\'', "t LIKE '%abc%'")
self.assertSql('where', "select txt as T from txtai where t LIKE '%abc%'", "t LIKE '%abc%'")
self.assertSql('where', "select txt as 'T' from txtai where t LIKE '%abc%'", "t LIKE '%abc%'")
self.assertSql('where', 'select txt as "T" from txtai where t LIKE \'%abc%\'', "t LIKE '%abc%'")
self.assertSql('groupby', 'select id as myid, count(*) from txtai group by myid, a', 'myid, json_extract(data, "$.a")')
self.assertSql('orderby', 'select id as myid from txtai order by myid, a', 'myid, json_extract(data, "$.a")') | def testAlias(self):
'\n \n '
self.assertSql('select', 'select a as a1 from txtai', 'json_extract(data, "$.a") as a1')
self.assertSql('select', "select a 'a1' from txtai", 'json_extract(data, "$.a") \'a1\'')
self.assertSql('select', 'select a "a1" from txtai', 'json_extract(data, "$.a") "a1"')
self.assertSql('select', 'select a a1 from txtai', 'json_extract(data, "$.a") a1')
self.assertSql('select', "select a, b as b1, c, d + 1 as 'd1' from txtai", ('json_extract(data, "$.a") as "a", json_extract(data, "$.b") as b1, ' + 'json_extract(data, "$.c") as "c", json_extract(data, "$.d") + 1 as \'d1\''))
self.assertSql('select', 'select id as myid from txtai', 's.id as myid')
self.assertSql('select', 'select length(a) t from txtai', 'length(json_extract(data, "$.a")) t')
self.assertSql('where', 'select id as myid from txtai where myid != 3 and a != 1', 'myid != 3 and json_extract(data, "$.a") != 1')
self.assertSql('where', "select txt T from txtai where t LIKE '%abc%'", "t LIKE '%abc%'")
self.assertSql('where', "select txt 'T' from txtai where t LIKE '%abc%'", "t LIKE '%abc%'")
self.assertSql('where', 'select txt "T" from txtai where t LIKE \'%abc%\'', "t LIKE '%abc%'")
self.assertSql('where', "select txt as T from txtai where t LIKE '%abc%'", "t LIKE '%abc%'")
self.assertSql('where', "select txt as 'T' from txtai where t LIKE '%abc%'", "t LIKE '%abc%'")
self.assertSql('where', 'select txt as "T" from txtai where t LIKE \'%abc%\'', "t LIKE '%abc%'")
self.assertSql('groupby', 'select id as myid, count(*) from txtai group by myid, a', 'myid, json_extract(data, "$.a")')
self.assertSql('orderby', 'select id as myid from txtai order by myid, a', 'myid, json_extract(data, "$.a")')<|docstring|>Test alias clauses<|endoftext|> |
80590929897a2d2f86393d522cd66a48d14cc8a746df30647e49f8049e8f6927 | def testBadSQL(self):
'\n Test invalid SQL\n '
with self.assertRaises(SQLException):
self.db.search('select * from txtai where order by')
with self.assertRaises(SQLException):
self.db.search('select * from txtai where groupby order by')
with self.assertRaises(SQLException):
self.db.search('select * from txtai where a(1)')
with self.assertRaises(SQLException):
self.db.search('select a b c from txtai where id match id') | Test invalid SQL | test/python/testdatabase/testsql.py | testBadSQL | neuml/txtai | 1,893 | python | def testBadSQL(self):
'\n \n '
with self.assertRaises(SQLException):
self.db.search('select * from txtai where order by')
with self.assertRaises(SQLException):
self.db.search('select * from txtai where groupby order by')
with self.assertRaises(SQLException):
self.db.search('select * from txtai where a(1)')
with self.assertRaises(SQLException):
self.db.search('select a b c from txtai where id match id') | def testBadSQL(self):
'\n \n '
with self.assertRaises(SQLException):
self.db.search('select * from txtai where order by')
with self.assertRaises(SQLException):
self.db.search('select * from txtai where groupby order by')
with self.assertRaises(SQLException):
self.db.search('select * from txtai where a(1)')
with self.assertRaises(SQLException):
self.db.search('select a b c from txtai where id match id')<|docstring|>Test invalid SQL<|endoftext|> |
74db0ee4c3579b723e0c116a8f98b6c060119e216353609201caa8d3d15cda18 | def testBracket(self):
'\n Test bracket expressions\n '
self.assertSql('select', 'select [a] from txtai', 'json_extract(data, "$.a") as "a"')
self.assertSql('select', 'select [a] ab from txtai', 'json_extract(data, "$.a") ab')
self.assertSql('select', 'select [abc] from txtai', 'json_extract(data, "$.abc") as "abc"')
self.assertSql('select', 'select [id], text, score from txtai', 's.id, text, score')
self.assertSql('select', 'select [ab cd], text, score from txtai', 'json_extract(data, "$.ab cd") as "ab cd", text, score')
self.assertSql('select', 'select [a[0]] from txtai', 'json_extract(data, "$.a[0]") as "a[0]"')
self.assertSql('select', 'select [a[0].ab] from txtai', 'json_extract(data, "$.a[0].ab") as "a[0].ab"')
self.assertSql('select', 'select [a[0].c[0]] from txtai', 'json_extract(data, "$.a[0].c[0]") as "a[0].c[0]"')
self.assertSql('where', 'select * from txtai where [a b] < 1 or a > 1', 'json_extract(data, "$.a b") < 1 or json_extract(data, "$.a") > 1')
self.assertSql('where', 'select [a[0].c[0]] a from txtai where a < 1', 'a < 1')
self.assertSql('groupby', 'select * from txtai group by [a]', 'json_extract(data, "$.a")')
self.assertSql('orderby', 'select * from txtai where order by [a]', 'json_extract(data, "$.a")') | Test bracket expressions | test/python/testdatabase/testsql.py | testBracket | neuml/txtai | 1,893 | python | def testBracket(self):
'\n \n '
self.assertSql('select', 'select [a] from txtai', 'json_extract(data, "$.a") as "a"')
self.assertSql('select', 'select [a] ab from txtai', 'json_extract(data, "$.a") ab')
self.assertSql('select', 'select [abc] from txtai', 'json_extract(data, "$.abc") as "abc"')
self.assertSql('select', 'select [id], text, score from txtai', 's.id, text, score')
self.assertSql('select', 'select [ab cd], text, score from txtai', 'json_extract(data, "$.ab cd") as "ab cd", text, score')
self.assertSql('select', 'select [a[0]] from txtai', 'json_extract(data, "$.a[0]") as "a[0]"')
self.assertSql('select', 'select [a[0].ab] from txtai', 'json_extract(data, "$.a[0].ab") as "a[0].ab"')
self.assertSql('select', 'select [a[0].c[0]] from txtai', 'json_extract(data, "$.a[0].c[0]") as "a[0].c[0]"')
self.assertSql('where', 'select * from txtai where [a b] < 1 or a > 1', 'json_extract(data, "$.a b") < 1 or json_extract(data, "$.a") > 1')
self.assertSql('where', 'select [a[0].c[0]] a from txtai where a < 1', 'a < 1')
self.assertSql('groupby', 'select * from txtai group by [a]', 'json_extract(data, "$.a")')
self.assertSql('orderby', 'select * from txtai where order by [a]', 'json_extract(data, "$.a")') | def testBracket(self):
'\n \n '
self.assertSql('select', 'select [a] from txtai', 'json_extract(data, "$.a") as "a"')
self.assertSql('select', 'select [a] ab from txtai', 'json_extract(data, "$.a") ab')
self.assertSql('select', 'select [abc] from txtai', 'json_extract(data, "$.abc") as "abc"')
self.assertSql('select', 'select [id], text, score from txtai', 's.id, text, score')
self.assertSql('select', 'select [ab cd], text, score from txtai', 'json_extract(data, "$.ab cd") as "ab cd", text, score')
self.assertSql('select', 'select [a[0]] from txtai', 'json_extract(data, "$.a[0]") as "a[0]"')
self.assertSql('select', 'select [a[0].ab] from txtai', 'json_extract(data, "$.a[0].ab") as "a[0].ab"')
self.assertSql('select', 'select [a[0].c[0]] from txtai', 'json_extract(data, "$.a[0].c[0]") as "a[0].c[0]"')
self.assertSql('where', 'select * from txtai where [a b] < 1 or a > 1', 'json_extract(data, "$.a b") < 1 or json_extract(data, "$.a") > 1')
self.assertSql('where', 'select [a[0].c[0]] a from txtai where a < 1', 'a < 1')
self.assertSql('groupby', 'select * from txtai group by [a]', 'json_extract(data, "$.a")')
self.assertSql('orderby', 'select * from txtai where order by [a]', 'json_extract(data, "$.a")')<|docstring|>Test bracket expressions<|endoftext|> |
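The bracket assertions above imply a simple mapping rule: a handful of native columns pass through (some prefixed with a section alias), and every other bracketed name becomes a `json_extract` over the `data` column, with the original name preserved in an `as "…"` alias. A minimal sketch of that mapping (the native-column set and `s.` alias are assumptions read off the expected values, not txtai's actual implementation):

```python
def translate_bracket(field):
    # Map one bracketed [field] name to its translated SQL form.
    # Native columns (assumed set) pass through; everything else is
    # extracted from the JSON data column and aliased back to its name.
    native = {"id": "s.id", "indexid": "s.indexid", "tags": "s.tags",
              "text": "text", "score": "score"}
    if field in native:
        return native[field]
    return 'json_extract(data, "$.%s") as "%s"' % (field, field)

assert translate_bracket("id") == "s.id"
assert translate_bracket("ab cd") == 'json_extract(data, "$.ab cd") as "ab cd"'
assert translate_bracket("a[0].c[0]") == 'json_extract(data, "$.a[0].c[0]") as "a[0].c[0]"'
```

Note that because nested index expressions like `[a[0].c[0]]` keep their inner brackets, the real parser must match the outermost closing bracket rather than the first one.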
7be9251a7ebc1daa58b98883ebd0d5f148154b0f6a2e82220d4ba9df1ef0849f | def testGroupby(self):
'\n Test group by clauses\n '
prefix = 'select count(*), flag from txtai '
self.assertSql('groupby', (prefix + 'group by text'), 'text')
self.assertSql('groupby', (prefix + 'group by distinct(a)'), 'distinct(json_extract(data, "$.a"))')
self.assertSql('groupby', (prefix + 'where a > 1 group by text'), 'text') | Test group by clauses | test/python/testdatabase/testsql.py | testGroupby | neuml/txtai | 1,893 | python | def testGroupby(self):
'\n \n '
prefix = 'select count(*), flag from txtai '
self.assertSql('groupby', (prefix + 'group by text'), 'text')
self.assertSql('groupby', (prefix + 'group by distinct(a)'), 'distinct(json_extract(data, "$.a"))')
self.assertSql('groupby', (prefix + 'where a > 1 group by text'), 'text') | def testGroupby(self):
'\n \n '
prefix = 'select count(*), flag from txtai '
self.assertSql('groupby', (prefix + 'group by text'), 'text')
self.assertSql('groupby', (prefix + 'group by distinct(a)'), 'distinct(json_extract(data, "$.a"))')
self.assertSql('groupby', (prefix + 'where a > 1 group by text'), 'text')<|docstring|>Test group by clauses<|endoftext|> |
b3317eb0d6134850115b7a1e81628e23925c2e8e2bc0409c55f701ab56e49c55 | def testHaving(self):
'\n Test having clauses\n '
prefix = 'select count(*), flag from txtai '
self.assertSql('having', (prefix + 'group by text having count(*) > 1'), 'count(*) > 1')
self.assertSql('having', (prefix + 'where flag = 1 group by text having count(*) > 1'), 'count(*) > 1') | Test having clauses | test/python/testdatabase/testsql.py | testHaving | neuml/txtai | 1,893 | python | def testHaving(self):
'\n \n '
prefix = 'select count(*), flag from txtai '
self.assertSql('having', (prefix + 'group by text having count(*) > 1'), 'count(*) > 1')
self.assertSql('having', (prefix + 'where flag = 1 group by text having count(*) > 1'), 'count(*) > 1') | def testHaving(self):
'\n \n '
prefix = 'select count(*), flag from txtai '
self.assertSql('having', (prefix + 'group by text having count(*) > 1'), 'count(*) > 1')
self.assertSql('having', (prefix + 'where flag = 1 group by text having count(*) > 1'), 'count(*) > 1')<|docstring|>Test having clauses<|endoftext|> |
33e836e5d5a64a66626a90cebd20b912839f838a898021077ec0283041dc7a5a | def testLimit(self):
'\n Test limit clauses\n '
prefix = 'select count(*) from txtai '
self.assertSql('limit', (prefix + 'limit 100'), '100') | Test limit clauses | test/python/testdatabase/testsql.py | testLimit | neuml/txtai | 1,893 | python | def testLimit(self):
'\n \n '
prefix = 'select count(*) from txtai '
self.assertSql('limit', (prefix + 'limit 100'), '100') | def testLimit(self):
'\n \n '
prefix = 'select count(*) from txtai '
self.assertSql('limit', (prefix + 'limit 100'), '100')<|docstring|>Test limit clauses<|endoftext|> |
f483b7e00f20a4c13102f3d1dbbcf1d66d367448e3a0a0275bea93dfaf2926f2 | def testOrderby(self):
'\n Test order by clauses\n '
prefix = 'select * from txtai '
self.assertSql('orderby', (prefix + 'order by id'), 's.id')
self.assertSql('orderby', (prefix + 'order by id, text'), 's.id, text')
self.assertSql('orderby', (prefix + 'order by id asc'), 's.id asc')
self.assertSql('orderby', (prefix + 'order by id desc'), 's.id desc')
self.assertSql('orderby', (prefix + 'order by id asc, text desc'), 's.id asc, text desc') | Test order by clauses | test/python/testdatabase/testsql.py | testOrderby | neuml/txtai | 1,893 | python | def testOrderby(self):
'\n \n '
prefix = 'select * from txtai '
self.assertSql('orderby', (prefix + 'order by id'), 's.id')
self.assertSql('orderby', (prefix + 'order by id, text'), 's.id, text')
self.assertSql('orderby', (prefix + 'order by id asc'), 's.id asc')
self.assertSql('orderby', (prefix + 'order by id desc'), 's.id desc')
self.assertSql('orderby', (prefix + 'order by id asc, text desc'), 's.id asc, text desc') | def testOrderby(self):
'\n \n '
prefix = 'select * from txtai '
self.assertSql('orderby', (prefix + 'order by id'), 's.id')
self.assertSql('orderby', (prefix + 'order by id, text'), 's.id, text')
self.assertSql('orderby', (prefix + 'order by id asc'), 's.id asc')
self.assertSql('orderby', (prefix + 'order by id desc'), 's.id desc')
self.assertSql('orderby', (prefix + 'order by id asc, text desc'), 's.id asc, text desc')<|docstring|>Test order by clauses<|endoftext|> |
8683346a7ca20f40ea44683b4a76bf573c880baeff002cf35a9fb1c2b97fe5a8 | def testSelectBasic(self):
'\n Test basic select clauses\n '
self.assertSql('select', 'select id, indexid, tags from txtai', 's.id, s.indexid, s.tags')
self.assertSql('select', 'select id, indexid, flag from txtai', 's.id, s.indexid, json_extract(data, "$.flag") as "flag"')
self.assertSql('select', 'select id, indexid, a.b.c from txtai', 's.id, s.indexid, json_extract(data, "$.a.b.c") as "a.b.c"')
self.assertSql('select', "select 'id', [id], (id) from txtai", "'id', s.id, (s.id)")
self.assertSql('select', 'select * from txtai', '*') | Test basic select clauses | test/python/testdatabase/testsql.py | testSelectBasic | neuml/txtai | 1,893 | python | def testSelectBasic(self):
'\n \n '
self.assertSql('select', 'select id, indexid, tags from txtai', 's.id, s.indexid, s.tags')
self.assertSql('select', 'select id, indexid, flag from txtai', 's.id, s.indexid, json_extract(data, "$.flag") as "flag"')
self.assertSql('select', 'select id, indexid, a.b.c from txtai', 's.id, s.indexid, json_extract(data, "$.a.b.c") as "a.b.c"')
self.assertSql('select', "select 'id', [id], (id) from txtai", "'id', s.id, (s.id)")
self.assertSql('select', 'select * from txtai', '*') | def testSelectBasic(self):
'\n \n '
self.assertSql('select', 'select id, indexid, tags from txtai', 's.id, s.indexid, s.tags')
self.assertSql('select', 'select id, indexid, flag from txtai', 's.id, s.indexid, json_extract(data, "$.flag") as "flag"')
self.assertSql('select', 'select id, indexid, a.b.c from txtai', 's.id, s.indexid, json_extract(data, "$.a.b.c") as "a.b.c"')
self.assertSql('select', "select 'id', [id], (id) from txtai", "'id', s.id, (s.id)")
self.assertSql('select', 'select * from txtai', '*')<|docstring|>Test basic select clauses<|endoftext|> |
9b8b6cd31accbf2908f15051fdbbacb4601dec6d1a1f913f2d6fb433ea4b9d91 | def testSelectCompound(self):
'\n Test compound select clauses\n '
self.assertSql('select', 'select a + 1 from txtai', 'json_extract(data, "$.a") + 1 as "a + 1"')
self.assertSql('select', 'select 1 * a from txtai', '1 * json_extract(data, "$.a") as "1 * a"')
self.assertSql('select', 'select a/1 from txtai', 'json_extract(data, "$.a") / 1 as "a / 1"')
self.assertSql('select', 'select avg(a-b) from txtai', 'avg(json_extract(data, "$.a") - json_extract(data, "$.b")) as "avg(a - b)"')
self.assertSql('select', 'select distinct(text) from txtai', 'distinct(text)')
self.assertSql('select', 'select id, score, (a/2)*3 from txtai', 's.id, score, (json_extract(data, "$.a") / 2) * 3 as "(a / 2) * 3"')
self.assertSql('select', 'select id, score, (a/2*3) from txtai', 's.id, score, (json_extract(data, "$.a") / 2 * 3) as "(a / 2 * 3)"')
self.assertSql('select', 'select func(func2(indexid + 1), a) from txtai', 'func(func2(s.indexid + 1), json_extract(data, "$.a")) as "func(func2(indexid + 1), a)"')
self.assertSql('select', 'select func(func2(indexid + 1), a) a from txtai', 'func(func2(s.indexid + 1), json_extract(data, "$.a")) a') | Test compound select clauses | test/python/testdatabase/testsql.py | testSelectCompound | neuml/txtai | 1,893 | python | def testSelectCompound(self):
'\n \n '
self.assertSql('select', 'select a + 1 from txtai', 'json_extract(data, "$.a") + 1 as "a + 1"')
self.assertSql('select', 'select 1 * a from txtai', '1 * json_extract(data, "$.a") as "1 * a"')
self.assertSql('select', 'select a/1 from txtai', 'json_extract(data, "$.a") / 1 as "a / 1"')
self.assertSql('select', 'select avg(a-b) from txtai', 'avg(json_extract(data, "$.a") - json_extract(data, "$.b")) as "avg(a - b)"')
self.assertSql('select', 'select distinct(text) from txtai', 'distinct(text)')
self.assertSql('select', 'select id, score, (a/2)*3 from txtai', 's.id, score, (json_extract(data, "$.a") / 2) * 3 as "(a / 2) * 3"')
self.assertSql('select', 'select id, score, (a/2*3) from txtai', 's.id, score, (json_extract(data, "$.a") / 2 * 3) as "(a / 2 * 3)"')
self.assertSql('select', 'select func(func2(indexid + 1), a) from txtai', 'func(func2(s.indexid + 1), json_extract(data, "$.a")) as "func(func2(indexid + 1), a)"')
self.assertSql('select', 'select func(func2(indexid + 1), a) a from txtai', 'func(func2(s.indexid + 1), json_extract(data, "$.a")) a') | def testSelectCompound(self):
'\n \n '
self.assertSql('select', 'select a + 1 from txtai', 'json_extract(data, "$.a") + 1 as "a + 1"')
self.assertSql('select', 'select 1 * a from txtai', '1 * json_extract(data, "$.a") as "1 * a"')
self.assertSql('select', 'select a/1 from txtai', 'json_extract(data, "$.a") / 1 as "a / 1"')
self.assertSql('select', 'select avg(a-b) from txtai', 'avg(json_extract(data, "$.a") - json_extract(data, "$.b")) as "avg(a - b)"')
self.assertSql('select', 'select distinct(text) from txtai', 'distinct(text)')
self.assertSql('select', 'select id, score, (a/2)*3 from txtai', 's.id, score, (json_extract(data, "$.a") / 2) * 3 as "(a / 2) * 3"')
self.assertSql('select', 'select id, score, (a/2*3) from txtai', 's.id, score, (json_extract(data, "$.a") / 2 * 3) as "(a / 2 * 3)"')
self.assertSql('select', 'select func(func2(indexid + 1), a) from txtai', 'func(func2(s.indexid + 1), json_extract(data, "$.a")) as "func(func2(indexid + 1), a)"')
self.assertSql('select', 'select func(func2(indexid + 1), a) a from txtai', 'func(func2(s.indexid + 1), json_extract(data, "$.a")) a')<|docstring|>Test compound select clauses<|endoftext|> |
93bf2652b31ba94ece3ce70f8ba407239add4b590c6b6f0befc1383b6defc32e | def testSimilar(self):
'\n Test similar functions\n '
prefix = 'select * from txtai '
self.assertSql('where', (prefix + "where similar('abc')"), '__SIMILAR__0')
self.assertSql('similar', (prefix + "where similar('abc')"), [['abc']])
self.assertSql('where', (prefix + "where similar('abc') AND id = 1"), '__SIMILAR__0 AND s.id = 1')
self.assertSql('similar', (prefix + "where similar('abc')"), [['abc']])
self.assertSql('where', (prefix + "where similar('abc') and similar('def')"), '__SIMILAR__0 and __SIMILAR__1')
self.assertSql('similar', (prefix + "where similar('abc') and similar('def')"), [['abc'], ['def']])
self.assertSql('where', (prefix + "where similar('abc', 1000)"), '__SIMILAR__0')
self.assertSql('similar', (prefix + "where similar('abc', 1000)"), [['abc', '1000']])
self.assertSql('where', (prefix + "where similar('abc', 1000) and similar('def', 10)"), '__SIMILAR__0 and __SIMILAR__1')
self.assertSql('similar', (prefix + "where similar('abc', 1000) and similar('def', 10)"), [['abc', '1000'], ['def', '10']]) | Test similar functions | test/python/testdatabase/testsql.py | testSimilar | neuml/txtai | 1,893 | python | def testSimilar(self):
'\n \n '
prefix = 'select * from txtai '
self.assertSql('where', (prefix + "where similar('abc')"), '__SIMILAR__0')
self.assertSql('similar', (prefix + "where similar('abc')"), [['abc']])
self.assertSql('where', (prefix + "where similar('abc') AND id = 1"), '__SIMILAR__0 AND s.id = 1')
self.assertSql('similar', (prefix + "where similar('abc')"), [['abc']])
self.assertSql('where', (prefix + "where similar('abc') and similar('def')"), '__SIMILAR__0 and __SIMILAR__1')
self.assertSql('similar', (prefix + "where similar('abc') and similar('def')"), [['abc'], ['def']])
self.assertSql('where', (prefix + "where similar('abc', 1000)"), '__SIMILAR__0')
self.assertSql('similar', (prefix + "where similar('abc', 1000)"), [['abc', '1000']])
self.assertSql('where', (prefix + "where similar('abc', 1000) and similar('def', 10)"), '__SIMILAR__0 and __SIMILAR__1')
self.assertSql('similar', (prefix + "where similar('abc', 1000) and similar('def', 10)"), [['abc', '1000'], ['def', '10']]) | def testSimilar(self):
'\n \n '
prefix = 'select * from txtai '
self.assertSql('where', (prefix + "where similar('abc')"), '__SIMILAR__0')
self.assertSql('similar', (prefix + "where similar('abc')"), [['abc']])
self.assertSql('where', (prefix + "where similar('abc') AND id = 1"), '__SIMILAR__0 AND s.id = 1')
self.assertSql('similar', (prefix + "where similar('abc')"), [['abc']])
self.assertSql('where', (prefix + "where similar('abc') and similar('def')"), '__SIMILAR__0 and __SIMILAR__1')
self.assertSql('similar', (prefix + "where similar('abc') and similar('def')"), [['abc'], ['def']])
self.assertSql('where', (prefix + "where similar('abc', 1000)"), '__SIMILAR__0')
self.assertSql('similar', (prefix + "where similar('abc', 1000)"), [['abc', '1000']])
self.assertSql('where', (prefix + "where similar('abc', 1000) and similar('def', 10)"), '__SIMILAR__0 and __SIMILAR__1')
self.assertSql('similar', (prefix + "where similar('abc', 1000) and similar('def', 10)"), [['abc', '1000'], ['def', '10']])<|docstring|>Test similar functions<|endoftext|> |
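These assertions suggest the parser lifts each `similar()` call out of the WHERE clause, replaces it with an ordinal `__SIMILAR__N` placeholder, and collects the call's arguments in order. A regex-based sketch of that substitution (illustrative only, not txtai's parser; it assumes non-nested parentheses inside `similar(...)`):

```python
import re

def extract_similar(where):
    # Replace each similar(...) call with an ordinal __SIMILAR__N token
    # and collect its arguments, mirroring the placeholder scheme above.
    args = []
    def repl(match):
        params = [p.strip().strip("'") for p in match.group(1).split(",")]
        args.append(params)
        return "__SIMILAR__%d" % (len(args) - 1)
    clause = re.sub(r"similar\(([^)]*)\)", repl, where)
    return clause, args

clause, args = extract_similar("similar('abc', 1000) and similar('def')")
assert clause == "__SIMILAR__0 and __SIMILAR__1"
assert args == [["abc", "1000"], ["def"]]
```

The remaining clause can then be handed to the database while the collected arguments drive the vector similarity queries separately.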
b750dc4c59caf23bfad09a0b09f18bd14b4041fbfe6b3e5b88ef5b6b88f0dce8 | def testWhereBasic(self):
'\n Test basic where clauses\n '
prefix = 'select * from txtai '
self.assertSql('where', (prefix + 'where a = b'), 'json_extract(data, "$.a") = json_extract(data, "$.b")')
self.assertSql('where', (prefix + 'where abc = def'), 'json_extract(data, "$.abc") = json_extract(data, "$.def")')
self.assertSql('where', (prefix + 'where a = b.value'), 'json_extract(data, "$.a") = json_extract(data, "$.b.value")')
self.assertSql('where', (prefix + 'where a = 1'), 'json_extract(data, "$.a") = 1')
self.assertSql('where', (prefix + 'WHERE 1 = a'), '1 = json_extract(data, "$.a")')
self.assertSql('where', (prefix + "WHERE a LIKE 'abc'"), 'json_extract(data, "$.a") LIKE \'abc\'')
self.assertSql('where', (prefix + "WHERE a NOT LIKE 'abc'"), 'json_extract(data, "$.a") NOT LIKE \'abc\'')
self.assertSql('where', (prefix + 'WHERE a IN (1, 2, 3, b)'), 'json_extract(data, "$.a") IN (1, 2, 3, json_extract(data, "$.b"))')
self.assertSql('where', (prefix + 'WHERE a is not null'), 'json_extract(data, "$.a") is not null')
self.assertSql('where', (prefix + 'WHERE score >= 0.15'), 'score >= 0.15') | Test basic where clauses | test/python/testdatabase/testsql.py | testWhereBasic | neuml/txtai | 1,893 | python | def testWhereBasic(self):
'\n \n '
prefix = 'select * from txtai '
self.assertSql('where', (prefix + 'where a = b'), 'json_extract(data, "$.a") = json_extract(data, "$.b")')
self.assertSql('where', (prefix + 'where abc = def'), 'json_extract(data, "$.abc") = json_extract(data, "$.def")')
self.assertSql('where', (prefix + 'where a = b.value'), 'json_extract(data, "$.a") = json_extract(data, "$.b.value")')
self.assertSql('where', (prefix + 'where a = 1'), 'json_extract(data, "$.a") = 1')
self.assertSql('where', (prefix + 'WHERE 1 = a'), '1 = json_extract(data, "$.a")')
self.assertSql('where', (prefix + "WHERE a LIKE 'abc'"), 'json_extract(data, "$.a") LIKE \'abc\'')
self.assertSql('where', (prefix + "WHERE a NOT LIKE 'abc'"), 'json_extract(data, "$.a") NOT LIKE \'abc\'')
self.assertSql('where', (prefix + 'WHERE a IN (1, 2, 3, b)'), 'json_extract(data, "$.a") IN (1, 2, 3, json_extract(data, "$.b"))')
self.assertSql('where', (prefix + 'WHERE a is not null'), 'json_extract(data, "$.a") is not null')
self.assertSql('where', (prefix + 'WHERE score >= 0.15'), 'score >= 0.15') | def testWhereBasic(self):
'\n \n '
prefix = 'select * from txtai '
self.assertSql('where', (prefix + 'where a = b'), 'json_extract(data, "$.a") = json_extract(data, "$.b")')
self.assertSql('where', (prefix + 'where abc = def'), 'json_extract(data, "$.abc") = json_extract(data, "$.def")')
self.assertSql('where', (prefix + 'where a = b.value'), 'json_extract(data, "$.a") = json_extract(data, "$.b.value")')
self.assertSql('where', (prefix + 'where a = 1'), 'json_extract(data, "$.a") = 1')
self.assertSql('where', (prefix + 'WHERE 1 = a'), '1 = json_extract(data, "$.a")')
self.assertSql('where', (prefix + "WHERE a LIKE 'abc'"), 'json_extract(data, "$.a") LIKE \'abc\'')
self.assertSql('where', (prefix + "WHERE a NOT LIKE 'abc'"), 'json_extract(data, "$.a") NOT LIKE \'abc\'')
self.assertSql('where', (prefix + 'WHERE a IN (1, 2, 3, b)'), 'json_extract(data, "$.a") IN (1, 2, 3, json_extract(data, "$.b"))')
self.assertSql('where', (prefix + 'WHERE a is not null'), 'json_extract(data, "$.a") is not null')
self.assertSql('where', (prefix + 'WHERE score >= 0.15'), 'score >= 0.15')<|docstring|>Test basic where clauses<|endoftext|> |
e1560b2f1c6b1d3505acf4c38519d641d04e6afbbe057dce63d64e138147f6b9 | def testWhereCompound(self):
'\n Test compound where clauses\n '
prefix = 'select * from txtai '
self.assertSql('where', (prefix + 'where a > (b + 1)'), 'json_extract(data, "$.a") > (json_extract(data, "$.b") + 1)')
self.assertSql('where', (prefix + "where a > func('abc')"), 'json_extract(data, "$.a") > func(\'abc\')')
self.assertSql('where', (prefix + "where (id = 1 or id = 2) and a like 'abc'"), '(s.id = 1 or s.id = 2) and json_extract(data, "$.a") like \'abc\'')
self.assertSql('where', (prefix + 'where a > f(d(b, c, 1),1)'), 'json_extract(data, "$.a") > f(d(json_extract(data, "$.b"), json_extract(data, "$.c"), 1), 1)')
self.assertSql('where', (prefix + 'where (id = 1 AND id = 2) OR indexid = 3'), '(s.id = 1 AND s.id = 2) OR s.indexid = 3')
self.assertSql('where', (prefix + 'where f(id) = b(id)'), 'f(s.id) = b(s.id)')
self.assertSql('where', (prefix + 'WHERE f(id)'), 'f(s.id)') | Test compound where clauses | test/python/testdatabase/testsql.py | testWhereCompound | neuml/txtai | 1,893 | python | def testWhereCompound(self):
'\n \n '
prefix = 'select * from txtai '
self.assertSql('where', (prefix + 'where a > (b + 1)'), 'json_extract(data, "$.a") > (json_extract(data, "$.b") + 1)')
self.assertSql('where', (prefix + "where a > func('abc')"), 'json_extract(data, "$.a") > func(\'abc\')')
self.assertSql('where', (prefix + "where (id = 1 or id = 2) and a like 'abc'"), '(s.id = 1 or s.id = 2) and json_extract(data, "$.a") like \'abc\'')
self.assertSql('where', (prefix + 'where a > f(d(b, c, 1),1)'), 'json_extract(data, "$.a") > f(d(json_extract(data, "$.b"), json_extract(data, "$.c"), 1), 1)')
self.assertSql('where', (prefix + 'where (id = 1 AND id = 2) OR indexid = 3'), '(s.id = 1 AND s.id = 2) OR s.indexid = 3')
self.assertSql('where', (prefix + 'where f(id) = b(id)'), 'f(s.id) = b(s.id)')
self.assertSql('where', (prefix + 'WHERE f(id)'), 'f(s.id)') | def testWhereCompound(self):
'\n \n '
prefix = 'select * from txtai '
self.assertSql('where', (prefix + 'where a > (b + 1)'), 'json_extract(data, "$.a") > (json_extract(data, "$.b") + 1)')
self.assertSql('where', (prefix + "where a > func('abc')"), 'json_extract(data, "$.a") > func(\'abc\')')
self.assertSql('where', (prefix + "where (id = 1 or id = 2) and a like 'abc'"), '(s.id = 1 or s.id = 2) and json_extract(data, "$.a") like \'abc\'')
self.assertSql('where', (prefix + 'where a > f(d(b, c, 1),1)'), 'json_extract(data, "$.a") > f(d(json_extract(data, "$.b"), json_extract(data, "$.c"), 1), 1)')
self.assertSql('where', (prefix + 'where (id = 1 AND id = 2) OR indexid = 3'), '(s.id = 1 AND s.id = 2) OR s.indexid = 3')
self.assertSql('where', (prefix + 'where f(id) = b(id)'), 'f(s.id) = b(s.id)')
self.assertSql('where', (prefix + 'WHERE f(id)'), 'f(s.id)')<|docstring|>Test compound where clauses<|endoftext|> |
cb8103e31fb31c8f4f3deffed31965dd92bdf4b04c50552ece5df023d5ff1519 | def assertSql(self, clause, query, expected):
'\n Helper method to assert a query clause is as expected.\n\n Args:\n clause: clause to select\n query: input query\n expected: expected transformed query value\n '
self.assertEqual(self.sql(query)[clause], expected) | Helper method to assert a query clause is as expected.
Args:
clause: clause to select
query: input query
expected: expected transformed query value | test/python/testdatabase/testsql.py | assertSql | neuml/txtai | 1,893 | python | def assertSql(self, clause, query, expected):
'\n Helper method to assert a query clause is as expected.\n\n Args:\n clause: clause to select\n query: input query\n expected: expected transformed query value\n '
self.assertEqual(self.sql(query)[clause], expected) | def assertSql(self, clause, query, expected):
'\n Helper method to assert a query clause is as expected.\n\n Args:\n clause: clause to select\n query: input query\n expected: expected transformed query value\n '
self.assertEqual(self.sql(query)[clause], expected)<|docstring|>Helper method to assert a query clause is as expected.
Args:
clause: clause to select
query: input query
expected: expected transformed query value<|endoftext|> |
5535697bdac4fecc8c6351b992f2d1ee2c18635de2565deb486c6cc15d7bc84a | def __init__(self, request):
'Init method.'
self.request = request
self.view_name = 'DefaultViews'
user = request.user
if ((user is None) or (user.role not in ('editor', 'basic'))):
raise HTTPForbidden | Init method. | kenwin/views/default.py | __init__ | aleducode/pyramid-simple-user-login | 0 | python | def __init__(self, request):
self.request = request
self.view_name = 'DefaultViews'
user = request.user
if ((user is None) or (user.role not in ('editor', 'basic'))):
raise HTTPForbidden | def __init__(self, request):
self.request = request
self.view_name = 'DefaultViews'
user = request.user
if ((user is None) or (user.role not in ('editor', 'basic'))):
raise HTTPForbidden<|docstring|>Init method.<|endoftext|> |
0ae265e00d1de6c79872495c68f890792db5b507fe2bdbab61c070065fdfd4e6 | @view_config(route_name='index')
def index(self):
'Index view.'
url_logout = self.request.route_url('logout')
return {'url_logout': url_logout} | Index view. | kenwin/views/default.py | index | aleducode/pyramid-simple-user-login | 0 | python | @view_config(route_name='index')
def index(self):
url_logout = self.request.route_url('logout')
return {'url_logout': url_logout} | @view_config(route_name='index')
def index(self):
url_logout = self.request.route_url('logout')
return {'url_logout': url_logout}<|docstring|>Index view.<|endoftext|> |
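The view class above gates access in its constructor: any user who is missing or whose role is outside the allowed set triggers `HTTPForbidden` before a view method can run. A framework-free sketch of the same pattern (a plain `ForbiddenError` stands in for Pyramid's `HTTPForbidden`, and the role is passed directly rather than read from a request; both are simplifying assumptions):

```python
class ForbiddenError(Exception):
    """Stand-in for pyramid.httpexceptions.HTTPForbidden."""

class DefaultViews:
    # Reject disallowed roles at construction time, so every view method
    # on the instance is implicitly protected.
    ALLOWED = ("editor", "basic")

    def __init__(self, user_role):
        if user_role is None or user_role not in self.ALLOWED:
            raise ForbiddenError(user_role)
        self.role = user_role

DefaultViews("editor")            # allowed
try:
    DefaultViews("anonymous")     # rejected before any view runs
except ForbiddenError:
    pass
```

Doing the check once in `__init__` keeps each `@view_config` method free of per-method permission boilerplate.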
4160971dea8d63cf1bb61f1bf3192ad794fd153638f18097f958236cfb11b57e | def orthologsFromStreamGen(handle, version=(- 1)):
'\n handle: a stream from which lines containing orthologs are read.\n version: 1 = the old way RSD serialized orthologs: each line has a subject sequence id, query sequence id, paml distance.\n 2 = the new, more sensible way RSD serializes orthologs: each line is a tab-separated query sequence id, subject sequence id, paml distance.\n -1 = the default = the latest version, 2.\n returns: a generator that yields orthologs, which are tuples of (query_sequence_id, subject_sequence_id, distance)\n '
for line in handle:
(id1, id2, dist) = line.split('\t')
if (version == 1):
(yield (id2, id1, dist))
else:
(yield (id1, id2, dist)) | handle: a stream from which lines containing orthologs are read.
version: 1 = the old way RSD serialized orthologs: each line has a subject sequence id, query sequence id, paml distance.
2 = the new, more sensible way RSD serializes orthologs: each line is a tab-separated query sequence id, subject sequence id, paml distance.
-1 = the default = the latest version, 2.
returns: a generator that yields orthologs, which are tuples of (query_sequence_id, subject_sequence_id, distance) | rsd/orthutil.py | orthologsFromStreamGen | todddeluca/reciprocal_smallest_distance | 4 | python | def orthologsFromStreamGen(handle, version=(- 1)):
'\n handle: a stream from which lines containing orthologs are read.\n version: 1 = the old way RSD serialized orthologs: each line has a subject sequence id, query sequence id, paml distance.\n 2 = the new, more sensible way RSD serializes orthologs: each line is a tab-separated query sequence id, subject sequence id, paml distance.\n -1 = the default = the latest version, 2.\n returns: a generator that yields orthologs, which are tuples of (query_sequence_id, subject_sequence_id, distance)\n '
for line in handle:
(id1, id2, dist) = line.split('\t')
if (version == 1):
(yield (id2, id1, dist))
else:
(yield (id1, id2, dist)) | def orthologsFromStreamGen(handle, version=(- 1)):
'\n handle: a stream from which lines containing orthologs are read.\n version: 1 = the old way RSD serialized orthologs: each line has a subject sequence id, query sequence id, paml distance.\n 2 = the new, more sensible way RSD serializes orthologs: each line is a tab-separated query sequence id, subject sequence id, paml distance.\n -1 = the default = the latest version, 2.\n returns: a generator that yields orthologs, which are tuples of (query_sequence_id, subject_sequence_id, distance)\n '
for line in handle:
(id1, id2, dist) = line.split('\t')
if (version == 1):
(yield (id2, id1, dist))
else:
(yield (id1, id2, dist))<|docstring|>handle: a stream from which lines containing orthologs are read.
version: 1 = the old way RSD serialized orthologs: each line has a subject sequence id, query sequence id, paml distance.
2 = the new, more sensible way RSD serializes orthologs: each line is a tab-separated query sequence id, subject sequence id, paml distance.
-1 = the default = the latest version, 2.
returns: a generator that yields orthologs, which are tuples of (query_sequence_id, subject_sequence_id, distance)<|endoftext|> |
237c278ccbf4a025de84588b549b150b4a38cdc932cb6ec34a912a9e69ce55d9 | def orthologsToStream(orthologs, handle, version=(- 1)):
'\n orthologs: an iterable of tuples of (query_id, subject_id, distance)\n handle: a stream from which lines containing orthologs are read.\n version: 1 = the old way RSD serialized orthologs: each line has a subject sequence id, query sequence id, paml distance.\n 2 = the new, more sensible way RSD serializes orthologs: each line is a tab-separated query sequence id, subject sequence id, paml distance.\n -1 = the default = the latest version, 2.\n returns: a generator that yields orthologs, which are tuples of (query_sequence_id, subject_sequence_id, distance)\n '
for ortholog in orthologs:
(qid, sid, dist) = ortholog
if (version == 1):
handle.write('{}\t{}\t{}\n'.format(sid, qid, dist))
else:
handle.write('{}\t{}\t{}\n'.format(qid, sid, dist)) | orthologs: an iterable of tuples of (query_id, subject_id, distance)
# rsd/orthutil.py (todddeluca/reciprocal_smallest_distance)

def orthologsToStream(orthologs, handle, version=-1):
    '''
    orthologs: an iterable of tuples of (query_id, subject_id, distance)
    handle: a stream to which lines containing orthologs are written.
    version: 1 = the old way RSD serialized orthologs: each line is a
               tab-separated subject sequence id, query sequence id, paml distance.
             2 = the new, more sensible way RSD serializes orthologs: each line is a
               tab-separated query sequence id, subject sequence id, paml distance.
             -1 = the default = the latest version, 2.
    '''
    for ortholog in orthologs:
        qid, sid, dist = ortholog
        if version == 1:
            handle.write('{}\t{}\t{}\n'.format(sid, qid, dist))
        else:
            handle.write('{}\t{}\t{}\n'.format(qid, sid, dist))
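A quick check of the two serialization orders, re-declaring the function so the snippet runs on its own (the sample ids and distance are illustrative):

```python
import io

def orthologsToStream(orthologs, handle, version=-1):
    """Write (query_id, subject_id, distance) tuples to handle, one per line."""
    for qid, sid, dist in orthologs:
        if version == 1:
            handle.write('{}\t{}\t{}\n'.format(sid, qid, dist))   # legacy order
        else:
            handle.write('{}\t{}\t{}\n'.format(qid, sid, dist))

buf = io.StringIO()
orthologsToStream([('q1', 's1', '0.42')], buf)             # default: version 2
orthologsToStream([('q1', 's1', '0.42')], buf, version=1)  # legacy: ids swapped
assert buf.getvalue() == 'q1\ts1\t0.42\ns1\tq1\t0.42\n'
```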
def orthDatasFromFileGen(path):
    '''
    path: contains zero or more orthDatas. must exist.
    yields: every orthData, a pair of params and orthologs, in path.
    '''
    with open(path) as fh:
        for orthData in orthDatasFromStreamGen(fh):
            yield orthData
def orthDatasFromFilesGen(paths):
    '''
    paths: a list of file paths containing orthDatas.
    yields: every orthData in every file in paths.
    '''
    for path in paths:
        for orthData in orthDatasFromFile(path):
            yield orthData
def orthDatasToFile(orthDatas, path, mode='w'):
    '''
    orthDatas: a list of rsd orthDatas. orthData is a pair of params and orthologs
    path: where to save the orthDatas
    mode: change to 'a' to append to an existing file
    Serializes orthDatas and persists them to path.
    Inspired by the Uniprot dat files: a set of orthologs starts with a params row,
    then has 0 or more ortholog rows, then has an end row.  Easy to parse, and it
    can represent a set of parameters with no orthologs.
    Example:
    PA\tLACJO\tYEAS7\t0.2\t1e-15
    OR\tQ74IU0\tA6ZM40\t1.7016
    OR\tQ74K17\tA6ZKK5\t0.8215
    //
    PA\tMYCGE\tMYCHP\t0.2\t1e-15
    //
    '''
    with open(path, mode) as fh:
        orthDatasToStream(orthDatas, fh)
def orthDatasToStr(orthDatas):
    '''
    orthDatas: a list of rsd orthDatas. orthData is a pair of params and orthologs
    Serializes orthDatas as a string.
    returns: a string containing the serialized orthDatas.
    '''
    # Note: io.BytesIO accepts str lines under Python 2; under Python 3 this
    # would need io.StringIO, since orthDatasToStream writes text.
    with io.BytesIO() as handle:
        orthDatasToStream(orthDatas, handle)
        return handle.getvalue()
def orthDatasToStream(orthDatas, handle):
    '''
    orthDatas: a list of rsd orthDatas. orthData is a pair of params and orthologs
    handle: an open io stream (e.g. a filehandle or a StringIO) to which the
        orthDatas are written. The handle is not opened or closed here.
    '''
    for (qdb, sdb, div, evalue), orthologs in orthDatas:
        handle.write('PA\t{}\t{}\t{}\t{}\n'.format(qdb, sdb, div, evalue))
        for ortholog in orthologs:
            handle.write('OR\t{}\t{}\t{}\n'.format(*ortholog))
        handle.write('//\n')
    return handle
def orthDatasFromStreamGen(handle):
    '''
    handle: an open io stream (e.g. a filehandle or a StringIO) from which
        orthDatas are read.
    yields: every orthData, a pair of params and orthologs, in the stream.
    '''
    for line in handle:
        if line.startswith('PA'):
            lineType, qdb, sdb, div, evalue = line.strip().split('\t')
            orthologs = []
        elif line.startswith('OR'):
            lineType, qid, sid, dist = line.strip().split('\t')
            orthologs.append((qid, sid, dist))
        elif line.startswith('//'):
            yield (qdb, sdb, div, evalue), orthologs
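Taken together, the writer and parser above define a round-trippable PA/OR/`//` record format. A minimal self-contained sketch (both functions re-declared so the snippet runs stand-alone, with `io.StringIO` for Python 3 text streams):

```python
import io

def orthDatasToStream(orthDatas, handle):
    # one PA row, zero or more OR rows, then an end-of-record row
    for (qdb, sdb, div, evalue), orthologs in orthDatas:
        handle.write('PA\t{}\t{}\t{}\t{}\n'.format(qdb, sdb, div, evalue))
        for ortholog in orthologs:
            handle.write('OR\t{}\t{}\t{}\n'.format(*ortholog))
        handle.write('//\n')

def orthDatasFromStreamGen(handle):
    for line in handle:
        if line.startswith('PA'):
            _, qdb, sdb, div, evalue = line.strip().split('\t')
            orthologs = []
        elif line.startswith('OR'):
            _, qid, sid, dist = line.strip().split('\t')
            orthologs.append((qid, sid, dist))
        elif line.startswith('//'):
            yield (qdb, sdb, div, evalue), orthologs

orthDatas = [
    (('LACJO', 'YEAS7', '0.2', '1e-15'),
     [('Q74IU0', 'A6ZM40', '1.7016'), ('Q74K17', 'A6ZKK5', '0.8215')]),
    (('MYCGE', 'MYCHP', '0.2', '1e-15'), []),  # params with no orthologs
]
buf = io.StringIO()
orthDatasToStream(orthDatas, buf)
buf.seek(0)
assert list(orthDatasFromStreamGen(buf)) == orthDatas
```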
# HW5_Gao_Tony,PRG1/my_socket_stuff.py (rpg711/CS453)

def waitLock(self, timeout):
    '''
    Wait up to timeout seconds on the condition variable for data to appear
    in self.readbuffer; returns True if any bytes were read.
    '''
    try:
        self.cond.acquire()
        print('waiting')
        timeout_time = time.time() + timeout
        # note: each wait() uses the full timeout rather than the remaining
        # time, so spurious wakeups can extend the total wait past the deadline
        while len(self.readbuffer) == 0 and time.time() < timeout_time:
            self.cond.wait(timeout)
        print('Returning from condition wait, read %s bytes' % len(self.readbuffer))
        return len(self.readbuffer) != 0
    finally:
        self.cond.release()
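The same wait-with-deadline pattern, sketched as a free-standing producer/consumer that waits only for the remaining time on each loop iteration (the class and method names here are illustrative, not from the original module):

```python
import threading
import time

class Channel:
    def __init__(self):
        self.cond = threading.Condition()
        self.readbuffer = b''

    def feed(self, data):
        # producer side: append data and wake any waiters
        with self.cond:
            self.readbuffer += data
            self.cond.notify_all()

    def wait_for_data(self, timeout):
        # consumer side: block until readbuffer is non-empty or the deadline passes
        deadline = time.time() + timeout
        with self.cond:
            while not self.readbuffer:
                remaining = deadline - time.time()
                if remaining <= 0:
                    return False
                self.cond.wait(remaining)   # wait only the remaining time
            return True

ch = Channel()
threading.Timer(0.05, ch.feed, args=(b'hello',)).start()
assert ch.wait_for_data(2.0) is True
assert ch.wait_for_data(0.05) is True   # buffer already non-empty
```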
# gridbspline/interpolate.py (oesteban/gridbspline)

def _samples_to_coeffs(line, poles, tol=DBL_EPSILON):
    """Filter a 1D series to obtain the corresponding coefficients.

    B-splines are separable, so dimensions can be filtered sequentially;
    processing of "lines" is independent (i.e. parallelizable).

    Parameters
    ----------
    line : array_like of shape (N, C)
        N is the number of samples along the processed axis and C is the
        number of components of the data.
    poles : list of float
        Poles corresponding to the selected B-spline basis.
    tol : float
        Tolerance used to truncate the boundary-initialization sums; a
        non-positive value falls back to the exact full-length sum.
    """
    gain = np.prod((1 - poles) * (1 - 1.0 / poles))
    line *= gain
    for p in poles:
        # causal (forward) recursion
        line[0] = _causal_c0(line, p, tol)
        for n in range(1, len(line)):
            line[n] += p * line[n - 1]
        # anticausal (backward) recursion
        line[-1] = _anticausal_cn(line, p)
        for n in reversed(range(0, len(line) - 1)):
            line[n] = p * (line[n + 1] - line[n])
    return line
def _causal_c0(line, z, tol=DBL_EPSILON):
    """Calculate the first coefficient of the causal filter."""
    length = len(line)
    horiz = length
    if tol > 0:
        horiz = (np.ceil(np.log(tol)) / np.log(np.abs(z))).astype(int)
    zn = float(z)
    if horiz < length:
        # accelerated loop: truncate the mirror-boundary sum once the
        # weights z**n fall below tol
        csum = line[0]
        for n in range(1, horiz):
            # start at n=1 so line[n] is weighted by z**n; the upstream code
            # started this loop at n=0, which weighted line[n] by z**(n+1)
            # and counted line[0] twice
            csum += zn * line[n]
            zn *= z
        return csum
    # full loop: exact mirror-boundary initialization for short signals
    iz = 1.0 / z
    z2n = z ** length
    csum = line[0] + z2n * line[-1]
    z2n *= z2n * iz
    for n in range(length):
        csum += (zn + z2n) * line[n]
        zn *= z
        z2n *= iz
    return csum / (1.0 - zn ** 2)
def _anticausal_cn(line, z):
    """Calculate the last coefficient of the anticausal filter."""
    return (z / (z * z - 1.0)) * (z * line[-2] + line[-1])
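A sanity check of the three helpers above: for the cubic B-spline the single pole is z = sqrt(3) - 2, and pre-filtering a constant signal must return the same constant, because cubic B-splines reproduce constants exactly. A pure-Python sketch of the same recursions (it re-implements the filter with plain lists and assumes a mirror boundary, so it is not the package's exact code):

```python
import math

def prefilter_cubic(samples, tol=1e-12):
    """Causal/anticausal prefilter for the cubic B-spline (single pole)."""
    z = math.sqrt(3.0) - 2.0                # the cubic pole, approx -0.2679
    c = list(samples)
    n = len(c)
    gain = (1 - z) * (1 - 1 / z)            # equals 6 exactly for this pole
    c = [gain * v for v in c]
    # causal initialization: truncated sum of z**k * c[k] (mirror boundary)
    horiz = min(n, int(math.ceil(math.log(tol) / math.log(abs(z)))))
    c0, zk = c[0], z
    for k in range(1, horiz):
        c0 += zk * c[k]
        zk *= z
    c[0] = c0
    for k in range(1, n):                   # causal (forward) recursion
        c[k] += z * c[k - 1]
    c[-1] = (z / (z * z - 1.0)) * (z * c[-2] + c[-1])
    for k in range(n - 2, -1, -1):          # anticausal (backward) recursion
        c[k] = z * (c[k + 1] - c[k])
    return c

coeffs = prefilter_cubic([5.0] * 64)
assert all(abs(v - 5.0) < 1e-9 for v in coeffs)
```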
def __call__(self, coords):
    """Interpolate at coordinates.

    Parameters
    ----------
    coords : ndarray of shape (..., ndim)
        The coordinates at which to sample the gridded data.
    """
    if self._order != 3:
        raise NotImplementedError
    for xi in coords:
        yield self._interpolate(xi)
def _interpolate(self, xi):
    """Evaluate the interpolated value at position xi.

    Calculates the B-spline weights corresponding to the samples around xi
    and evaluates the interpolated value.

    Parameters
    ----------
    xi : array_like of shape (ndim,)
        The position at which the image is interpolated.
    """
    if self._order != 3:
        raise NotImplementedError
    # indexes of the support samples along each dimension
    indexes = []
    offset = 0.0 if self._order & 1 else 0.5
    for dim in range(self.ndim):
        first = int(np.floor(xi[dim] + offset) - self._order // 2)
        indexes.append(list(range(first, first + self._order + 1)))
    ndindex = np.moveaxis(
        np.array(np.meshgrid(*indexes, indexing='ij')), 0, -1
    ).reshape(-1, self.ndim)
    # tensor-product B-spline weights
    vbspl = np.vectorize(cubic)
    weights = np.prod(vbspl(ndindex - xi), axis=-1)
    ndindex = [tuple(v) for v in ndindex]
    zero = np.zeros(self.ndim)
    shape = np.array(self.shape)
    coeffs = []
    for ijk in ndindex:
        offbounds = (zero > ijk) | (shape <= ijk)
        if np.any(offbounds):
            if self._off_bounds == 'constant':
                coeffs.append([self._fill_value] * self.ncomp)
                continue
            # mirror off-grid indices back into the grid
            ijk = np.array(ijk, dtype=int)
            ijk[ijk < 0] *= -1
            ijk[ijk >= shape] = (
                2 * shape[ijk >= shape] - ijk[ijk >= shape] - 1
            ).astype(int)
            ijk = tuple(ijk.tolist())
        coeffs.append(self._coeffs[ijk])
    return weights.dot(np.array(coeffs, dtype=float))
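The weights computed above are tensor products of the cubic B-spline kernel, and a defining property of that kernel is that for any position the weights over the four nearest samples per axis sum to 1 (partition of unity). The `cubic` helper is imported from elsewhere in the package, so this sketch re-declares it under the standard cubic B-spline definition:

```python
import math

def cubic(x):
    """Cubic B-spline basis function beta^3(x)."""
    ax = abs(x)
    if ax < 1.0:
        return 2.0 / 3.0 - ax * ax + 0.5 * ax ** 3
    if ax < 2.0:
        return (2.0 - ax) ** 3 / 6.0
    return 0.0

def support_weights(x):
    """Weights of the 4 grid samples supporting position x (order 3)."""
    first = int(math.floor(x)) - 1          # same as floor(x) - order // 2
    return [cubic(k - x) for k in range(first, first + 4)]

for x in (0.0, 0.25, 3.7, -1.3):
    assert abs(sum(support_weights(x)) - 1.0) < 1e-12
```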
# starry_process/flux.py (arfon/starry_process)

def _right_project(self, M, theta, inc):
    """Apply the projection operator on the right.

    Specifically, this method returns the dot product ``M . R``, where ``M``
    is an input matrix and ``R`` is the Wigner rotation matrix that
    transforms a spherical harmonic coefficient vector in the input frame to
    a vector in the observer's frame.
    """
    M = self._dotRx(M, -inc)
    M = self._tensordotRz(M, theta)
    M = self._dotRx(M, 0.5 * np.pi)
    return M
def _G(self, j, i):
    """
    The integral of

        cos(x / 2)**i * sin(x / 2)**j * sin(x)

    from 0 to pi / 2.
    """
    return 2 * gamma(1 + 0.5 * i) * gamma(1 + 0.5 * j) / gamma(
        0.5 * (4 + i + j)
    ) - 2 ** (1 - 0.5 * i) / (2 + i) * hyp2f1(
        1 + 0.5 * i, -0.5 * j, 2 + 0.5 * i, 0.5
    )
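The closed form can be spot-checked against elementary cases: i = j = 0 gives the integral of sin(x) on [0, pi/2], which is 1, and i = 2, j = 0 integrates to 3/4. A sketch using only the standard library (`hyp2f1` here is a plain power-series partial sum, adequate at argument 1/2, rather than the package's scipy import):

```python
import math

def hyp2f1(a, b, c, z, terms=200):
    """Gauss hypergeometric series; converges quickly for |z| = 0.5."""
    total, term = 1.0, 1.0
    for n in range(terms):
        term *= (a + n) * (b + n) / (c + n) * z / (n + 1)
        total += term
        if abs(term) < 1e-16:
            break
    return total

def G(j, i):
    # integral of cos(x/2)**i * sin(x/2)**j * sin(x) on [0, pi/2]
    return (
        2 * math.gamma(1 + 0.5 * i) * math.gamma(1 + 0.5 * j)
        / math.gamma(0.5 * (4 + i + j))
        - 2 ** (1 - 0.5 * i) / (2 + i)
        * hyp2f1(1 + 0.5 * i, -0.5 * j, 2 + 0.5 * i, 0.5)
    )

assert abs(G(0, 0) - 1.0) < 1e-12    # integral of sin(x)
assert abs(G(0, 2) - 0.75) < 1e-12   # integral of cos(x/2)**2 * sin(x)
assert abs(G(2, 0) - 0.25) < 1e-12   # integral of sin(x/2)**2 * sin(x)
```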
1a95544077b2514096d16acc884cc9b30e93ac44f1b7a11c96df2025162e47c2 | def _precompute(self):
"\n Pre-compute some stuff that doesn't depend on\n user inputs.\n\n "
G = np.array([[self._G(i, j) for i in range(((4 * self._ydeg) + 1))] for j in range(((4 * self._ydeg) + 1))])
self._wnp = [None for l in range((self._ydeg + 1))]
for l in range((self._ydeg + 1)):
m = np.arange((- l), (l + 1))
i = slice((l ** 2), ((l + 1) ** 2))
self._wnp[l] = (self._R[l] @ G[((l - m), (l + m))])
Q = np.empty((((2 * self._ydeg) + 1), ((2 * self._ydeg) + 1), ((2 * self._ydeg) + 1), self._nylm))
for l1 in range((self._ydeg + 1)):
k = np.arange((l1 ** 2), ((l1 + 1) ** 2))
k0 = np.arange(((2 * l1) + 1)).reshape((- 1), 1)
for p in range(self._nylm):
l2 = int(np.floor(np.sqrt(p)))
j = np.arange((l2 ** 2), ((l2 + 1) ** 2))
j0 = np.arange(((2 * l2) + 1)).reshape(1, (- 1))
L = (self._R[l1][(l1, (k - (l1 ** 2)))] @ G[((k0 + j0), ((((2 * l1) - k0) + (2 * l2)) - j0))])
R = self._R[l2][((j - (l2 ** 2)), (p - (l2 ** 2)))].T
Q[(l1, :((2 * l1) + 1), :((2 * l2) + 1), p)] = (L @ R)
self._Wnp = np.empty((self._nylm, self._nylm))
for l1 in range((self._ydeg + 1)):
i = np.arange((l1 ** 2), ((l1 + 1) ** 2))
for l2 in range((self._ydeg + 1)):
j = np.arange((l2 ** 2), ((l2 + 1) ** 2))
self._Wnp[(i.reshape((- 1), 1), j.reshape(1, (- 1)))] = Q[(l1, :((2 * l1) + 1), l2, j)].T | Pre-compute some stuff that doesn't depend on
user inputs. | starry_process/flux.py | _precompute | arfon/starry_process | 13 | python | def _precompute(self):
"\n Pre-compute some stuff that doesn't depend on\n user inputs.\n\n "
G = np.array([[self._G(i, j) for i in range(((4 * self._ydeg) + 1))] for j in range(((4 * self._ydeg) + 1))])
self._wnp = [None for l in range((self._ydeg + 1))]
for l in range((self._ydeg + 1)):
m = np.arange((- l), (l + 1))
i = slice((l ** 2), ((l + 1) ** 2))
self._wnp[l] = (self._R[l] @ G[((l - m), (l + m))])
Q = np.empty((((2 * self._ydeg) + 1), ((2 * self._ydeg) + 1), ((2 * self._ydeg) + 1), self._nylm))
for l1 in range((self._ydeg + 1)):
k = np.arange((l1 ** 2), ((l1 + 1) ** 2))
k0 = np.arange(((2 * l1) + 1)).reshape((- 1), 1)
for p in range(self._nylm):
l2 = int(np.floor(np.sqrt(p)))
j = np.arange((l2 ** 2), ((l2 + 1) ** 2))
j0 = np.arange(((2 * l2) + 1)).reshape(1, (- 1))
L = (self._R[l1][(l1, (k - (l1 ** 2)))] @ G[((k0 + j0), ((((2 * l1) - k0) + (2 * l2)) - j0))])
R = self._R[l2][((j - (l2 ** 2)), (p - (l2 ** 2)))].T
Q[(l1, :((2 * l1) + 1), :((2 * l2) + 1), p)] = (L @ R)
self._Wnp = np.empty((self._nylm, self._nylm))
for l1 in range((self._ydeg + 1)):
i = np.arange((l1 ** 2), ((l1 + 1) ** 2))
for l2 in range((self._ydeg + 1)):
j = np.arange((l2 ** 2), ((l2 + 1) ** 2))
self._Wnp[(i.reshape((- 1), 1), j.reshape(1, (- 1)))] = Q[(l1, :((2 * l1) + 1), l2, j)].T | def _precompute(self):
"\n Pre-compute some stuff that doesn't depend on\n user inputs.\n\n "
G = np.array([[self._G(i, j) for i in range(((4 * self._ydeg) + 1))] for j in range(((4 * self._ydeg) + 1))])
self._wnp = [None for l in range((self._ydeg + 1))]
for l in range((self._ydeg + 1)):
m = np.arange((- l), (l + 1))
i = slice((l ** 2), ((l + 1) ** 2))
self._wnp[l] = (self._R[l] @ G[((l - m), (l + m))])
Q = np.empty((((2 * self._ydeg) + 1), ((2 * self._ydeg) + 1), ((2 * self._ydeg) + 1), self._nylm))
for l1 in range((self._ydeg + 1)):
k = np.arange((l1 ** 2), ((l1 + 1) ** 2))
k0 = np.arange(((2 * l1) + 1)).reshape((- 1), 1)
for p in range(self._nylm):
l2 = int(np.floor(np.sqrt(p)))
j = np.arange((l2 ** 2), ((l2 + 1) ** 2))
j0 = np.arange(((2 * l2) + 1)).reshape(1, (- 1))
L = (self._R[l1][(l1, (k - (l1 ** 2)))] @ G[((k0 + j0), ((((2 * l1) - k0) + (2 * l2)) - j0))])
R = self._R[l2][((j - (l2 ** 2)), (p - (l2 ** 2)))].T
Q[(l1, :((2 * l1) + 1), :((2 * l2) + 1), p)] = (L @ R)
self._Wnp = np.empty((self._nylm, self._nylm))
for l1 in range((self._ydeg + 1)):
i = np.arange((l1 ** 2), ((l1 + 1) ** 2))
for l2 in range((self._ydeg + 1)):
j = np.arange((l2 ** 2), ((l2 + 1) ** 2))
self._Wnp[(i.reshape((- 1), 1), j.reshape(1, (- 1)))] = Q[(l1, :((2 * l1) + 1), l2, j)].T<|docstring|>Pre-compute some stuff that doesn't depend on
user inputs.<|endoftext|> |
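The flat spherical-harmonic indexing that `_precompute` relies on — coefficients of degree `l` occupy flat indices `l**2 .. (l+1)**2 - 1` (that is, `2*l + 1` entries), and `l2 = int(np.floor(np.sqrt(p)))` recovers the degree from a flat index `p` — can be illustrated with a minimal stdlib sketch (function names here are my own, not from the source):

```python
import math

def degree_of(p):
    # Recover the spherical-harmonic degree l from a flat index p,
    # mirroring `l2 = int(np.floor(np.sqrt(p)))` in _precompute.
    return int(math.floor(math.sqrt(p)))

def block_indices(l):
    # Flat indices occupied by degree l: l**2 .. (l+1)**2 - 1 (2*l + 1 entries).
    return list(range(l ** 2, (l + 1) ** 2))

# Every flat index falls in exactly one degree block of the right size.
for p in range(25):
    l = degree_of(p)
    assert p in block_indices(l)
    assert len(block_indices(l)) == 2 * l + 1
```

This is why the slices `slice(l ** 2, (l + 1) ** 2)` and `np.arange(l1 ** 2, (l1 + 1) ** 2)` above walk the coefficient vector degree by degree without gaps or overlaps.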
a60ad1aa9de2255e4e650d1c58eb30cd95983303b0e40ee773217a8653d1a073 | def _interpolate_cov(self):
'\n Interpolate the pre-computed kernel onto a\n grid of time lags in 2D to get the full covariance.\n\n '
theta = ((2 * np.pi) * tt.mod((self._t / self._p), 1.0))
x = tt.reshape(tt.abs_((theta[(:, None)] - theta[(None, :)])), ((- 1),))
inds = tt.cast(tt.floor((x / self._dx)), 'int64')
x0 = ((x - self._xp[(inds + 1)]) / self._dx)
cov = tt.reshape((((self._a0[inds] + (self._a1[inds] * x0)) + (self._a2[inds] * (x0 ** 2))) + (self._a3[inds] * (x0 ** 3))), (theta.shape[0], theta.shape[0]))
cov = ifelse(tt.eq(theta.shape[0], 1), self._var, cov)
return cov | Interpolate the pre-computed kernel onto a
grid of time lags in 2D to get the full covariance. | starry_process/flux.py | _interpolate_cov | arfon/starry_process | 13 | python | def _interpolate_cov(self):
'\n Interpolate the pre-computed kernel onto a\n grid of time lags in 2D to get the full covariance.\n\n '
theta = ((2 * np.pi) * tt.mod((self._t / self._p), 1.0))
x = tt.reshape(tt.abs_((theta[(:, None)] - theta[(None, :)])), ((- 1),))
inds = tt.cast(tt.floor((x / self._dx)), 'int64')
x0 = ((x - self._xp[(inds + 1)]) / self._dx)
cov = tt.reshape((((self._a0[inds] + (self._a1[inds] * x0)) + (self._a2[inds] * (x0 ** 2))) + (self._a3[inds] * (x0 ** 3))), (theta.shape[0], theta.shape[0]))
cov = ifelse(tt.eq(theta.shape[0], 1), self._var, cov)
return cov | def _interpolate_cov(self):
'\n Interpolate the pre-computed kernel onto a\n grid of time lags in 2D to get the full covariance.\n\n '
theta = ((2 * np.pi) * tt.mod((self._t / self._p), 1.0))
x = tt.reshape(tt.abs_((theta[(:, None)] - theta[(None, :)])), ((- 1),))
inds = tt.cast(tt.floor((x / self._dx)), 'int64')
x0 = ((x - self._xp[(inds + 1)]) / self._dx)
cov = tt.reshape((((self._a0[inds] + (self._a1[inds] * x0)) + (self._a2[inds] * (x0 ** 2))) + (self._a3[inds] * (x0 ** 3))), (theta.shape[0], theta.shape[0]))
cov = ifelse(tt.eq(theta.shape[0], 1), self._var, cov)
return cov<|docstring|>Interpolate the pre-computed kernel onto a
grid of time lags in 2D to get the full covariance.<|endoftext|> |
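The lookup in `_interpolate_cov` — floor index `inds = floor(x / dx)`, fractional offset `x0 = (x - xp[inds + 1]) / dx`, then a per-interval cubic — can be sketched in plain Python against a known function. This is an illustrative analogue, not the Theano implementation; the coefficient formulas are the `a0..a3` expressions computed in `_compute`:

```python
import math

def make_interpolator(f, npts):
    """Plain-Python sketch of the padded-grid cubic lookup used above."""
    dx = 2 * math.pi / npts
    # One guard point before 0 and two past 2*pi, mirroring
    # tt.arange(-dx, 2*pi + 2.5*dx, dx); note xp[1] == 0.0.
    xp = [(k - 1) * dx for k in range(npts + 4)]
    yp = [f(x) for x in xp]
    # Per-interval cubic coefficients: the same a0..a3 formulas as _compute().
    coeffs = []
    for y0, y1, y2, y3 in zip(yp, yp[1:], yp[2:], yp[3:]):
        coeffs.append((
            y1,
            -y0 / 3.0 - 0.5 * y1 + y2 - y3 / 6.0,
            0.5 * (y0 + y2) - y1,
            0.5 * ((y1 - y2) + (y3 - y0) / 3.0),
        ))

    def interp(x):
        i = int(math.floor(x / dx))   # `inds` in _interpolate_cov
        x0 = (x - xp[i + 1]) / dx     # fractional offset in [0, 1)
        a0, a1, a2, a3 = coeffs[i]
        return a0 + a1 * x0 + a2 * x0 ** 2 + a3 * x0 ** 3

    return interp

interp = make_interpolator(math.cos, 200)
assert abs(interp(1.234) - math.cos(1.234)) < 1e-5
```

With 200 grid points the local cubic reproduces a smooth kernel to roughly `dx**4` accuracy, which is why a modest `covpts` suffices for the full covariance.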
d63bcc534b3dd71e4b6f67ef9d8069030d567583fc7cd6a19b7492045ee6ffa8 | def _compute(self):
'\n Compute some vectors and matrices used in the\n evaluation of the mean and covariance.\n\n '
self._compute_inclination_integrals()
if self._marginalize_over_inclination:
self._mean = tt.sum([tt.dot(self._w[l], self._ez[slice((l ** 2), ((l + 1) ** 2))]) for l in range((self._ydeg + 1))])
self._var = ((tt.tensordot(self._W, self._Ez) - (self._mean ** 2)) * tt.eye(1))
self._dx = ((2 * np.pi) / self._covpts)
self._xp = tt.arange((- self._dx), ((2 * np.pi) + (2.5 * self._dx)), self._dx)
mom2 = self._special_tensordotRz(self._W, self._Ez, self._xp)
yp = (mom2 - (self._mean ** 2))
y0 = yp[:(- 3)]
y1 = yp[1:(- 2)]
y2 = yp[2:(- 1)]
y3 = yp[3:]
self._a0 = y1
self._a1 = (((((- y0) / 3.0) - (0.5 * y1)) + y2) - (y3 / 6.0))
self._a2 = ((0.5 * (y0 + y2)) - y1)
self._a3 = (0.5 * ((y1 - y2) + ((y3 - y0) / 3.0)))
self._cov = self._interpolate_cov()
else:
A = self._design_matrix()
self._mean = tt.dot(A, self._mean_ylm)[0]
self._cov = tt.dot(tt.dot(A, self._cov_ylm), tt.transpose(A)) | Compute some vectors and matrices used in the
evaluation of the mean and covariance. | starry_process/flux.py | _compute | arfon/starry_process | 13 | python | def _compute(self):
'\n Compute some vectors and matrices used in the\n evaluation of the mean and covariance.\n\n '
self._compute_inclination_integrals()
if self._marginalize_over_inclination:
self._mean = tt.sum([tt.dot(self._w[l], self._ez[slice((l ** 2), ((l + 1) ** 2))]) for l in range((self._ydeg + 1))])
self._var = ((tt.tensordot(self._W, self._Ez) - (self._mean ** 2)) * tt.eye(1))
self._dx = ((2 * np.pi) / self._covpts)
self._xp = tt.arange((- self._dx), ((2 * np.pi) + (2.5 * self._dx)), self._dx)
mom2 = self._special_tensordotRz(self._W, self._Ez, self._xp)
yp = (mom2 - (self._mean ** 2))
y0 = yp[:(- 3)]
y1 = yp[1:(- 2)]
y2 = yp[2:(- 1)]
y3 = yp[3:]
self._a0 = y1
self._a1 = (((((- y0) / 3.0) - (0.5 * y1)) + y2) - (y3 / 6.0))
self._a2 = ((0.5 * (y0 + y2)) - y1)
self._a3 = (0.5 * ((y1 - y2) + ((y3 - y0) / 3.0)))
self._cov = self._interpolate_cov()
else:
A = self._design_matrix()
self._mean = tt.dot(A, self._mean_ylm)[0]
self._cov = tt.dot(tt.dot(A, self._cov_ylm), tt.transpose(A)) | def _compute(self):
'\n Compute some vectors and matrices used in the\n evaluation of the mean and covariance.\n\n '
self._compute_inclination_integrals()
if self._marginalize_over_inclination:
self._mean = tt.sum([tt.dot(self._w[l], self._ez[slice((l ** 2), ((l + 1) ** 2))]) for l in range((self._ydeg + 1))])
self._var = ((tt.tensordot(self._W, self._Ez) - (self._mean ** 2)) * tt.eye(1))
self._dx = ((2 * np.pi) / self._covpts)
self._xp = tt.arange((- self._dx), ((2 * np.pi) + (2.5 * self._dx)), self._dx)
mom2 = self._special_tensordotRz(self._W, self._Ez, self._xp)
yp = (mom2 - (self._mean ** 2))
y0 = yp[:(- 3)]
y1 = yp[1:(- 2)]
y2 = yp[2:(- 1)]
y3 = yp[3:]
self._a0 = y1
self._a1 = (((((- y0) / 3.0) - (0.5 * y1)) + y2) - (y3 / 6.0))
self._a2 = ((0.5 * (y0 + y2)) - y1)
self._a3 = (0.5 * ((y1 - y2) + ((y3 - y0) / 3.0)))
self._cov = self._interpolate_cov()
else:
A = self._design_matrix()
self._mean = tt.dot(A, self._mean_ylm)[0]
self._cov = tt.dot(tt.dot(A, self._cov_ylm), tt.transpose(A))<|docstring|>Compute some vectors and matrices used in the
evaluation of the mean and covariance.<|endoftext|> |
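The `a0..a3` expressions in `_compute` are the coefficients of the Lagrange cubic through four equally spaced samples `y0..y3` placed at offsets `-1, 0, 1, 2` in units of `dx`. A short sketch verifying that property (the helper names are my own):

```python
def cubic_coeffs(y0, y1, y2, y3):
    # The same expressions assigned to self._a0 .. self._a3 above.
    a0 = y1
    a1 = -y0 / 3.0 - 0.5 * y1 + y2 - y3 / 6.0
    a2 = 0.5 * (y0 + y2) - y1
    a3 = 0.5 * ((y1 - y2) + (y3 - y0) / 3.0)
    return a0, a1, a2, a3

def cubic_eval(coeffs, t):
    a0, a1, a2, a3 = coeffs
    return a0 + a1 * t + a2 * t ** 2 + a3 * t ** 3

# The cubic passes through all four samples at t = -1, 0, 1, 2,
# so evaluating it for t in [0, 1) interpolates between y1 and y2.
c = cubic_coeffs(2.0, -1.0, 5.0, 3.0)
for t, y in [(-1, 2.0), (0, -1.0), (1, 5.0), (2, 3.0)]:
    assert abs(cubic_eval(c, t) - y) < 1e-12
```

This explains the slicing `yp[:-3], yp[1:-2], yp[2:-1], yp[3:]`: each interval of the lag grid gets its own four-point stencil.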
053715bd0c64055bf0015c3821d9298f25a3c862d5195054293934bd445eaf52 | def apply_transaction(state, tx: transactions.Transaction, tx_wrapper_hash):
'tx_wrapper_hash is the hash for quarkchain.core.Transaction\n TODO: remove quarkchain.core.Transaction wrapper and use evm.Transaction directly\n '
state.logs = []
state.suicides = []
state.refunds = 0
validate_transaction(state, tx)
state.full_shard_key = tx.to_full_shard_key
intrinsic_gas = tx.intrinsic_gas_used
log_tx.debug('TX NEW', txdict=tx.to_dict())
if (tx.sender != null_address):
state.increment_nonce(tx.sender)
local_fee_rate = ((1 - state.qkc_config.reward_tax_rate) if state.qkc_config else Fraction(1))
assert (state.get_balance(tx.sender, token_id=tx.gas_token_id) >= (tx.startgas * tx.gasprice))
state.delta_token_balance(tx.sender, tx.gas_token_id, ((- tx.startgas) * tx.gasprice))
message_data = vm.CallData([safe_ord(x) for x in tx.data], 0, len(tx.data))
message = vm.Message(tx.sender, tx.to, tx.value, (tx.startgas - intrinsic_gas), message_data, code_address=tx.to, is_cross_shard=tx.is_cross_shard, from_full_shard_key=tx.from_full_shard_key, to_full_shard_key=tx.to_full_shard_key, tx_hash=tx_wrapper_hash, transfer_token_id=tx.transfer_token_id, gas_token_id=tx.gas_token_id)
ext = VMExt(state, tx.sender, tx.gasprice)
contract_address = b''
if tx.is_cross_shard:
local_gas_used = intrinsic_gas
remote_gas_reserved = 0
if transfer_failure_by_posw_balance_check(ext, message):
success = 0
local_gas_used = tx.startgas
elif (tx.to == b''):
success = 0
else:
state.delta_token_balance(tx.sender, tx.transfer_token_id, (- tx.value))
if ((state.qkc_config.ENABLE_EVM_TIMESTAMP is None) or (state.timestamp >= state.qkc_config.ENABLE_EVM_TIMESTAMP)):
remote_gas_reserved = (tx.startgas - intrinsic_gas)
ext.add_cross_shard_transaction_deposit(quarkchain.core.CrossShardTransactionDeposit(tx_hash=tx_wrapper_hash, from_address=quarkchain.core.Address(tx.sender, tx.from_full_shard_key), to_address=quarkchain.core.Address(tx.to, tx.to_full_shard_key), value=tx.value, gas_price=tx.gasprice, gas_token_id=tx.gas_token_id, transfer_token_id=tx.transfer_token_id, message_data=tx.data, create_contract=False, gas_remained=remote_gas_reserved))
success = 1
gas_remained = ((tx.startgas - local_gas_used) - remote_gas_reserved)
state.delta_token_balance(message.sender, message.gas_token_id, (ext.tx_gasprice * gas_remained))
fee = (((tx.gasprice * (local_gas_used - (opcodes.GTXXSHARDCOST if success else 0))) * local_fee_rate.numerator) // local_fee_rate.denominator)
state.delta_token_balance(state.block_coinbase, tx.gas_token_id, fee)
add_dict(state.block_fee_tokens, {message.gas_token_id: fee})
output = []
state.gas_used += local_gas_used
r = mk_receipt(state, success, state.logs, contract_address, state.full_shard_key)
state.logs = []
state.add_receipt(r)
return (success, output)
return apply_transaction_message(state, message, ext, (tx.to == b''), intrinsic_gas) | tx_wrapper_hash is the hash for quarkchain.core.Transaction
TODO: remove quarkchain.core.Transaction wrapper and use evm.Transaction directly | quarkchain/evm/messages.py | apply_transaction | braveheart12/QuarkChain | 0 | python | def apply_transaction(state, tx: transactions.Transaction, tx_wrapper_hash):
'tx_wrapper_hash is the hash for quarkchain.core.Transaction\n TODO: remove quarkchain.core.Transaction wrapper and use evm.Transaction directly\n '
state.logs = []
state.suicides = []
state.refunds = 0
validate_transaction(state, tx)
state.full_shard_key = tx.to_full_shard_key
intrinsic_gas = tx.intrinsic_gas_used
log_tx.debug('TX NEW', txdict=tx.to_dict())
if (tx.sender != null_address):
state.increment_nonce(tx.sender)
local_fee_rate = ((1 - state.qkc_config.reward_tax_rate) if state.qkc_config else Fraction(1))
assert (state.get_balance(tx.sender, token_id=tx.gas_token_id) >= (tx.startgas * tx.gasprice))
state.delta_token_balance(tx.sender, tx.gas_token_id, ((- tx.startgas) * tx.gasprice))
message_data = vm.CallData([safe_ord(x) for x in tx.data], 0, len(tx.data))
message = vm.Message(tx.sender, tx.to, tx.value, (tx.startgas - intrinsic_gas), message_data, code_address=tx.to, is_cross_shard=tx.is_cross_shard, from_full_shard_key=tx.from_full_shard_key, to_full_shard_key=tx.to_full_shard_key, tx_hash=tx_wrapper_hash, transfer_token_id=tx.transfer_token_id, gas_token_id=tx.gas_token_id)
ext = VMExt(state, tx.sender, tx.gasprice)
contract_address = b''
if tx.is_cross_shard:
local_gas_used = intrinsic_gas
remote_gas_reserved = 0
if transfer_failure_by_posw_balance_check(ext, message):
success = 0
local_gas_used = tx.startgas
elif (tx.to == b''):
success = 0
else:
state.delta_token_balance(tx.sender, tx.transfer_token_id, (- tx.value))
if ((state.qkc_config.ENABLE_EVM_TIMESTAMP is None) or (state.timestamp >= state.qkc_config.ENABLE_EVM_TIMESTAMP)):
remote_gas_reserved = (tx.startgas - intrinsic_gas)
ext.add_cross_shard_transaction_deposit(quarkchain.core.CrossShardTransactionDeposit(tx_hash=tx_wrapper_hash, from_address=quarkchain.core.Address(tx.sender, tx.from_full_shard_key), to_address=quarkchain.core.Address(tx.to, tx.to_full_shard_key), value=tx.value, gas_price=tx.gasprice, gas_token_id=tx.gas_token_id, transfer_token_id=tx.transfer_token_id, message_data=tx.data, create_contract=False, gas_remained=remote_gas_reserved))
success = 1
gas_remained = ((tx.startgas - local_gas_used) - remote_gas_reserved)
state.delta_token_balance(message.sender, message.gas_token_id, (ext.tx_gasprice * gas_remained))
fee = (((tx.gasprice * (local_gas_used - (opcodes.GTXXSHARDCOST if success else 0))) * local_fee_rate.numerator) // local_fee_rate.denominator)
state.delta_token_balance(state.block_coinbase, tx.gas_token_id, fee)
add_dict(state.block_fee_tokens, {message.gas_token_id: fee})
output = []
state.gas_used += local_gas_used
r = mk_receipt(state, success, state.logs, contract_address, state.full_shard_key)
state.logs = []
state.add_receipt(r)
return (success, output)
return apply_transaction_message(state, message, ext, (tx.to == b''), intrinsic_gas) | def apply_transaction(state, tx: transactions.Transaction, tx_wrapper_hash):
'tx_wrapper_hash is the hash for quarkchain.core.Transaction\n TODO: remove quarkchain.core.Transaction wrapper and use evm.Transaction directly\n '
state.logs = []
state.suicides = []
state.refunds = 0
validate_transaction(state, tx)
state.full_shard_key = tx.to_full_shard_key
intrinsic_gas = tx.intrinsic_gas_used
log_tx.debug('TX NEW', txdict=tx.to_dict())
if (tx.sender != null_address):
state.increment_nonce(tx.sender)
local_fee_rate = ((1 - state.qkc_config.reward_tax_rate) if state.qkc_config else Fraction(1))
assert (state.get_balance(tx.sender, token_id=tx.gas_token_id) >= (tx.startgas * tx.gasprice))
state.delta_token_balance(tx.sender, tx.gas_token_id, ((- tx.startgas) * tx.gasprice))
message_data = vm.CallData([safe_ord(x) for x in tx.data], 0, len(tx.data))
message = vm.Message(tx.sender, tx.to, tx.value, (tx.startgas - intrinsic_gas), message_data, code_address=tx.to, is_cross_shard=tx.is_cross_shard, from_full_shard_key=tx.from_full_shard_key, to_full_shard_key=tx.to_full_shard_key, tx_hash=tx_wrapper_hash, transfer_token_id=tx.transfer_token_id, gas_token_id=tx.gas_token_id)
ext = VMExt(state, tx.sender, tx.gasprice)
contract_address = b''
if tx.is_cross_shard:
local_gas_used = intrinsic_gas
remote_gas_reserved = 0
if transfer_failure_by_posw_balance_check(ext, message):
success = 0
local_gas_used = tx.startgas
elif (tx.to == b''):
success = 0
else:
state.delta_token_balance(tx.sender, tx.transfer_token_id, (- tx.value))
if ((state.qkc_config.ENABLE_EVM_TIMESTAMP is None) or (state.timestamp >= state.qkc_config.ENABLE_EVM_TIMESTAMP)):
remote_gas_reserved = (tx.startgas - intrinsic_gas)
ext.add_cross_shard_transaction_deposit(quarkchain.core.CrossShardTransactionDeposit(tx_hash=tx_wrapper_hash, from_address=quarkchain.core.Address(tx.sender, tx.from_full_shard_key), to_address=quarkchain.core.Address(tx.to, tx.to_full_shard_key), value=tx.value, gas_price=tx.gasprice, gas_token_id=tx.gas_token_id, transfer_token_id=tx.transfer_token_id, message_data=tx.data, create_contract=False, gas_remained=remote_gas_reserved))
success = 1
gas_remained = ((tx.startgas - local_gas_used) - remote_gas_reserved)
state.delta_token_balance(message.sender, message.gas_token_id, (ext.tx_gasprice * gas_remained))
fee = (((tx.gasprice * (local_gas_used - (opcodes.GTXXSHARDCOST if success else 0))) * local_fee_rate.numerator) // local_fee_rate.denominator)
state.delta_token_balance(state.block_coinbase, tx.gas_token_id, fee)
add_dict(state.block_fee_tokens, {message.gas_token_id: fee})
output = []
state.gas_used += local_gas_used
r = mk_receipt(state, success, state.logs, contract_address, state.full_shard_key)
state.logs = []
state.add_receipt(r)
return (success, output)
return apply_transaction_message(state, message, ext, (tx.to == b''), intrinsic_gas)<|docstring|>tx_wrapper_hash is the hash for quarkchain.core.Transaction
TODO: remove quarkchain.core.Transaction wrapper and use evm.Transaction directly<|endoftext|> |
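The local-fee step in `apply_transaction` scales the gas cost by `local_fee_rate = 1 - reward_tax_rate` using exact `Fraction` arithmetic (`numerator` / `denominator` with floor division) rather than floats. A simplified sketch of just that arithmetic — illustrative numbers, and it omits the `GTXXSHARDCOST` deduction applied on successful cross-shard sends:

```python
from fractions import Fraction

def local_fee(gasprice, gas_used, reward_tax_rate):
    # Mirrors: fee = gasprice * gas_used * rate.numerator // rate.denominator
    rate = 1 - reward_tax_rate
    return gasprice * gas_used * rate.numerator // rate.denominator

# With a 50% reward tax, half of the gas fee stays with the local coinbase.
fee = local_fee(gasprice=10, gas_used=21000, reward_tax_rate=Fraction(1, 2))
assert fee == 105000
```

Keeping the rate as a `Fraction` until the final floor division avoids rounding drift when fees are accumulated across many transactions.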
a8c00b191f1d7db65bf89dc26cdc2a1b27f8a7425867389f7b43d1ad0ec572c9 | def get_environment_config_value(key: str, default_return: Optional[str]=None) -> Optional[str]:
' Get the environment variable value of the environment variable matching key.\n Otherwise return default_return.\n '
return os.environ.get(key, default_return) | Get the environment variable value of the environment variable matching key.
Otherwise return default_return. | splitgraph/config/environment_config.py | get_environment_config_value | dazzag24/splitgraph | 1 | python | def get_environment_config_value(key: str, default_return: Optional[str]=None) -> Optional[str]:
' Get the environment variable value of the environment variable matching key.\n Otherwise return default_return.\n '
return os.environ.get(key, default_return) | def get_environment_config_value(key: str, default_return: Optional[str]=None) -> Optional[str]:
' Get the environment variable value of the environment variable matching key.\n Otherwise return default_return.\n '
return os.environ.get(key, default_return)<|docstring|>Get the environment variable value of the environment variable matching key.
Otherwise return default_return.<|endoftext|> |
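The helper is a thin wrapper over `os.environ.get`, so the fallback behavior can be demonstrated directly (the key name below is hypothetical):

```python
import os

# Hypothetical key; any environment variable name behaves the same way.
os.environ["SG_EXAMPLE_KEY"] = "on"
assert os.environ.get("SG_EXAMPLE_KEY", "off") == "on"

del os.environ["SG_EXAMPLE_KEY"]
assert os.environ.get("SG_EXAMPLE_KEY", "off") == "off"  # unset -> default_return
```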
8c3fa61687eed61e9322c67b63de58ff8f4b2384a3494a20c82acfb742116fa3 | def safe_float(field):
'Convert field to a float.\n\n Args:\n field: The field (usually a str) to convert to float.\n\n Return:\n The float value represented by field or NaN if float conversion throws a ValueError.\n '
try:
return float(field)
except ValueError:
return float('NaN') | Convert field to a float.
Args:
field: The field (usually a str) to convert to float.
Return:
The float value represented by field or NaN if float conversion throws a ValueError. | src/libnmea_navsat_driver/parser.py | safe_float | boschresearch/nmea_navsat_driver | 0 | python | def safe_float(field):
'Convert field to a float.\n\n Args:\n field: The field (usually a str) to convert to float.\n\n Return:\n The float value represented by field or NaN if float conversion throws a ValueError.\n '
try:
return float(field)
except ValueError:
return float('NaN') | def safe_float(field):
'Convert field to a float.\n\n Args:\n field: The field (usually a str) to convert to float.\n\n Return:\n The float value represented by field or NaN if float conversion throws a ValueError.\n '
try:
return float(field)
except ValueError:
return float('NaN')<|docstring|>Convert field to a float.
Args:
field: The field (usually a str) to convert to float.
Return:
The float value represented by field or NaN if float conversion throws a ValueError.<|endoftext|> |
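The NaN fallback matters because NMEA sentences routinely contain empty fields; the pattern can be exercised like this:

```python
import math

def safe_float(field):
    # Same pattern as the parser helper: tolerate empty/garbled NMEA fields.
    try:
        return float(field)
    except ValueError:
        return float('NaN')

assert safe_float("3.5") == 3.5
assert math.isnan(safe_float(""))     # empty field -> NaN, not an exception
assert math.isnan(safe_float("abc"))
```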
d0facc16701d8e8435f4f80d30f0f92e0de89b23f4db25fe86da8b516cff8f15 | def safe_int(field):
'Convert field to an int.\n\n Args:\n field: The field (usually a str) to convert to int.\n\n Return:\n The int value represented by field or 0 if int conversion throws a ValueError.\n '
try:
return int(field)
except ValueError:
return 0 | Convert field to an int.
Args:
field: The field (usually a str) to convert to int.
Return:
The int value represented by field or 0 if int conversion throws a ValueError. | src/libnmea_navsat_driver/parser.py | safe_int | boschresearch/nmea_navsat_driver | 0 | python | def safe_int(field):
'Convert field to an int.\n\n Args:\n field: The field (usually a str) to convert to int.\n\n Return:\n The int value represented by field or 0 if int conversion throws a ValueError.\n '
try:
return int(field)
except ValueError:
return 0 | def safe_int(field):
'Convert field to an int.\n\n Args:\n field: The field (usually a str) to convert to int.\n\n Return:\n The int value represented by field or 0 if int conversion throws a ValueError.\n '
try:
return int(field)
except ValueError:
return 0<|docstring|>Convert field to an int.
Args:
field: The field (usually a str) to convert to int.
Return:
The int value represented by field or 0 if int conversion throws a ValueError.<|endoftext|> |
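Note the different failure sentinel from `safe_float`: `0` rather than NaN. Also, `int()` rejects float-formatted strings, which this sketch makes explicit:

```python
def safe_int(field):
    # Same pattern as safe_float, but with 0 (not NaN) as the failure sentinel.
    try:
        return int(field)
    except ValueError:
        return 0

assert safe_int("7") == 7
assert safe_int("") == 0     # empty field -> 0
assert safe_int("1.5") == 0  # int() raises ValueError on a float string
```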
733a27c71e9200494dc62259baa1bba553c63de795382193aa4f7bf6767177c6 | def convert_latitude(field):
'Convert a latitude string to floating point decimal degrees.\n\n Args:\n field (str): Latitude string, expected to be formatted as DDMM.MMM, where\n DD is the latitude degrees, and MM.MMM are the minutes latitude.\n\n Return:\n Floating point latitude in decimal degrees.\n '
return (safe_float(field[0:2]) + (safe_float(field[2:]) / 60.0)) | Convert a latitude string to floating point decimal degrees.
Args:
field (str): Latitude string, expected to be formatted as DDMM.MMM, where
DD is the latitude degrees, and MM.MMM are the minutes latitude.
Return:
Floating point latitude in decimal degrees. | src/libnmea_navsat_driver/parser.py | convert_latitude | boschresearch/nmea_navsat_driver | 0 | python | def convert_latitude(field):
'Convert a latitude string to floating point decimal degrees.\n\n Args:\n field (str): Latitude string, expected to be formatted as DDMM.MMM, where\n DD is the latitude degrees, and MM.MMM are the minutes latitude.\n\n Return:\n Floating point latitude in decimal degrees.\n '
return (safe_float(field[0:2]) + (safe_float(field[2:]) / 60.0)) | def convert_latitude(field):
'Convert a latitude string to floating point decimal degrees.\n\n Args:\n field (str): Latitude string, expected to be formatted as DDMM.MMM, where\n DD is the latitude degrees, and MM.MMM are the minutes latitude.\n\n Return:\n Floating point latitude in decimal degrees.\n '
return (safe_float(field[0:2]) + (safe_float(field[2:]) / 60.0))<|docstring|>Convert a latitude string to floating point decimal degrees.
Args:
field (str): Latitude string, expected to be formatted as DDMM.MMM, where
DD is the latitude degrees, and MM.MMM are the minutes latitude.
Return:
Floating point latitude in decimal degrees.<|endoftext|> |
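A worked example of the DDMM.MMM split: in the standard GGA example field `"4807.038"`, the first two characters are 48 degrees and the rest is 7.038 minutes, giving 48.1173 decimal degrees:

```python
def convert_latitude(field):
    # DDMM.MMM -> decimal degrees (degrees in the first two characters).
    return float(field[0:2]) + float(field[2:]) / 60.0

# "4807.038" means 48 degrees, 7.038 minutes.
assert abs(convert_latitude("4807.038") - 48.1173) < 1e-6
```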
96ca68e60d8febe0f3b040bfa265b85dd0a581dbf26d0a85be27c2ad89f0790c | def convert_longitude(field):
'Convert a longitude string to floating point decimal degrees.\n\n Args:\n field (str): Longitude string, expected to be formatted as DDDMM.MMM, where\n DDD is the longitude degrees, and MM.MMM are the minutes longitude.\n\n Return:\n Floating point latitude in decimal degrees.\n '
return (safe_float(field[0:3]) + (safe_float(field[3:]) / 60.0)) | Convert a longitude string to floating point decimal degrees.
Args:
field (str): Longitude string, expected to be formatted as DDDMM.MMM, where
DDD is the longitude degrees, and MM.MMM are the minutes longitude.
Return:
Floating point latitude in decimal degrees. | src/libnmea_navsat_driver/parser.py | convert_longitude | boschresearch/nmea_navsat_driver | 0 | python | def convert_longitude(field):
'Convert a longitude string to floating point decimal degrees.\n\n Args:\n field (str): Longitude string, expected to be formatted as DDDMM.MMM, where\n DDD is the longitude degrees, and MM.MMM are the minutes longitude.\n\n Return:\n Floating point latitude in decimal degrees.\n '
return (safe_float(field[0:3]) + (safe_float(field[3:]) / 60.0)) | def convert_longitude(field):
'Convert a longitude string to floating point decimal degrees.\n\n Args:\n field (str): Longitude string, expected to be formatted as DDDMM.MMM, where\n DDD is the longitude degrees, and MM.MMM are the minutes longitude.\n\n Return:\n Floating point latitude in decimal degrees.\n '
return (safe_float(field[0:3]) + (safe_float(field[3:]) / 60.0))<|docstring|>Convert a longitude string to floating point decimal degrees.
Args:
field (str): Longitude string, expected to be formatted as DDDMM.MMM, where
DDD is the longitude degrees, and MM.MMM are the minutes longitude.
Return:
Floating point latitude in decimal degrees.<|endoftext|> |
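Longitudes differ only in taking three degree digits (values run to 180): the standard GGA example field `"01131.000"` is 11 degrees, 31 minutes:

```python
def convert_longitude(field):
    # DDDMM.MMM -> decimal degrees (three degree digits for longitudes up to 180).
    return float(field[0:3]) + float(field[3:]) / 60.0

# "01131.000" means 11 degrees, 31.000 minutes.
assert abs(convert_longitude("01131.000") - (11 + 31.0 / 60.0)) < 1e-9
```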
48d07cf2ef70aeed3d2a73547f9a8c902c8f0b534d1e2263afacd60798952bd4 | def convert_time(nmea_utc):
'Extract time info from a NMEA UTC time string and use it to generate a UNIX epoch time.\n\n Time information (hours, minutes, seconds) is extracted from the given string and augmented\n with the date, which is taken from the current system time on the host computer (i.e. UTC now).\n The date ambiguity is resolved by adding a day to the current date if the host time is more than\n 12 hours behind the NMEA time and subtracting a day from the current date if the host time is\n more than 12 hours ahead of the NMEA time.\n\n Args:\n nmea_utc (str): NMEA UTC time string to convert. The expected format is HHMMSS[.SS] where\n HH is the number of hours [0,24), MM is the number of minutes [0,60),\n and SS[.SS] is the number of seconds [0,60) of the time in UTC.\n\n Return:\n tuple(int, int): 2-tuple of (unix seconds, nanoseconds) if the sentence contains valid time.\n tuple(float, float): 2-tuple of (NaN, NaN) if the sentence does not contain valid time.\n '
if ((not nmea_utc[0:2]) or (not nmea_utc[2:4]) or (not nmea_utc[4:6])):
return (float('NaN'), float('NaN'))
utc_time = datetime.datetime.utcnow()
hours = safe_int(nmea_utc[0:2])
minutes = safe_int(nmea_utc[2:4])
seconds = safe_int(nmea_utc[4:6])
nanosecs = 0
if (len(nmea_utc) > 7):
nanosecs = (safe_int(nmea_utc[7:]) * pow(10, (9 - len(nmea_utc[7:]))))
day_offset = safe_int(((utc_time.hour - hours) / 12.0))
utc_time += datetime.timedelta(day_offset)
utc_time.replace(hour=hours, minute=minutes, second=seconds)
unix_secs = calendar.timegm(utc_time.timetuple())
return (unix_secs, nanosecs) | Extract time info from a NMEA UTC time string and use it to generate a UNIX epoch time.
Time information (hours, minutes, seconds) is extracted from the given string and augmented
with the date, which is taken from the current system time on the host computer (i.e. UTC now).
The date ambiguity is resolved by adding a day to the current date if the host time is more than
12 hours behind the NMEA time and subtracting a day from the current date if the host time is
more than 12 hours ahead of the NMEA time.
Args:
nmea_utc (str): NMEA UTC time string to convert. The expected format is HHMMSS[.SS] where
HH is the number of hours [0,24), MM is the number of minutes [0,60),
and SS[.SS] is the number of seconds [0,60) of the time in UTC.
Return:
tuple(int, int): 2-tuple of (unix seconds, nanoseconds) if the sentence contains valid time.
tuple(float, float): 2-tuple of (NaN, NaN) if the sentence does not contain valid time. | src/libnmea_navsat_driver/parser.py | convert_time | boschresearch/nmea_navsat_driver | 0 | python | def convert_time(nmea_utc):
'Extract time info from a NMEA UTC time string and use it to generate a UNIX epoch time.\n\n Time information (hours, minutes, seconds) is extracted from the given string and augmented\n with the date, which is taken from the current system time on the host computer (i.e. UTC now).\n The date ambiguity is resolved by adding a day to the current date if the host time is more than\n 12 hours behind the NMEA time and subtracting a day from the current date if the host time is\n more than 12 hours ahead of the NMEA time.\n\n Args:\n nmea_utc (str): NMEA UTC time string to convert. The expected format is HHMMSS[.SS] where\n HH is the number of hours [0,24), MM is the number of minutes [0,60),\n and SS[.SS] is the number of seconds [0,60) of the time in UTC.\n\n Return:\n tuple(int, int): 2-tuple of (unix seconds, nanoseconds) if the sentence contains valid time.\n tuple(float, float): 2-tuple of (NaN, NaN) if the sentence does not contain valid time.\n '
if ((not nmea_utc[0:2]) or (not nmea_utc[2:4]) or (not nmea_utc[4:6])):
return (float('NaN'), float('NaN'))
utc_time = datetime.datetime.utcnow()
hours = safe_int(nmea_utc[0:2])
minutes = safe_int(nmea_utc[2:4])
seconds = safe_int(nmea_utc[4:6])
nanosecs = 0
if (len(nmea_utc) > 7):
nanosecs = (safe_int(nmea_utc[7:]) * pow(10, (9 - len(nmea_utc[7:]))))
day_offset = safe_int(((utc_time.hour - hours) / 12.0))
utc_time += datetime.timedelta(day_offset)
utc_time.replace(hour=hours, minute=minutes, second=seconds)
unix_secs = calendar.timegm(utc_time.timetuple())
return (unix_secs, nanosecs) | def convert_time(nmea_utc):
'Extract time info from a NMEA UTC time string and use it to generate a UNIX epoch time.\n\n Time information (hours, minutes, seconds) is extracted from the given string and augmented\n with the date, which is taken from the current system time on the host computer (i.e. UTC now).\n The date ambiguity is resolved by adding a day to the current date if the host time is more than\n 12 hours behind the NMEA time and subtracting a day from the current date if the host time is\n more than 12 hours ahead of the NMEA time.\n\n Args:\n nmea_utc (str): NMEA UTC time string to convert. The expected format is HHMMSS[.SS] where\n HH is the number of hours [0,24), MM is the number of minutes [0,60),\n and SS[.SS] is the number of seconds [0,60) of the time in UTC.\n\n Return:\n tuple(int, int): 2-tuple of (unix seconds, nanoseconds) if the sentence contains valid time.\n tuple(float, float): 2-tuple of (NaN, NaN) if the sentence does not contain valid time.\n '
if ((not nmea_utc[0:2]) or (not nmea_utc[2:4]) or (not nmea_utc[4:6])):
return (float('NaN'), float('NaN'))
utc_time = datetime.datetime.utcnow()
hours = safe_int(nmea_utc[0:2])
minutes = safe_int(nmea_utc[2:4])
seconds = safe_int(nmea_utc[4:6])
nanosecs = 0
if (len(nmea_utc) > 7):
nanosecs = (safe_int(nmea_utc[7:]) * pow(10, (9 - len(nmea_utc[7:]))))
day_offset = safe_int(((utc_time.hour - hours) / 12.0))
utc_time += datetime.timedelta(day_offset)
utc_time.replace(hour=hours, minute=minutes, second=seconds)
unix_secs = calendar.timegm(utc_time.timetuple())
return (unix_secs, nanosecs)<|docstring|>Extract time info from a NMEA UTC time string and use it to generate a UNIX epoch time.
Time information (hours, minutes, seconds) is extracted from the given string and augmented
with the date, which is taken from the current system time on the host computer (i.e. UTC now).
The date ambiguity is resolved by adding a day to the current date if the host time is more than
12 hours behind the NMEA time and subtracting a day from the current date if the host time is
more than 12 hours ahead of the NMEA time.
Args:
nmea_utc (str): NMEA UTC time string to convert. The expected format is HHMMSS[.SS] where
HH is the number of hours [0,24), MM is the number of minutes [0,60),
and SS[.SS] is the number of seconds [0,60) of the time in UTC.
Return:
tuple(int, int): 2-tuple of (unix seconds, nanoseconds) if the sentence contains valid time.
tuple(float, float): 2-tuple of (NaN, NaN) if the sentence does not contain valid time.<|endoftext|> |
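The nanosecond extraction in `convert_time` deserves a worked example: in `HHMMSS.FF...` the decimal point sits at index 6, so the fractional digits start at index 7 and are scaled by `10 ** (9 - len(frac))` to land in nanoseconds. A sketch of just that step (helper name is my own):

```python
def fractional_seconds_to_ns(nmea_utc):
    # Mirrors the nanosecond extraction in convert_time: the fractional part
    # starts after the decimal point at index 6 (HHMMSS.FF...).
    frac = nmea_utc[7:]
    if not frac:
        return 0
    return int(frac) * 10 ** (9 - len(frac))

assert fractional_seconds_to_ns("123519.25") == 250_000_000
assert fractional_seconds_to_ns("123519.5") == 500_000_000
assert fractional_seconds_to_ns("123519") == 0  # no fractional part
```

The scaling by string length is what lets one- and two-digit fractional fields (`".5"` vs `".25"`) both come out in consistent nanosecond units.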
37209fcd10dd7b9869cd24a3970eca97e9329caec4be1bbf4521ea9df86fcf68 | def convert_time_rmc(date_str, time_str):
'Convert a NMEA RMC date string and time string to UNIX epoch time.\n\n Args:\n date_str (str): NMEA UTC date string to convert, formatted as DDMMYY.\n nmea_utc (str): NMEA UTC time string to convert. The expected format is HHMMSS.SS where\n HH is the number of hours [0,24), MM is the number of minutes [0,60),\n and SS.SS is the number of seconds [0,60) of the time in UTC.\n\n Return:\n tuple(int, int): 2-tuple of (unix seconds, nanoseconds) if the sentence contains valid time.\n tuple(float, float): 2-tuple of (NaN, NaN) if the sentence does not contain valid time.\n '
if ((not time_str[0:2]) or (not time_str[2:4]) or (not time_str[4:6])):
return (float('NaN'), float('NaN'))
pc_year = datetime.date.today().year
'\n example 1: utc_year = 99, pc_year = 2100\n years = 2100 + safe_int((2100 % 100 - 99) / 50.0) = 2099\n example 2: utc_year = 00, pc_year = 2099\n years = 2099 + safe_int((2099 % 100 - 00) / 50.0) = 2100\n '
utc_year = safe_int(date_str[4:6])
years = (pc_year + safe_int((((pc_year % 100) - utc_year) / 50.0)))
months = safe_int(date_str[2:4])
days = safe_int(date_str[0:2])
hours = safe_int(time_str[0:2])
minutes = safe_int(time_str[2:4])
seconds = safe_int(time_str[4:6])
nanosecs = (safe_int(time_str[7:]) * pow(10, (9 - len(time_str[7:]))))
unix_secs = calendar.timegm((years, months, days, hours, minutes, seconds))
return (unix_secs, nanosecs) | Convert a NMEA RMC date string and time string to UNIX epoch time.
Args:
date_str (str): NMEA UTC date string to convert, formatted as DDMMYY.
nmea_utc (str): NMEA UTC time string to convert. The expected format is HHMMSS.SS where
HH is the number of hours [0,24), MM is the number of minutes [0,60),
and SS.SS is the number of seconds [0,60) of the time in UTC.
Return:
tuple(int, int): 2-tuple of (unix seconds, nanoseconds) if the sentence contains valid time.
tuple(float, float): 2-tuple of (NaN, NaN) if the sentence does not contain valid time. | src/libnmea_navsat_driver/parser.py | convert_time_rmc | boschresearch/nmea_navsat_driver | 0 | python | def convert_time_rmc(date_str, time_str):
'Convert a NMEA RMC date string and time string to UNIX epoch time.\n\n Args:\n date_str (str): NMEA UTC date string to convert, formatted as DDMMYY.\n nmea_utc (str): NMEA UTC time string to convert. The expected format is HHMMSS.SS where\n HH is the number of hours [0,24), MM is the number of minutes [0,60),\n and SS.SS is the number of seconds [0,60) of the time in UTC.\n\n Return:\n tuple(int, int): 2-tuple of (unix seconds, nanoseconds) if the sentence contains valid time.\n tuple(float, float): 2-tuple of (NaN, NaN) if the sentence does not contain valid time.\n '
if ((not time_str[0:2]) or (not time_str[2:4]) or (not time_str[4:6])):
return (float('NaN'), float('NaN'))
pc_year = datetime.date.today().year
'\n example 1: utc_year = 99, pc_year = 2100\n years = 2100 + safe_int((2100 % 100 - 99) / 50.0) = 2099\n example 2: utc_year = 00, pc_year = 2099\n years = 2099 + safe_int((2099 % 100 - 00) / 50.0) = 2100\n '
utc_year = safe_int(date_str[4:6])
years = (pc_year + safe_int((((pc_year % 100) - utc_year) / 50.0)))
months = safe_int(date_str[2:4])
days = safe_int(date_str[0:2])
hours = safe_int(time_str[0:2])
minutes = safe_int(time_str[2:4])
seconds = safe_int(time_str[4:6])
nanosecs = (safe_int(time_str[7:]) * pow(10, (9 - len(time_str[7:]))))
unix_secs = calendar.timegm((years, months, days, hours, minutes, seconds))
return (unix_secs, nanosecs) | def convert_time_rmc(date_str, time_str):
'Convert a NMEA RMC date string and time string to UNIX epoch time.\n\n Args:\n date_str (str): NMEA UTC date string to convert, formatted as DDMMYY.\n nmea_utc (str): NMEA UTC time string to convert. The expected format is HHMMSS.SS where\n HH is the number of hours [0,24), MM is the number of minutes [0,60),\n and SS.SS is the number of seconds [0,60) of the time in UTC.\n\n Return:\n tuple(int, int): 2-tuple of (unix seconds, nanoseconds) if the sentence contains valid time.\n tuple(float, float): 2-tuple of (NaN, NaN) if the sentence does not contain valid time.\n '
if ((not time_str[0:2]) or (not time_str[2:4]) or (not time_str[4:6])):
return (float('NaN'), float('NaN'))
pc_year = datetime.date.today().year
'\n example 1: utc_year = 99, pc_year = 2100\n years = 2100 + safe_int((2100 % 100 - 99) / 50.0) = 2099\n example 2: utc_year = 00, pc_year = 2099\n years = 2099 + safe_int((2099 % 100 - 00) / 50.0) = 2100\n '
utc_year = safe_int(date_str[4:6])
years = (pc_year + safe_int((((pc_year % 100) - utc_year) / 50.0)))
months = safe_int(date_str[2:4])
days = safe_int(date_str[0:2])
hours = safe_int(time_str[0:2])
minutes = safe_int(time_str[2:4])
seconds = safe_int(time_str[4:6])
nanosecs = (safe_int(time_str[7:]) * pow(10, (9 - len(time_str[7:]))))
unix_secs = calendar.timegm((years, months, days, hours, minutes, seconds))
return (unix_secs, nanosecs)<|docstring|>Convert a NMEA RMC date string and time string to UNIX epoch time.
Args:
date_str (str): NMEA UTC date string to convert, formatted as DDMMYY.
time_str (str): NMEA UTC time string to convert. The expected format is HHMMSS.SS where
HH is the number of hours [0,24), MM is the number of minutes [0,60),
and SS.SS is the number of seconds [0,60) of the time in UTC.
Return:
tuple(int, int): 2-tuple of (unix seconds, nanoseconds) if the sentence contains valid time.
tuple(float, float): 2-tuple of (NaN, NaN) if the sentence does not contain valid time.<|endoftext|> |
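The century-rollover examples in the record's inline comment can be checked with a small standalone sketch. Note that `safe_int` below is a local stand-in for the driver's helper (assumed behavior: `int()` with 0 on failure), and `rmc_to_epoch` is a minimal re-derivation of the happy path, not the driver's exact function:

```python
import calendar
import datetime

def safe_int(field):
    # Stand-in for the driver's safe_int (assumption): int() or 0 on failure.
    try:
        return int(field)
    except (ValueError, TypeError):
        return 0

def infer_full_year(pc_year, utc_year):
    # Choose the century so the two-digit NMEA year lands within ~50 years of
    # the PC clock; int() truncation toward zero handles rollover both ways,
    # exactly as in the record's inline comment.
    return pc_year + safe_int((pc_year % 100 - utc_year) / 50.0)

def rmc_to_epoch(date_str, time_str, pc_year=None):
    # Minimal version of convert_time_rmc for a sentence with valid time.
    if pc_year is None:
        pc_year = datetime.date.today().year
    years = infer_full_year(pc_year, safe_int(date_str[4:6]))
    months, days = safe_int(date_str[2:4]), safe_int(date_str[0:2])
    hours, minutes, seconds = (safe_int(time_str[0:2]),
                               safe_int(time_str[2:4]),
                               safe_int(time_str[4:6]))
    frac = time_str[7:]  # digits after 'HHMMSS.'
    nanosecs = safe_int(frac) * pow(10, 9 - len(frac))
    return calendar.timegm((years, months, days, hours, minutes, seconds)), nanosecs
```

With `pc_year=2100` and a two-digit year of 99 this infers 2099; with `pc_year=2099` and 00 it infers 2100, matching both worked examples.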
bbe7f272b576e2932b288186c879ec529e710b1b0690c51c19db531c8775cb8a | def convert_status_flag(status_flag):
'Convert a NMEA RMB/RMC status flag to bool.\n\n Args:\n status_flag (str): NMEA status flag, which should be "A" or "V"\n\n Return:\n True if the status_flag is "A" for Active.\n '
if (status_flag == 'A'):
return True
elif (status_flag == 'V'):
return False
else:
return False | Convert a NMEA RMB/RMC status flag to bool.
Args:
status_flag (str): NMEA status flag, which should be "A" or "V"
Return:
True if the status_flag is "A" for Active. | src/libnmea_navsat_driver/parser.py | convert_status_flag | boschresearch/nmea_navsat_driver | 0 | python | def convert_status_flag(status_flag):
'Convert a NMEA RMB/RMC status flag to bool.\n\n Args:\n status_flag (str): NMEA status flag, which should be "A" or "V"\n\n Return:\n True if the status_flag is "A" for Active.\n '
if (status_flag == 'A'):
return True
elif (status_flag == 'V'):
return False
else:
return False | def convert_status_flag(status_flag):
'Convert a NMEA RMB/RMC status flag to bool.\n\n Args:\n status_flag (str): NMEA status flag, which should be "A" or "V"\n\n Return:\n True if the status_flag is "A" for Active.\n '
if (status_flag == 'A'):
return True
elif (status_flag == 'V'):
return False
else:
return False<|docstring|>Convert a NMEA RMB/RMC status flag to bool.
Args:
status_flag (str): NMEA status flag, which should be "A" or "V"
Return:
True if the status_flag is "A" for Active.<|endoftext|> |
bfe73525285a68917e3424b8e443088ed30bb8e1d62edfc9d2b8507a30f1971b | def convert_knots_to_mps(knots):
'Convert a speed in knots to meters per second.\n\n Args:\n knots (float, int, or str): Speed in knots.\n\n Return:\n The value of safe_float(knots) converted from knots to meters/second.\n '
return (safe_float(knots) * 0.514444444444) | Convert a speed in knots to meters per second.
Args:
knots (float, int, or str): Speed in knots.
Return:
The value of safe_float(knots) converted from knots to meters/second. | src/libnmea_navsat_driver/parser.py | convert_knots_to_mps | boschresearch/nmea_navsat_driver | 0 | python | def convert_knots_to_mps(knots):
'Convert a speed in knots to meters per second.\n\n Args:\n knots (float, int, or str): Speed in knots.\n\n Return:\n The value of safe_float(knots) converted from knots to meters/second.\n '
return (safe_float(knots) * 0.514444444444) | def convert_knots_to_mps(knots):
'Convert a speed in knots to meters per second.\n\n Args:\n knots (float, int, or str): Speed in knots.\n\n Return:\n The value of safe_float(knots) converted from knots to meters/second.\n '
return (safe_float(knots) * 0.514444444444)<|docstring|>Convert a speed in knots to meters per second.
Args:
knots (float, int, or str): Speed in knots.
Return:
The value of safe_float(knots) converted from knots to meters/second.<|endoftext|> |
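The magic constant `0.514444444444` in this record is a truncated form of the exact definition of one international knot, 1852 m per hour. A sketch making the derivation explicit (plain `float()` stands in for the driver's `safe_float` helper):

```python
# One international knot is exactly 1852 m per hour.
KNOTS_TO_MPS = 1852.0 / 3600.0  # ~0.514444444444...

def knots_to_mps(knots):
    # float() here is a stand-in for the driver's safe_float (assumption).
    return float(knots) * KNOTS_TO_MPS
```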
ce9c57f8bfc642eef9eea502515e43ae01730ee02b28af261975fb08e8be201f | def convert_deg_to_rads(degs):
"Convert an angle in degrees to radians.\n\n This wrapper is needed because math.radians doesn't accept non-numeric inputs.\n\n Args:\n degs (float, int, or str): Angle in degrees\n\n Return:\n The value of safe_float(degs) converted from degrees to radians.\n "
return math.radians(safe_float(degs)) | Convert an angle in degrees to radians.
This wrapper is needed because math.radians doesn't accept non-numeric inputs.
Args:
degs (float, int, or str): Angle in degrees
Return:
The value of safe_float(degs) converted from degrees to radians. | src/libnmea_navsat_driver/parser.py | convert_deg_to_rads | boschresearch/nmea_navsat_driver | 0 | python | def convert_deg_to_rads(degs):
"Convert an angle in degrees to radians.\n\n This wrapper is needed because math.radians doesn't accept non-numeric inputs.\n\n Args:\n degs (float, int, or str): Angle in degrees\n\n Return:\n The value of safe_float(degs) converted from degrees to radians.\n "
return math.radians(safe_float(degs)) | def convert_deg_to_rads(degs):
"Convert an angle in degrees to radians.\n\n This wrapper is needed because math.radians doesn't accept non-numeric inputs.\n\n Args:\n degs (float, int, or str): Angle in degrees\n\n Return:\n The value of safe_float(degs) converted from degrees to radians.\n "
return math.radians(safe_float(degs))<|docstring|>Convert an angle in degrees to radians.
This wrapper is needed because math.radians doesn't accept non-numeric inputs.
Args:
degs (float, int, or str): Angle in degrees
Return:
The value of safe_float(degs) converted from degrees to radians.<|endoftext|> |
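The reason for the wrapper is that `math.radians` raises on non-numeric input, so the NMEA field is coerced first. A sketch with a stand-in `safe_float` (assumed behavior: `float()` with NaN on failure, so an empty field degrades to NaN instead of raising):

```python
import math

def safe_float(field):
    # Stand-in for the driver's safe_float helper (assumption).
    try:
        return float(field)
    except (ValueError, TypeError):
        return float('nan')

def deg_to_rads(degs):
    # math.radians propagates NaN, so bad fields stay NaN after conversion.
    return math.radians(safe_float(degs))
```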
2bb996bca83a946e811434b373350beb208c3442168f5bb1b666a684de7618ab | def parse_nmea_sentence(nmea_sentence):
'Parse a NMEA sentence string into a dictionary.\n\n Args:\n nmea_sentence (str): A single NMEA sentence of one of the types in parse_maps.\n\n Return:\n A dict mapping string field names to values for each field in the NMEA sentence or\n False if the sentence could not be parsed.\n '
if (not re.match('^\\$(GP|GN|GL|GB|IN|PTNL|PUBX).*\\*[0-9A-Fa-f]{2}$', nmea_sentence)):
logger.debug(("Regex didn't match, sentence not valid NMEA? Sentence was: %s" % repr(nmea_sentence)))
return False
fields = [field.strip(',') for field in nmea_sentence[:(- 3)].split(',')]
if (fields[0][1:] not in ['PTNL', 'PUBX']):
sentence_type = fields[0][3:]
else:
sentence_type = ((fields[0][1:] + ',') + fields[1])
if (sentence_type not in parse_maps):
logger.debug(('Sentence type %s not in parse map, ignoring.' % repr(sentence_type)))
return False
parse_map = parse_maps[sentence_type]
parsed_sentence = {}
for entry in parse_map:
parsed_sentence[entry[0]] = entry[1](fields[entry[2]])
if (sentence_type == 'RMC'):
parsed_sentence['utc_time'] = convert_time_rmc(fields[9], fields[1])
return {sentence_type: parsed_sentence} | Parse a NMEA sentence string into a dictionary.
Args:
nmea_sentence (str): A single NMEA sentence of one of the types in parse_maps.
Return:
A dict mapping string field names to values for each field in the NMEA sentence or
False if the sentence could not be parsed. | src/libnmea_navsat_driver/parser.py | parse_nmea_sentence | boschresearch/nmea_navsat_driver | 0 | python | def parse_nmea_sentence(nmea_sentence):
'Parse a NMEA sentence string into a dictionary.\n\n Args:\n nmea_sentence (str): A single NMEA sentence of one of the types in parse_maps.\n\n Return:\n A dict mapping string field names to values for each field in the NMEA sentence or\n False if the sentence could not be parsed.\n '
if (not re.match('^\\$(GP|GN|GL|GB|IN|PTNL|PUBX).*\\*[0-9A-Fa-f]{2}$', nmea_sentence)):
logger.debug(("Regex didn't match, sentence not valid NMEA? Sentence was: %s" % repr(nmea_sentence)))
return False
fields = [field.strip(',') for field in nmea_sentence[:(- 3)].split(',')]
if (fields[0][1:] not in ['PTNL', 'PUBX']):
sentence_type = fields[0][3:]
else:
sentence_type = ((fields[0][1:] + ',') + fields[1])
if (sentence_type not in parse_maps):
logger.debug(('Sentence type %s not in parse map, ignoring.' % repr(sentence_type)))
return False
parse_map = parse_maps[sentence_type]
parsed_sentence = {}
for entry in parse_map:
parsed_sentence[entry[0]] = entry[1](fields[entry[2]])
if (sentence_type == 'RMC'):
parsed_sentence['utc_time'] = convert_time_rmc(fields[9], fields[1])
return {sentence_type: parsed_sentence} | def parse_nmea_sentence(nmea_sentence):
'Parse a NMEA sentence string into a dictionary.\n\n Args:\n nmea_sentence (str): A single NMEA sentence of one of the types in parse_maps.\n\n Return:\n A dict mapping string field names to values for each field in the NMEA sentence or\n False if the sentence could not be parsed.\n '
if (not re.match('^\\$(GP|GN|GL|GB|IN|PTNL|PUBX).*\\*[0-9A-Fa-f]{2}$', nmea_sentence)):
logger.debug(("Regex didn't match, sentence not valid NMEA? Sentence was: %s" % repr(nmea_sentence)))
return False
fields = [field.strip(',') for field in nmea_sentence[:(- 3)].split(',')]
if (fields[0][1:] not in ['PTNL', 'PUBX']):
sentence_type = fields[0][3:]
else:
sentence_type = ((fields[0][1:] + ',') + fields[1])
if (sentence_type not in parse_maps):
logger.debug(('Sentence type %s not in parse map, ignoring.' % repr(sentence_type)))
return False
parse_map = parse_maps[sentence_type]
parsed_sentence = {}
for entry in parse_map:
parsed_sentence[entry[0]] = entry[1](fields[entry[2]])
if (sentence_type == 'RMC'):
parsed_sentence['utc_time'] = convert_time_rmc(fields[9], fields[1])
return {sentence_type: parsed_sentence}<|docstring|>Parse a NMEA sentence string into a dictionary.
Args:
nmea_sentence (str): A single NMEA sentence of one of the types in parse_maps.
Return:
A dict mapping string field names to values for each field in the NMEA sentence or
False if the sentence could not be parsed.<|endoftext|> |
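The two-step dispatch in this record (regex gate on talker ID plus `*hh` checksum suffix, then a per-type field map) can be sketched for the common case. The parse-map contents and field indices below are illustrative, not the driver's real `parse_maps`, and the PTNL/PUBX two-part sentence types are omitted for brevity:

```python
import re

# Hypothetical parse map: (output name, converter, source field index).
PARSE_MAPS = {
    'RMC': [('speed_knots', float, 7), ('true_course', float, 8)],
}

def parse_sentence(sentence):
    # Gate on supported talker IDs and a two-hex-digit checksum suffix,
    # as the driver's regex does (checksum value is not verified here).
    if not re.match(r'^\$(GP|GN|GL|GB|IN|PTNL|PUBX).*\*[0-9A-Fa-f]{2}$', sentence):
        return False
    fields = [f.strip(',') for f in sentence[:-3].split(',')]  # drop '*hh'
    sentence_type = fields[0][3:]  # strip '$' plus two-letter talker, e.g. '$GP'
    if sentence_type not in PARSE_MAPS:
        return False
    parsed = {name: conv(fields[idx]) for name, conv, idx in PARSE_MAPS[sentence_type]}
    return {sentence_type: parsed}
```

Feeding it a standard RMC sentence yields a dict keyed by sentence type, mirroring the driver's return shape; anything failing the regex or outside the map returns `False`.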
93ff5f4f8ac4cf97a894e122fb91c5afbec4e989ab637b17c0677981d7f83861 | def reset_instrument(self, mount: Optional[Mount]=None) -> None:
'\n Reset the internal state of a pipette by its mount, without doing\n any lower level reconfiguration. This is useful to make sure that no\n settings changes from a protocol persist.\n\n mount: If specified, reset that mount. If not specified, reset both\n '
... | Reset the internal state of a pipette by its mount, without doing
any lower level reconfiguration. This is useful to make sure that no
settings changes from a protocol persist.
mount: If specified, reset that mount. If not specified, reset both | api/src/opentrons/hardware_control/protocols/instrument_configurer.py | reset_instrument | anuwrag/opentrons | 2 | python | def reset_instrument(self, mount: Optional[Mount]=None) -> None:
'\n Reset the internal state of a pipette by its mount, without doing\n any lower level reconfiguration. This is useful to make sure that no\n settings changes from a protocol persist.\n\n mount: If specified, reset that mount. If not specified, reset both\n '
... | def reset_instrument(self, mount: Optional[Mount]=None) -> None:
'\n Reset the internal state of a pipette by its mount, without doing\n any lower level reconfiguration. This is useful to make sure that no\n settings changes from a protocol persist.\n\n mount: If specified, reset that mount. If not specified, reset both\n '
...<|docstring|>Reset the internal state of a pipette by its mount, without doing
any lower level reconfiguration. This is useful to make sure that no
settings changes from a protocol persist.
mount: If specified, reset that mount. If not specified, reset both<|endoftext|> |
280030960aa118c2ca3cb6d2fc0e16bb5ba16473e6985f4d48a407d16d83068e | async def cache_instruments(self, require: Optional[Dict[(Mount, PipetteName)]]=None) -> None:
'\n Scan the attached instruments, take necessary configuration actions,\n and set up hardware controller internal state if necessary.\n\n require: If specified, the require should be a dict of mounts to\n instrument names describing the instruments expected to be\n present. This can save a subsequent call of attached_instruments\n and also serves as the hook for the hardware simulator to decide\n what is attached.\n raises RuntimeError: If an instrument is expected but not found.\n\n This function will only change the things that need to be changed.\n If the same pipette (by serial) or the same lack of pipette is\n observed on a mount before and after the scan, no action will be\n taken. That makes this function appropriate for setting up the\n robot for operation, but not for making sure that any previous\n settings changes have been reset. For the latter use case, use\n reset_instrument.\n '
... | Scan the attached instruments, take necessary configuration actions,
and set up hardware controller internal state if necessary.
require: If specified, the require should be a dict of mounts to
instrument names describing the instruments expected to be
present. This can save a subsequent call of attached_instruments
and also serves as the hook for the hardware simulator to decide
what is attached.
raises RuntimeError: If an instrument is expected but not found.
This function will only change the things that need to be changed.
If the same pipette (by serial) or the same lack of pipette is
observed on a mount before and after the scan, no action will be
taken. That makes this function appropriate for setting up the
robot for operation, but not for making sure that any previous
settings changes have been reset. For the latter use case, use
reset_instrument. | api/src/opentrons/hardware_control/protocols/instrument_configurer.py | cache_instruments | anuwrag/opentrons | 2 | python | async def cache_instruments(self, require: Optional[Dict[(Mount, PipetteName)]]=None) -> None:
'\n Scan the attached instruments, take necessary configuration actions,\n and set up hardware controller internal state if necessary.\n\n require: If specified, the require should be a dict of mounts to\n instrument names describing the instruments expected to be\n present. This can save a subsequent call of attached_instruments\n and also serves as the hook for the hardware simulator to decide\n what is attached.\n raises RuntimeError: If an instrument is expected but not found.\n\n This function will only change the things that need to be changed.\n If the same pipette (by serial) or the same lack of pipette is\n observed on a mount before and after the scan, no action will be\n taken. That makes this function appropriate for setting up the\n robot for operation, but not for making sure that any previous\n settings changes have been reset. For the latter use case, use\n reset_instrument.\n '
... | async def cache_instruments(self, require: Optional[Dict[(Mount, PipetteName)]]=None) -> None:
'\n Scan the attached instruments, take necessary configuration actions,\n and set up hardware controller internal state if necessary.\n\n require: If specified, the require should be a dict of mounts to\n instrument names describing the instruments expected to be\n present. This can save a subsequent call of attached_instruments\n and also serves as the hook for the hardware simulator to decide\n what is attached.\n raises RuntimeError: If an instrument is expected but not found.\n\n This function will only change the things that need to be changed.\n If the same pipette (by serial) or the same lack of pipette is\n observed on a mount before and after the scan, no action will be\n taken. That makes this function appropriate for setting up the\n robot for operation, but not for making sure that any previous\n settings changes have been reset. For the latter use case, use\n reset_instrument.\n '
...<|docstring|>Scan the attached instruments, take necessary configuration actions,
and set up hardware controller internal state if necessary.
require: If specified, the require should be a dict of mounts to
instrument names describing the instruments expected to be
present. This can save a subsequent call of attached_instruments
and also serves as the hook for the hardware simulator to decide
what is attached.
raises RuntimeError: If an instrument is expected but not found.
This function will only change the things that need to be changed.
If the same pipette (by serial) or the same lack of pipette is
observed on a mount before and after the scan, no action will be
taken. That makes this function appropriate for setting up the
robot for operation, but not for making sure that any previous
settings changes have been reset. For the latter use case, use
reset_instrument.<|endoftext|> |
b396027069b1a6cff30fe0bb64ed6f153308a66b4936e142da53101d462f6ee8 | def get_attached_instruments(self) -> Dict[(Mount, PipetteDict)]:
"Get the status dicts of the cached attached instruments.\n\n Also available as :py:meth:`get_attached_instruments`.\n\n This returns a dictified version of the\n hardware_control.pipette.Pipette as a dict keyed by\n the Mount to which the pipette is attached.\n If no pipette is attached on a given mount, the mount key will\n still be present but will have the value ``None``.\n\n Note that on the OT-2 this is only a query of a cached value;\n to actively scan for changes, use cache_instruments`. This process\n deactivates the OT-2's motors and should be used sparingly.\n "
... | Get the status dicts of the cached attached instruments.
Also available as :py:meth:`get_attached_instruments`.
This returns a dictified version of the
hardware_control.pipette.Pipette as a dict keyed by
the Mount to which the pipette is attached.
If no pipette is attached on a given mount, the mount key will
still be present but will have the value ``None``.
Note that on the OT-2 this is only a query of a cached value;
to actively scan for changes, use cache_instruments`. This process
deactivates the OT-2's motors and should be used sparingly. | api/src/opentrons/hardware_control/protocols/instrument_configurer.py | get_attached_instruments | anuwrag/opentrons | 2 | python | def get_attached_instruments(self) -> Dict[(Mount, PipetteDict)]:
"Get the status dicts of the cached attached instruments.\n\n Also available as :py:meth:`get_attached_instruments`.\n\n This returns a dictified version of the\n hardware_control.pipette.Pipette as a dict keyed by\n the Mount to which the pipette is attached.\n If no pipette is attached on a given mount, the mount key will\n still be present but will have the value ``None``.\n\n Note that on the OT-2 this is only a query of a cached value;\n to actively scan for changes, use cache_instruments`. This process\n deactivates the OT-2's motors and should be used sparingly.\n "
... | def get_attached_instruments(self) -> Dict[(Mount, PipetteDict)]:
"Get the status dicts of the cached attached instruments.\n\n Also available as :py:meth:`get_attached_instruments`.\n\n This returns a dictified version of the\n hardware_control.pipette.Pipette as a dict keyed by\n the Mount to which the pipette is attached.\n If no pipette is attached on a given mount, the mount key will\n still be present but will have the value ``None``.\n\n Note that on the OT-2 this is only a query of a cached value;\n to actively scan for changes, use cache_instruments`. This process\n deactivates the OT-2's motors and should be used sparingly.\n "
...<|docstring|>Get the status dicts of the cached attached instruments.
Also available as :py:meth:`get_attached_instruments`.
This returns a dictified version of the
hardware_control.pipette.Pipette as a dict keyed by
the Mount to which the pipette is attached.
If no pipette is attached on a given mount, the mount key will
still be present but will have the value ``None``.
Note that on the OT-2 this is only a query of a cached value;
to actively scan for changes, use cache_instruments`. This process
deactivates the OT-2's motors and should be used sparingly.<|endoftext|> |
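The cached-dict contract described in this record (every mount key present, `None` for an empty mount, with `require` acting as the simulator's "what is attached" hook) can be illustrated with a hypothetical in-memory store. This is not the real Opentrons simulator, and `Mount` here is a local stand-in enum, not `opentrons.types.Mount`:

```python
from enum import Enum
from typing import Dict, Optional

class Mount(Enum):  # stand-in for opentrons.types.Mount
    LEFT = 'left'
    RIGHT = 'right'

class FakeInstrumentStore:
    """Hypothetical cache illustrating the documented contract: every mount
    key is always present, with None when nothing is attached."""

    def __init__(self):
        self._attached: Dict[Mount, Optional[dict]] = {m: None for m in Mount}

    def cache_instruments(self, require=None):
        # In simulation, 'require' decides what counts as attached.
        for mount, name in (require or {}).items():
            self._attached[mount] = {'name': name}

    def get_attached_instruments(self):
        # Query of the cached value only; no hardware scan happens here.
        return dict(self._attached)
```

A caller can therefore always index by mount and check for `None` rather than testing key presence, which is the point of the contract.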
a3335cf478ebb0b86d948f9ad9126c116fe14af7c248823976453fef3beef483 | def get_attached_instrument(self, mount: Mount) -> PipetteDict:
'Get the status dict of a single cached instrument.\n\n Return values and caveats are as get_attached_instruments.\n '
... | Get the status dict of a single cached instrument.
Return values and caveats are as get_attached_instruments. | api/src/opentrons/hardware_control/protocols/instrument_configurer.py | get_attached_instrument | anuwrag/opentrons | 2 | python | def get_attached_instrument(self, mount: Mount) -> PipetteDict:
'Get the status dict of a single cached instrument.\n\n Return values and caveats are as get_attached_instruments.\n '
... | def get_attached_instrument(self, mount: Mount) -> PipetteDict:
'Get the status dict of a single cached instrument.\n\n Return values and caveats are as get_attached_instruments.\n '
...<|docstring|>Get the status dict of a single cached instrument.
Return values and caveats are as get_attached_instruments.<|endoftext|> |
0521c54afecd53132aad7170c08edd1b498bbbd880f64148d3e85755e2734f2e | def calibrate_plunger(self, mount: Mount, top: Optional[float]=None, bottom: Optional[float]=None, blow_out: Optional[float]=None, drop_tip: Optional[float]=None) -> None:
"\n Set calibration values for the pipette plunger.\n This can be called multiple times as the user sets each value,\n or you can set them all at once.\n :param top: Touching but not engaging the plunger.\n :param bottom: Must be above the pipette's physical hard-stop, while\n still leaving enough room for 'blow_out'\n :param blow_out: Plunger is pushed down enough to expel all liquids.\n :param drop_tip: Position that causes the tip to be released from the\n pipette\n "
... | Set calibration values for the pipette plunger.
This can be called multiple times as the user sets each value,
or you can set them all at once.
:param top: Touching but not engaging the plunger.
:param bottom: Must be above the pipette's physical hard-stop, while
still leaving enough room for 'blow_out'
:param blow_out: Plunger is pushed down enough to expel all liquids.
:param drop_tip: Position that causes the tip to be released from the
pipette | api/src/opentrons/hardware_control/protocols/instrument_configurer.py | calibrate_plunger | anuwrag/opentrons | 2 | python | def calibrate_plunger(self, mount: Mount, top: Optional[float]=None, bottom: Optional[float]=None, blow_out: Optional[float]=None, drop_tip: Optional[float]=None) -> None:
"\n Set calibration values for the pipette plunger.\n This can be called multiple times as the user sets each value,\n or you can set them all at once.\n :param top: Touching but not engaging the plunger.\n :param bottom: Must be above the pipette's physical hard-stop, while\n still leaving enough room for 'blow_out'\n :param blow_out: Plunger is pushed down enough to expel all liquids.\n :param drop_tip: Position that causes the tip to be released from the\n pipette\n "
... | def calibrate_plunger(self, mount: Mount, top: Optional[float]=None, bottom: Optional[float]=None, blow_out: Optional[float]=None, drop_tip: Optional[float]=None) -> None:
"\n Set calibration values for the pipette plunger.\n This can be called multiple times as the user sets each value,\n or you can set them all at once.\n :param top: Touching but not engaging the plunger.\n :param bottom: Must be above the pipette's physical hard-stop, while\n still leaving enough room for 'blow_out'\n :param blow_out: Plunger is pushed down enough to expel all liquids.\n :param drop_tip: Position that causes the tip to be released from the\n pipette\n "
...<|docstring|>Set calibration values for the pipette plunger.
This can be called multiple times as the user sets each value,
or you can set them all at once.
:param top: Touching but not engaging the plunger.
:param bottom: Must be above the pipette's physical hard-stop, while
still leaving enough room for 'blow_out'
:param blow_out: Plunger is pushed down enough to expel all liquids.
:param drop_tip: Position that causes the tip to be released from the
pipette<|endoftext|> |
ae916d0381841154e0600315755cc0fdbed6cae8279db56eb407a1886c586595 | def set_flow_rate(self, mount: Mount, aspirate: Optional[float]=None, dispense: Optional[float]=None, blow_out: Optional[float]=None) -> None:
"Set a pipette's rate of liquid handling in flow rate units"
... | Set a pipette's rate of liquid handling in flow rate units | api/src/opentrons/hardware_control/protocols/instrument_configurer.py | set_flow_rate | anuwrag/opentrons | 2 | python | def set_flow_rate(self, mount: Mount, aspirate: Optional[float]=None, dispense: Optional[float]=None, blow_out: Optional[float]=None) -> None:
... | def set_flow_rate(self, mount: Mount, aspirate: Optional[float]=None, dispense: Optional[float]=None, blow_out: Optional[float]=None) -> None:
...<|docstring|>Set a pipette's rate of liquid handling in flow rate units<|endoftext|> |
f694da4cb5b8ac0450f4ef994f184cbce335d0c41776cf8555d23669873bae9e | def set_pipette_speed(self, mount: Mount, aspirate: Optional[float]=None, dispense: Optional[float]=None, blow_out: Optional[float]=None) -> None:
"Set a pipette's rate of liquid handling in linear speed units."
... | Set a pipette's rate of liquid handling in linear speed units. | api/src/opentrons/hardware_control/protocols/instrument_configurer.py | set_pipette_speed | anuwrag/opentrons | 2 | python | def set_pipette_speed(self, mount: Mount, aspirate: Optional[float]=None, dispense: Optional[float]=None, blow_out: Optional[float]=None) -> None:
... | def set_pipette_speed(self, mount: Mount, aspirate: Optional[float]=None, dispense: Optional[float]=None, blow_out: Optional[float]=None) -> None:
...<|docstring|>Set a pipette's rate of liquid handling in linear speed units.<|endoftext|> |
023af9c36535f6e1485a6176b0e3443cf84cba0c335a244b71b5a419bd941e13 | def get_instrument_max_height(self, mount: Mount, critical_point: Optional[CriticalPoint]=None) -> float:
'Return max achievable height of the attached instrument\n based on the current critical point\n '
... | Return max achievable height of the attached instrument
based on the current critical point | api/src/opentrons/hardware_control/protocols/instrument_configurer.py | get_instrument_max_height | anuwrag/opentrons | 2 | python | def get_instrument_max_height(self, mount: Mount, critical_point: Optional[CriticalPoint]=None) -> float:
'Return max achievable height of the attached instrument\n based on the current critical point\n '
... | def get_instrument_max_height(self, mount: Mount, critical_point: Optional[CriticalPoint]=None) -> float:
'Return max achievable height of the attached instrument\n based on the current critical point\n '
...<|docstring|>Return max achievable height of the attached instrument
based on the current critical point<|endoftext|> |
c2dd839019ddf9542b610e65649148522687cd86fa38bfb5572c6af7fa9ff55f | async def add_tip(self, mount: Mount, tip_length: float) -> None:
'Inform the hardware that a tip is now attached to a pipette.\n\n This changes the critical point of the pipette to make sure that\n the end of the tip is what moves around, and allows liquid handling.\n '
... | Inform the hardware that a tip is now attached to a pipette.
This changes the critical point of the pipette to make sure that
the end of the tip is what moves around, and allows liquid handling. | api/src/opentrons/hardware_control/protocols/instrument_configurer.py | add_tip | anuwrag/opentrons | 2 | python | async def add_tip(self, mount: Mount, tip_length: float) -> None:
'Inform the hardware that a tip is now attached to a pipette.\n\n This changes the critical point of the pipette to make sure that\n the end of the tip is what moves around, and allows liquid handling.\n '
... | async def add_tip(self, mount: Mount, tip_length: float) -> None:
'Inform the hardware that a tip is now attached to a pipette.\n\n This changes the critical point of the pipette to make sure that\n the end of the tip is what moves around, and allows liquid handling.\n '
...<|docstring|>Inform the hardware that a tip is now attached to a pipette.
This changes the critical point of the pipette to make sure that
the end of the tip is what moves around, and allows liquid handling.<|endoftext|> |
10ea7ee12a6c837e7df7d03d9b8b670fa7d06b0b45bb40190fb8f3cdd5bd6138 | async def remove_tip(self, mount: Mount) -> None:
'Inform the hardware that a tip is no longer attached to a pipette.\n\n This changes the critical point of the system to the end of the\n nozzle and prevents further liquid handling commands.\n '
... | Inform the hardware that a tip is no longer attached to a pipette.
This changes the critical point of the system to the end of the
nozzle and prevents further liquid handling commands. | api/src/opentrons/hardware_control/protocols/instrument_configurer.py | remove_tip | anuwrag/opentrons | 2 | python | async def remove_tip(self, mount: Mount) -> None:
'Inform the hardware that a tip is no longer attached to a pipette.\n\n This changes the critical point of the system to the end of the\n nozzle and prevents further liquid handling commands.\n '
... | async def remove_tip(self, mount: Mount) -> None:
'Inform the hardware that a tip is no longer attached to a pipette.\n\n This changes the critical point of the system to the end of the\n nozzle and prevents further liquid handling commands.\n '
...<|docstring|>Inform the hardware that a tip is no longer attached to a pipette.
This changes the critical point of the system to the end of the
nozzle and prevents further liquid handling commands.<|endoftext|> |
43a41d179fc26bd016118bdfc88cb13bdaa4d6e83685acab9d7781232ea645cd | api/src/opentrons/hardware_control/protocols/instrument_configurer.py | set_current_tiprack_diameter | anuwrag/opentrons | 2 | python

def set_current_tiprack_diameter(self, mount: Mount, tiprack_diameter: float) -> None:
    """Inform the hardware of the diameter of the tiprack.

    This drives the magnitude of the shake commanded for pipettes that need
    a shake after dropping or picking up tips.
    """
    ...
29f37b92c8416498348fd9fa9a84a5b1debbeed7bb9b67c1831b31dfc76abb03 | api/src/opentrons/hardware_control/protocols/instrument_configurer.py | set_working_volume | anuwrag/opentrons | 2 | python

def set_working_volume(self, mount: Mount, tip_volume: int) -> None:
    """Inform the hardware how much volume a pipette can aspirate.

    This will set the limit of aspiration for the pipette, and is
    necessary for backcompatibility.
    """
    ...
9546db12a8f289c27d6434caf4943bc928ca9043125e07ebecbea9588b8e024f | api/src/opentrons/hardware_control/protocols/instrument_configurer.py | hardware_instruments | anuwrag/opentrons | 2 | python

@property
def hardware_instruments(self) -> Dict[Mount, Optional[Pipette]]:
    """Return the underlying hardware representation of the instruments.

    This should rarely be used. Do not write new code that uses it.
    """
    ...
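The `...` bodies in the methods above are interface stubs; this style is typical of a structural interface such as `typing.Protocol`, where concrete hardware controllers and simulators supply the real implementations. A minimal sketch under that assumption, with simplified hypothetical types (`str` standing in for `Mount`, and `FakeHardware` invented for illustration):

```python
from typing import Dict, Protocol


class InstrumentConfigurer(Protocol):
    # Structural interface: any class with a matching method satisfies it,
    # no inheritance required. Signature simplified (str instead of Mount).
    def set_working_volume(self, mount: str, tip_volume: int) -> None:
        ...


class FakeHardware:
    # A concrete implementation, e.g. for tests or simulation.
    def __init__(self) -> None:
        self.volumes: Dict[str, int] = {}

    def set_working_volume(self, mount: str, tip_volume: int) -> None:
        self.volumes[mount] = tip_volume


def configure(hw: InstrumentConfigurer) -> None:
    # Callers depend only on the protocol, not on a concrete class.
    hw.set_working_volume("left", 300)


hw = FakeHardware()
configure(hw)
print(hw.volumes)  # {'left': 300}
```

Because the check is structural, `FakeHardware` never has to import or subclass the protocol, which keeps test doubles decoupled from the hardware-control package.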
93f87055911cf4f24fa4fb6bfb444bb66637dc7ba6e050201eeb1c59df51bdac | src/npi/utils/utils.py | longprint | akilby/npi | 0 | python

def longprint(df):
    """Prints out a full dataframe"""
    with pd.option_context('display.max_rows', None, 'display.max_columns', None):
        print(df)
ce24c8798b21cef7ad26c34ffaba0bd35290a9a44966d2d8710f760e7afbb878 | src/npi/utils/utils.py | urlread | akilby/npi | 0 | python

def urlread(url):
    """Prints all the text at specified URL"""
    response = requests.get(url)
    pprint(response.text)
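The function above relies on the third-party `requests` package. A standard-library-only sketch of the same idea (a hypothetical `urlread_stdlib`, not part of the repository) can use `urllib.request`; the demonstration below uses a `data:` URL so no network access is needed:

```python
from urllib.request import urlopen


def urlread_stdlib(url):
    """Stdlib-only sketch of urlread: fetch and print the text at a URL
    without the requests dependency. Assumes a text response."""
    with urlopen(url) as response:
        body = response.read().decode('utf-8', errors='replace')
    print(body)
    return body


# data: URLs are handled by urllib.request and avoid hitting the network.
urlread_stdlib('data:text/plain,hello%20world')  # prints "hello world"
```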
e2232c9e1538932bea0c9c63ec8a7fd5b3d9f91ec0e714af3143f4b4b9ea8df8 | src/npi/utils/utils.py | wget_checkfirst | akilby/npi | 0 | python

def wget_checkfirst(url, to_dir, destructive=False):
    """
    Wgets a download path to to_dir, checking first
    if that filename exists in that path

    Does not overwrite unless destructive=True
    """
    filename = wget.detect_filename(url)
    destination_path = os.path.join(to_dir, filename)
    if os.path.isfile(destination_path) and not destructive:
        print('File already downloaded to: %s' % destination_path)
    else:
        print('WGetting url: %s' % url)
        try:
            wget.download(url, out=to_dir)
        except urllib.error.HTTPError:
            print('Failed')
            return None
    return destination_path
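The check-first pattern above can be exercised without the network by factoring the expensive step out into a caller-supplied callable. The sketch below (hypothetical `fetch_checkfirst`, with `fetch` standing in for `wget.download`) shows the guard behaviour: the second call finds the file and skips the fetch.

```python
import os
import tempfile


def fetch_checkfirst(filename, to_dir, fetch, destructive=False):
    """Sketch of the check-first pattern: skip the (expensive) fetch when
    the target file already exists, unless destructive=True.
    `fetch` is a hypothetical callable returning the file's contents."""
    destination_path = os.path.join(to_dir, filename)
    if os.path.isfile(destination_path) and not destructive:
        return destination_path  # already present; nothing to do
    with open(destination_path, 'w') as f:
        f.write(fetch())
    return destination_path


calls = []

def fake_fetch():
    calls.append(1)
    return 'payload'


with tempfile.TemporaryDirectory() as d:
    fetch_checkfirst('data.txt', d, fake_fetch)
    fetch_checkfirst('data.txt', d, fake_fetch)  # no-op: file already there
    print(len(calls))  # 1
```

Passing `destructive=True` would bypass the guard and re-run the fetch, mirroring the `destructive` flag in the original.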
d555dd5d579648cc6482061bf6970460c15ec4cc92b2c0d91faa1f2e6ba222c7 | src/npi/utils/utils.py | download_checkfirst | akilby/npi | 0 | python

def download_checkfirst(url, output_path, destructive=False):
    """
    Downloads a download path to output_path, checking first
    if that filename exists in that path

    Does not overwrite unless destructive=True
    """
    try:
        filename = detect_filename(url)
    except urllib.error.HTTPError:
        print('Failed')
        return None
    if os.path.isdir(output_path):
        output_path = os.path.join(output_path, filename)
    if os.path.isfile(output_path) and not destructive:
        print('File already downloaded to: %s' % output_path)
    else:
        print('Downloading url: %s' % url)
        try:
            download_url(url, output_path)
        except urllib.error.HTTPError:
            print('Failed')
            return None
    return output_path
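Both download helpers first derive a local filename from the URL (via `wget.detect_filename` or the repository's own `detect_filename`, whose implementation is not shown here). A minimal stand-in for that step, using only the URL's path component, might look like this; the real helper may also consult HTTP headers such as `Content-Disposition`, which this sketch does not:

```python
import posixpath
from urllib.parse import urlparse


def guess_filename(url):
    """Hypothetical stand-in for detect_filename: take the last path
    segment of the URL, falling back to a default for bare hosts."""
    name = posixpath.basename(urlparse(url).path)
    return name or 'index.html'


print(guess_filename('https://example.com/files/npi_2020.zip'))  # npi_2020.zip
```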
226741ddaca7ce7070cb928b8b4e130d22a19199ac1ba149d616d4a3cd7141a0 | src/npi/utils/utils.py | unzip_checkfirst_check | akilby/npi | 0 | python

def unzip_checkfirst_check(path, to_dir):
    """
    Returns False if it looks like it is already unzipped
    """
    if os.path.isdir(to_dir):
        s1 = os.path.getsize(path)
        s2 = sum(os.path.getsize(p)
                 for p in glob.glob(os.path.join(to_dir, '*')))
    else:
        s1, s2 = 100, 0
    return s2 < s1
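The heuristic above treats an archive as already extracted when the files in the target directory are at least as large, in total, as the archive itself. A self-contained sketch of that size comparison (hypothetical `looks_unzipped`, taking the archive size directly so it can run without a real zip file):

```python
import glob
import os
import tempfile


def looks_unzipped(archive_size, to_dir):
    """Sketch of the heuristic: treat the archive as already extracted
    when the top-level files in to_dir total at least archive_size bytes."""
    if not os.path.isdir(to_dir):
        return False
    extracted = sum(os.path.getsize(p)
                    for p in glob.glob(os.path.join(to_dir, '*')))
    return extracted >= archive_size


with tempfile.TemporaryDirectory() as d:
    print(looks_unzipped(10, d))  # False: directory is empty
    with open(os.path.join(d, 'f.txt'), 'w') as f:
        f.write('x' * 20)
    print(looks_unzipped(10, d))  # True: 20 bytes >= 10
```

Note the heuristic only inspects top-level entries (`*` is not recursive) and assumes extraction is at least as large as the compressed archive, which holds for typical zip contents but not universally.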