<a name="fontlab.YPanelManager"></a>
# `YPanelManager`
<dt class="class"><h2><span class="class-name">fontlab.YPanelManager</span> = <a name="fontlab.YPanelManager" href="#fontlab.YPanelManager">class YPanelManager</a>(PythonQt.PythonQtInstanceWrapper)</h2></dt><dd class="class"><dd>
<pre class="doc" markdown="0"></pre>
</dd> <div class="mro"><dl class="mro"><dt>Method resolution order:</dt><dd><a href="./fontlab.html#YPanelManager">YPanelManager</a></dd><dd>PythonQt.PythonQtInstanceWrapper</dd><dd><a href="./__builtin__.html#object">__builtin__.object</a></dd></dl></div><h4 class="head-desc">Descriptors </h4><dl class="descriptor"><dt>__dict__</dt>
<dd>
<pre class="doc" markdown="0">dictionary for instance variables (if defined)</pre>
</dd>
</dl>
<dl class="descriptor"><dt>__weakref__</dt>
<dd>
<pre class="doc" markdown="0">list of weak references to the object (if defined)</pre>
</dd>
</dl>
<h4 class="head-attrs">Attributes </h4><dl><dt><span class="other-name">blockSignals</span> = <unbound qt slot blockSignals of YPanelManager type><dd>
<pre class="doc" markdown="0">X.<a href="#fontlab.YPanelManager-blockSignals">blockSignals</a>(a, b) -> bool</pre>
</dd></dl>
<dl><dt><span class="other-name">childEvent</span> = <unbound qt slot py_q_childEvent of YPanelManager type><dd>
<pre class="doc" markdown="0">X.<a href="#fontlab.YPanelManager-childEvent">childEvent</a>(a, b)</pre>
</dd></dl>
<dl><dt><span class="other-name">children</span> = <unbound qt slot children of YPanelManager type><dd>
<pre class="doc" markdown="0">X.<a href="#fontlab.YPanelManager-children">children</a>(a) -> tuple</pre>
</dd></dl>
<dl><dt><span class="other-name">className</span> = <built-in method className of PythonQt.PythonQtClassWrapper object><dd>
<pre class="doc" markdown="0">Return the classname of the object</pre>
</dd></dl>
<dl><dt><span class="other-name">connect</span> = <unbound qt slot connect of YPanelManager type><dd>
<pre class="doc" markdown="0">X.<a href="#fontlab.YPanelManager-connect">connect</a>(a, b, c, d, e) -> bool</pre>
</dd></dl>
<dl><dt><span class="other-name">customEvent</span> = <unbound qt slot py_q_customEvent of YPanelManager type><dd>
<pre class="doc" markdown="0">X.<a href="#fontlab.YPanelManager-customEvent">customEvent</a>(a, b)</pre>
</dd></dl>
<dl><dt><span class="other-name">delete</span> = <built-in method delete of PythonQt.PythonQtClassWrapper object><dd>
<pre class="doc" markdown="0">Deletes the given C++ object</pre>
</dd></dl>
<dl><dt><span class="other-name">deleteLater</span> = <unbound qt slot deleteLater of YPanelManager type><dd>
<pre class="doc" markdown="0">X.<a href="#fontlab.YPanelManager-deleteLater">deleteLater</a>()</pre>
</dd></dl>
<dl><dt><span class="other-name">desktopClientRectChanged</span> = <unbound qt signal desktopClientRectChanged of YPanelManager type></dt></dl>
<dl><dt><span class="other-name">destroyed</span> = <unbound qt signal destroyed of YPanelManager type></dt></dl>
<dl><dt><span class="other-name">disconnect</span> = <unbound qt slot disconnect of YPanelManager type><dd>
<pre class="doc" markdown="0">X.<a href="#fontlab.YPanelManager-disconnect">disconnect</a>(a, b, c, d) -> bool</pre>
</dd></dl>
<dl><dt><span class="other-name">dumpObjectInfo</span> = <unbound qt slot dumpObjectInfo of YPanelManager type><dd>
<pre class="doc" markdown="0">X.<a href="#fontlab.YPanelManager-dumpObjectInfo">dumpObjectInfo</a>(a)</pre>
</dd></dl>
<dl><dt><span class="other-name">dumpObjectTree</span> = <unbound qt slot dumpObjectTree of YPanelManager type><dd>
<pre class="doc" markdown="0">X.<a href="#fontlab.YPanelManager-dumpObjectTree">dumpObjectTree</a>(a)</pre>
</dd></dl>
<dl><dt><span class="other-name">dynamicPropertyNames</span> = <unbound qt slot dynamicPropertyNames of YPanelManager type><dd>
<pre class="doc" markdown="0">X.<a href="#fontlab.YPanelManager-dynamicPropertyNames">dynamicPropertyNames</a>(a)</pre>
</dd></dl>
<dl><dt><span class="other-name">event</span> = <unbound qt slot py_q_event of YPanelManager type><dd>
<pre class="doc" markdown="0">X.<a href="#fontlab.YPanelManager-event">event</a>(a, b) -> bool</pre>
</dd></dl>
<dl><dt><span class="other-name">eventFilter</span> = <unbound qt slot py_q_eventFilter of YPanelManager type><dd>
<pre class="doc" markdown="0">X.<a href="#fontlab.YPanelManager-eventFilter">eventFilter</a>(a, b, c) -> bool</pre>
</dd></dl>
<dl><dt><span class="other-name">findChild</span> = <unbound qt slot findChild of YPanelManager type><dd>
<pre class="doc" markdown="0">X.<a href="#fontlab.YPanelManager-findChild">findChild</a>(a, b, c) -> PythonQt.private.QObject</pre>
</dd></dl>
<dl><dt><span class="other-name">findChildren</span> = <unbound qt slot findChildren of YPanelManager type><dd>
<pre class="doc" markdown="0">X.<a href="#fontlab.YPanelManager-findChildren">findChildren</a>(a, b, c) -> tuple</pre>
</dd></dl>
<dl><dt><span class="other-name">help</span> = <built-in method help of PythonQt.PythonQtClassWrapper object><dd>
<pre class="doc" markdown="0">Shows the help of available methods for this class</pre>
</dd></dl>
<dl><dt><span class="other-name">inherits</span> = <built-in method inherits of PythonQt.PythonQtClassWrapper object><dd>
<pre class="doc" markdown="0">Returns if the class inherits or is of given type name</pre>
</dd></dl>
<dl><dt><span class="other-name">installEventFilter</span> = <unbound qt slot installEventFilter of YPanelManager type><dd>
<pre class="doc" markdown="0">X.<a href="#fontlab.YPanelManager-installEventFilter">installEventFilter</a>(a, b)</pre>
</dd></dl>
<dl><dt><span class="other-name">isSignalConnected</span> = <unbound qt slot isSignalConnected of YPanelManager type><dd>
<pre class="doc" markdown="0">X.<a href="#fontlab.YPanelManager-isSignalConnected">isSignalConnected</a>(a, b) -> bool</pre>
</dd></dl>
<dl><dt><span class="other-name">isWidgetType</span> = <unbound qt slot isWidgetType of YPanelManager type><dd>
<pre class="doc" markdown="0">X.<a href="#fontlab.YPanelManager-isWidgetType">isWidgetType</a>(a) -> bool</pre>
</dd></dl>
<dl><dt><span class="other-name">isWindowType</span> = <unbound qt slot isWindowType of YPanelManager type><dd>
<pre class="doc" markdown="0">X.<a href="#fontlab.YPanelManager-isWindowType">isWindowType</a>(a) -> bool</pre>
</dd></dl>
<dl><dt><span class="other-name">killTimer</span> = <unbound qt slot killTimer of YPanelManager type><dd>
<pre class="doc" markdown="0">X.<a href="#fontlab.YPanelManager-killTimer">killTimer</a>(a, b)</pre>
</dd></dl>
<dl><dt><span class="other-name">metaObject</span> = <unbound qt slot metaObject of YPanelManager type><dd>
<pre class="doc" markdown="0">X.<a href="#fontlab.YPanelManager-metaObject">metaObject</a>(a) -> PythonQt.QtCore.QMetaObject</pre>
</dd></dl>
<dl><dt><span class="other-name">moveToThread</span> = <unbound qt slot moveToThread of YPanelManager type><dd>
<pre class="doc" markdown="0">X.<a href="#fontlab.YPanelManager-moveToThread">moveToThread</a>(a, b)</pre>
</dd></dl>
<dl><dt><span class="other-name">objectName</span> = None</dt></dl>
<dl><dt><span class="other-name">objectNameChanged</span> = <unbound qt signal objectNameChanged of YPanelManager type></dt></dl>
<dl><dt><span class="other-name">panelClosed</span> = <unbound qt signal panelClosed of YPanelManager type></dt></dl>
<dl><dt><span class="other-name">parent</span> = <unbound qt slot parent of YPanelManager type><dd>
<pre class="doc" markdown="0">X.<a href="#fontlab.YPanelManager-parent">parent</a>(a) -> PythonQt.private.QObject</pre>
</dd></dl>
<dl><dt><span class="other-name">property</span> = <unbound qt slot property of YPanelManager type><dd>
<pre class="doc" markdown="0">X.<a href="#fontlab.YPanelManager-property">property</a>(a, b) -> object</pre>
</dd></dl>
<dl><dt><span class="other-name">removeEventFilter</span> = <unbound qt slot removeEventFilter of YPanelManager type><dd>
<pre class="doc" markdown="0">X.<a href="#fontlab.YPanelManager-removeEventFilter">removeEventFilter</a>(a, b)</pre>
</dd></dl>
<dl><dt><span class="other-name">sender</span> = <unbound qt slot sender of YPanelManager type><dd>
<pre class="doc" markdown="0">X.<a href="#fontlab.YPanelManager-sender">sender</a>(a) -> PythonQt.private.QObject</pre>
</dd></dl>
<dl><dt><span class="other-name">senderSignalIndex</span> = <unbound qt slot senderSignalIndex of YPanelManager type><dd>
<pre class="doc" markdown="0">X.<a href="#fontlab.YPanelManager-senderSignalIndex">senderSignalIndex</a>(a) -> int</pre>
</dd></dl>
<dl><dt><span class="other-name">setObjectName</span> = <unbound qt slot setObjectName of YPanelManager type><dd>
<pre class="doc" markdown="0">X.<a href="#fontlab.YPanelManager-setObjectName">setObjectName</a>(a, b)</pre>
</dd></dl>
<dl><dt><span class="other-name">setParent</span> = <unbound qt slot setParent of YPanelManager type><dd>
<pre class="doc" markdown="0">X.<a href="#fontlab.YPanelManager-setParent">setParent</a>(a, b)</pre>
</dd></dl>
<dl><dt><span class="other-name">setProperty</span> = <unbound qt slot setProperty of YPanelManager type><dd>
<pre class="doc" markdown="0">X.<a href="#fontlab.YPanelManager-setProperty">setProperty</a>(a, b, c) -> bool</pre>
</dd></dl>
<dl><dt><span class="other-name">showContextMenu</span> = <unbound qt signal showContextMenu of YPanelManager type></dt></dl>
<dl><dt><span class="other-name">signalsBlocked</span> = <unbound qt slot signalsBlocked of YPanelManager type><dd>
<pre class="doc" markdown="0">X.<a href="#fontlab.YPanelManager-signalsBlocked">signalsBlocked</a>(a) -> bool</pre>
</dd></dl>
<dl><dt><span class="other-name">startTimer</span> = <unbound qt slot startTimer of YPanelManager type><dd>
<pre class="doc" markdown="0">X.<a href="#fontlab.YPanelManager-startTimer">startTimer</a>(a, b, c) -> int</pre>
</dd></dl>
<dl><dt><span class="other-name">thread</span> = <unbound qt slot thread of YPanelManager type><dd>
<pre class="doc" markdown="0">X.<a href="#fontlab.YPanelManager-thread">thread</a>(a)</pre>
</dd></dl>
<dl><dt><span class="other-name">timerEvent</span> = <unbound qt slot py_q_timerEvent of YPanelManager type><dd>
<pre class="doc" markdown="0">X.<a href="#fontlab.YPanelManager-timerEvent">timerEvent</a>(a, b)</pre>
</dd></dl>
<dl><dt><span class="other-name">toolbarClosed</span> = <unbound qt signal toolbarClosed of YPanelManager type></dt></dl>
<dl><dt><span class="other-name">toolbarOpened</span> = <unbound qt signal toolbarOpened of YPanelManager type></dt></dl>
<dl><dt><span class="other-name">tr</span> = <unbound qt slot tr of YPanelManager type><dd>
<pre class="doc" markdown="0">X.<a href="#fontlab.YPanelManager-tr">tr</a>(a, b, c, d) -> str</pre>
</dd></dl>
<h4 class="head-methods">Methods from PythonQt.PythonQtInstanceWrapper</h4><dl class="function"><dt><a name="YPanelManager-__delattr__" href="#YPanelManager-__delattr__"><span class="function-name">__delattr__</span></a><span class="argspec">(...)</span></dt><dd>
<pre class="doc" markdown="0">x.<a href="#fontlab.YPanelManager-__delattr__">__delattr__</a>('name') <==> del x.name</pre>
</dd></dl>
<dl class="function"><dt><a name="YPanelManager-__eq__" href="#YPanelManager-__eq__"><span class="function-name">__eq__</span></a><span class="argspec">(...)</span></dt><dd>
<pre class="doc" markdown="0">x.<a href="#fontlab.YPanelManager-__eq__">__eq__</a>(y) <==> x==y</pre>
</dd></dl>
<dl class="function"><dt><a name="YPanelManager-__ge__" href="#YPanelManager-__ge__"><span class="function-name">__ge__</span></a><span class="argspec">(...)</span></dt><dd>
<pre class="doc" markdown="0">x.<a href="#fontlab.YPanelManager-__ge__">__ge__</a>(y) <==> x>=y</pre>
</dd></dl>
<dl class="function"><dt><a name="YPanelManager-__getattribute__" href="#YPanelManager-__getattribute__"><span class="function-name">__getattribute__</span></a><span class="argspec">(...)</span></dt><dd>
<pre class="doc" markdown="0">x.<a href="#fontlab.YPanelManager-__getattribute__">__getattribute__</a>('name') <==> x.name</pre>
</dd></dl>
<dl class="function"><dt><a name="YPanelManager-__gt__" href="#YPanelManager-__gt__"><span class="function-name">__gt__</span></a><span class="argspec">(...)</span></dt><dd>
<pre class="doc" markdown="0">x.<a href="#fontlab.YPanelManager-__gt__">__gt__</a>(y) <==> x>y</pre>
</dd></dl>
<dl class="function"><dt><a name="YPanelManager-__hash__" href="#YPanelManager-__hash__"><span class="function-name">__hash__</span></a><span class="argspec">(...)</span></dt><dd>
<pre class="doc" markdown="0">x.<a href="#fontlab.YPanelManager-__hash__">__hash__</a>() <==> hash(x)</pre>
</dd></dl>
<dl class="function"><dt><a name="YPanelManager-__init__" href="#YPanelManager-__init__"><span class="function-name">__init__</span></a><span class="argspec">(...)</span></dt><dd>
<pre class="doc" markdown="0">x.<a href="#fontlab.YPanelManager-__init__">__init__</a>(...) initializes x; see <a href="#fontlab.YPanelManager-help">help</a>(type(x)) for signature</pre>
</dd></dl>
<dl class="function"><dt><a name="YPanelManager-__le__" href="#YPanelManager-__le__"><span class="function-name">__le__</span></a><span class="argspec">(...)</span></dt><dd>
<pre class="doc" markdown="0">x.<a href="#fontlab.YPanelManager-__le__">__le__</a>(y) <==> x<=y</pre>
</dd></dl>
<dl class="function"><dt><a name="YPanelManager-__lt__" href="#YPanelManager-__lt__"><span class="function-name">__lt__</span></a><span class="argspec">(...)</span></dt><dd>
<pre class="doc" markdown="0">x.<a href="#fontlab.YPanelManager-__lt__">__lt__</a>(y) <==> x<y</pre>
</dd></dl>
<dl class="function"><dt><a name="YPanelManager-__ne__" href="#YPanelManager-__ne__"><span class="function-name">__ne__</span></a><span class="argspec">(...)</span></dt><dd>
<pre class="doc" markdown="0">x.<a href="#fontlab.YPanelManager-__ne__">__ne__</a>(y) <==> x!=y</pre>
</dd></dl>
<dl class="function"><dt><a name="YPanelManager-__nonzero__" href="#YPanelManager-__nonzero__"><span class="function-name">__nonzero__</span></a><span class="argspec">(...)</span></dt><dd>
<pre class="doc" markdown="0">x.<a href="#fontlab.YPanelManager-__nonzero__">__nonzero__</a>() <==> x != 0</pre>
</dd></dl>
<dl class="function"><dt><a name="YPanelManager-__repr__" href="#YPanelManager-__repr__"><span class="function-name">__repr__</span></a><span class="argspec">(...)</span></dt><dd>
<pre class="doc" markdown="0">x.<a href="#fontlab.YPanelManager-__repr__">__repr__</a>() <==> repr(x)</pre>
</dd></dl>
<dl class="function"><dt><a name="YPanelManager-__setattr__" href="#YPanelManager-__setattr__"><span class="function-name">__setattr__</span></a><span class="argspec">(...)</span></dt><dd>
<pre class="doc" markdown="0">x.<a href="#fontlab.YPanelManager-__setattr__">__setattr__</a>('name', value) <==> x.name = value</pre>
</dd></dl>
<dl class="function"><dt><a name="YPanelManager-__str__" href="#YPanelManager-__str__"><span class="function-name">__str__</span></a><span class="argspec">(...)</span></dt><dd>
<pre class="doc" markdown="0">x.<a href="#fontlab.YPanelManager-__str__">__str__</a>() <==> str(x)</pre>
</dd></dl>
<h4 class="head-attrs">Attributes from PythonQt.PythonQtInstanceWrapper</h4><dl><dt><span class="other-name">__new__</span> = <built-in method __new__ of PythonQt.PythonQtClassWrapper object><dd>
<pre class="doc" markdown="0">T.<a href="#fontlab.YPanelManager-__new__">__new__</a>(S, ...) -> a new object with type S, a subtype of T</pre>
</dd></dl>
</dd>
<a name="fontlab.YPanelManager.blockSignals"></a>
## `blockSignals`
<span class="other-name">fontlab.YPanelManager.blockSignals</span> = <unbound qt slot blockSignals of YPanelManager type>
<a name="fontlab.YPanelManager.childEvent"></a>
## `childEvent`
<span class="other-name">fontlab.YPanelManager.childEvent</span> = <unbound qt slot py_q_childEvent of YPanelManager type>
<a name="fontlab.YPanelManager.children"></a>
## `children`
<span class="other-name">fontlab.YPanelManager.children</span> = <unbound qt slot children of YPanelManager type>
<a name="fontlab.YPanelManager.className"></a>
## `className`
<dl class="function"><dt><a name="-fontlab.YPanelManager.className" href="#-fontlab.YPanelManager.className"><span class="function-name">fontlab.YPanelManager.className</span></a> = className<span class="argspec">(...)</span></dt><dd>
<pre class="doc" markdown="0">Return the classname of the object</pre>
</dd></dl>
<a name="fontlab.YPanelManager.connect"></a>
## `connect`
<span class="other-name">fontlab.YPanelManager.connect</span> = <unbound qt slot connect of YPanelManager type>
<a name="fontlab.YPanelManager.customEvent"></a>
## `customEvent`
<span class="other-name">fontlab.YPanelManager.customEvent</span> = <unbound qt slot py_q_customEvent of YPanelManager type>
<a name="fontlab.YPanelManager.delete"></a>
## `delete`
<dl class="function"><dt><a name="-fontlab.YPanelManager.delete" href="#-fontlab.YPanelManager.delete"><span class="function-name">fontlab.YPanelManager.delete</span></a> = delete<span class="argspec">(...)</span></dt><dd>
<pre class="doc" markdown="0">Deletes the given C++ object</pre>
</dd></dl>
<a name="fontlab.YPanelManager.deleteLater"></a>
## `deleteLater`
<span class="other-name">fontlab.YPanelManager.deleteLater</span> = <unbound qt slot deleteLater of YPanelManager type>
<a name="fontlab.YPanelManager.disconnect"></a>
## `disconnect`
<span class="other-name">fontlab.YPanelManager.disconnect</span> = <unbound qt slot disconnect of YPanelManager type>
<a name="fontlab.YPanelManager.dumpObjectInfo"></a>
## `dumpObjectInfo`
<span class="other-name">fontlab.YPanelManager.dumpObjectInfo</span> = <unbound qt slot dumpObjectInfo of YPanelManager type>
<a name="fontlab.YPanelManager.dumpObjectTree"></a>
## `dumpObjectTree`
<span class="other-name">fontlab.YPanelManager.dumpObjectTree</span> = <unbound qt slot dumpObjectTree of YPanelManager type>
<a name="fontlab.YPanelManager.dynamicPropertyNames"></a>
## `dynamicPropertyNames`
<span class="other-name">fontlab.YPanelManager.dynamicPropertyNames</span> = <unbound qt slot dynamicPropertyNames of YPanelManager type>
<a name="fontlab.YPanelManager.event"></a>
## `event`
<span class="other-name">fontlab.YPanelManager.event</span> = <unbound qt slot py_q_event of YPanelManager type>
<a name="fontlab.YPanelManager.eventFilter"></a>
## `eventFilter`
<span class="other-name">fontlab.YPanelManager.eventFilter</span> = <unbound qt slot py_q_eventFilter of YPanelManager type>
<a name="fontlab.YPanelManager.findChild"></a>
## `findChild`
<span class="other-name">fontlab.YPanelManager.findChild</span> = <unbound qt slot findChild of YPanelManager type>
<a name="fontlab.YPanelManager.findChildren"></a>
## `findChildren`
<span class="other-name">fontlab.YPanelManager.findChildren</span> = <unbound qt slot findChildren of YPanelManager type>
<a name="fontlab.YPanelManager.help"></a>
## `help`
<dl class="function"><dt><a name="-fontlab.YPanelManager.help" href="#-fontlab.YPanelManager.help"><span class="function-name">fontlab.YPanelManager.help</span></a> = help<span class="argspec">(...)</span></dt><dd>
<pre class="doc" markdown="0">Shows the help of available methods for this class</pre>
</dd></dl>
<a name="fontlab.YPanelManager.inherits"></a>
## `inherits`
<dl class="function"><dt><a name="-fontlab.YPanelManager.inherits" href="#-fontlab.YPanelManager.inherits"><span class="function-name">fontlab.YPanelManager.inherits</span></a> = inherits<span class="argspec">(...)</span></dt><dd>
<pre class="doc" markdown="0">Returns if the class inherits or is of given type name</pre>
</dd></dl>
<a name="fontlab.YPanelManager.installEventFilter"></a>
## `installEventFilter`
<span class="other-name">fontlab.YPanelManager.installEventFilter</span> = <unbound qt slot installEventFilter of YPanelManager type>
<a name="fontlab.YPanelManager.isSignalConnected"></a>
## `isSignalConnected`
<span class="other-name">fontlab.YPanelManager.isSignalConnected</span> = <unbound qt slot isSignalConnected of YPanelManager type>
<a name="fontlab.YPanelManager.isWidgetType"></a>
## `isWidgetType`
<span class="other-name">fontlab.YPanelManager.isWidgetType</span> = <unbound qt slot isWidgetType of YPanelManager type>
<a name="fontlab.YPanelManager.isWindowType"></a>
## `isWindowType`
<span class="other-name">fontlab.YPanelManager.isWindowType</span> = <unbound qt slot isWindowType of YPanelManager type>
<a name="fontlab.YPanelManager.killTimer"></a>
## `killTimer`
<span class="other-name">fontlab.YPanelManager.killTimer</span> = <unbound qt slot killTimer of YPanelManager type>
<a name="fontlab.YPanelManager.metaObject"></a>
## `metaObject`
<span class="other-name">fontlab.YPanelManager.metaObject</span> = <unbound qt slot metaObject of YPanelManager type>
<a name="fontlab.YPanelManager.moveToThread"></a>
## `moveToThread`
<span class="other-name">fontlab.YPanelManager.moveToThread</span> = <unbound qt slot moveToThread of YPanelManager type>
<a name="fontlab.YPanelManager.parent"></a>
## `parent`
<span class="other-name">fontlab.YPanelManager.parent</span> = <unbound qt slot parent of YPanelManager type>
<a name="fontlab.YPanelManager.property"></a>
## `property`
<span class="other-name">fontlab.YPanelManager.property</span> = <unbound qt slot property of YPanelManager type>
<a name="fontlab.YPanelManager.removeEventFilter"></a>
## `removeEventFilter`
<span class="other-name">fontlab.YPanelManager.removeEventFilter</span> = <unbound qt slot removeEventFilter of YPanelManager type>
<a name="fontlab.YPanelManager.sender"></a>
## `sender`
<span class="other-name">fontlab.YPanelManager.sender</span> = <unbound qt slot sender of YPanelManager type>
<a name="fontlab.YPanelManager.senderSignalIndex"></a>
## `senderSignalIndex`
<span class="other-name">fontlab.YPanelManager.senderSignalIndex</span> = <unbound qt slot senderSignalIndex of YPanelManager type>
<a name="fontlab.YPanelManager.setObjectName"></a>
## `setObjectName`
<span class="other-name">fontlab.YPanelManager.setObjectName</span> = <unbound qt slot setObjectName of YPanelManager type>
<a name="fontlab.YPanelManager.setParent"></a>
## `setParent`
<span class="other-name">fontlab.YPanelManager.setParent</span> = <unbound qt slot setParent of YPanelManager type>
<a name="fontlab.YPanelManager.setProperty"></a>
## `setProperty`
<span class="other-name">fontlab.YPanelManager.setProperty</span> = <unbound qt slot setProperty of YPanelManager type>
<a name="fontlab.YPanelManager.signalsBlocked"></a>
## `signalsBlocked`
<span class="other-name">fontlab.YPanelManager.signalsBlocked</span> = <unbound qt slot signalsBlocked of YPanelManager type>
<a name="fontlab.YPanelManager.startTimer"></a>
## `startTimer`
<span class="other-name">fontlab.YPanelManager.startTimer</span> = <unbound qt slot startTimer of YPanelManager type>
<a name="fontlab.YPanelManager.thread"></a>
## `thread`
<span class="other-name">fontlab.YPanelManager.thread</span> = <unbound qt slot thread of YPanelManager type>
<a name="fontlab.YPanelManager.timerEvent"></a>
## `timerEvent`
<span class="other-name">fontlab.YPanelManager.timerEvent</span> = <unbound qt slot py_q_timerEvent of YPanelManager type>
<a name="fontlab.YPanelManager.tr"></a>
## `tr`
<span class="other-name">fontlab.YPanelManager.tr</span> = <unbound qt slot tr of YPanelManager type>
Note: `stdlib` refers to `SortedSet` without the `rbtree` gem. If the `rbtree` gem is present, `SortedSet` will [use it](https://github.com/ruby/ruby/blob/b1a8c64/lib/set.rb#L709-L724) and become even slower.
```
#- with 5M overlapping items
gem: 6.6 i/s
gem w/o c: 0.8 i/s - 7.85x slower
    stdlib:      0.7 i/s - 9.51x slower
```
```
#- with 5M distinct items
gem: 1429392.7 i/s
gem w/o c: 1414260.7 i/s - same-ish
    stdlib:      1.0 i/s - 1456728.62x slower
```
```
#^ with 5M overlapping items
gem: 0.9 i/s
gem w/o C: 0.4 i/s - 2.12x slower
stdlib: 0.4 i/s - 2.16x slower
```
```
#^ with 5M distinct items
gem w/o C: 0.8 i/s
gem: 0.6 i/s - 1.25x slower
stdlib: 0.5 i/s - 1.65x slower
```
```
#intersect? with 5M intersecting items
gem: 266.8 i/s
gem w/o C: 8.2 i/s - 32.53x slower
stdlib: 2.2 i/s - 121.88x slower
```
```
#intersect? with 5M sparse items (rare case?)
gem w/o C: 1442.5 i/s
gem: 185.2 i/s - 7.79x slower
stdlib: 2.0 i/s - 712.75x slower
```
```
#intersect? with 5M distinct items
gem: 1376038.3 i/s
gem w/o C: 1375048.5 i/s - same-ish
stdlib: 2.0 i/s - 675307.67x slower
```
```
#& with 5M intersecting items
gem: 6.4 i/s
gem w/o C: 2.6 i/s - 2.49x slower
Array#&: 1.3 i/s - 4.83x slower
stdlib: 0.9 i/s - 6.90x slower
```
```
#& with 5M sparse items (rare case?)
gem: 88.3 i/s
gem w/o C: 19.6 i/s - 4.50x slower
stdlib: 2.0 i/s - 44.46x slower
Array#&: 1.8 i/s - 49.61x slower
```
```
#& with 5M distinct items
gem w/o C: 578891.9 i/s
gem: 571604.2 i/s - same-ish
stdlib: 2.1 i/s - 281016.75x slower
Array#&: 1.8 i/s - 316493.80x slower
```
```
#inversion with 5M items
gem: 1.8 i/s
gem w/o C: 0.7 i/s - 2.58x slower
stdlib #-: 0.3 i/s - 6.67x slower
```
```
#inversion with 100k items
gem: 239.5 i/s
gem w/o C: 62.8 i/s - 3.81x slower
stdlib #-: 29.2 i/s - 8.22x slower
```
```
#minmax with 10M items
gem: 3180102.2 i/s
gem w/o C: 3170355.3 i/s - same-ish
stdlib: 5.3 i/s - 595743.46x slower
```
```
#minmax with 1M items
gem: 3247178.7 i/s
gem w/o C: 3231669.0 i/s - same-ish
stdlib: 52.8 i/s - 61535.19x slower
```
```
::new with 5M Range items
gem: 0.8 i/s
gem w/o C: 0.6 i/s - 1.27x slower
stdlib: 0.4 i/s - 1.78x slower
```
```
::new with 100k Range items
gem: 126.7 i/s
gem w/o C: 69.2 i/s - 1.83x slower
stdlib: 33.1 i/s - 3.83x slower
```
```
::new with 10k Range items in 10 non-continuous Ranges
gem: 3117.6 i/s
gem w/o C: 1326.2 i/s - 2.35x slower
stdlib: 666.7 i/s - 4.68x slower
```
```
#(proper_)subset/superset? with 5M subset items
gem: 50.8 i/s
gem w/o C: 1.4 i/s - 37.61x slower
stdlib: 1.3 i/s - 37.71x slower
```
```
#(proper_)subset/superset? with 5M overlapping items
gem: 51.0 i/s
gem w/o C: 1.4 i/s - 36.49x slower
stdlib: 1.4 i/s - 36.74x slower
```
```
#(proper_)subset/superset? with 100k overlapping items
gem: 3238.3 i/s
stdlib: 302.9 i/s - 10.69x slower
gem w/o C: 281.8 i/s - 11.49x slower
```
```
#+ with 5M overlapping items
gem: 1.4 i/s
stdlib: 1.2 i/s - 1.19x slower
gem w/o C: 0.9 i/s - 1.49x slower
```
] | null | null | null | #ChangeLog
## 0.1.0 - 2019.07.19
### Added
* 新增统计eslint规则
* 新增统计eslint-plugin-import规则
* 新增统计eslint-plugin-jsx-a11y规则
* 新增统计eslint-plugin-react规则
* 新增统计eslint-plugin-vue规则 | 17.3 | 30 | 0.751445 | yue_Hant | 0.491172 |
9acb5fc9cb5017a71b6f42acdc3ee420a700191f | 5,469 | md | Markdown | README.md | jstty/beelzebub | c78f0bb984669f79011d7fc6d3820b9ab000c574 | [
"MIT"
] | 10 | 2016-11-09T17:48:57.000Z | 2019-10-17T11:09:24.000Z | README.md | jstty/beelzebub | c78f0bb984669f79011d7fc6d3820b9ab000c574 | [
"MIT"
] | 52 | 2016-08-18T07:47:20.000Z | 2022-03-28T00:11:08.000Z | README.md | jstty/beelzebub | c78f0bb984669f79011d7fc6d3820b9ab000c574 | [
"MIT"
] | 4 | 2016-07-28T17:24:56.000Z | 2016-11-09T17:45:29.000Z | <!-- # Beelzebub - One hell of a task master! -->
<center id="top"><img src="./assets/bz-logo-full.png" /></center>
[](http://travis-ci.org/jstty/beelzebub)
[](https://coveralls.io/github/jstty/beelzebub?branch=master)

[](https://david-dm.org/jstty/beelzebub)
[](https://david-dm.org/jstty/beelzebub#info=devDependencies)
<center>
<div><a target="_blank" href="https://nodei.co/npm/beelzebub/"><img src="https://nodei.co/npm/beelzebub.png" /></a>
</div>
<div>
<a target="_blank" href="http://twitter.com/beelzebubio"><img width="32px" src="assets/twitter-logo.svg" /></a>
</div>
</center>
## Description
A modern task runner pipeline framework.
Allows your Tasks to be Modular, Extendable, Flexible, Manageable, and Fire Resistant!
## Features
1. Tasks are based on Promises, support:
* Generator ([Example](./examples/api/async.js))
* Using [co wrapping](https://github.com/tj/co)
* Async/Await ([Example](./examples/api/async.js))
* Streams ([Example](./examples/api/stream.js))
* Compatible with your existing `gulp` tasks
2. ES6 Class base class
* Extending from other Tasks ([Example](./examples/api/extend.js))
3. Sub Tasks
* Static - simply by adding another task class to a tasks sub class. ([Example](./examples/api/subtasksSimple.js))
* Dynamic - create sub tasks based on configuration ([Example](./examples/api/subtasksAdvanced.js))
4. Run other tasks in an task
* Parallel ([Example](./examples/api/parallel.js))
* Sequence ([Example](./examples/api/sequence.js))
5. Before and After ([Simple Example](./examples/api/beforeAfter.js), [Adv Example](./examples/api/beforeAfterAdvanced.js))
* each task
* all tasks
6. Decorators
* Setting Default Task ([Example](./examples/api/decoratorHelp.js))
* Help Docs ([Example](./examples/api/decoratorHelp.js))
* Vars Definitions (for help and set defaults) ([Example](./examples/api/decoratorVars.js))
7. Auto Help Docs ([API Example](./examples/api/helpDocs.js), [CLI Example](./examples/cli/helpDocs.js))
8. Passing Options (Vars) to a task or globally ([ALI Example](./examples/api/passingVars.js), [CLI Example](./examples/cli/defineVars.js))
9. CLI ([Examples](./examples/cli)) and full Javascript API ([Examples](./examples/api))
10. **Totally bad *ss logo!**
-------
# Install
## API
```shell
$ npm install beelzebub
```
## CLI
```shell
$ npm install beelzebub -g
```
-------
# DOCS
### [Task Class](./docs/taskClass.md)
-------
# API
### [Examples](./examples/api)
# Simple Example
```javascript
const Beelzebub = require('beelzebub');
class MyTasks extends Beelzebub.Tasks {
  task1() {
    this.logger.log('MyTasks task1');
  }
}
// Add Task to BZ, it will now be registered
Beelzebub.add( MyTasks );
// ------------------------------------
// Runs the task, returning a promise
Beelzebub.run('MyTasks.task1');
```
-------
# CLI
### [Examples](./examples/cli)
## Reserved Global Flags
* `--help` or `-h`
* Prints Usage, List of Task Help Docs and Vars Definitions
* `--version` or `-v`
* Prints Beelzebub version
* `--file=<file path>` or `-f=<file path>`
* Uses this file instead of the `beelzebub.js` or `beelzebub.json` file
<!--
# File Loader
TODO
-->
## Passing Vars
The CLI uses [yargs](https://github.com/yargs/yargs) and thus the vars parsing is handled by [yargs-parser](https://github.com/yargs/yargs-parser).
```shell
$ bz <global vars> TaskPath <vars to pass to this Task> AnotherTaskPath <vars will only pass to the preceding Task>
```
--------
## Simple Example
### `beelzebub.js` file
```javascript
const Beelzebub = require('beelzebub');
class MyTasks extends Beelzebub.Tasks {
  task() {
    this.logger.log('MyTasks task');
  }
}
module.exports = MyTasks;
```
```shell
$ bz MyTasks.task
```
--------
## Vars Example
### `beelzebub.js` file
```javascript
const Beelzebub = require('beelzebub');
class MyTasks1 extends Beelzebub.Tasks {
  default(aVars) {
    const gVars = this.$getGlobalVars();
    this.logger.log(`MyTasks1 default ${gVars.myGlobalVar} ${aVars.v1}`);
  }
}

class MyTasks2 extends Beelzebub.Tasks {
  task(aVars) {
    const gVars = this.$getGlobalVars();
    this.logger.log(`MyTasks2 task ${gVars.myGlobalVar} ${aVars.v1}`);
  }
}
module.exports = [MyTasks1, MyTasks2];
```
```shell
$ bz --myGlobalVar=hello MyTasks1 --v1=1 MyTasks2.task --v1=2
```
--------
## Load File Example
### `appTasks.js` file
```javascript
module.exports = [
  require('bz-frontend-react'),
  require('bz-frontend-babel'),
  require('./mytask.js')
];
```
```shell
$ bz --file=./appTasks.js MyTasks.task1
```
--------
## Special Thanks
To everyone supporting the development and cost to the project.
I would also like to thank the logo artist [Irving Gerardo](https://thenounproject.com/irvinggerardo)!!!
--------
## License
It should be an obvious choice or you totally missed the [badge at the top](#top).
However, for completeness:
*"I Beelzebub, declare myself to be under the [MIT licence](LICENSE)"*
# Change Log
All notable changes to this project will be documented in this file.
See [Conventional Commits](https://conventionalcommits.org) for commit guidelines.
# [4.12.0](https://github.com/RedHatInsights/insights-common-typescript/compare/@redhat-cloud-services/insights-common-typescript@4.11.0...@redhat-cloud-services/insights-common-typescript@4.12.0) (2021-11-08)
### Features
* allow to hide filters in the Primary Toolbar ([5639267](https://github.com/RedHatInsights/insights-common-typescript/commit/563926710ac36ddcbc299636c47e0a222d61171e))
# [4.11.0](https://github.com/RedHatInsights/insights-common-typescript/compare/@redhat-cloud-services/insights-common-typescript@4.10.0...@redhat-cloud-services/insights-common-typescript@4.11.0) (2021-09-27)
### Features
* allow to specify the chip value ([84c7855](https://github.com/RedHatInsights/insights-common-typescript/commit/84c785513bb46173d633eae7acb539501676b60a))
# [4.10.0](https://github.com/RedHatInsights/insights-common-typescript/compare/@redhat-cloud-services/insights-common-typescript@4.9.0...@redhat-cloud-services/insights-common-typescript@4.10.0) (2021-07-28)
### Bug Fixes
* add gov to prod envs and govStage to stage envs ([7962671](https://github.com/RedHatInsights/insights-common-typescript/commit/7962671c347248d10fa274742a02c5d8c2f7f35b))
* update Environment to match insights-chrome ([1c6a6ac](https://github.com/RedHatInsights/insights-common-typescript/commit/1c6a6acf115dde71193305e0b2cea1b88a848e44))
### Features
* POL-535 Prepare for fedramp environments ([fb6462f](https://github.com/RedHatInsights/insights-common-typescript/commit/fb6462f98117c81ce72642c4dd08e8c25ef496e9))
# [4.9.0](https://github.com/RedHatInsights/insights-common-typescript/compare/@redhat-cloud-services/insights-common-typescript@4.8.0...@redhat-cloud-services/insights-common-typescript@4.9.0) (2021-07-15)
### Bug Fixes
* addition of the optional field which defaults to reset filters ([c59787c](https://github.com/RedHatInsights/insights-common-typescript/commit/c59787ccad8ab04d29194148aad93083f05fd68b))
* linting error ([0fbba27](https://github.com/RedHatInsights/insights-common-typescript/commit/0fbba275ffb7773a1e94a0485d60bb278df0c4d3))
### Features
* addition of opitional field which defaults to Reset Filters ([afc1095](https://github.com/RedHatInsights/insights-common-typescript/commit/afc109570af514f1c01efd20d4392f06aee96308))
# [4.8.0](https://github.com/RedHatInsights/insights-common-typescript/compare/@redhat-cloud-services/insights-common-typescript@4.7.0...@redhat-cloud-services/insights-common-typescript@4.8.0) (2021-06-16)
### Features
* adds fromUTC function ([ba43d87](https://github.com/RedHatInsights/insights-common-typescript/commit/ba43d87c934a20fdb3e5de82ba646d7d61b4008c))
# [4.7.0](https://github.com/RedHatInsights/insights-common-typescript/compare/@redhat-cloud-services/insights-common-typescript@4.6.3...@redhat-cloud-services/insights-common-typescript@4.7.0) (2021-05-31)
### Features
* adds ci, qa, stage to environments presets ([2db3245](https://github.com/RedHatInsights/insights-common-typescript/commit/2db324536d919413d3f248b2109268e337ee5d47))
* adds SemiPartial type for specifying which attributes are not optional ([1dcd861](https://github.com/RedHatInsights/insights-common-typescript/commit/1dcd8618ab28712a7d58f86475c9531a1108c5c3))
* more optional configuration for Delete/Edit Modals ([c9c6ca9](https://github.com/RedHatInsights/insights-common-typescript/commit/c9c6ca97bb3e2817461d750440666d8dc8c3dcf9))
## [4.6.3](https://github.com/RedHatInsights/insights-common-typescript/compare/@redhat-cloud-services/insights-common-typescript@4.6.2...@redhat-cloud-services/insights-common-typescript@4.6.3) (2021-05-12)
**Note:** Version bump only for package @redhat-cloud-services/insights-common-typescript
## [4.6.2](https://github.com/RedHatInsights/insights-common-typescript/compare/@redhat-cloud-services/insights-common-typescript@4.6.1...@redhat-cloud-services/insights-common-typescript@4.6.2) (2021-04-15)
### Bug Fixes
* updates wording on EmailOptIn ([292fd5c](https://github.com/RedHatInsights/insights-common-typescript/commit/292fd5c86cbe5f0e01c1ffdf61a4815c4666e355))
## [4.6.1](https://github.com/RedHatInsights/insights-common-typescript/compare/@redhat-cloud-services/insights-common-typescript@4.6.0...@redhat-cloud-services/insights-common-typescript@4.6.1) (2021-03-29)
### Bug Fixes
* insightsEmailOptIn doesnt need bundle ([21a56d5](https://github.com/RedHatInsights/insights-common-typescript/commit/21a56d5d0f755d4fc73e236c63214ac6952a9b92))
# [4.6.0](https://github.com/RedHatInsights/insights-common-typescript/compare/@redhat-cloud-services/insights-common-typescript@4.5.2...@redhat-cloud-services/insights-common-typescript@4.6.0) (2021-03-29)
### Features
* uses notification preferences instead of email-preferences (require bundle) ([d3a692c](https://github.com/RedHatInsights/insights-common-typescript/commit/d3a692c9e81aa33fe9c31feae76951e7bd9a754f))
## [4.5.2](https://github.com/RedHatInsights/insights-common-typescript/compare/@redhat-cloud-services/insights-common-typescript@4.5.1...@redhat-cloud-services/insights-common-typescript@4.5.2) (2021-02-09)
### Bug Fixes
* use flat imports on react-tokens/icons to decrease size when importing components ([719ee4b](https://github.com/RedHatInsights/insights-common-typescript/commit/719ee4b188519de9a5278fc3751c1f35e5a280f4))
## [4.5.1](https://github.com/RedHatInsights/insights-common-typescript/compare/@redhat-cloud-services/insights-common-typescript@4.5.0...@redhat-cloud-services/insights-common-typescript@4.5.1) (2021-01-28)
### Bug Fixes
* allow to use PF Textarea props in the Formik adapter ([bffb090](https://github.com/RedHatInsights/insights-common-typescript/commit/bffb09078773afc405f54aa87cd5a8c89f561058))
# [4.5.0](https://github.com/RedHatInsights/insights-common-typescript/compare/@redhat-cloud-services/insights-common-typescript@4.4.0...@redhat-cloud-services/insights-common-typescript@4.5.0) (2021-01-26)
### Features
* allows to configure the redux store with initialState and reducers ([f75938b](https://github.com/RedHatInsights/insights-common-typescript/commit/f75938b5d0ead59d5c18d84217956b4f0e56dfd7))
# [4.4.0](https://github.com/RedHatInsights/insights-common-typescript/compare/@redhat-cloud-services/insights-common-typescript@4.3.1...@redhat-cloud-services/insights-common-typescript@4.4.0) (2021-01-15)
### Features
* update Rbac to handle better the Rbac permissions from the server ([3c0782f](https://github.com/RedHatInsights/insights-common-typescript/commit/3c0782f3c94e690b46aed60def3bb64c8d795667))
## [4.3.1](https://github.com/RedHatInsights/insights-common-typescript/compare/@redhat-cloud-services/insights-common-typescript@4.3.0...@redhat-cloud-services/insights-common-typescript@4.3.1) (2021-01-13)
### Bug Fixes
* titleIconVariant was supposed to be warning on DeleteModal ([f4196f8](https://github.com/RedHatInsights/insights-common-typescript/commit/f4196f8143ae1c56662765a86c7791241c1195a8))
# [4.3.0](https://github.com/RedHatInsights/insights-common-typescript/compare/@redhat-cloud-services/insights-common-typescript@4.2.2...@redhat-cloud-services/insights-common-typescript@4.3.0) (2021-01-12)
### Features
* adding titleIconVariant to ActionModal (& friends) ([73c6e67](https://github.com/RedHatInsights/insights-common-typescript/commit/73c6e6733db664239d53b8776414d82d46bafbdf))
* allows to specify a notification description as a node ([f6ca0f6](https://github.com/RedHatInsights/insights-common-typescript/commit/f6ca0f6f52504fd9a7db590bc109ad9054830921))
## [4.2.2](https://github.com/RedHatInsights/insights-common-typescript/compare/@redhat-cloud-services/insights-common-typescript@4.2.1...@redhat-cloud-services/insights-common-typescript@4.2.2) (2021-01-04)
### Bug Fixes
* cancel button is of type "link" (blue color, not gray) ([9d0ee3f](https://github.com/RedHatInsights/insights-common-typescript/commit/9d0ee3f5bb84d77554a3286a9c88fc1eb011faf7))
* changed text from Delete to Remove ([772010a](https://github.com/RedHatInsights/insights-common-typescript/commit/772010aca886d3be5192a3b15e2f2c2021e0d698))
## [4.2.1](https://github.com/RedHatInsights/insights-common-typescript/compare/@redhat-cloud-services/insights-common-typescript@4.2.0...@redhat-cloud-services/insights-common-typescript@4.2.1) (2020-11-26)
### Bug Fixes
* fix stage environment name ([76d7cf7](https://github.com/RedHatInsights/insights-common-typescript/commit/76d7cf78ce7623f226769de0f7140db0fc4e7504))
# [4.2.0](https://github.com/RedHatInsights/insights-common-typescript/compare/@redhat-cloud-services/insights-common-typescript@4.1.0...@redhat-cloud-services/insights-common-typescript@4.2.0) (2020-11-17)
### Features
* add hooks to represent feature flags ([3b3b8bc](https://github.com/RedHatInsights/insights-common-typescript/commit/3b3b8bca76d678896ed5c643a58dc8515e10f398))
# [4.1.0](https://github.com/RedHatInsights/insights-common-typescript/compare/@redhat-cloud-services/insights-common-typescript@4.0.0...@redhat-cloud-services/insights-common-typescript@4.1.0) (2020-11-13)
### Features
* add support for more feature flags ([85cb915](https://github.com/RedHatInsights/insights-common-typescript/commit/85cb91536636f6a933749d47c7a9d66e7e2713b2))
# [4.0.0](https://github.com/RedHatInsights/insights-common-typescript/compare/@redhat-cloud-services/insights-common-typescript@3.0.7...@redhat-cloud-services/insights-common-typescript@4.0.0) (2020-11-04)
### Features
* allows filters to select multiple elements from options list (exclusive = false) ([6f70f20](https://github.com/RedHatInsights/insights-common-typescript/commit/6f70f205e1ea176c947867d9e31ea88803e55b55))
* filter now allows arrays of objects ([cf03bfc](https://github.com/RedHatInsights/insights-common-typescript/commit/cf03bfc3982c5e4d67202367c7fe0111a7099c1c))
### BREAKING CHANGES
* To support arrays on the useFilters, had to set the default as undefined, so the
options are undefined | T | array<T>
## 3.0.7 (2020-10-21)
**Note:** Version bump only for package @redhat-cloud-services/insights-common-typescript
## 3.0.6 (2020-10-13)
**Note:** Version bump only for package @redhat-cloud-services/insights-common-typescript
## 3.0.5 (2020-10-13)
**Note:** Version bump only for package @redhat-cloud-services/insights-common-typescript
## 3.0.4 (2020-10-12)
**Note:** Version bump only for package @redhat-cloud-services/insights-common-typescript
# bookschelf
Database of my books
---
title: Remote Config Files In CodeIgniter
layout: post
---
I ran into a situation recently where I had multiple CodeIgniter apps which depended on the same config values. No problem, right? Just use a common third_party folder. That would work, except they were on different servers! My solution was to echo the config as JSON in one place, grab the JSON in other apps and load them as config values. Now you can do the same!
<!--more-->
## Setup
1. Install sparks at [GetSparks.org](http://getsparks.org)
2. Install the [curl_load](http://getsparks.org/packages/curl_load/show) spark
## Usage
First, do this in your config file:
{% highlight php %}
<?php // DON'T put the usual !defined('BASEPATH') part up here
$config['this_key'] = 'value';
$config['that_key'] = 'value';
// if it's not loaded by CodeIgniter, echo it as JSON so
// we can grab the keys/values remotely
if (!defined('BASEPATH')) echo json_encode($config);
{% endhighlight %}
Now in your controller, use the curl_load spark to load the config file:
{% highlight php %}
<?php if ( ! defined('BASEPATH')) exit('No direct script access allowed');
class test_controller extends CI_Controller
{
    public function index()
    {
        // replace x.x.x with version number
        $this->load->spark('curl_load/x.x.x');
        $config_url = 'http://example.com/path/to/config.php';
        $this->curl_load->load_config($config_url);
    }
}
{% endhighlight %}
If you need to secure the values of the config file, you can add optional http authentication credentials:
{% highlight php %}
$this->curl_load->load_config(
    'http://example.com/path/to/config.php',
    'http_auth_username',
    'http_auth_password'
);
{% endhighlight %}
Or you could load an array of config files:
{% highlight php %}
$this->curl_load->load_config(
    array(
        array(
            'url' => 'http://url1.com/config.php'
        ),
        array(
            'url' => 'http://url2.com/config.php',
            'username' => 'optional_http_auth_username',
            'password' => 'optional_http_auth_password'
        )
    )
);
{% endhighlight %}
Last but not least: You can set it to autoload config files in ```config/curl_load.php```. Just add your config files in the same array format as above:
{% highlight php %}
$config['curl_autoload'] = array(
    array(
        'url' => 'http://url1.com/config.php'
    ),
    array(
        'url' => 'http://url2.com/config.php',
        'username' => 'optional_http_auth_username',
        'password' => 'optional_http_auth_password'
    )
);
{% endhighlight %}
This is really cool, especially when you autoload the spark as well. Then your config values will automatically be loaded without you having to do anything!
{% highlight php %}
$autoload['sparks'] = array('curl_load/x.x.x');
{% endhighlight %}
If this helps you, leave me a comment. Have fun!
# Data Exploration with Matplotlib and D3.js
[](https://travis-ci.org/epigos/data-explorer)
[](https://badge.fury.io/py/dexplorer)
This is a small library built with Tornado, Matplotlib and Pandas to summarize and visualize any datasource in the browser.
Currently supports `Python 3.x`
Demo
------------

Installation
------------
To install, simply:
$ pip install dexplorer
Usage
-----
Starting Data Explorer:
from dexplorer import DataExplorer
dte = DataExplorer()
dte.read_csv('example.csv') # connect csv data source
dte.start() # starts a new server on port 9011 by default;
Basically, this is how it works:
1. Data is loaded using the `Pandas` library
2. The server starts with websocket support to the browser
3. Descriptive summary (Descriptive Statistics) of columns is generated and sent through the socket and rendered in the browser.
4. Distribution of values (`Boxplot` and `Barplot`) in each column is then generated using `Matplotlib` and sent to the browser as `json`.
5. `D3.js` is then used to render the `json` plot in the browser
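Step 4 is the core of the pipeline: plot statistics are computed server-side and shipped to the browser as JSON for D3 to draw. As a minimal, stdlib-only sketch of that idea (illustrative only — this is not dexplorer's actual code, and the field names are assumptions), here is how a boxplot's five-number summary for one column could be serialized:

```python
import json
import statistics

def boxplot_stats(values):
    """Five-number summary in a shape a D3 boxplot could consume.

    Field names are assumptions for illustration, not dexplorer's schema.
    """
    ordered = sorted(values)
    q1, median, q3 = statistics.quantiles(ordered, n=4)  # quartile cut points
    iqr = q3 - q1
    low = min(v for v in ordered if v >= q1 - 1.5 * iqr)   # lower whisker
    high = max(v for v in ordered if v <= q3 + 1.5 * iqr)  # upper whisker
    return {"low": low, "q1": q1, "median": median, "q3": q3, "high": high}

column = [1, 2, 2, 3, 4, 4, 5, 9, 30]
payload = json.dumps({"column": "example", "box": boxplot_stats(column)})
print(payload)
```

A Tornado websocket handler would push payload strings like this over the socket, and D3 on the browser side maps the five fields onto the box and whisker shapes.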
Documentation
-----------------
Documentation can be found [here]()
Running the Tests
-----------------
To run tests
python -m tornado.testing tests.functests
---
author: slowe
categories: Rant
comments: true
date: 2007-06-21T23:55:26Z
slug: cliff-ive-used-vmware-in-production
tags:
- Virtualization
- VMware
title: Cliff, I've Used VMware in Production
url: /2007/06/21/cliff-ive-used-vmware-in-production/
wordpress_id: 475
---
"How many people have deployed VMware or Xen or even Microsoft Virtual Server in a real production environment?" That's the question Cliff Saran's IT FUD blog [asked yesterday](http://www.computerweekly.com/blogs/IT-FUD-blog/2007/06/does-virtual-make-sense.html). It's an interesting question to ask an engineer such as myself who specializes in VMware deployments for customers. And while I can't give out the names of some of my customers, I can tell you that more than a couple of them are using [VMware Virtual Infrastructure 3](http://www.vmware.com/products/vi/) in production environments _right now._
Here are some examples of how these customers are using VMware:
* One customer has over 20 dual-processor server blades (older HP p-Class blades, by the way---trying to get them to transition to the newer c-Class blades) running [ESX Server 3.0.1](http://www.vmware.com/products/vi/esx/) in a big DRS cluster hosting over 120 virtual servers. These virtual servers are application servers, middleware servers, Exchange front-end servers, and Citrix Presentation Servers, to name a few.
* Another customer, still early in their VI3 deployment, has a three-node DRS/HA cluster running a variety of workloads, such as [Microsoft Office Project Server](http://office.microsoft.com/en-us/projectserver/default.aspx) and a couple other application servers, on VMware ESX Server.
* One very small customer I have is using [Virtual Server](http://www.microsoft.com/windowsserversystem/virtualserver/) to host a few workloads, including a middleware server for a web-based application with an SQL backend.
And these are just the examples that I know about. What about all the other VMware engineers in my company, not to mention all the other VARs with strong VMware practices? And Cliff says that he's having a hard time finding references? That doesn't make sense to me.
While I love virtualization (and VMware products in particular), I'll be the first to admit that virtualization is not the "be all/end all" that some make it out to be. Is it useful in many organizations? Yes, absolutely. Will it fit in every situation? No, it won't. There _will_ be some situations where virtualization is not the answer. However, those situations are fairly limited, and growing more limited by the day.
What about you? Cliff wants stories of people using VMware in production environments, so let's give 'em to him. Tell us about your production VMware environment below in the comments.
---
layout: post
title: Bit of a struggle
published: true
---
# Figueira de Foz to Aveiro and Sao Jactim
*Sao Jactim* 
I think it's fair to say the marina at Figueira de Foz is
a\) not the most efficient in the world - it's fine to be relaxed, but too near the horizontal is not so good
b\) not the best value at €30 per night
c\) the best value marina on the Portuguese coast
The Monday forecast was relatively good for a north going boat in a land of prevailing northerlies. It was about a F4/5 with a hint of westerly added to the northerly wind, with low swell. So I decided to set off at about 09.00 for the 32 nm trip to Aveiro. The timing matters as there is a small'ish recommended window for Aveiro, with reported ebb tides of up to 8 knots.
Unfortunately the marina staff had other ideas. The office opens at 08.30, and I needed to pay and return the door token to recover my deposit. By 10.00 I was becoming a little 'tense' as there was no sign of the marina office opening. I set off, but then had to re-berth at the reception pontoon where the customs people cover the marina role out of office hours. The helpful man there returned my deposit, but said they don't take any money, and if the marina was not open, that was they're fault and I should just go.
Not one to ignore authority, I set off on my way, €60 better off. Such good value that marina.
The trip to Aveiro felt a long sail, actually covering over 42 nm thanks mainly to the foul current. I went in as close as I dared, arguably even closer. At one point, as I wondering if I could recognise the vegetation on the sand dunes, with the background noise of breakers on the beach, I decided perhaps I was getting a little too close in and got a bit more sea room.
I arrived at Aveiro in the right time window, despite the best attempts of the Fog de Foz marina to subvert my plans, and had a 4 knot tide pushing me upstream at just over 10 knots.
I anchored at Sao Jactim and enjoyed the lack of swell as the lagoon is effectively inshore, just a little movement occasionally from passing small boats and the ferry. My G&T with the setting sun felt well deserved. Off tomorrow, north once more.
This blog entry was made whilst actually motoring toward Leixoes - isn't technology wonderful?
# QtlWater
A meQTL/eQTL detection method based on a Gradient Boosted Tree model that uses recombination rate and Hi-C signal to boost detection power.
Liu Y & Kellis M. QtlWater: boosting molecular quantitative trait loci mapping power by incorporating recombination rate and chromatin conformation changes. In preparation.
## Table of Contents
1. [Quick start](#quick-start)
2. [Installation](#installation)
3. [Usage](#usage)
## Quick start
1. Install
```
git clone --recursive https://github.com/dnaase/QtlWater.git
cd QtlWater
mvn clean package
```
2. Run the [example test file](configure.txt)
```
unzip test_data_zip.zip
mv test_data_zip test_data
perl QtlWater_pipeline.pl test configure.txt
```
This should produce the following files:
* cis-qtl.matrixEQtlAll.SampleSize-133.test.chr1.txt (MatrixEQTL result)
* cis-qtl.matrixEQtlAll.SampleSize-133.test.chr1.sig.sameChr.addCor.uniq.log10p_hic_recomb.afterQtlWater.fdr.txt (QtlWater FDR result)
## Installation
#### Prerequisites
* Java 7 (Oracle)
* R-3.0 or above (https://www.r-project.org/)
* Perl 5
* MatrixEQTL package in R (http://www.bios.unc.edu/research/genomic_software/Matrix_eQTL/runit.html or https://cran.r-project.org/web/packages/MatrixEQTL/index.html)
* bigWigAverageOverBed from UCSC utils (http://hgdownload.soe.ucsc.edu/admin/exe/)
Compilation requires mvn 3.0 or above
```
git clone --recursive https://github.com/dnaase/QtlWater.git
cd QtlWater
mvn clean package
```
#### required files
1. recombination rate big wig file, which could be download from 1000 Genome ftp fite
2. Hi-C signal file, which could be download from (Rao et al. 2015 Cell)
## Usage
### Training
#### Examples
```
perl QtlWater_pipeline.pl test configure.txt --mode 2 --ground_truth_data test_data/cis-qtl.matrixEQtlAll.SampleSize-133.test.chr1.txt --ground_truth_data_indexs 1 --ground_truth_data_indexs 2 --ground_truth_data_rawp 5 --ground_truth_data_fdr 6
```
### Compute enhanced p value and FDR (BH)
#### Examples
```
perl QtlWater_pipeline.pl test configure.txt
```
# Command Line Navigation
## Find command
```
<Up> or CTRL-P   Show the previous command
                 Pressing it again will cycle backwards through the list
[chars]<CTRL-R>  Show the first command in the Search History that matches chars
                 Pressing <CTRL-R> again will cycle backwards through the matches
                 Also known as:
                 - bck-i-search
                 - Reverse Incremental Search
                 eg. cd<CTRL-R> - will show most recent command starting with "cd"
[chars]<CTRL-D>  Show all commands in the Search History that match that partial string
```
## Edit command
```
CTRL-K Clear command line
```
## Move cursor
```
CTRL-E End of line
CTRL-A Beginning of line
OPT-F Forward 1 word
CTRL-F Forward 1 character
OPT-B Back 1 word
CTRL-B Back 1 character
```
# Beautiful Gunbari

- **type**: manhwa
- **original-name**: 뷰티풀 군바리
- **start-date**: 2015-02-23
## Tags
- comedy
- drama
- slice-of-life
- military
## Authors
- Seoli (Story)
- Yoon, Sung-Won (Art)
## Synopsis
What if a woman goes to the army? Join Su-ah as she shares her personal experience of being part of the army in an action packed slice of life manhwa that shows you the true power of women!
(Source: MU)
## Links
- [My Anime list](https://myanimelist.net/manga/119608/Beautiful_Gunbari)
# EPiServer Headless, Demo App
A demo project for the EPiServer Headless project. Built with Vue to be served in a WebView Xamarin App.
## Build Setup
``` bash
# install dependencies
npm install
# serve with hot reload at localhost:8080
npm run dev
# build for production with minification
npm run build
# build for production and view the bundle analyzer report
npm run build --report
```
| 18.904762 | 104 | 0.763224 | eng_Latn | 0.980091 |
262038b1c6df3d858a8f645d59cb384d9737ab38 | 1,889 | md | Markdown | README.md | coreyja/devto-view-count-graphs | e7d175005b75c59c50ca12624235cdf726214d10 | [
"MIT"
] | 1 | 2021-01-11T19:48:20.000Z | 2021-01-11T19:48:20.000Z | README.md | coreyja/devto-view-count-graphs | e7d175005b75c59c50ca12624235cdf726214d10 | [
"MIT"
] | null | null | null | README.md | coreyja/devto-view-count-graphs | e7d175005b75c59c50ca12624235cdf726214d10 | [
"MIT"
] | 1 | 2021-01-12T06:47:05.000Z | 2021-01-12T06:47:05.000Z | # DEV.to View Count
This project was created for the DEV.to Digital Ocean Hackathon: https://dev.to/devteam/announcing-the-digitalocean-app-platform-hackathon-on-dev-2i1k
It is a Rails app that tracks your DEV.to article stats and can create graphs of your views and comments over time!
## Deploying To Digital Ocean
[](https://cloud.digitalocean.com/apps/new?repo=https://github.com/coreyja/devto-view-count-graphs/tree/main)
Click the button above to get started! Currently Digital Ocean only supports specifying a single service via the template. So this will create a web service and the required DB.
You will need to add a `worker` service to run the DelayedJobs. The template file at `.do/deploy.template.yaml` does have a workers section already filled out, both for reference and
in case Digital Ocean supports those in the future!
### Choosing ENV Vars
Required:
- `DEV_TO_AUTH_TOKEN`: Your dev.to API Token from https://dev.to/settings/account
Optional (These can be removed from the wizard is desired):
- `BASIC_AUTH_USERNAME`: If this and the following are provided, the site will ONLY be available over HTTP Basic Auth
- `BASIC_AUTH_PASSWORD`: If this and the above are provided, the site will ONLY be available over HTTP Basic Auth
- `RAILS_SENTRY_DSN`: The URL to send errors to Sentry
### Post Deploy Setup
After getting the app deployed you will need to kick off the first DelayedJob to start fetching stats every 10 minutes.
This first job will enqueue the next one to run, so this is only needed once during initial setup.
The second job here will fetch your user information from the DEV.to API, including the profile picture.
In a rail console (`bundle exec rails console`) run the following command:
```
Delayed::Job.enqueue FetchAllMyArticlesJob.new
Delayed::Job.enqueue FetchUserDetailsJob
```
| 52.472222 | 182 | 0.782954 | eng_Latn | 0.989134 |
2620e36e6ecdc8a43da8214982f82891368dc267 | 1,140 | md | Markdown | firmware/docs/build_environment/win2_clean.md | SLA00/Kermite | 324c9fcc50baad893c0c8aa2e67a899bca35e176 | [
"MIT"
] | null | null | null | firmware/docs/build_environment/win2_clean.md | SLA00/Kermite | 324c9fcc50baad893c0c8aa2e67a899bca35e176 | [
"MIT"
] | null | null | null | firmware/docs/build_environment/win2_clean.md | SLA00/Kermite | 324c9fcc50baad893c0c8aa2e67a899bca35e176 | [
"MIT"
] | null | null | null | # OSの環境をなるべく汚染しない構成
基本的な構成は[OSの設定でパスを通す場合](./win1_default.md)と同様です。
開発環境の汚染を防ぐため、全部のツールにはパスを通さず、Makeにだけパスを通している点が異なります。
## OSの環境設定でパスを通すもの
* Make for Windows
## Makefileの中でパスを指定して利用するもの
* AVR-GCC
* arm-none-eabi-gcc
* GOW (or CoreUtils)
* MinGW
GNU Makeをグローバルにインストールしてパスを通し、他のツールはパスを通さずMakefile内だけから参照するようにします。追加するツールのうち、コマンドプロンプトから直接呼び出せるものはmakeだけにします。
他のプロジェクトのビルド環境に影響を与えたくない場合にこの構成を検討してください。
## ツールの導入
### Make for Windows
| ツール | [Make for Windows](http://gnuwin32.sourceforge.net/packages/make.htm)
| -------- | :------------------------------------------ |
| ファイル | make-3.81.exe |
| 導入方法 | DL, インストール, binにパスを通す |
Make for Windowsを取得して導入します。インストーラを使用します。
### Make以外のソフトウェア
Make以外のツールの導入手順は[OSの設定でパスを通す場合](./win1_default.md)と同様ですが、パスを通す設定を行いません。
GnuWin32のCoreUtilsとMakeが同じフォルダにインストールされてしまうため、インストールするフォルダを分けるか、CoreUtilsは導入せず代わりにGOWを使用しても良いでしょう。
## Makefile内でのパスの指定
Makefile.user.exampleをコピーしてMakefile.userを作成します。
Makefile.userはビルド時に本体のMakefileから読み込まれる、ユーザ環境での固有の設定などを書いておくためのものです。
ファイルの先頭で、以下のように記述して環境変数のPATHを上書きします。
```
export PATH:=$(PATH);<追加するパス1>;<追加するパス2>;...
```
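As a concrete sketch, Makefile.user might then look like this (the install folders below are hypothetical examples, not installer defaults; adjust them to your machine):

```make
# Makefile.user -- user-specific settings, read by the main Makefile.
# TOOLS_DIR is an assumed example location.
TOOLS_DIR := C:/devtools
export PATH := $(PATH);$(TOOLS_DIR)/avr-gcc/bin;$(TOOLS_DIR)/gcc-arm-none-eabi/bin;$(TOOLS_DIR)/Gow/bin;$(TOOLS_DIR)/MinGW/bin
```

With this in place, only make needs to be on the global PATH; the compilers and shell utilities resolve through the PATH override during the build.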
| 33.529412 | 109 | 0.739474 | yue_Hant | 0.611182 |
2621601b3239b9afaa3d961d80f85faa979dbeaf | 2,279 | md | Markdown | expertise.md | regulawolf/github.io | a43c0185ed1f375255b37331a45edba26119cc36 | [
"CC-BY-3.0"
] | null | null | null | expertise.md | regulawolf/github.io | a43c0185ed1f375255b37331a45edba26119cc36 | [
"CC-BY-3.0"
] | null | null | null | expertise.md | regulawolf/github.io | a43c0185ed1f375255b37331a45edba26119cc36 | [
"CC-BY-3.0"
] | 1 | 2019-09-17T09:45:28.000Z | 2019-09-17T09:45:28.000Z | ---
title: Expertise
date: '2018-09-07T13:24:01.000+00:00'
layout: page
menu_title: Expertise
banner_title: Expertise
banner_subtitle: ''
seo_title: Regula Wolf I Stiftungs- und Public Management
banner_image: "/uploads/steinwasser_2_Ausschnitt.jpg"
keywords:
- Funding measures
- Funding models
- Funding contributions
- Regula Wolf
- Application management
- ceps
- NADEL
- IDHEAP
- Foundation management
- Migros-Kulturprozent
- Bundesamt für Kultur
- Master in Public Management
- Institut für Entwicklungszusammenarbeit
- Regional development
- Expertise
- Switzerland
- Culture
- Social affairs
- Education
- Development cooperation
- Environmental protection
- Species conservation
- Animal welfare
- Research
- Science
description: Expertise of Regula Wolf; offerings for foundations and private and public
  funding organizations covering funding concepts and ready-to-implement funding measures,
  analyses, research, organization of application processing, and support during repositioning
Date_String: 2019-09-24T22:00:00.000+00:00
---
#### About me
Since 2017 I have been supporting grant-making foundations in a variety of ways and on a wide range of topics related to their funding. Before that, I worked in institutional funding for 16 years: first at the Federal Office of Culture (Bundesamt für Kultur), where I was responsible, among other things, for the funding of cultural organizations; then for eleven years as head of the "Grants" department and a member of the management board of the Culture and Social Affairs directorate of the Migros-Genossenschafts-Bund (national Migros-Kulturprozent). At the Migros-Kulturprozent I was also responsible for the fund for development cooperation and regional development. During this time I drew up and implemented several funding concepts.
In the fields of **culture, social affairs, education, and development cooperation** I have in-depth expertise and know the public and private actors in Switzerland as well as successful national and international funding models.
In my work I rely on contemporary management tools. I hold a Master in Public Management (IDHEAP/University of Lausanne) and a specialization in foundation management (CEPS/University of Basel), and I have completed several continuing-education courses at the Institute for Development Cooperation (NADEL/ETH Zurich).
| 47.479167 | 702 | 0.82975 | deu_Latn | 0.991453 |
26222a279723a915b6adde7f3e0a4987e072b3ab | 57 | md | Markdown | README.md | RichardSaxion/HyEnModel | 2e57b433a127ef386fcb0c5dfb65297521c1bc6f | [
"CC0-1.0"
] | null | null | null | README.md | RichardSaxion/HyEnModel | 2e57b433a127ef386fcb0c5dfb65297521c1bc6f | [
"CC0-1.0"
] | null | null | null | README.md | RichardSaxion/HyEnModel | 2e57b433a127ef386fcb0c5dfb65297521c1bc6f | [
"CC0-1.0"
] | null | null | null | # HyEnModel
Hydrogen System Model as part of energy hubs
| 19 | 44 | 0.807018 | eng_Latn | 0.901658 |
262272b2f40527a922ba67b7b633c245e22a99b4 | 652 | md | Markdown | guide/spanish/miscellaneous/html-elements/index.md | SweeneyNew/freeCodeCamp | e24b995d3d6a2829701de7ac2225d72f3a954b40 | [
"BSD-3-Clause"
] | 10 | 2019-08-09T19:58:19.000Z | 2019-08-11T20:57:44.000Z | guide/spanish/miscellaneous/html-elements/index.md | SweeneyNew/freeCodeCamp | e24b995d3d6a2829701de7ac2225d72f3a954b40 | [
"BSD-3-Clause"
] | 2,056 | 2019-08-25T19:29:20.000Z | 2022-02-13T22:13:01.000Z | guide/spanish/miscellaneous/html-elements/index.md | SweeneyNew/freeCodeCamp | e24b995d3d6a2829701de7ac2225d72f3a954b40 | [
"BSD-3-Clause"
] | 5 | 2018-10-18T02:02:23.000Z | 2020-08-25T00:32:41.000Z | ---
title: HTML Elements
localeTitle: Elementos HTML
---
Most HTML elements have an opening tag and a closing tag.
Opening tags look like this: `<h1>`, and closing tags look like this: `</h1>`.
Note that the only difference between opening and closing tags is that closing tags have a forward slash after their opening angle bracket.
There are also some HTML elements that are self-closing.
For example, the image tag `<img />` and the input tag `<input />`.
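A minimal sketch of the cases described above (the file name in `src` is just a placeholder):

```html
<!-- An element with an opening tag and a closing tag -->
<h1>Hello World</h1>

<!-- Self-closing elements have no separate closing tag -->
<img src="photo.jpg" alt="A placeholder photo" />
<input type="text" />
```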
[Try it here!](http://www.freecodecamp.com/challenges/say-hello-to-html-elements)
| 43.466667 | 161 | 0.768405 | spa_Latn | 0.994657 |
26227fe7ddc812b46127b8ee6bef1c53287292ad | 420 | md | Markdown | README.md | stemnic/xv6-rv32-systemc | 6f2dc6e34f99f59d88697a2ff7030dd49422a589 | [
"MIT-0"
] | 8 | 2020-10-09T15:14:17.000Z | 2021-09-30T19:20:18.000Z | README.md | riscv2os/xv6-rv32 | 73b634560546a13f3c1a209c09cc6b584e1de68f | [
"MIT-0"
] | null | null | null | README.md | riscv2os/xv6-rv32 | 73b634560546a13f3c1a209c09cc6b584e1de68f | [
"MIT-0"
] | 6 | 2021-01-15T03:33:39.000Z | 2022-03-05T04:13:09.000Z | # xv6-rv32
This is a port of MIT's xv6 OS [1] to 32 bit RISC V (rv32ia).
This currently runs in qemu-system-riscv32 (tested with qemu-5.0.0) using virtio drivers.
The official version of xv6 supports x86 [1] and 64 bit RISC V (rv64) [3]. See the
original documentation in README.
[1] https://pdos.csail.mit.edu/6.828/2012/xv6.html
[2] https://github.com/mit-pdos/xv6-public
[3] https://github.com/mit-pdos/xv6-riscv
| 35 | 89 | 0.72381 | eng_Latn | 0.510885 |
2622d296e446a4aa2a5aafcfb947b666342d2556 | 3,565 | md | Markdown | papers/mdp_homomorphisms_plannable_approximations.md | smspillaz/reading | b9c014906296162db61886e4e9e8600dbfce2b84 | [
"MIT"
] | 1 | 2022-03-10T06:07:19.000Z | 2022-03-10T06:07:19.000Z | papers/mdp_homomorphisms_plannable_approximations.md | smspillaz/reading | b9c014906296162db61886e4e9e8600dbfce2b84 | [
"MIT"
] | null | null | null | papers/mdp_homomorphisms_plannable_approximations.md | smspillaz/reading | b9c014906296162db61886e4e9e8600dbfce2b84 | [
"MIT"
] | null | null | null | # Plannable Approximatiosn to MDP Homomorphisms: Equivariance under Actions
tl;dr:
- Symmetries may exist in MDPs
- Introduces a contrastive loss function which enforces action equivariance on the learned representations
- When the loss is zero, you have a homomorphism of a deterministic MDP
Equivalence classes: is there some mapping of states and actions such that there is a symmetry, e.g. taking a class of actions in a class of states is the same as taking some other class of actions in some other class of states?
- Basic idea: Use a neural network to map states to latent states, actions to latent actions
- Should collapse according to the homomorphism
Bisimulation Metrics: Matching reward and transition functions, allowing states to be compared with each other:
- $d(s, s') = \max_a (c_R|R(s, a) - R(s', a)| + c_T d_P(T(s, a), T(s', a)))$
- $R(s, a)$ is the reward of taking $a$ in $s$
- $T(s, a)$. is the transition probability
- $d_P$ is the Wasserstein distance or KL divergence
- $c_R$, $c_T$ are constants
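For a deterministic finite MDP this metric can be computed by fixed-point iteration, since $d_P$ collapses to the distance between the two successor states. A minimal Python sketch (the nested-list encoding of $R$ and $T$ is an assumption for illustration, not from the paper):

```python
def bisim_metric(R, T, c_R=1.0, c_T=0.9, iters=100):
    """Fixed-point iteration for the bisimulation metric of a deterministic
    finite MDP. R[s][a] is the reward of taking a in s; T[s][a] is the single
    next state, so d_P(T(s, a), T(s', a)) reduces to d(T(s, a), T(s', a))."""
    n_states, n_actions = len(R), len(R[0])
    d = [[0.0] * n_states for _ in range(n_states)]
    for _ in range(iters):
        # One application of the max-over-actions update to every state pair;
        # the comprehension reads the previous iterate d and builds a new one.
        d = [[max(c_R * abs(R[s][a] - R[s2][a]) + c_T * d[T[s][a]][T[s2][a]]
                  for a in range(n_actions))
              for s2 in range(n_states)]
             for s in range(n_states)]
    return d
```

For example, two self-looping states whose rewards differ by 1 converge toward distance $c_R \cdot 1 / (1 - c_T) = 10$ under these constants.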
Loss function:
$$
\mathcal{L}(\theta, \phi, \xi) = \frac{1}{N} \sum_{n=1}^{N} \left[ d\big(Z_{\theta}(s'_n), \bar T_{\phi}(z_n, \bar A_{\phi}(z_n, a_n))\big) + d\big(R(s_n), \bar R_{\xi}(z_n)\big) \right]
$$
$Z_{\theta}$ is a neural network mapping states to latent states.
$d(z, z')$ is MSE.
$\bar T_{\phi}$ maps $z$ to $z'$ by predicting an action effect, e.g. $\bar T_{\phi}(z, \bar a) = z + \bar A_{\phi}(z, a)$, where $\bar A_{\phi}$ is a neural network.
$\bar R_{\xi}$ predicts the reward from $z$.
Preventing the trivial solution: if you map everything to zero, then all the distance terms also go to zero. This is no good.
A trick to prevent this outcome is a contrastive loss which you tack on the end:
$$
\sum_{\tilde s} \max \big(0, \epsilon - d(Z_{\theta}(\tilde s), \bar T_{\phi}(z_n, \bar A_{\phi}(z_n, a_n)))\big)
$$
the idea being that you want to maximize distances between latents for unrelated states.
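Sketched in minimal Python (the encoder/effect/reward callables and the batch tuple format `(s, a, r, s_next, s_neg)` are illustrative assumptions, not the paper's API), the transition, reward, and hinge terms combine as:

```python
def mse(a, b):
    """Squared-error distance d(z, z') between two latent vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def loss(Z, A_eff, R_pred, batch, eps=1.0):
    """Contrastive MDP-homomorphism loss (sketch).

    Z      : state -> latent vector, z = Z(s)
    A_eff  : (z, a) -> latent action effect, so T(z, a) = z + A_eff(z, a)
    R_pred : z -> predicted reward
    batch  : iterable of (s, a, r, s_next, s_neg) transitions, where s_neg
             is a negative sample unrelated to the transition
    """
    total = 0.0
    for s, a, r, s_next, s_neg in batch:
        z, z_next, z_neg = Z(s), Z(s_next), Z(s_neg)
        z_pred = [zi + di for zi, di in zip(z, A_eff(z, a))]   # latent transition
        total += mse(z_next, z_pred)                 # pull predicted next latent to true one
        total += (r - R_pred(z)) ** 2                # reward-prediction term
        total += max(0.0, eps - mse(z_neg, z_pred))  # hinge: push negatives at least eps away
    return total / len(batch)
```

The loss reaches zero exactly when transitions and rewards are matched in latent space and every negative sample sits at least $\epsilon$ away.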
## Finding an abstract MDP from the structured latent space
Discretization: construct a discrete set $\mathcal{X}$ of prototype latent states, as well as discrete transition and reward functions. To sample the states, use the replay memory and encode the states, pruning duplicates.
Reward function: during planning, you can use the predicted reward $\bar R_{\xi}$.
Transition function: if two states are connected by an action in the state space, then they should be close after applying the latent-space action. The transition function is a distribution over next latent states; use a temperature softmax:
$$
\hat T_{\phi} (z_j| z_i, a) = \frac{e^{-d(z_j, z_i + \bar A_{\phi}(z_i, a))/ \tau}}{\sum_{k \in \mathcal{X}} e^{-d(z_k, z_i + \bar A_{\phi} (z_i, a)) / \tau}}
$$
If an action moves two states close to each other, then the weight of their connection increases.
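A minimal Python sketch of building one row-stochastic transition matrix this way (the `prototypes` list and the `A_eff` callable are illustrative assumptions):

```python
import math

def discrete_transition(prototypes, A_eff, a, tau=0.1):
    """Temperature-softmax transition matrix over prototype latent states.

    prototypes : list of latent vectors, one per abstract state
    A_eff      : (z, a) -> latent action effect
    Returns T with T[i][j] ~ P(z_j | z_i, a).
    """
    def sqdist(u, v):
        return sum((x - y) ** 2 for x, y in zip(u, v))

    T = []
    for z_i in prototypes:
        # Predicted next latent for abstract state i under action a.
        target = [x + dx for x, dx in zip(z_i, A_eff(z_i, a))]
        # Softmax over negative distances: nearby prototypes get the mass.
        w = [math.exp(-sqdist(z_j, target) / tau) for z_j in prototypes]
        total = sum(w)
        T.append([x / total for x in w])
    return T
```

Each row sums to 1, and with a small $\tau$ the mass concentrates on the prototype nearest the predicted next latent, approaching a deterministic transition.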
## Proof that this converges to an MDP homomorphism
![[mdp_homomorphisms_latents.png]]
If the loss converges, all the individual loss terms also go to zero:
- $d(\bar T_{\phi}(z, \bar a), z')$
- $-d(\bar T_{\phi}(z, \bar a), \tilde z)$
- $d(R(s), \bar R_{\xi}(z))$
Positive samples: $d(\bar T_{\phi}(z, \bar a), z')$ has gone to zero, and so $d_{+} \lt \tau$ and $e^{-d_{+} / \tau} \approx 1$
Negative samples: the hinge term $\max(0, \epsilon - d(\bar T_{\phi}(z, \bar a), \tilde z))$ goes to zero, meaning that the distance to all negative samples ($d_{-}$) is $\ge \epsilon$; since $\tau < \epsilon \le d_{-}$, we have $1 \le \frac{d_{-}}{\tau}$, meaning that $e^{-d_{-} / \tau} \approx 0$
So since $M$ is deterministic, $T(s'|s, a)$ transitions to one state with probability 1 and 0 for the others.
| 48.835616 | 269 | 0.690323 | eng_Latn | 0.994567 |
2622f50be56ae8df10dd432307ccc83cb06b51fc | 802 | md | Markdown | docs/api/bindings/ThemeContext.md | theghostyced/fela | e58f5b722ae52acd08c1f5030c9b2549beb61b0e | [
"MIT"
] | 1 | 2020-04-08T17:05:23.000Z | 2020-04-08T17:05:23.000Z | docs/api/bindings/ThemeContext.md | theghostyced/fela | e58f5b722ae52acd08c1f5030c9b2549beb61b0e | [
"MIT"
] | null | null | null | docs/api/bindings/ThemeContext.md | theghostyced/fela | e58f5b722ae52acd08c1f5030c9b2549beb61b0e | [
"MIT"
] | null | null | null | # ThemeContext
ThemeContext is the internal instance of `React.createContext` that is provided by the new [Context API](https://facebook.github.io/react/docs/context.html). It is exposed to be used with React's new [useContext API](https://reactjs.org/docs/hooks-reference.html#usecontext).
> **Note**: Although it is exposed for react-fela as well as preact-fela and inferno-fela, there are no useContext equivalents for the latter yet.
## Imports
```javascript
import { ThemeContext } from 'react-fela'
import { ThemeContext } from 'preact-fela'
import { ThemeContext } from 'inferno-fela'
```
## Example
```javascript
import { useContext } from 'react'
import { ThemeContext } from 'react-fela'
function Button() {
const theme = useContext(ThemeContext)
// do something with the theme
}
```
| 34.869565 | 275 | 0.745636 | eng_Latn | 0.833251 |
262302c79db6437a735e404fd50fd78bb74aa9bd | 13,190 | md | Markdown | windows-driver-docs-pr/netcx/writing-an-mbbcx-client-driver.md | k-takai/windows-driver-docs.ja-jp | f28c3b8e411a2502e6378eaeef88cbae054cd745 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | windows-driver-docs-pr/netcx/writing-an-mbbcx-client-driver.md | k-takai/windows-driver-docs.ja-jp | f28c3b8e411a2502e6378eaeef88cbae054cd745 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | windows-driver-docs-pr/netcx/writing-an-mbbcx-client-driver.md | k-takai/windows-driver-docs.ja-jp | f28c3b8e411a2502e6378eaeef88cbae054cd745 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: MBB NetAdapterCx クライアント ドライバーを作成します。
description: MBB NetAdapter クラスの拡張機能と、クライアント ドライバーが MBB moderm に対して実行する必要がありますタスクの動作について説明します。
ms.assetid: FE69E832-848F-475A-9BF1-BBB198D08A86
keywords:
- (MBB モバイル ブロード バンド) WDF クラスの拡張機能、MBBCx、モバイル ブロード バンド NetAdapterCx
ms.date: 03/19/2018
ms.localizationpriority: medium
ms.openlocfilehash: 18fdbe9f8861a0db9217fdbf4c6276003547a815
ms.sourcegitcommit: 0cc5051945559a242d941a6f2799d161d8eba2a7
ms.translationtype: MT
ms.contentlocale: ja-JP
ms.lasthandoff: 04/23/2019
ms.locfileid: "63353409"
---
# <a name="writing-an-mbbcx-client-driver"></a>Writing an MBBCx client driver
[!include[MBBCx Beta Prerelease](../mbbcx-beta-prerelease.md)]
>[!WARNING]
>The sequence diagrams in this topic are for illustration only. The public contract may change in the future.
## <a name="inf-files-for-mbbcx-client-drivers"></a>INF files for MBBCx client drivers
INF files for MBBCx client drivers are the same as those for other NetAdapterCx client drivers. For details, see [INF files for NetAdapterCx client drivers](inf-files-for-netadaptercx-client-drivers.md).
## <a name="initialize-the-device"></a>Initialize the device
In addition to the [NetAdapter device initialization](device-and-adapter-initialization.md) tasks that NetAdapterCx requires, an MBB client driver must perform the following tasks in its [*EvtDriverDeviceAdd*](https://docs.microsoft.com/windows-hardware/drivers/ddi/content/wdfdriver/nc-wdfdriver-evt_wdf_driver_device_add) callback function:
1. Call [**MbbDeviceInitConfig**](https://docs.microsoft.com/windows-hardware/drivers/ddi/content/mbbcx/nf-mbbcx-mbbdeviceinitconfig) after calling [*NetAdapterDeviceInitConfig*](https://docs.microsoft.com/windows-hardware/drivers/ddi/content/netadapter/nf-netadapter-netadapterdeviceinitconfig) and before calling [*WdfDeviceCreate*](https://docs.microsoft.com/windows-hardware/drivers/ddi/content/wdfdevice/nf-wdfdevice-wdfdevicecreate), referencing the same [**WDFDEVICE\_INIT**](../wdf/wdfdevice_init.md) object that the framework passes in.
2. Call [**MbbDeviceInitialize**](https://docs.microsoft.com/windows-hardware/drivers/ddi/content/mbbcx/nf-mbbcx-mbbdeviceinitialize) to register the MBB-device-specific callbacks, using an initialized [**MBB_DEVICE_CONFIG**](https://docs.microsoft.com/windows-hardware/drivers/ddi/content/mbbcx/ns-mbbcx-_mbb_device_config) structure and the WDFDEVICE object obtained from *WdfDeviceCreate*.
The following example shows how to initialize the MBB device. Error handling has been omitted for brevity.
status = NetAdapterDeviceInitConfig(deviceInit);
status = MbbDeviceInitConfig(deviceInit);
// Set up other callbacks such as Pnp and Power policy
status = WdfDeviceCreate(&deviceInit, &deviceAttributes, &wdfDevice);
MBB_DEVICE_CONFIG mbbDeviceConfig;
MBB_DEVICE_CONFIG_INIT(&mbbDeviceConfig,
EvtMbbDeviceSendMbimFragment,
EvtMbbDeviceReceiveMbimFragment,
EvtMbbDeviceSendServiceSessionData,
EvtMbbDeviceCreateAdapter);
status = MbbDeviceInitialize(wdfDevice, &mbbDeviceConfig);
```
Unlike other types of NetAdapterCx drivers, MBB client drivers must not create a NETADAPTER object from within their *EvtDriverDeviceAdd* callback function. Instead, MBBCx instructs them to do so later.
Next, the client driver must call [**MbbDeviceSetMbimParameters**](https://docs.microsoft.com/windows-hardware/drivers/ddi/content/mbbcx/nf-mbbcx-mbbdevicesetmbimparameters), typically in its [*EvtDevicePrepareHardware*](https://docs.microsoft.com/windows-hardware/drivers/ddi/content/wdfdevice/nc-wdfdevice-evt_wdf_device_prepare_hardware) callback function.
This message flow diagram shows the initialization process:

This message flow diagram also shows the initialization process:

## <a name="handling-mbim-control-messages"></a>Handling MBIM control messages
MBBCx uses the standard MBIM control commands defined in sections 8, 9, and 10 of the MBIM specification revision 1.0 for the control plane. Commands and responses are exchanged through a set of callback functions provided by the client driver and APIs provided by MBBCx. With these function calls, MBBCx mimics the operational model of an MBIM device defined in section 5.3 of the MBIM specification revision 1.0.
- MBBCx sends an MBIM command message to the client driver by invoking its [*EvtMbbDeviceSendMbimFragment*](https://docs.microsoft.com/windows-hardware/drivers/ddi/content/_netvista/nc-mbbcx-evt_mbb_device_send_mbim_fragment) callback function. The client driver completes this send request asynchronously by calling [**MbbRequestComplete**](https://docs.microsoft.com/windows-hardware/drivers/ddi/content/mbbcx/nf-mbbcx-mbbrequestcomplete).
- The client driver notifies MBBCx that a result is available by calling [**MbbDeviceResponseAvailable**](https://docs.microsoft.com/windows-hardware/drivers/ddi/content/mbbcx/nf-mbbcx-mbbdeviceresponseavailable).
- MBBCx fetches the MBIM response message from the client driver by invoking its [*EvtMbbDeviceReceiveMbimFragment*](https://docs.microsoft.com/windows-hardware/drivers/ddi/content/mbbcx/nc-mbbcx-evt_mbb_device_receive_mbim_fragment) callback function. The client driver completes this get-response request asynchronously by calling [**MbbRequestCompleteWithInformation**](https://docs.microsoft.com/windows-hardware/drivers/ddi/content/mbbcx/nf-mbbcx-mbbrequestcompletewithinformation).
- The MBB client driver may notify MBBCx of an unsolicited device event by calling **MbbDeviceResponseAvailable**. MBBCx then fetches the information from the client driver, just as it fetches an MBIM response message.
The following diagram illustrates the message exchange flow for an MBBCx client driver:

### <a name="synchronization-of-mbim-control-messages"></a>Synchronization of MBIM control messages
The MBBCx framework always serializes calls to the client driver's *EvtMbbDeviceSendMbimFragment* and *EvtMbbDeviceReceiveMbimFragment* callback functions. The framework makes no new call until the client driver calls either **MbbRequestComplete** or **MbbRequestCompleteWithInformation**.
While the client driver is guaranteed not to receive overlapping *EvtMbbDeviceSendMbimFragment* or *EvtMbbDeviceReceiveMbimFragment* callbacks, it may receive them multiple times back to back before it provides the response to the previous command, if the device is capable of that.
If the device is not in the *D0* state, the MBBCx framework first brings the device to D0 (that is, it calls [*EvtDeviceD0Entry*](https://docs.microsoft.com/windows-hardware/drivers/ddi/content/wdfdevice/nc-wdfdevice-evt_wdf_device_d0_entry)) before calling *EvtMbbDeviceSendMbimFragment* or *EvtMbbDeviceReceiveMbimFragment*. The MBBCx framework also guarantees that the device stays in the D0 state, meaning that it does not call [*EvtDeviceD0Exit*](https://docs.microsoft.com/windows-hardware/drivers/ddi/content/wdfdevice/nc-wdfdevice-evt_wdf_device_d0_exit), until the client calls **MbbRequestComplete** or **MbbRequestCompleteWithInformation**.
## <a name="creating-the-netadapter-interface-for-the-pdp-contexteps-bearer"></a>Creating the NetAdapter interface for the PDP context/EPS bearer
Before establishing a data session, MBBCx instructs the client driver to create a NETADAPTER object, which MBBCx uses to represent the network interface for the activated data session. It does this by calling the MBBCx client driver's [*EvtMbbDeviceCreateAdapter*](https://docs.microsoft.com/windows-hardware/drivers/ddi/content/mbbcx/nc-mbbcx-evt_mbb_device_create_adapter) callback function.
In its implementation of the *EvtMbbDeviceCreateAdapter* callback function, an MBBCx client driver must first perform the same tasks required of all NetAdapterCx client drivers to create the NETADAPTER object. In addition, it must perform the following tasks:
1. Call [**MbbAdapterInitialize**](https://docs.microsoft.com/windows-hardware/drivers/ddi/content/mbbcx/nf-mbbcx-mbbadapterinitialize) on the NETADAPTER object created by [*NetAdapterCreate*](https://docs.microsoft.com/windows-hardware/drivers/ddi/content/netadapter/nf-netadapter-netadaptercreate).
2. After calling *MbbAdapterInitialize*, call [**MbbAdapterGetSessionId**](https://docs.microsoft.com/windows-hardware/drivers/ddi/content/mbbcx/nf-mbbcx-mbbadaptergetsessionid) to retrieve the ID of the data session for which MBBCx will use this NETADAPTER object. For example, if the returned value is 0, MBBCx will use the data session over this NETADAPTER interface to establish the primary PDP context or default EPS bearer.
3. An MBBCx client driver should maintain an internal mapping between the created NETADAPTER object and the returned *SessionId*. This is especially helpful for tracking the data-session-to-NETADAPTER-object relationship when more than one PDP context/EPS bearer is activated.
4. Before returning from *EvtMbbDeviceCreateAdapter*, the client driver must start the adapter by calling [**NetAdapterStart**](https://docs.microsoft.com/windows-hardware/drivers/ddi/content/netadapter/nf-netadapter-netadapterstart). Optionally, it can also set the adapter's capabilities by calling one or more of these functions *before* calling **NetAdapterStart**:
- [**NetAdapterSetDatapathCapabilities**](https://docs.microsoft.com/windows-hardware/drivers/ddi/content/netadapter/nf-netadapter-netadaptersetdatapathcapabilities)
- [**NetAdapterSetLinkLayerCapabilities**](https://docs.microsoft.com/windows-hardware/drivers/ddi/content/netadapter/nf-netadapter-netadaptersetlinklayercapabilities)
- [**NetAdapterSetLinkLayerMtuSize**](https://docs.microsoft.com/windows-hardware/drivers/ddi/content/netadapter/nf-netadapter-netadaptersetlinklayermtusize)
- [**NetAdapterSetPowerCapabilities**](https://docs.microsoft.com/windows-hardware/drivers/ddi/content/netadapter/nf-netadapter-netadaptersetpowercapabilities)
Because there is always one NETADAPTER object for the primary PDP context and default EPS bearer, MBBCx calls this callback function at least once. If more than one PDP context/EPS bearer is activated, MBBCx calls this callback function one additional time for every data session established. There must be a one-to-one relationship between a NETADAPTER object and the network interface represented by a data session, as shown in the following diagram:

The following example shows how to create a NETADAPTER object for a data session. Note that error handling and the code required to set the adapter's capabilities have been left out for brevity.
```C++
NTSTATUS
EvtMbbDeviceCreateAdapter(
WDFDEVICE Device,
PNETADAPTER_INIT AdapterInit
)
{
// Get the client driver defined per-device context
PMY_DEVICE_CONTEXT deviceContext = MyGetDeviceContext(Device);
// Set up the client driver defined per-adapter context
WDF_OBJECT_ATTRIBUTES adapterAttributes;
WDF_OBJECT_ATTRIBUTES_INIT_CONTEXT_TYPE(&adapterAttributes,
MY_NETADAPTER_CONTEXT);
// Create the NETADAPTER object
NETADAPTER netAdapter;
NTSTATUS status = NetAdapterCreate(AdapterInit,
&adapterAttributes,
&netAdapter);
// Initialize the adapter for MBB
status = MbbAdapterInitialize(netAdapter);
// Retrieve the Session ID and use an array to store
// the session <-> NETADAPTER object mapping
ULONG sessionId;
PMY_NETADAPTER_CONTEXT netAdapterContext = MyGetNetAdapterContext(netAdapter);
netAdapterContext->NetAdapter = netAdapter;
sessionId = MbbAdapterGetSessionId(netAdapter);
netAdapterContext->SessionId = sessionId;
deviceContext->Sessions[sessionId].NetAdapterContext = netAdapterContext;
//
// Optional: set adapter capabilities
//
...
NetAdapterSetDatapathCapabilities(netAdapter,
&txCapabilities,
&rxCapabilities);
...
NetAdapterSetLinkLayerCapabilities(netAdapter,
&linkLayerCapabilities);
...
NetAdapterSetLinkLayerMtuSize(netAdapter,
MY_MAX_PACKET_SIZE - ETHERNET_HEADER_LENGTH);
//
// Required: start the adapter
//
status = NetAdapterStart(netAdapter);
return status;
}
```
For a code example of setting the data path capabilities, see [Network data buffer management](network-data-buffer-management.md).
MBBCx guarantees that it calls *EvtMbbDeviceCreateAdapter* before it requests **MBIM_CID_CONNECT** with the same session ID. The following flow diagram shows the interaction between the client driver and the class extension for creating a NETADAPTER object:

The flow for creating the NETADAPTER object for the primary PDP context/default EPS bearer is initiated by MBBCx and happens after [*EvtDevicePrepareHardware*](https://docs.microsoft.com/windows-hardware/drivers/ddi/content/wdfdevice/nc-wdfdevice-evt_wdf_device_prepare_hardware) completes successfully.
The flow for creating NETADAPTER objects for secondary PDP contexts/dedicated EPS bearers is triggered by *WwanSvc* whenever an application requests an on-demand connection.
### <a name="lifetime-of-the-netadapter-object"></a>Lifetime of the NETADAPTER object
A NETADAPTER object created by the client driver is destroyed automatically when MBBCx no longer uses it, for example after an additional PDP context/EPS bearer is deactivated. **MBBCx client drivers must not call [WdfObjectDelete](https://docs.microsoft.com/windows-hardware/drivers/ddi/content/wdfobject/nf-wdfobject-wdfobjectdelete) on the NETADAPTER objects they create.**
If the client driver needs to clean up context data associated with a NETADAPTER object, it should specify an [*EvtDestroyCallback*](https://docs.microsoft.com/windows-hardware/drivers/ddi/content/wdfobject/nc-wdfobject-evt_wdf_object_context_destroy) function in the object's attributes structure when calling **NetAdapterCreate**.
## <a name="power-management-of-the-mbb-device"></a>Power management of the MBB device
For power management, client drivers use the NETPOWERSETTINGS object, [like other types of NetAdapterCx client drivers](configuring-power-management.md).
## <a name="handling-device-service-sessions"></a>Handling device service sessions
When an application sends DSS data to the modem device, MBBCx invokes the client driver's [*EvtMbbDeviceSendServiceSessionData*](https://docs.microsoft.com/windows-hardware/drivers/ddi/content/mbbcx/nc-mbbcx-evt_mbb_device_send_device_service_session_data) callback function. The client driver sends the data to the device asynchronously and then calls [**MbbDeviceSendDeviceServiceSessionDataComplete**](https://docs.microsoft.com/windows-hardware/drivers/ddi/content/mbbcx/nf-mbbcx-mbbdevicesenddeviceservicesessiondatacomplete) when the send is complete, so that MBBCx can free the memory allocated for the data.
Conversely, the client driver calls [**MbbDeviceReceiveDeviceServiceSessionData**](https://docs.microsoft.com/windows-hardware/drivers/ddi/content/mbbcx/nf-mbbcx-mbbdevicereceivedeviceservicesessiondata) to pass data up to applications through MBBCx.
| 65.95 | 531 | 0.793556 | yue_Hant | 0.949781 |
26241fb4e545f055df3b2be85cfe76c0feb098d9 | 1,774 | md | Markdown | windows-driver-docs-pr/devtest/wdftester-installation.md | DriverRestore/windows-driver-docs | 444b23fdd0c241c8fd841c23d3b069f51dc3c333 | [
"CC-BY-3.0"
] | null | null | null | windows-driver-docs-pr/devtest/wdftester-installation.md | DriverRestore/windows-driver-docs | 444b23fdd0c241c8fd841c23d3b069f51dc3c333 | [
"CC-BY-3.0"
] | null | null | null | windows-driver-docs-pr/devtest/wdftester-installation.md | DriverRestore/windows-driver-docs | 444b23fdd0c241c8fd841c23d3b069f51dc3c333 | [
"CC-BY-3.0"
] | null | null | null | ---
title: WdfTester Installation
description: WdfTester Installation
ms.assetid: 39645ca4-3f4e-4a1f-bf62-7b44856ce58e
---
# WdfTester Installation
Before you can run the WdfTester tool on your driver, you must first copy the WdfTester files to a working directory and run an installation script.
**To install WdfTester**
1. Copy the following list of files from the WDK (*%WDKRoot%*\\tools\\*<platform>*) to a local folder that contains a copy of your driver binary.
Wdftester.sys
Wdftester.inf
Wdftester.ctl
Wdftester.tmf
WdftesterScript.wsf
2. Open a Command Prompt window (be sure to **Run as Administrator** on Windows Vista), and then type the following command:

   `cscript WdfTesterScript.wsf install`
This command installs the Wdftester.sys driver and starts the service.
3. Press Enter.
| 50.685714 | 931 | 0.797069 | eng_Latn | 0.736121 |
262445d0c8e51bcf2526aee3caba426c20db406f | 260 | md | Markdown | OpenLayers-Tests/mapWithInfoOnClick/README.md | Gouga34/OpenGeo | 0b8842fe5cdb5b195a4df872753328a23429441e | [
"MIT"
] | null | null | null | OpenLayers-Tests/mapWithInfoOnClick/README.md | Gouga34/OpenGeo | 0b8842fe5cdb5b195a4df872753328a23429441e | [
"MIT"
] | null | null | null | OpenLayers-Tests/mapWithInfoOnClick/README.md | Gouga34/OpenGeo | 0b8842fe5cdb5b195a4df872753328a23429441e | [
"MIT"
] | null | null | null | The goal is to display the information about one of the displayed parcels when it is clicked.
This code is based on the example from the documentation: [WMS GetFeatureInfo (Image Layer)](http://openlayers.org/en/v3.14.2/examples/getfeatureinfo-image.html)
| 86.666667 | 146 | 0.788462 | fra_Latn | 0.956363 |
2624744353c06150ff006c0abd740d1b4fa50fed | 2,572 | md | Markdown | articles/vs-azure-tools-access-private-azure-clouds-with-visual-studio.md | ggailey777/azure-docs | 4520cf82cb3d15f97877ba445b0cfd346c81a034 | [
"CC-BY-3.0"
] | null | null | null | articles/vs-azure-tools-access-private-azure-clouds-with-visual-studio.md | ggailey777/azure-docs | 4520cf82cb3d15f97877ba445b0cfd346c81a034 | [
"CC-BY-3.0"
] | null | null | null | articles/vs-azure-tools-access-private-azure-clouds-with-visual-studio.md | ggailey777/azure-docs | 4520cf82cb3d15f97877ba445b0cfd346c81a034 | [
"CC-BY-3.0"
] | 1 | 2019-03-31T17:25:38.000Z | 2019-03-31T17:25:38.000Z | ---
title: Accessing private Azure clouds with Visual Studio | Microsoft Docs
description: Learn how to access private cloud resources by using Visual Studio.
services: visual-studio-online
documentationcenter: na
author: TomArcher
manager: douge
editor: ''
ms.assetid: 9d733c8d-703b-44e7-a210-bb75874c45c8
ms.service: multiple
ms.devlang: dotnet
ms.topic: article
ms.tgt_pltfrm: na
ms.workload: multiple
ms.date: 08/15/2016
ms.author: tarcher
---
# Accessing private Azure clouds with Visual Studio
## Overview
By default, Visual Studio supports public Azure cloud REST endpoints. This can be a problem, though, if you're using Visual Studio with a private Azure cloud. You can use certificates to configure Visual Studio to access private Azure cloud REST endpoints. You can get these certificates through your Azure publish settings file.
## To access a private Azure cloud in Visual Studio
1. In the [Azure classic portal](http://go.microsoft.com/fwlink/?LinkID=213885) for the private cloud, download your publish settings file, or contact your administrator for a publish settings file. On the public version of Azure, the link to download this is [https://manage.windowsazure.com/publishsettings/](https://manage.windowsazure.com/publishsettings/). (The file you download should have a .publishsettings extension.)
2. In **Server Explorer** in Visual Studio, choose the **Azure** node and, on the shortcut menu, choose the **Manage Subscriptions** command.

3. In the **Manage Microsoft Azure Subscriptions** dialog box, choose the **Certificates** tab, and then choose the **Import** button.

4. In the **Import Microsoft Azure Subscriptions** dialog box, browse to the folder where you saved the publish settings file and choose the file, then choose the **Import** button. This imports the certificates in the publish settings file into Visual Studio. You should now be able to interact with your private cloud resources.

## Next steps
[Publishing to an Azure Cloud Service from Visual Studio](https://msdn.microsoft.com/library/azure/ee460772.aspx)
[How to: Download and Import Publish Settings and Subscription Information](https://msdn.microsoft.com/library/dn385850\(v=nav.70\).aspx)
| 62.731707 | 427 | 0.783826 | eng_Latn | 0.896297 |
262641a3cee71862eaad933da3ff14b9498d736d | 56 | md | Markdown | node_modules/gregorian-calendar-format/HISTORY.md | shengwenjia/ReactNews | e2e9751d59f1cdb016ee99735cd07a32ce75b15a | [
"MIT"
] | 1 | 2017-02-20T11:53:20.000Z | 2017-02-20T11:53:20.000Z | node_modules/gregorian-calendar-format/HISTORY.md | shengwenjia/ReactNews | e2e9751d59f1cdb016ee99735cd07a32ce75b15a | [
"MIT"
] | null | null | null | node_modules/gregorian-calendar-format/HISTORY.md | shengwenjia/ReactNews | e2e9751d59f1cdb016ee99735cd07a32ce75b15a | [
"MIT"
] | null | null | null | # History
----
## 4.1.0 / 2016-01-22
- support YY/YYYY | 9.333333 | 21 | 0.553571 | eng_Latn | 0.204051 |
2626dcd1c5de1cbb93256a5fc85dd75f03b6fbf6 | 1,080 | md | Markdown | content/appcenter/market/serviceprovider/10_prerequisite.md | Wciel/yiqiyun-government-docs | 401baef9981aadfe6af243dbee8326fab20c82d9 | [
"Apache-2.0"
] | null | null | null | content/appcenter/market/serviceprovider/10_prerequisite.md | Wciel/yiqiyun-government-docs | 401baef9981aadfe6af243dbee8326fab20c82d9 | [
"Apache-2.0"
] | 41 | 2021-10-11T05:37:26.000Z | 2022-01-20T03:44:06.000Z | content/appcenter/market/serviceprovider/10_prerequisite.md | Wciel/yiqiyun-government-docs | 401baef9981aadfe6af243dbee8326fab20c82d9 | [
"Apache-2.0"
] | 8 | 2021-10-09T02:40:37.000Z | 2022-01-20T03:17:06.000Z | ---
title: "入驻须知"
description: 入驻须知
weight: 10
draft: false
---
若您需要在**山东省计算中心云平台应用市场**发布应用,需要 [申请入驻](http://appcenter.yiqiyun.sd.cegn.cn/apply) 山东省计算中心云平台应用市场。
入驻山东省计算中心云平台应用市场需要满足[企业入驻条件](#企业条件)。在应用上架之前,需要完成[企业资质](#企业资质)的审核和[相关合约](#相关合约)的签署。
<img src="../../_images/um_appserver_apply.png" style="zoom:60%;" />
## 企业资质
[申请入驻](http://appcenter.yiqiyun.sd.cegn.cn/apply) 时需要提供以下企业资质:
### 企业条件
- 中华人民共和国境内的合法企业
- 具有纳税人资格
### 提交材料
- 营业执照副本:需完成有效年检且所发布的应用/服务属于经营范围内
- 税务登记证副本:国税、地税均可;请优先上传盖有国税局印章的税务登记
- 组织机构代码证副本
- 法定代表人身份证:正反面扫描件(正反面身份证复印区域都需盖上企业公章,也就是两个公章)
> **注意:**
>
> **若为2015年10月1日“三证合一 一照一码”后的新版营业执照,无需提供税务登记证副本及组织机构代码证副本**
### 注意事项
- 请服务商正确上传对应的扫描件并确保清晰
- 所有扫描件需要加盖企业公章(红色公章)
- 营业执照需 2013 年年检(若为 2014 年新版营业执照可不年检)
- 组织机构代码证需2013年年检(若为2014年注册可不年检,目前上海北京不需要年检)
## 相关合约
企业资质审核通过之后,意味着用户已经成为“山东省计算中心云平台应用 AppCenter服务商”。根据双方沟通结果,山东省计算中心云平台管理员会将生成的合约通过通知中心发送给服务商进行确认和签署,包括:
- 签署《山东省计算中心云平台 AppCenter服务商专属条款》
- 签署应用相关的《服务商产品清单》
> **注意:**
>
> 合约当中会涉及**保证金**和**基础服务费**。
>
> **保证金**需要在合约生效之后 5 个工作日内缴纳。
>
> **基础服务费会**在名下第一个应用通过审核之后由山东省计算中心云平台系统自动扣除,并且会自动开具对应发票。 | 17.704918 | 100 | 0.738889 | yue_Hant | 0.642113 |
26273e7e021c29eb7addc47301914750e89a18db | 1,938 | md | Markdown | _posts/2019-01-02-emerald.md | kokipedia/kokipedia.github.io | 0c32737aee323773b0edc7afd16de1fd1631bfbd | [
"Apache-2.0"
] | null | null | null | _posts/2019-01-02-emerald.md | kokipedia/kokipedia.github.io | 0c32737aee323773b0edc7afd16de1fd1631bfbd | [
"Apache-2.0"
] | null | null | null | _posts/2019-01-02-emerald.md | kokipedia/kokipedia.github.io | 0c32737aee323773b0edc7afd16de1fd1631bfbd | [
"Apache-2.0"
] | null | null | null | ---
layout: post
title: "Kangaroo Emerald Cookware"
category: cookware
image: cover.jpg
---
Peralatan masak ekonomis berbahan aluminium dengan lapisan _xylan_ yang anti lengket. Cocok banget untuk digunakan sehari-hari.
***
## Simak Videonya...
<div class="video-container">
<iframe src="https://www.youtube.com/embed/L7tKV4yvolE?rel=0" frameborder="0" allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
</div>
***
Sangat pas buat Anda yang baru memulai keluarga kecil atau sebagai kado spesial untuk sahabat tercinta.

Dilengkapi dengan keunggulan _spiral bottom_ yang berfungsi untuk 3M: **M**empercepat, **M**eratakan, dan **M**empertahankan panas.
Dapatkan set cookware berkualitas ini dengan harga bersahabat yang ga bikin kantong bolong.
## Keunggulan Produk :
- Lapisan xylan anti lengket yang aman, memasak hanya dengan sedikit minyak jadi lebih sehat
- Dilengkapi teknologi spiral bottom untuk mempercepat, meratakan, dan mempertahankan panas

- Gagang dilapisi bakelite anti slip, nyaman dan aman saat digenggam

- Desain modern
- Warna lime yang fresh dan eye catchy

- Aman digunakan untuk memasak segala jenis makanan
- Set peralatan basic yang cocok untuk keluarga kecil, penggunaan sehari – hari, maupun kado istimewa
## 1 set terdiri dari 5 item berikut:
- 1 pc Sauce Pan diameter 18 cm
- 1 pc Tutup Kaca
- 1 pc Fry Pan diameter 22 cm
- 1 pc Spatula Nylon
- 1 pc Centong Nylon

## Detail Paket
- Dimensi Produk: 37 x 22 x 14 cm
- Berat satu set: 1,9 kg
## Dengan semua keunggulan di atas, Anda hanya perlu merogoh kocek seharga:
<p style="text-align: center; font-size: 30px; padding: 30px 0; color: #cc0000;">Rp. 189.000,-</p>
Wah, hemat banget kan?
| 31.258065 | 183 | 0.757482 | ind_Latn | 0.884381 |
26275e367e93f7c43b69a8013da8699729faae4c | 418 | md | Markdown | README.md | bmstu-iu9/utp2019-4-chat | d8c5d09f170751cbaa6b564e0a71ff5c46fa4ec8 | [
"MIT"
] | 2 | 2019-08-31T22:31:30.000Z | 2019-08-31T22:32:05.000Z | README.md | bmstu-iu9/utp2019-4-chat | d8c5d09f170751cbaa6b564e0a71ff5c46fa4ec8 | [
"MIT"
] | 7 | 2019-07-23T11:30:09.000Z | 2019-09-22T16:03:21.000Z | README.md | bmstu-iu9/utp2019-4-chat | d8c5d09f170751cbaa6b564e0a71ff5c46fa4ec8 | [
"MIT"
] | 1 | 2019-08-05T18:31:40.000Z | 2019-08-05T18:31:40.000Z | # utp2019-4-chat
Чат (капитан Герман Кульчицкий)
## Члены команды:
* [Кульчицкий Герман](https://github.com/jetsnake)
* [Хробак Юлия](https://github.com/yukhrobak)
* [Ковайкин Роман](https://github.com/kovrom777)
* [Базартинова Фарида](https://github.com/farichase)
* [Хилядникова Ива](https://github.com/ivvhis)
* [Медведев Иван](https://github.com/TherealoneIvan)
* [Лобаев Никита](https://github.com/NikitaLobaev)
| 34.833333 | 52 | 0.739234 | bjn_Latn | 0.071605 |
2627be0bd7b44743911bd64b0a904da8c894acd9 | 103 | md | Markdown | iota-streams-core-mss/README.md | JakeSCahill/streams | ef7fcacf8aec5ab88610f0c9951e09fdee9d549b | [
"MIT",
"Apache-2.0",
"MIT-0"
] | null | null | null | iota-streams-core-mss/README.md | JakeSCahill/streams | ef7fcacf8aec5ab88610f0c9951e09fdee9d549b | [
"MIT",
"Apache-2.0",
"MIT-0"
] | null | null | null | iota-streams-core-mss/README.md | JakeSCahill/streams | ef7fcacf8aec5ab88610f0c9951e09fdee9d549b | [
"MIT",
"Apache-2.0",
"MIT-0"
] | 3 | 2020-10-26T20:22:54.000Z | 2021-10-03T04:46:02.000Z | # A rust implementation of the IOTA Streams Merkle signature scheme over Winternitz one-time signature
| 51.5 | 102 | 0.834951 | eng_Latn | 0.779734 |
26283a2cfcb268c0dd4d8f30f862579bcb8b6284 | 67 | md | Markdown | docs/DevelopersGuide/BrowserUI/options.md | enja-oss/ChromeExtensions | baa34ff4c7a0b88ed3bfbbd66a2a4edf4f5f47bf | [
"CC-BY-3.0"
] | 1 | 2015-03-16T11:13:37.000Z | 2015-03-16T11:13:37.000Z | docs/DevelopersGuide/BrowserUI/options.md | enja-oss/ChromeExtensions | baa34ff4c7a0b88ed3bfbbd66a2a4edf4f5f47bf | [
"CC-BY-3.0"
] | 1 | 2020-08-02T05:12:51.000Z | 2020-08-02T05:12:51.000Z | docs/DevelopersGuide/BrowserUI/options.md | enja-oss/ChromeExtensions | baa34ff4c7a0b88ed3bfbbd66a2a4edf4f5f47bf | [
"CC-BY-3.0"
] | null | null | null | # [Options](https://developer.chrome.com/extensions/options.html)
| 22.333333 | 65 | 0.761194 | kor_Hang | 0.638554 |
2629037a8c6a2b6cff3e74645617863635fc3be3 | 1,218 | md | Markdown | docs/csharp/misc/cs1511.md | mtorreao/docs.pt-br | e080cd3335f777fcb1349fb28bf527e379c81e17 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/csharp/misc/cs1511.md | mtorreao/docs.pt-br | e080cd3335f777fcb1349fb28bf527e379c81e17 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/csharp/misc/cs1511.md | mtorreao/docs.pt-br | e080cd3335f777fcb1349fb28bf527e379c81e17 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
description: Erro do Compilador CS1511
title: Erro do Compilador CS1511
ms.date: 07/20/2015
f1_keywords:
- CS1511
helpviewer_keywords:
- CS1511
ms.assetid: c04b5268-5bc3-41db-af6b-463ab1d802b4
ms.openlocfilehash: 678287a3f4d5382ce9d7f11002430f636495d298
ms.sourcegitcommit: 5b475c1855b32cf78d2d1bbb4295e4c236f39464
ms.translationtype: MT
ms.contentlocale: pt-BR
ms.lasthandoff: 09/24/2020
ms.locfileid: "91152046"
---
# <a name="compiler-error-cs1511"></a>Erro do Compilador CS1511
A palavra-chave ' base ' não está disponível em um método estático
A palavra-chave [base](../language-reference/keywords/base.md) foi usada em um método [estático](../language-reference/keywords/static.md) . `base` Só pode ser chamado em um construtor de instância, um método de instância ou um acessador de instância.
## <a name="example"></a>Exemplo
O exemplo a seguir gera CS1511.
```csharp
// CS1511.cs
// compile with: /target:library
public class A
{
public int j = 0;
}
class C : A
{
public void Method()
{
base.j = 3; // base allowed here
}
public static int StaticMethod()
{
base.j = 3; // CS1511
return 1;
}
}
```
| 24.857143 | 254 | 0.688834 | por_Latn | 0.857148 |
262914a743334d39ffc8ca28e40dfe66a8547e07 | 283 | md | Markdown | 2-resources/_GENERAL-RESOURCES/awesome-resources/awesome-list-master/AWESOME.md | eengineergz/Lambda | 1fe511f7ef550aed998b75c18a432abf6ab41c5f | [
"MIT"
] | null | null | null | 2-resources/_GENERAL-RESOURCES/awesome-resources/awesome-list-master/AWESOME.md | eengineergz/Lambda | 1fe511f7ef550aed998b75c18a432abf6ab41c5f | [
"MIT"
] | null | null | null | 2-resources/_GENERAL-RESOURCES/awesome-resources/awesome-list-master/AWESOME.md | eengineergz/Lambda | 1fe511f7ef550aed998b75c18a432abf6ab41c5f | [
"MIT"
] | 1 | 2021-11-05T07:48:26.000Z | 2021-11-05T07:48:26.000Z | # Awesome Lists
a awesome list of awesome lists :joy:
## Awesome Tech
- [Awesome-Tech](https://awesome-tech.readthedocs.io/elasticsearch/)
## Elasticsearch
- [@dzharii](https://github.com/dzharii/awesome-elasticsearch)
## Plotly
- [@ucg8](https://github.com/ucg8j/awesome-dash)
| 20.214286 | 68 | 0.727915 | eng_Latn | 0.263384 |
262a94e0d35f1666ad212a3abbb1222ddd25229e | 1,373 | md | Markdown | catalog/clockwork-planet/en-US_clockwork-planet-manga.md | htron-dev/baka-db | cb6e907a5c53113275da271631698cd3b35c9589 | [
"MIT"
] | 3 | 2021-08-12T20:02:29.000Z | 2021-09-05T05:03:32.000Z | catalog/clockwork-planet/en-US_clockwork-planet-manga.md | zzhenryquezz/baka-db | da8f54a87191a53a7fca54b0775b3c00f99d2531 | [
"MIT"
] | 8 | 2021-07-20T00:44:48.000Z | 2021-09-22T18:44:04.000Z | catalog/clockwork-planet/en-US_clockwork-planet-manga.md | zzhenryquezz/baka-db | da8f54a87191a53a7fca54b0775b3c00f99d2531 | [
"MIT"
] | 2 | 2021-07-19T01:38:25.000Z | 2021-07-29T08:10:29.000Z | # Clockwork Planet

- **type**: manga
- **volumes**: 10
- **chapters**: 51
- **original-name**: クロックワーク・プラネット
- **start-date**: 2013-09-26
- **end-date**: 2013-09-26
## Tags
- fantasy
- sci-fi
- shounen
## Authors
- Kamiya
- Yuu (Story)
- Himana
- Tsubaki (Story)
- Kuro (Art)
## Sinopse
Clockwork Planet is an artificial celestial body created by the elusive and enigmatic clockwork master known as Y. He single-handedly rebuilt the Earth with nothing more than reams of data and gear parts, breathing life into a planet dead for a millennia.
Naota Miura is a not-so-ordinary high school student who possesses exceptionally acute hearing. His gift allows him to hear the turning of even the smallest gears on the planet. He is obsessed with clockwork, so he spends his days toying with gears, making him an outcast at school.
One day, while Naoto is in the shower, a large container suddenly crashes into his apartment. Inside it, he discovers a beautifully preserved automaton by the name of RyuZU, and decides to repair her. Little does Naoto know that this encounter with RyuZU will turn the gears of fate, forever changing the course of humanity.
[Source My Anime List]
## Links
- [My Anime list](https://myanimelist.net/manga/65155/Clockwork_Planet)
| 35.205128 | 324 | 0.736344 | eng_Latn | 0.991984 |
262aa4dcc4a2b92858a303b077dc3761b7c26f7e | 2,533 | md | Markdown | sdk-api-src/content/directxmath/nf-directxmath-xmvectornegativemultiplysubtract.md | amorilio/sdk-api | 54ef418912715bd7df39c2561fbc3d1dcef37d7e | [
"CC-BY-4.0",
"MIT"
] | null | null | null | sdk-api-src/content/directxmath/nf-directxmath-xmvectornegativemultiplysubtract.md | amorilio/sdk-api | 54ef418912715bd7df39c2561fbc3d1dcef37d7e | [
"CC-BY-4.0",
"MIT"
] | null | null | null | sdk-api-src/content/directxmath/nf-directxmath-xmvectornegativemultiplysubtract.md | amorilio/sdk-api | 54ef418912715bd7df39c2561fbc3d1dcef37d7e | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
UID: NF:directxmath.XMVectorNegativeMultiplySubtract
title: XMVectorNegativeMultiplySubtract function (directxmath.h)
description: Computes the difference of a third vector and the product of the first two vectors.
helpviewer_keywords: ["Use DirectX..XMVectorNegativeMultiplySubtract","XMVectorNegativeMultiplySubtract","XMVectorNegativeMultiplySubtract method [DirectX Math Support APIs]","dxmath.xmvectornegativemultiplysubtract"]
old-location: dxmath\xmvectornegativemultiplysubtract.htm
tech.root: dxmath
ms.assetid: M:Microsoft.directx_sdk.arithmetic.XMVectorNegativeMultiplySubtract(XMVECTOR,XMVECTOR,XMVECTOR)
ms.date: 12/05/2018
ms.keywords: Use DirectX..XMVectorNegativeMultiplySubtract, XMVectorNegativeMultiplySubtract, XMVectorNegativeMultiplySubtract method [DirectX Math Support APIs], dxmath.xmvectornegativemultiplysubtract
req.header: directxmath.h
req.include-header: DirectXMath.h
req.target-type: Windows
req.target-min-winverclnt:
req.target-min-winversvr:
req.kmdf-ver:
req.umdf-ver:
req.ddi-compliance:
req.unicode-ansi:
req.idl:
req.max-support:
req.namespace: Use DirectX.
req.assembly:
req.type-library:
req.lib:
req.dll:
req.irql:
targetos: Windows
req.typenames:
req.redist:
ms.custom: 19H1
f1_keywords:
- XMVectorNegativeMultiplySubtract
- directxmath/XMVectorNegativeMultiplySubtract
dev_langs:
- c++
topic_type:
- APIRef
- kbSyntax
api_type:
- COM
api_location:
- directxmathvector.inl
api_name:
- XMVectorNegativeMultiplySubtract
---
# XMVectorNegativeMultiplySubtract function
## -description
Computes the difference of a third vector and the product of the first two vectors.
## -parameters
### -param V1 [in]
Vector multiplier.
### -param V2 [in]
Vector multiplicand.
### -param V3 [in]
Vector subtrahend.
## -returns
Returns the resulting vector. See the remarks.
## -remarks
The following pseudocode demonstrates the operation of the function:
```
XMVECTOR result;
result.x = V3.x - V1.x * V2.x;
result.y = V3.y - V1.y * V2.y;
result.z = V3.z - V1.z * V2.z;
result.w = V3.w - V1.w * V2.w;
return result;
```
<h3><a id="Platform_Requirements"></a><a id="platform_requirements"></a><a id="PLATFORM_REQUIREMENTS"></a>Platform Requirements</h3>
Microsoft Visual Studio 2010 or Microsoft Visual Studio 2012 with the Windows SDK for Windows 8. Supported for Win32 desktop apps, Windows Store apps, and Windows Phone 8 apps.
## -see-also
<a href="/windows/desktop/dxmath/ovw-xnamath-reference-functions-vector-arithmetic">Vector Arithmetic Functions</a> | 26.663158 | 217 | 0.782866 | eng_Latn | 0.51559 |
262aafcb333f2454aefa49f60036c66acc7ee956 | 8,461 | md | Markdown | guide/index.md | rufuspollock/awesome-crypto-critique | 4395459b5ec4fee9d514ef9be098944329301724 | [
"CC0-1.0"
] | 800 | 2022-01-12T13:00:06.000Z | 2022-02-24T10:07:41.000Z | guide/index.md | rufuspollock/awesome-crypto-critique | 4395459b5ec4fee9d514ef9be098944329301724 | [
"CC0-1.0"
] | 31 | 2022-01-13T21:24:06.000Z | 2022-02-24T09:00:51.000Z | guide/index.md | rufuspollock/awesome-crypto-critique | 4395459b5ec4fee9d514ef9be098944329301724 | [
"CC0-1.0"
] | 46 | 2022-01-13T13:50:05.000Z | 2022-02-23T21:43:14.000Z | # Introduction
This page serves as a root from which all other topics branch and can be explored.
## Key Concepts
Understand the terminology used to describe crypto and web3.
* [Web3](/concepts/web3.md)
* [Crypto asset](/concepts/cryptoasset.md)
* [Bitcoin](/concepts/bitcoin.md)
* [Ethereum](/concepts/ethereum.md)
* [Blockchain](/concepts/blockchain.md)
* [Bubble](/concepts/bubble.md)
* [Money](/concepts/money.md)
* [NFT](../concepts/nft.md)
* [Private money](../concepts/private-money.md)
* [Fiat money](../concepts/fiat-money.md)
* [Deflationary asset](../concepts/deflationary.md)
* [Sound money](../concepts/sound-money.md)
* [Ponzi scheme](../concepts/ponzi-scheme.md)
* [Pump and dumps](../concepts/pump-and-dump.md)
* [Exit scam](../concepts/exit-scam.md)
* [Wash trading](../concepts/wash-trading.md)
* [Crypto exchanges](/concepts/crypto-exchange.md)
* [Greater fool theory](../concepts/greater-fool-theory.md)
* [Zero-sum game](../concepts/zero-sum-game.md)
* [Memecoins](../concepts/memecoin.md)
* [Pre-mine](../concepts/pre-mine.md)
* [Assets](/concepts/assets.md)
* [Intrinsic value](../concepts/use-value.md)
* [Store of value](../concepts/store-of-value.md)
* [Speculative asset](/concepts/speculation.md)
* [Market manipulation](../concepts/market-manipulation.md)
* [DeFi](../concepts/defi.md)
* [DAO](/concepts/dao.md)
* [Stablecoin](../concepts/stablecoin.md)
* [Smart contract](../concepts/smart-contracts.md)
* [Regulatory arbitrage](../concepts/regulatory-arbitrage.md)
* [Predatory inclusion](../concepts/predatory-inclusion.md)
* [United States Dollar](/concepts/dollar.md)
* [Central bank digital currency](../concepts/cbdc.md)
* [Proof of Work](../concepts/proof-of-work.md)
* [Proof of Stake](../concepts/proof-of-stake.md)
***
## Contextual
Understand crypto and "web3" in terms of recent news events and interviews and external resources.
* [Video Interviews](/guide/interviews)
1. [Episode #1: Neo-Metallism](../notes/neo-metallism.md)
2. [Episode #2: Market Fundamentalism](../notes/market-fundamentalism.md)
3. [Episode #3: Securities Regulation](/notes/are-crypto-tokens-securities.md)
4. [Episode #4: Post-state Technocracy](../notes/post-state-technocracy.md)
5. [Episode #5: Fintech Incrementalism](../notes/fintech-incrementalism-and-responsible-innovation.md)
6. [Episode #6: Public Goods and Climate Change](../notes/collective-action-problems-and-climate-change.md)
7. [Episode #7: Authoritarianism](../notes/bitcoin-as-anti-authoritarian.md)
8. Episode #8: DeFi and Shadow Banking
* [Recent News Stories](/notes/recent-events.md)
* [Commentary on 'Line Goes Up'](../notes/olson-2022-line-go-up.md)
* [Commentary on 'Web3 is a Libertarian Dystopia'](../notes/web3-dystopia)
* [Commentary on Secretary Yellen's Speech](../notes/yellen-treasury-remarks.md)
* [Commentary on Chairman Gensler's Speech](../notes/sec-remarks.md)
* Commentary on 'Driverless Finance'
***
## Ideologies
Explore crypto and "web3" in terms of different perspectives on politics and economics.
* [Market Fundamentalism](../concepts/market-fundamentalism.md)
* [Financial Nihilism](../concepts/financial-nihilism.md)
* [Austrian Economics](../concepts/austrian-economics.md)
* [Post-state Technocracy](../concepts/post-state-technocracy.md)
* [Libertarianism](../concepts/libertarianism.md)
* [Technolibertarianism](../concepts/technolibertarianism.md)
* [Cryptoanarchism](../concepts/cryptoanarchism.md)
* [Anarchocapitalism](../concepts/anarchocapitalism.md)
* [Keynsian Economics](../concepts/keynsian-economics.md)
* [Technosolutionism](../concepts/technosolutionism.md)
* [Technocollectivism](../concepts/techno-collectivism.md)
* [Technopopulism](../concepts/technopopulism.md)
* [Accelerationism](../concepts/accelerationism.md)
* [Crypto-inevitablism](../concepts/inevitablism.md)
* [Capitalism](../concepts/capitalism.md)
***
## Supporting Concepts
Understand the deeper theoretical concepts behind the technical and economic claims.
#### Economics
* [Artificial scarcity](/concepts/artificial-scarcity.md)
* [Asymmetric information](/concepts/asymmetric-information.md)
* [Central banks](/concepts/central-banks.md)
* [Capital formation](../concepts/capital-formation.md)
* [Currency](/concepts/currency.md)
* [Currency peg](/concepts/currency-peg.md)
* [Free rider problem](/concepts/free-rider-problem.md)
* [Gold standard](/concepts/gold-standard.md)
* [Market manipulation](/concepts/market-manipulation.md)
* [Moral hazard](/concepts/moral-hazard.md)
* [Public goods](/concepts/public-goods-problem.md)
* [Zero-sum game](/concepts/zero-sum-game.md)
* [Liquidity](../concepts/liquidity.md)
* [Pump and dump scheme](../concepts/pump-and-dump.md)
* [Multilevel marketing scheme](../concepts/mlm.md)
* [Pyramid scheme](../concepts/pyramid-scheme.md)
* [Ponzi scheme](../concepts/ponzi-scheme.md)
* [Ponzinomics](../concepts/ponzinomics.md)
* [Meme stocks](../concepts/meme-stock.md)
* [Reserve currency](../concepts/reserve-currency.md)
* [Bretton Woods system](../concepts/bretton-woods.md)
* [Wash trading](../concepts/wash-trading.md)
* [Non-economic](../concepts/non-economic.md)
* [Paper wealth](../concepts/paper-wealth.md)
* [Liquidity pool](../concepts/liquidity-pool.md)
* [Bucket shop](../concepts/bucket-shop.md)
* [Artificial demand](../concepts/artificial-demand.md)
* [Financial asset](../concepts/financial-asset.md)
* [Market mania](../concepts/market-mania.md)
* [Unbanked](../concepts/unbanked.md)
* [Assets](../concepts/assets.md)
* [Real estate](../concepts/real-estate.md)
* [Gold](../concepts/gold.md)
* [Art](../concepts/art.md)
* [Commodity](../concepts/commodity.md)
* [Derivative](../concepts/derivative.md)
* [Bond](../concepts/bond.md)
* [Certificate deposit](../concepts/cd.md)
* [Dollar](../concepts/dollar.md)
* [Commercial paper](../concepts/commercial-paper.md)
* [Credit Default Swap](../concepts/cds.md)
#### Technology
* [Blockchain](/concepts/blockchain.md)
* [Cryptoasset](/concepts/cryptoasset.md)
* [Immutability](../concepts/immutability.md)
* [Crypto Wallet](../concepts/wallet.md)
* [Network effect](../concepts/network-effect.md)
* [Decentralized Finance (DeFi)](/concepts/defi.md)
* [Decentralization](/concepts/decentralization.md)
* [Initial Coin Offering (ICO)](/concepts/ico.md)
* [Non-fungible token (NFT)](/concepts/nft.md)
* [Ransomware](/concepts/ransomware.md)
* [Smart contracts](/concepts/smart-contracts.md)
* [Stablecoin](/concepts/stablecoin.md)
* [Mining](/concepts/mining.md)
* [Permissioned blockchain](../concepts/permissioned-blockchain.md)
* [Central Bank Digital Currency (CBDC)](/concepts/cbdc.md)
* [Automated Market Maker (AMM)](../concepts/amm.md)
* [Decentralized Exchange (DEX)](../concepts/dex.md)
* [Yield Farming](../concepts/yield-farming.md)
* [Hard fork](../concepts/hard-fork.md)
#### Regulation
* [Howey test](../concepts/howey-test.md)
* [Securities framework](/concepts/security.md)
* [Anti-money laundering law](/concepts/aml.md)
* [Know Your Customer Law](/concepts/kyc.md)
* [Money laundering](../concepts/money-laundering.md)
* [Regulatory capture](../concepts/regulatory-capture.md)
* [Money services business](../concepts/money-services-business.md)
* [Deposit Insurance](../concepts/deposit-insurance.md)
* [Counter-terrorism financing](../concepts/ctf.md)
* [Illicit financing](../concepts/illicit-financing.md)
* [Broker-dealer](../concepts/broker.md)
#### Sociology
* [Value](../concepts/value.md)
* [Use value](../concepts/use-value.md)
* [Sign value](../concepts/sign-value.md)
* [Whataboutism](../concepts/whataboutism.md)
* [Bubbles](../concepts/bubble.md)
* [Madness of crowds](../concepts/madness-crowds.md)
* [Decentralization](../concepts/decentralization.md)
* [Recentralization](../concepts/recentralization.md)
* [High control groups](../concepts/high-control-group.md)
* [Thought terminating cliche](../concepts/thought-terminating-cliches.md)
* [Techno-obscurantism](../concepts/techno-obscurantism.md)
* [Bandwagon bias](../concepts/bandwagon-bias.md)
* [Tinkerbell effect](../concepts/tinkerbell-effect.md)
* [Endowment effect](../concepts/endowment-effect.md)
* [Predatory inclusion](../concepts/predatory-inclusion.md)
* [Enclosure](../concepts/enclosure.md)
#### Meta
* [Value](../concepts/value.md)
* [Risk](../concepts/risk.md)
* [Regulation](../concepts/regulation.md)
* [Market manipulation](../concepts/market-manipulation.md)
* [Externalities](../concepts/externalities.md)
| 41.886139 | 108 | 0.726746 | yue_Hant | 0.311583 |
262ab453d1c6b215b120b253e295595e31cbe680 | 3,707 | md | Markdown | docs/visual-basic/programming-guide/language-features/strings/walkthrough-validating-that-passwords-are-complex.md | Youssef1313/docs.it-it | 15072ece39fae71ee94a8b9365b02b550e68e407 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/visual-basic/programming-guide/language-features/strings/walkthrough-validating-that-passwords-are-complex.md | Youssef1313/docs.it-it | 15072ece39fae71ee94a8b9365b02b550e68e407 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/visual-basic/programming-guide/language-features/strings/walkthrough-validating-that-passwords-are-complex.md | Youssef1313/docs.it-it | 15072ece39fae71ee94a8b9365b02b550e68e407 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Validating that passwords are complex
ms.date: 07/20/2015
helpviewer_keywords:
- String data type [Visual Basic], validation
ms.assetid: 5d9a918f-6c1f-41a3-a019-b5c2b8ce0381
ms.openlocfilehash: 6e8697379a6fbb5cc15b60291e5b822897c2c013
ms.sourcegitcommit: 17ee6605e01ef32506f8fdc686954244ba6911de
ms.translationtype: MT
ms.contentlocale: it-IT
ms.lasthandoff: 11/22/2019
ms.locfileid: "74348334"
---
# <a name="walkthrough-validating-that-passwords-are-complex-visual-basic"></a>Walkthrough: Validating That Passwords Are Complex (Visual Basic)
This method checks for some strong-password characteristics and updates a string parameter with information about which checks the password fails.
Passwords can be used in a secure system to authorize a user. However, passwords must be difficult for unauthorized users to guess. Attackers can use a *dictionary attack* program, which iterates through all of the words in a dictionary (or multiple dictionaries in different languages) and tests whether any of the words work as a user's password. Weak passwords such as "Yankee" or "Mustang" can be guessed quickly. Stronger passwords, such as "?You'L1N3vaFiNdMeyeP@sSWerd!", are much less likely to be guessed. A password-protected system should ensure that users choose strong passwords.
A strong password is complex (containing a mixture of uppercase, lowercase, numeric, and special characters) and is not a word. This example demonstrates how to verify complexity.
## <a name="example"></a>Example
### <a name="code"></a>Code
[!code-vb[VbVbcnRegEx#1](~/samples/snippets/visualbasic/VS_Snippets_VBCSharp/VbVbcnRegEx/VB/Class1.vb#1)]
## <a name="compiling-the-code"></a>Compiling the Code
Call this method by passing the string that contains the password.
This example requires:
- Access to the members of the <xref:System.Text.RegularExpressions> namespace. Add an `Imports` statement if member names within the code are not fully qualified. For more information, see [Imports Statement (.NET Namespace and Type)](../../../../visual-basic/language-reference/statements/imports-statement-net-namespace-and-type.md).
## <a name="security"></a>Security
If you are passing the password across a network, you need to use a secure method for transferring data. For more information, see [ASP.NET Web Application Security](https://docs.microsoft.com/previous-versions/aspnet/330a99hc(v=vs.100)).
To improve the accuracy of the `ValidatePassword` function, you can add further complexity checks:
- Compare the password and its substrings against the user's name, the user identifier, and an application-defined dictionary. In addition, treat visually similar characters as equivalent when performing the comparisons; for example, treat the letters "l" and "e" as equivalent to the numerals "1" and "3".
- If there is only one uppercase character, make sure it is not the password's first character.
- Verify that the last two characters of the password are letter characters.
- Do not allow passwords in which all of the symbols are entered from the top row of the keyboard.
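As a rough, hypothetical analogue of such a validator — the walkthrough's VB source is pulled in from an external snippet, so the exact rules below are illustrative rather than the original's — the core complexity checks can be sketched in Python:

```python
import re

def validate_password(password: str) -> list[str]:
    """Return a list of complexity checks that the password fails.

    An empty list means the password passed every check. The rule set
    (length, upper, lower, digit, special) is an assumption standing in
    for the VB ValidatePassword method's checks.
    """
    failures = []
    if len(password) < 8:
        failures.append("too short (minimum 8 characters)")
    if not re.search(r"[A-Z]", password):
        failures.append("no uppercase letter")
    if not re.search(r"[a-z]", password):
        failures.append("no lowercase letter")
    if not re.search(r"\d", password):
        failures.append("no digit")
    if not re.search(r"[^A-Za-z0-9]", password):
        failures.append("no special character")
    return failures
```

Calling `validate_password("Password!")` would report only a missing digit, while a long mixed-character password passes every check.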
## <a name="see-also"></a>See also
- <xref:System.Text.RegularExpressions.Regex>
- [ASP.NET Web Application Security](https://docs.microsoft.com/previous-versions/aspnet/330a99hc(v=vs.100))
| 74.14 | 707 | 0.787969 | ita_Latn | 0.997007 |
262b1f2c8b8ce919c3904961caae1306bea23761 | 4,568 | md | Markdown | docset/winserver2016-ps/dcbqos/Get-NetQosDcbxSetting.md | e0i/windows-powershell-docs | f6f7b8522cd6aeb5d26afdcfc01917239b024536 | ["CC-BY-4.0", "MIT"] | null | null | null | docset/winserver2016-ps/dcbqos/Get-NetQosDcbxSetting.md | e0i/windows-powershell-docs | f6f7b8522cd6aeb5d26afdcfc01917239b024536 | ["CC-BY-4.0", "MIT"] | null | null | null | docset/winserver2016-ps/dcbqos/Get-NetQosDcbxSetting.md | e0i/windows-powershell-docs | f6f7b8522cd6aeb5d26afdcfc01917239b024536 | ["CC-BY-4.0", "MIT"] | null | null | null |
---
author: Kateyanne
description: Use this topic to help manage Windows and Windows Server technologies with Windows PowerShell.
external help file: MSFT_NetQosDcbxSetting.cdxml-help.xml
manager: jasgro
Module Name: DcbQoS
ms.author: v-kaunu
ms.date: 12/27/2016
ms.mktglfcycl: manage
ms.prod: w10
ms.reviewer:
ms.sitesec: library
ms.technology:
ms.topic: reference
online version: https://docs.microsoft.com/powershell/module/dcbqos/get-netqosdcbxsetting?view=windowsserver2016-ps&wt.mc_id=ps-gethelp
schema: 2.0.0
title: Get-NetQosDcbxSetting
---
# Get-NetQosDcbxSetting
## SYNOPSIS
Gets data center bridging exchange settings.
## SYNTAX
### ByIfAlias (Default)
```
Get-NetQosDcbxSetting [[-InterfaceAlias] <String>] [-CimSession <CimSession[]>] [-ThrottleLimit <Int32>]
[-AsJob] [<CommonParameters>]
```
### ByIfIndex
```
Get-NetQosDcbxSetting [[-InterfaceIndex] <UInt32>] [-CimSession <CimSession[]>] [-ThrottleLimit <Int32>]
[-AsJob] [<CommonParameters>]
```
## DESCRIPTION
The **Get-NetQosDcbxSetting** cmdlet gets data center bridging exchange (DCBX) settings.
The only setting that you can configure in Windows Server® 2012 and later is whether the network adapters in the computer accept data center bridging (DCB) configurations from the local computer or from a remote device.
DCB is an IEEE standard, and DCBX is part of that standard.
Windows Server 2012 and later does not implement DCBX itself; however, some network adapters may implement it.
## EXAMPLES
### Example 1: Display settings for a computer
```
PS C:\> Get-NetQosDcbxSetting
Willing
-------
True
```
This command shows that the computer is willing to accept configurations from a remote peer.
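### Example 2: Display settings for a remote computer
The following sketch queries a remote computer through a CIM session. The computer name `Server07` is a placeholder, and the output reflects the configuration of that computer.
```
PS C:\> $Session = New-CimSession -ComputerName "Server07"
PS C:\> Get-NetQosDcbxSetting -CimSession $Session
```
This command gets the DCBX setting from the remote computer named Server07.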
## PARAMETERS
### -AsJob
Runs the cmdlet as a background job. Use this parameter to run commands that take a long time to complete.
The cmdlet immediately returns an object that represents the job and then displays the command prompt.
You can continue to work in the session while the job completes.
To manage the job, use the `*-Job` cmdlets.
To get the job results, use the [Receive-Job](https://go.microsoft.com/fwlink/?LinkID=113372) cmdlet.
For more information about Windows PowerShell background jobs, see [about_Jobs](https://go.microsoft.com/fwlink/?LinkID=113251).
```yaml
Type: SwitchParameter
Parameter Sets: (All)
Aliases:
Required: False
Position: Named
Default value: None
Accept pipeline input: False
Accept wildcard characters: False
```
### -CimSession
Runs the cmdlet in a remote session or on a remote computer.
Enter a computer name or a session object, such as the output of a [New-CimSession](https://go.microsoft.com/fwlink/p/?LinkId=227967) or [Get-CimSession](https://go.microsoft.com/fwlink/p/?LinkId=227966) cmdlet.
The default is the current session on the local computer.
```yaml
Type: CimSession[]
Parameter Sets: (All)
Aliases: Session
Required: False
Position: Named
Default value: None
Accept pipeline input: False
Accept wildcard characters: False
```
### -InterfaceAlias
Specifies the interface alias of the network adapter for which this cmdlet gets DCBX settings.
```yaml
Type: String
Parameter Sets: ByIfAlias
Aliases: IfAlias
Required: False
Position: 0
Default value: None
Accept pipeline input: True (ByPropertyName)
Accept wildcard characters: False
```
### -InterfaceIndex
Specifies the interface index of the network adapter for which this cmdlet gets DCBX settings.
```yaml
Type: UInt32
Parameter Sets: ByIfIndex
Aliases: IfIndex
Required: False
Position: 0
Default value: None
Accept pipeline input: True (ByPropertyName)
Accept wildcard characters: False
```
### -ThrottleLimit
Specifies the maximum number of concurrent operations that can be established to run the cmdlet. If this parameter is omitted or a value of `0` is entered, then Windows PowerShell calculates an optimum throttle limit for the cmdlet based on the number of CIM cmdlets that are running on the computer. The throttle limit applies only to the current cmdlet, not to the session or to the computer.
```yaml
Type: Int32
Parameter Sets: (All)
Aliases:
Required: False
Position: Named
Default value: None
Accept pipeline input: False
Accept wildcard characters: False
```
### CommonParameters
This cmdlet supports the common parameters: -Debug, -ErrorAction, -ErrorVariable, -InformationAction, -InformationVariable, -OutVariable, -OutBuffer, -PipelineVariable, -Verbose, -WarningAction, and -WarningVariable. For more information, see [about_CommonParameters](https://go.microsoft.com/fwlink/?LinkID=113216).
## INPUTS
### None
## OUTPUTS
### Microsoft.Management.Infrastructure.CimInstance#ROOT/StandardCimv2/MSFT_NetQosDcbxSettingData
The `Microsoft.Management.Infrastructure.CimInstance` object is a wrapper class that displays Windows Management Instrumentation (WMI) objects.
The path after the pound sign (`#`) provides the namespace and class name for the underlying WMI object.
This cmdlet returns a **MSFT_NetQosDcbxSettingData** object that contains the DCBX setting configured in Windows Server 2012 and later.
## NOTES
## RELATED LINKS
[Set-NetQosDcbxSetting](./Set-NetQosDcbxSetting.md)
| 28.55 | 316 | 0.773643 | eng_Latn | 0.689682 |
262bd5a5fb9be85028bd07f629084cc7a2f237a3 | 72 | md | Markdown | README.md | knowankit/starter-template | 0ce2326a8fcc92db2d568649aac6cf1c559fb8c3 | [
"MIT"
] | null | null | null | README.md | knowankit/starter-template | 0ce2326a8fcc92db2d568649aac6cf1c559fb8c3 | [
"MIT"
] | null | null | null | README.md | knowankit/starter-template | 0ce2326a8fcc92db2d568649aac6cf1c559fb8c3 | [
"MIT"
] | 1 | 2019-10-24T00:18:40.000Z | 2019-10-24T00:18:40.000Z | # StartUpTemplate
A basic template for representing any startup company
| 24 | 53 | 0.847222 | eng_Latn | 0.992516 |
262bd72451207d202778df7d0a1d958e0e3b8eaf | 1,808 | md | Markdown | README.md | 0xtreelike/Earec | c9c62e97fa8fa6b96cab09bca1aacc61d6c9e5f3 | [
"MIT"
] | 1 | 2022-02-25T12:07:13.000Z | 2022-02-25T12:07:13.000Z | README.md | 0xtreelike/Earec | c9c62e97fa8fa6b96cab09bca1aacc61d6c9e5f3 | [
"MIT"
] | null | null | null | README.md | 0xtreelike/Earec | c9c62e97fa8fa6b96cab09bca1aacc61d6c9e5f3 | [
"MIT"
] | null | null | null | # Earec CLI
# Introduction
The Earec command-line interface (Earec CLI) is a set of commands used to create and manage the activities of an event, and it is designed to get you working quickly at collecting time-series data so you can gain insights.
# Documentation
### Earec CLI commands
1. **Event**
- *create, view, delete, update*
2. **Activity**
- *create, view, delete, update*
3. **Scope**
- *create, delete, update*
### List of available arguments
- *--name*
>name of an event, scope or activity
- *--desc*
>the detailed description of an event
- *--se*
>the name of an event to select
- *--sc*
>the name of a scope to select
- *--act*
>an activity name
- *--to*
>new name of an event, scope or activity to update
### Examples
1. Create
- Event
- earec event create --name **cats** --desc **lifespan**
- Scope
- earec scope create --se **cats** --name **mycat**
- Activity
- earec activity create --sc **mycat** --act **'newborn kittens'**
- earec activity create --sc **mycat** --act **'eyes begin to open'**
2. View
- Event
- earec event view --name **cats**
- Activity
- earec activity view --sc **mycat**
3. Delete
- Event
- earec event delete --name **cats**
- Scope
- earec scope delete --se **cats** --name **mycat**
- Activity
- earec activity delete --sc **mycat** --name **'newborn kittens'**
4. Update
- Event
- earec event update --name **cats** --to **animals**
- Scope
- earec scope update --se **cats** --name **mycat** --to **cats**
- Activity
- earec activity update --sc **cats** --name **'eyes begin to open'** --to **'eyes are opening'**
| 28.698413 | 207 | 0.56969 | eng_Latn | 0.915382 |
262c03afe9e93f788bdd3f8c02bb57edcca64c4e | 2,616 | md | Markdown | Skype/SfbServer/schema-reference/call-detail-recording-cdr-database-schema/conferences.md | GabGonzalezPM/OfficeDocs-SkypeForBusiness | 97269bfc910d64bda83ff270da636a1c2434a9d7 | [
"CC-BY-4.0",
"MIT"
] | 1 | 2020-07-07T17:18:48.000Z | 2020-07-07T17:18:48.000Z | Skype/SfbServer/schema-reference/call-detail-recording-cdr-database-schema/conferences.md | GabGonzalezPM/OfficeDocs-SkypeForBusiness | 97269bfc910d64bda83ff270da636a1c2434a9d7 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | Skype/SfbServer/schema-reference/call-detail-recording-cdr-database-schema/conferences.md | GabGonzalezPM/OfficeDocs-SkypeForBusiness | 97269bfc910d64bda83ff270da636a1c2434a9d7 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: "Conferences table in Skype for Business Server 2015"
ms.reviewer:
ms.author: v-lanac
author: lanachin
manager: serdars
ms.date: 7/15/2015
audience: ITPro
ms.topic: article
ms.prod: skype-for-business-itpro
f1.keywords:
- NOCSH
localization_priority: Normal
ms.assetid: c3da6271-b3c6-4898-894f-10456ec794d0
description: "Each record in this table contains call details about one conference."
---
# Conferences table in Skype for Business Server 2015
Each record in this table contains call details about one conference.
|**Column**|**Data Type**|**Key/Index**|**Details**|
|:-----|:-----|:-----|:-----|
|**SessionIdTime** <br/> |datetime <br/> |Primary <br/> |Time that the conference request was captured by the CDR agent. Used only as a primary key to uniquely identify a conference instance. <br/> |
|**SessionIdSeq** <br/> |int <br/> |Primary <br/> |ID number to identify the session. Used in conjunction with **SessionIdTime** to uniquely identify a conference instance. * <br/> |
|**ConferenceUriId** <br/> |int <br/> |Foreign <br/> |Conference URI. See the [ConferenceUris table in Skype for Business Server 2015](conferenceuris.md) for more information. <br/> |
|**ConfInstance** <br/> |uniqueidentifier <br/> | <br/> |Useful for recurring conferences; each instance of a recurring conference has the same **ConferenceUri**, but will have a different **ConfInstance**. <br/> |
|**ConferenceStartTime** <br/> |datetime <br/> | <br/> |Conference start time. <br/> |
|**ConferenceEndTime** <br/> |datetime <br/> | <br/> |Conference end time. <br/> |
|**PoolId** <br/> |int <br/> |Foreign <br/> |ID number to identify the pool in which the conference was captured. See the [Pools table](pools.md) for more information. <br/> |
|**OrganizerId** <br/> |Int <br/> |Foreign <br/> |ID number to identify the organizer URI of this conference. See the [Users table](users.md) for more information. <br/> |
|**Flag** <br/> |smallint <br/> || A bit mask that contains Conference Attributes. Possible values are: <br/> 0X01 <br/> Synthetic <br/> Transaction <br/> |
|**Processed** <br/> |bit <br/> ||Internal field used by the Monitoring service. <br/> This field was introduced in Microsoft Lync Server 2013. <br/> |
|**LastModifiedTime** <br/> |Datetime <br/> ||For internal use by the Monitoring service. <br/> This field was introduced in Skype for Business Server 2015. <br/> |
\* For most sessions, SessionIdSeq will have the value of 1. If two sessions start at exactly the same time, the SessionIdSeq for one will be 1, and for the other will be 2, and so on.
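As an illustrative (unofficial) example, the following T-SQL sketch joins this table with the ConferenceUris table to list recent conferences. Table and column names follow the schema described in this article, but verify them against your own monitoring database before use.
```sql
-- Illustrative only: conferences from the last 7 days, with their URIs
SELECT c.SessionIdTime,
       c.SessionIdSeq,
       u.ConferenceUri,
       c.ConferenceStartTime,
       c.ConferenceEndTime
FROM Conferences AS c
JOIN ConferenceUris AS u
    ON u.ConferenceUriId = c.ConferenceUriId
WHERE c.ConferenceStartTime >= DATEADD(day, -7, GETUTCDATE())
ORDER BY c.ConferenceStartTime DESC;
```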
| 67.076923 | 215 | 0.699159 | eng_Latn | 0.913711 |
262dae75e258e5cb43157b6490fbb682eda7997a | 22,580 | md | Markdown | articles/service-fabric/service-fabric-cluster-capacity.md | fuadi-star/azure-docs.nl-nl | 0c9bc5ec8a5704aa0c14dfa99346e8b7817dadcd | [
"CC-BY-4.0",
"MIT"
] | 16 | 2017-08-28T07:45:43.000Z | 2021-04-20T21:12:50.000Z | articles/service-fabric/service-fabric-cluster-capacity.md | fuadi-star/azure-docs.nl-nl | 0c9bc5ec8a5704aa0c14dfa99346e8b7817dadcd | [
"CC-BY-4.0",
"MIT"
] | 575 | 2017-08-30T07:14:53.000Z | 2022-03-04T05:36:23.000Z | articles/service-fabric/service-fabric-cluster-capacity.md | fuadi-star/azure-docs.nl-nl | 0c9bc5ec8a5704aa0c14dfa99346e8b7817dadcd | [
"CC-BY-4.0",
"MIT"
] | 58 | 2017-07-06T11:58:36.000Z | 2021-11-04T12:34:58.000Z | ---
title: Service Fabric cluster capacity planning considerations
description: Node types, durability, reliability, and other considerations when planning your Service Fabric cluster.
ms.topic: conceptual
ms.date: 05/21/2020
ms.author: pepogors
ms.openlocfilehash: 9268dfef15d8302eb31cc1b649c7fd713aab6721
ms.sourcegitcommit: 32e0fedb80b5a5ed0d2336cea18c3ec3b5015ca1
ms.translationtype: MT
ms.contentlocale: nl-NL
ms.lasthandoff: 03/30/2021
ms.locfileid: "105732581"
---
# <a name="service-fabric-cluster-capacity-planning-considerations"></a>Service Fabric cluster capacity planning considerations
Cluster capacity planning is important for every Service Fabric production environment. Key considerations include:
* **Initial number and properties of cluster *node types***
* ***Durability* level of each node type**, which determines the privileges that Service Fabric VMs have within the Azure infrastructure
* ***Reliability* level of the cluster**, which determines the stability of Service Fabric system services and the overall cluster function
This article walks you through the key decision points for each of these areas.
## <a name="initial-number-and-properties-of-cluster-node-types"></a>Initial number and properties of cluster node types
A *node type* defines the size, number, and properties of a set of nodes (virtual machines) in the cluster. Every node type defined in a Service Fabric cluster maps to a [virtual machine scale set](../virtual-machine-scale-sets/overview.md).
Because each node type is a distinct scale set, it can be scaled up or down independently, have different sets of ports open, and have different capacity metrics. For more information about the relationship between node types and virtual machine scale sets, see [Service Fabric cluster node types](service-fabric-cluster-nodetypes.md).
Every cluster requires one **primary node type**, which runs critical system services that provide Service Fabric platform capabilities. Although it is possible to also use primary node types to run your applications, it is recommended to dedicate them solely to running system services.
**Non-primary node types** can be used to define application roles (such as *front-end* and *back-end* services) and to physically isolate services within a cluster. Service Fabric clusters can have zero or more non-primary node types.
The primary node type is configured using the `isPrimary` attribute under the node type definition in the Azure Resource Manager deployment template. See the [NodeTypeDescription object](/azure/templates/microsoft.servicefabric/clusters#nodetypedescription-object) for the full list of node type properties. For example usage, open any *AzureDeploy.json* file in the [Service Fabric cluster samples](https://github.com/Azure-Samples/service-fabric-cluster-templates/tree/master/) and use *Find on Page* to search for the `nodeTypes` object.
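As a minimal illustration, a node type entry in the `nodeTypes` array might look like the following fragment. The values here are examples rather than recommendations, and most other required cluster properties are omitted:
```json
{
  "name": "nt1vm",
  "isPrimary": true,
  "vmInstanceCount": 5,
  "durabilityLevel": "Silver",
  "clientConnectionEndpointPort": 19000,
  "httpGatewayEndpointPort": 19080,
  "applicationPorts": { "startPort": 20000, "endPort": 30000 },
  "ephemeralPorts": { "startPort": 49152, "endPort": 65534 }
}
```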
### <a name="node-type-planning-considerations"></a>Node type planning considerations
The number of initial node types depends on the purpose of your cluster and the applications and services running on it. Consider the following questions:
* ***Does your application have multiple services, and do any of them need to be public or internet facing?***
    Typical applications contain a front-end gateway service that receives input from a client, and one or more back-end services that communicate with the front-end services, with separate networking between the front-end and back-end services. These cases typically require three node types: one primary node type and two non-primary node types (one each for the front-end and back-end services).
* ***Do the services that make up your application have different infrastructure needs, such as more RAM or higher CPU cycles?***
    The front-end service can often run on smaller VMs (VM sizes like D2) that have ports open to the internet. Computationally intensive back-end services might need to run on larger VMs (VM sizes like D4, D6, D15) that are not internet facing. Defining different node types for these services allows you to make more efficient and secure use of the underlying Service Fabric VMs and enables them to scale independently. For more information on estimating the amount of resources you need, see [Capacity planning for Service Fabric applications](service-fabric-capacity-planning.md)
* ***Will your application services need to scale out beyond 100 nodes?***
    A single node type can't reliably scale beyond 100 nodes per virtual machine scale set for Service Fabric applications. Running more than 100 nodes requires additional virtual machine scale sets (and therefore additional node types).
* ***Does your cluster span Availability Zones?***
    Service Fabric supports clusters that span [Availability Zones](../availability-zones/az-overview.md) by deploying node types that are pinned to specific zones, ensuring high availability of your applications. Availability Zones require additional node type planning and minimum requirements. For details, see [Recommended topology for primary node type of Azure Service Fabric clusters spanning across Availability Zones](service-fabric-cross-availability-zones.md#recommended-topology-for-primary-node-type-of-azure-service-fabric-clusters-spanning-across-availability-zones).
When determining the number and properties of node types for the initial creation of your cluster, keep in mind that you can always add, modify, or remove (non-primary) node types once your cluster is deployed. [Primary node types can also be scaled](service-fabric-scale-up-primary-node-type.md) in running clusters (although such operations require careful planning and caution in production environments).
A further consideration for your node type properties is the durability level, which determines the privileges that a node type's VMs have within the Azure infrastructure. Use the size of the VMs you choose for your cluster and the instance count you assign for individual node types to help determine the appropriate durability tier for each of your node types, as described next.
## <a name="durability-characteristics-of-the-cluster"></a>Durability characteristics of the cluster
The *durability tier* indicates the privileges that your Service Fabric VMs have with the underlying Azure infrastructure. This privilege allows Service Fabric to pause any VM-level infrastructure request (such as a reboot, reimage, or migration) that would affect the quorum requirements of the Service Fabric system services and your stateful services.
> [!IMPORTANT]
> The durability tier is set per node type. If none is specified, the *Bronze* tier is used, but Bronze does not offer automatic OS upgrades. *Silver* or *Gold* durability is recommended for production workloads.
The following table lists Service Fabric durability tiers, their requirements, and their affordances.
| Durability tier | Required minimum number of VMs | Supported VM sizes | Updates you make to your virtual machine scale set | Updates and maintenance initiated by Azure |
| ---------------- | ---------------------------- | ---------------------------------------------------------------------------------- | ----------------------------------------------------------- | ------------------------------------------------------------------------------------------------------- |
| Gold | 5 | Full-node sizes dedicated to a single customer (for example, L32s, GS5, G5, DS15_v2, D15_v2) | Can be delayed until approved by the Service Fabric cluster | Can be paused for 2 hours per upgrade domain to provide additional time for replicas to recover from earlier failures |
| Silver | 5 | VMs of single core or above with at least 50 GB of local SSD | Can be delayed until approved by the Service Fabric cluster | Cannot be delayed for any significant period of time |
| Bronze | 1 | VMs with at least 50 GB of local SSD | Won't be delayed by the Service Fabric cluster | Cannot be delayed for any significant period of time |
> [!NOTE]
> The minimum number of VMs listed above is a mandatory requirement for each durability tier. Validations are in place that prevent the creation or modification of virtual machine scale sets that don't meet these requirements.
> [!WARNING]
> With Bronze durability, automatic OS image upgrade isn't available. While [Patch Orchestration Application](service-fabric-patch-orchestration-application.md) (intended only for non-Azure hosted clusters) is *not recommended* for Silver or greater durability levels, it is your only option to automate Windows updates with respect to Service Fabric upgrade domains.
> [!IMPORTANT]
> Regardless of durability level, a [Deallocate](/rest/api/compute/virtualmachinescalesets/deallocate) operation performed on a virtual machine scale set will destroy the cluster.
### <a name="bronze"></a>Bronze
Node types running with Bronze durability obtain no privileges. This means that infrastructure jobs that affect your stateful workloads won't be stopped or delayed. Use Bronze durability for node types that run only stateless workloads. For production workloads, running Silver or above is recommended.
### <a name="silver-and-gold"></a>Silver and Gold
Use Silver or Gold durability for all node types that host stateful services you expect to scale in frequently, and where you want deployment operations to be delayed and capacity to be reduced in favor of simplifying the process. Scale-out scenarios should not affect your choice of durability tier.
#### <a name="advantages"></a>Advantages
* Reduces the number of required steps in scale-in operations (node deactivation and Remove-ServiceFabricNodeState are called automatically).
* Reduces the risk of data loss due to in-place VM size change operations and Azure infrastructure operations.
#### <a name="disadvantages"></a>Disadvantages
* Deployments to virtual machine scale sets and other related Azure resources can time out, be delayed, or be blocked entirely by problems in your cluster or at the infrastructure level.
* Increases the number of [replica lifecycle events](service-fabric-reliable-services-lifecycle.md) (for example, primary swaps) due to automated node deactivations during Azure infrastructure operations.
* Takes nodes out of service for periods of time while Azure platform software updates or hardware maintenance activities are underway. You may see nodes with status Disabling/Disabled during these activities. This reduces the capacity of your cluster temporarily, but should not affect the availability of your cluster or applications.
#### <a name="best-practices-for-silver-and-gold-durability-node-types"></a>Best practices for Silver and Gold durability node types
Follow these recommendations for managing node types with Silver or Gold durability:
* Keep your cluster and applications healthy at all times, and make sure that applications respond to all [service replica lifecycle events](service-fabric-reliable-services-lifecycle.md) (such as replica-in-build getting stuck) in a timely fashion.
* Adopt safer ways to make a VM size change (scale up/down). Changing the VM size of a virtual machine scale set requires careful planning and caution. For details, see [Scale up a Service Fabric node type](service-fabric-scale-up-primary-node-type.md)
* Maintain a minimum count of five nodes for any virtual machine scale set that has durability level Gold or Silver enabled. Your cluster enters an error state if you scale in below this threshold, and you must manually clean up the state (`Remove-ServiceFabricNodeState`) of the removed nodes.
* Each virtual machine scale set with durability level Silver or Gold must map to its own node type in the Service Fabric cluster. Mapping multiple virtual machine scale sets to a single node type prevents the coordination between the Service Fabric cluster and the Azure infrastructure from working properly.
* Do not delete random VM instances; always use the virtual machine scale set scale-in feature. Deleting random VM instances can create imbalances in the VM instance spread across [upgrade domains](service-fabric-cluster-resource-manager-cluster-description.md#upgrade-domains) and [fault domains](service-fabric-cluster-resource-manager-cluster-description.md#fault-domains). This imbalance could adversely affect the system's ability to properly load balance among the service instances/service replicas.
* If you use autoscale, set the rules so that scale-in (removing VM instances) operations are done only one node at a time. Scaling in more than one instance at a time is not safe.
* If deleting or deallocating VMs on the primary node type, never reduce the allocated VM count below what the reliability tier requires. These operations will be blocked indefinitely in a scale set with a durability level of Silver or Gold.
### <a name="changing-durability-levels"></a>Changing durability levels
Within certain constraints, the node type durability level can be adjusted:
* Node types with durability levels Silver or Gold can't be downgraded to Bronze.
* Upgrading from Bronze to Silver or Gold can take a few hours.
* When changing the durability level, be sure to update it both in the Service Fabric extension configuration in your virtual machine scale set resource and in the node type definition in your Service Fabric cluster resource. These values must match.
Another consideration when planning capacity is the reliability level of your cluster, which determines the stability of system services and the cluster as a whole, as described in the following section.
## <a name="reliability-characteristics-of-the-cluster"></a>Reliability characteristics of the cluster
The cluster *reliability tier* determines the number of system services replicas that run on the primary node type of the cluster. The more replicas, the more reliable the system services (and therefore the cluster as a whole).
> [!IMPORTANT]
> The reliability tier is set at the cluster level and determines the minimum number of nodes of the primary node type. For production workloads, a reliability tier of Silver (five nodes or more) or higher is required.
The reliability tier can take the following values:
* **Platinum** - System services run with a target replica set count of nine
* **Gold** - System services run with a target replica set count of seven
* **Silver** - System services run with a target replica set count of five
* **Bronze** - System services run with a target replica set count of three
Here is a recommendation for choosing the reliability tier. The number of seed nodes is also set to the minimum number of nodes for a reliability tier.
| **Number of cluster nodes** | **Reliability tier** |
| --- | --- |
| 1 | *Don't specify the `reliabilityLevel` parameter: the system calculates it.* |
| 3 | Bronze |
| 5 or 6 | Silver |
| 7 or 8 | Gold |
| 9 and up | Platinum |
When increasing or decreasing the size of your cluster (the sum of VM instances across all node types), consider updating the reliability of your cluster from one tier to another. Doing so triggers the cluster upgrades needed to change the system services replica set count. Wait for the upgrade in progress to complete before making any other changes to the cluster, such as adding nodes. You can monitor the progress of the upgrade in Service Fabric Explorer or by running [Get-ServiceFabricClusterUpgrade](/powershell/module/servicefabric/get-servicefabricclusterupgrade)
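For reference, the tier is set through the `reliabilityLevel` property of the cluster resource in an ARM template. The following hedged fragment shows only that property; the many other required cluster properties and the API version shown are placeholders to verify against the template reference:
```json
{
  "type": "Microsoft.ServiceFabric/clusters",
  "apiVersion": "2020-03-01",
  "properties": {
    "reliabilityLevel": "Silver"
  }
}
```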
### <a name="capacity-planning-for-reliability"></a>Capacity planning for reliability
The capacity needs of your cluster are determined by your specific workload and reliability requirements. This section provides general guidance to get you started with capacity planning.
#### <a name="virtual-machine-sizing"></a>Virtual machine sizing
**For production workloads, the recommended VM size (SKU) is [Standard D2_V2](../virtual-machines/dv2-dsv2-series.md) (or equivalent) with a minimum of 50 GB of local SSD, 2 cores, and 4 GiB of memory.** A 50 GB local SSD is recommended at minimum, but some workloads (such as those running Windows containers) require larger disks. When choosing other [VM sizes](../virtual-machines/sizes-general.md) for production workloads, keep the following constraints in mind:
- Partial-core VM sizes, such as Standard A0, are not supported.
- *A-series* VM sizes are not supported for performance reasons.
- Low-priority VMs are not supported.
#### <a name="primary-node-type"></a>Primary node type
**Production workloads** on Azure require a minimum of five primary nodes (VM instances) and the Silver reliability tier. It is recommended to dedicate the cluster's primary node type to system services and to use placement constraints to deploy your application to secondary node types.
**Test workloads** in Azure can run with as few as one or three primary nodes. To configure a one-node cluster, make sure the `reliabilityLevel` setting is omitted entirely in your Resource Manager template (specifying an empty string value for `reliabilityLevel` is not sufficient). If you set up the one-node cluster with the Azure portal, this configuration is done automatically.
> [!WARNING]
> One-node clusters run with a special configuration without reliability, and scaling out is not supported.
#### <a name="non-primary-node-types"></a>Non-primary node types
The minimum number of nodes for a non-primary node type depends on the particular [durability level](#durability-characteristics-of-the-cluster) of the node type. Plan the number of nodes (and durability level) based on the number of replicas of the applications or services that you want to run for the node type, and on whether the workload is stateful or stateless. Keep in mind that you can increase or decrease the number of VMs in a node type at any time after you have deployed the cluster.
##### <a name="stateful-workloads"></a>Stateful workloads
For stateful production workloads that use Service Fabric [reliable collections or reliable actors](service-fabric-choose-framework.md), a minimum and target replica count of five is recommended. With this, in steady state you end up with a replica (of a replica set) in each fault domain and upgrade domain. In general, use the reliability level you set for system services as a guide for the replica count you use for your stateful services.
##### <a name="stateless-workloads"></a>Stateless workloads
For stateless production workloads, the minimum supported non-primary node type size is three (to maintain quorum), but a node type size of five is recommended.
## <a name="next-steps"></a>Next steps
Before configuring your cluster, review the `Not Allowed` [cluster fabric settings](service-fabric-cluster-fabric-settings.md) to avoid having to recreate your cluster later because of otherwise immutable system configuration settings.
For more information on cluster planning, see:
* [Compute planning and scaling](service-fabric-best-practices-capacity-scaling.md)
* [Capacity planning for Service Fabric applications](service-fabric-capacity-planning.md)
* [Disaster recovery planning](service-fabric-disaster-recovery.md)
<!--Image references-->
[SystemServices]: ./media/service-fabric-cluster-capacity/SystemServices.png
# 2020-09-09-CCatHome-git-adam
Carpentry Con at Home Git workshop Part 2
- `git clone <url>`
   - make sure you are not in another repo
   - just like `git init`, do only one per repo
- `git branch <branch_name>`: create a new branch where you are (`HEAD`)
- `git checkout <branch_name>`: move to another branch
- `git switch <branch_name>`: newer equivalent of `git checkout <branch_name>` (Git 2.23+)
- shortcuts for creating branches, will create and move to them in 1 go:
   - `git checkout -b <branch_name>`
   - `git switch -c <branch_name>`
- `git branch -d <branch_name>`: this will delete <branch_name> on your local computer
- `git fetch --prune`: will update your local git tree with the remote
- the prune will also delete references to branches that were deleted on the remote
- Each pull request is independent
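As a self-contained sketch, the branch commands above can be exercised end to end in a throwaway repository (the paths, identity, and branch name below are made up for illustration):

```shell
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q repo
cd repo
git config user.email "demo@example.com"   # throwaway identity for the sandbox
git config user.name "Demo"
echo hello > notes.txt
git add notes.txt
git commit -qm "initial commit"

# Shortcut form: create the branch and move HEAD to it in one go
git checkout -qb my-new-feature
on_feature=$(git rev-parse --abbrev-ref HEAD)
echo "now on: $on_feature"

# Return to the starting branch, then delete the feature branch locally
git checkout -q -
git branch -qd my-new-feature
remaining=$(git branch --list my-new-feature)
echo "my-new-feature after delete: '${remaining}'"
```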
---
title: "Using Documents | Microsoft Docs"
ms.custom: ""
ms.date: "11/04/2016"
ms.technology: ["cpp-mfc"]
ms.topic: "conceptual"
dev_langs: ["C++"]
helpviewer_keywords: ["documents [MFC], C++ applications", "data [MFC], reading", "documents [MFC]", "files [MFC], writing to", "data [MFC], documents", "files [MFC]", "views [MFC], C++ applications", "document/view architecture [MFC], documents", "reading data [MFC], documents and views", "printing [MFC], documents", "writing to files [MFC]"]
ms.assetid: f390d6d8-d0e1-4497-9b6a-435f7ce0776c
author: "mikeblome"
ms.author: "mblome"
ms.workload: ["cplusplus"]
---
# Using Documents
Working together, documents and views:
- Contain, manage, and display your application-specific [data](../mfc/managing-data-with-document-data-variables.md).
- Provide an interface consisting of [document data variables](../mfc/managing-data-with-document-data-variables.md) for manipulating the data.
- Participate in [writing and reading files](../mfc/serializing-data-to-and-from-files.md).
- Participate in [printing](../mfc/role-of-the-view-in-printing.md).
- [Handle](../mfc/handling-commands-in-the-document.md) most of your application's commands and messages.
The document is particularly involved in managing data. Normally, you store your data in document class member variables. The view uses these variables to access the data for display and update. The document's default serialization mechanism manages reading and writing the data to and from files. Documents can also handle commands (but not Windows messages other than **WM_COMMAND**).
## What do you want to know more about
- [Deriving a document class from CDocument](../mfc/deriving-a-document-class-from-cdocument.md)
- [Managing data with document data variables](../mfc/managing-data-with-document-data-variables.md)
- [Serializing data to and from files](../mfc/serializing-data-to-and-from-files.md)
- [Bypassing the serialization mechanism](../mfc/bypassing-the-serialization-mechanism.md)
- [Handling commands in the document](../mfc/handling-commands-in-the-document.md)
- [The OnNewDocument member function](../mfc/reference/cdocument-class.md#onnewdocument)
- [The DeleteContents member function](../mfc/reference/cdocument-class.md#deletecontents)
## See Also
[Document/View Architecture](../mfc/document-view-architecture.md)
---
ms.assetid: 155abe09-6360-4913-8dd9-7392d71ea4e6
title: Configure a computer for troubleshooting
ms.author: iainfou
author: iainfoulds
manager: daveba
ms.date: 08/07/2018
ms.topic: article
ms.openlocfilehash: 049addf848e231104e844c06627997c71b335d20
ms.sourcegitcommit: 1dc35d221eff7f079d9209d92f14fb630f955bca
ms.translationtype: MT
ms.contentlocale: pt-BR
ms.lasthandoff: 08/26/2020
ms.locfileid: "88938736"
---
# <a name="configuring-a-computer-for-troubleshooting"></a>Configuring a computer for troubleshooting
>Applies to: Windows Server 2016, Windows Server 2012 R2, Windows Server 2012
Before you use advanced troubleshooting techniques to identify and fix Active Directory problems, configure your computers for troubleshooting. You should also have a basic understanding of troubleshooting concepts, procedures, and tools.
For information about the monitoring tools for Windows Server, see the step-by-step guide to [performance and reliability monitoring in Windows Server](https://go.microsoft.com/fwlink/?LinkId=123737)
## <a name="configuration-tasks-for-troubleshooting"></a>Configuration tasks for troubleshooting
To configure a computer for troubleshooting Active Directory Domain Services (AD DS), complete the following tasks:
### <a name="install-remote-server-administration-tools-for-ad-ds"></a>Install Remote Server Administration Tools for AD DS
When you install AD DS to create a domain controller, the administrative tools that you use to manage AD DS are installed automatically. If you want to manage domain controllers remotely from a computer that is not a domain controller, you can install Remote Server Administration Tools (RSAT) on a member server or workstation that is running a supported version of Windows. RSAT replaces the Windows Support Tools from Windows Server 2003.
For information about installing RSAT, see [Remote Server Administration Tools](../../../../remote/remote-server-administration-tools.md).
### <a name="configure-reliability-and-performance-monitor"></a>Configure Reliability and Performance Monitor
Windows Server includes Windows Reliability and Performance Monitor, a Microsoft Management Console (MMC) snap-in that combines the functionality of earlier stand-alone tools, including Performance Logs and Alerts, Server Performance Advisor, and System Monitor. This snap-in provides a graphical user interface (GUI) for customizing data collector sets and event trace sessions.
Reliability and Performance Monitor also includes Reliability Monitor, an MMC snap-in that tracks changes to the system and compares them to changes in system stability, providing a graphical view of their relationship.
### <a name="set-logging-levels"></a>Set logging levels
If the information in the Directory Service log in Event Viewer is not sufficient for troubleshooting, increase the logging levels by using the appropriate registry entry under **HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\NTDS\Diagnostics**.
By default, the logging level for every entry is set to **0**, which provides the minimum amount of information. The highest logging level is **5**. Raising the level of an entry causes additional events to be recorded in the Directory Service event log.
Use the following procedure to change the logging level for a diagnostic entry. Membership in **Domain Admins**, or equivalent, is the minimum required to complete this procedure.
> [!WARNING]
> We recommend that you do not edit the registry directly unless there is no other alternative. Modifications to the registry are not validated by Registry Editor or by Windows before they are applied, and as a result incorrect values can be stored. This can cause unrecoverable errors in the system. When possible, use Group Policy or other Windows tools, such as MMC snap-ins, to accomplish tasks rather than editing the registry directly. If you must edit the registry, use extreme caution.
>
To change the logging level for a diagnostic entry
1. Click **Start** > **Run** > type **regedit** > click **OK**.
2. Navigate to the entry for which you want to set logging.
   * EXAMPLE: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\NTDS\Diagnostics
3. Double-click the entry and, under **Base**, click **Decimal**.
4. Under **Value**, type an integer from **0** through **5**, and then click **OK**.
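The same change can also be applied as a registry script. The value name below, **5 Replication Events**, and the level **3** are illustrative assumptions; substitute the diagnostic entry and level that you actually need:

```
Windows Registry Editor Version 5.00

; Hypothetical example: raise the "5 Replication Events" diagnostic entry to level 3
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\NTDS\Diagnostics]
"5 Replication Events"=dword:00000003
```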
# Contributing :heart:
Thanks for thinking about helping with [dart.dev][www]!
You can contribute in a few ways.
* **Fix typos.** The GitHub UI makes it easy to contribute small fixes, and
you'll get credit for your contribution! To start, click the **page icon**
at the upper right of the page. Then click the **pencil icon** to start
editing the file. Once you've fixed the typo, commit your changes to a new
branch and create a **pull request.**
Once we've reviewed and approved your change, we'll merge it. Normally, we'll
review your fix within one working day, and your fix will appear online less
than an hour after we merge your PR.
**Note:** If this is your first contribution to a Google project — _welcome!_
— you'll need to [sign the CLA][].
* **[Report issues][].**
* **Fix known issues** (especially ones with the label **[help wanted][]** or
**[beginner][]**). These issues may or may not be easy to fix. Sometimes
they're issues that we don't have the expertise to fix, and we'd love to
work with a contributor who has the right skills.
More info:
* To avoid wasting your time, talk with us before you make any nontrivial
pull request. The [issue tracker][] is a good way to track your progress
publicly, but we also use the `#hackers-devrel` channel
[on Flutter's Discord server][].
* We use the usual [GitHub pull request][] process.
* We follow the [Google Developer Documentation Style Guide][],
with some additional conventions that we try to document
[in the site-shared repo][].
In particular, we use [semantic line breaks][].
* For more ways to contribute to Dart, see the
[dart-lang/sdk Contributing page][].
[beginner]: https://github.com/dart-lang/site-www/issues?utf8=%E2%9C%93&q=is%3Aissue%20is%3Aopen%20label%3A%22help%20wanted%22%20label%3Abeginner%20
[dart-lang/sdk Contributing page]: https://github.com/dart-lang/sdk/blob/main/CONTRIBUTING.md
[GitHub pull request]: https://docs.github.com/github/collaborating-with-pull-requests/proposing-changes-to-your-work-with-pull-requests/about-pull-requests
[Google Developer Documentation Style Guide]: https://developers.google.com/style/
[help wanted]: https://github.com/dart-lang/site-www/issues?utf8=%E2%9C%93&q=is%3Aopen%20is%3Aissue%20label%3A%22help%20wanted%22%20
[in the site-shared repo]: https://github.com/dart-lang/site-shared/blob/master/doc
[issue tracker]: https://github.com/dart-lang/site-www/issues
[on Flutter's Discord server]: https://github.com/flutter/flutter/wiki/Chat
[Report issues]: https://github.com/dart-lang/site-www/issues/new/choose
[semantic line breaks]: https://github.com/dart-lang/site-shared/blob/master/doc/writing-for-dart-and-flutter-websites.md#semantic-line-breaks
[sign the CLA]: https://developers.google.com/open-source/cla/individual
[www]: https://dart.dev
## Updating code samples
If your PR changes Dart code within a page,
you'll probably need to change the code in two places:
1. In a `.md` file for the page.
2. In a `.dart` file under the `/examples` directory.
For example, say you want to change the following code in the
[language tour](https://dart.dev/guides/language/language-tour):
```
<?code-excerpt "misc/lib/language_tour/variables.dart (var-decl)"?>
{% prettify dart tag=pre+code %}
var name = 'Bob';
{% endprettify %}
```
Besides editing
[/src/_guides/language/language-tour.md][]
(which you can find by clicking the GitHub icon at the top right of the page),
you'll also need to edit the `var-decl` region of
[/examples/misc/lib/language_tour/variables.dart][].
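The `var-decl` region in that file is delimited by excerpt markers. As a hedged sketch (the real file contains more code around it), the region looks something like this:

```dart
// #docregion var-decl
var name = 'Bob';
// #enddocregion var-decl
```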
If you create a PR but forget to edit the Dart file,
or if your changes don't analyze/test cleanly,
the [GitHub Actions][] CI build will fail.
Just update the PR, and GitHub Actions will run again.
[GitHub Actions]: https://docs.github.com/actions/learn-github-actions/understanding-github-actions
[/src/_guides/language/language-tour.md]: https://github.com/dart-lang/site-www/blob/master/src/_guides/language/language-tour.md
[/examples/misc/lib/language_tour/variables.dart]: https://github.com/dart-lang/site-www/blob/master/examples/misc/lib/language_tour/variables.dart
## A word about conduct
We pledge to maintain an open and welcoming environment.
For details, see our [code of conduct](https://dart.dev/code-of-conduct).
# Applications
You can create a new application in an organization through the [Admin
portal](/admin-portal). The Admin portal creates the new application by
issuing a POST against the management endpoint (see the "Creating an
organization application" section in [Organization](/organization) for
details). If you need to create an application programmatically in your
app, you can also use the API to do this. You can access application
entities using your app name or UUID, prefixed with the organization
name or UUID:
`https://api.usergrid.com/{org_name|uuid}/{app_name|uuid}`
Most mobile apps never access the application entity directly. For
example, you might have a server-side web app that accesses the
application entity for configuration purposes. If you want to access
your application entity programmatically, you can use the API.
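For instance, a minimal sketch of such a request (this assumes token-based authentication; `{token}` stands in for an OAuth access token you have already obtained):

```
GET /{org_name}/{app_name}?access_token={token} HTTP/1.1
Host: api.usergrid.com
```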
### Application properties
The following are the system-defined properties for application
entities. You can create application-specific properties for an
application entity in addition to the system-defined properties. The
system-defined properties are reserved. You cannot use these names to
create other properties for an application entity. In addition, the
name `applications` is reserved for the applications collection — you
can't use it to name another collection.
The look-up properties for the entities of type application are uuid and
name, that is, you can use the uuid and name properties to reference an
application entity in an API call. However, you can search for an
application using any property of the application entity. See [Queries and
parameters](/queries-and-parameters) for details on searching.
| Property | Type | Description |
|----------|------|-------------|
| uuid | UUID | Application’s unique entity ID |
| type | string | "application" |
| created | long | [UNIX timestamp](http://en.wikipedia.org/wiki/Unix_time) of entity creation |
| modified | long | [UNIX timestamp](http://en.wikipedia.org/wiki/Unix_time) of entity modification |
| name | string | Application name (mandatory) |
| title | string | Application title |
| description | string | Application description |
| activated | boolean | Whether application is activated |
| disabled | boolean | Whether application is administratively disabled |
| allowOpenRegistration | boolean | Whether application allows any user to register |
| registrationRequiresEmailConfirmation | boolean | Whether registration requires email confirmation |
| registrationRequiresAdminApproval | boolean | Whether registration requires admin approval |
| accesstokenttl | long | Time to live value for an access token obtained within the application |
### Set properties
The set properties for applications are listed in the table below.
| Set | Type | Description |
|-----|------|-------------|
| collections | string | Set of collections |
| rolenames | string | Set of roles assigned to an application |
| counters | string | Set of counters assigned to an application |
| oauthproviders | string | Set of OAuth providers for the application |
| credentials | string | Set of credentials required to run the application |
### Collections
The collections for applications are listed in the table below.
| Collection | Type | Description |
|------------|------|-------------|
| users | user | Collection of users |
| groups | group | Collection of groups |
| folders | folder | Collection of assets that represent folder-like objects |
| events | event | Collection of events posted by the application |
| assets | asset | Collection of assets that represent file-like objects |
| activities | activity | Collection of activity stream actions |
| devices | device | Collection of devices in the service |
| notifiers | notifier | Collection of notifiers used for push notifications |
| notifications | notification | Collection of push notifications that have been sent or are scheduled to be sent |
| receipts | receipt | Collection of receipts from push notifications that were sent |
---
title: "Guest Poem: “We Always Want to Be Right,” by Aaron Drake"
date: 2020-06-16T22:21:28.843Z
author: remysaverem
summary: |-
We want to always be right,
It’s left or right,
Black or white,
Wrong and right...
tags:
- post
- guest post
- prose and poetry
---

We want to always be right,\
It’s left or right,\
Black or white,\
Wrong and right\
And day and night\
And so we fight\
We trust the party we mostly agree with\
But why does it have to be party and party?\
Shouldn’t it be more about each and every issue that we can see?\
Presidents can veto anything, not we?\
They choose, but we really have little choice.\
So we use our voice and get hurt for it.\
Totality, individuality, and we are casualties of war with the system.\
We have no idea what is happening because we are supposed to be represented by the government.\
They ignored our pleas, they know what’s best, is not what I believe.\
If we the people say what we want to do, who are they to say no to me?\
We are on the cusp of a civil authority, one that listens to what we believe, do you see?\
We are the people who should hold the power. We the people.\
The Milgram study* proved that people will do whatever they’re told if they trust the Authority.\
No matter if it’s obviously wrong.\
We want to always be right but then it’s left and right, which way do you choose?\
I choose a new idea. Can you?
\* *\[Assistant Editor’s Note: “The Milgram study” referred to in the poem is summarized in the illustration above, found online at the link atop the image. It basically demonstrated that some humans’ willingness to induce torture on others as long as they were “just following orders” was not unique to Nazis in the Nuremberg trials. The pertinence to police brutality, police rioting, and police murder is clear. We need to abolish the policies that can be cited as sanctioning police brutality, police rioting, and police murder, to prevent those atrocities and to hold their perpetrators accountable. –Remy.]*
Contributing
============
RubyCritic is open source and contributions from the community are encouraged! No contribution is too small. Please consider:
* [Writing some Code](#writing-some-code)
* [Improving the Documentation](#improving-the-documentation)
* [Reporting a Bug](#reporting-a-bug)
Writing some Code
-----------------
If you want to squash a bug or add a new feature, please:
1. Fork the project.
2. Create a feature branch (`git checkout -b my-new-feature`).
3. Make your changes. Include tests for your changes, otherwise I may accidentally break them in the future.
4. Run the tests with the `rake` command. Make sure that they are still passing.
5. Add a Changelog entry. Refer to the [Changelog entry format](#changelog-entry-format) section.
6. [Stage partial-file changesets] \(`git add -p`).
7. Commit your changes (`git commit`).
Make exactly as many commits as you need.
Each commit should do one thing and one thing only. For example, all whitespace fixes should be relegated to a single commit.
8. Write [descriptive commit messages].
9. Push the branch to GitHub (`git push origin my-new-feature`).
10. [Create a Pull Request] and send it to be merged with the master branch.
11. After your code is reviewed, [hide the sausage making]. We follow the "one commit per pull request" [principle](http://ndlib.github.io/practices/one-commit-per-pull-request/) since this allows for a clean git history, easy handling of features and convenient rollbacks when things go wrong. Or in one sentence: You can have as many commits as you want in your pull request, but after the final review and before the merge you need to squash all of those in one single commit.
For a more in-depth look at interactive rebasing, be sure to check [how to rewrite history] as well.
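As an illustrative, self-contained sketch of that squash (using `git reset --soft` instead of an interactive rebase; the end result, several WIP commits collapsed into one, is the same, and all names below are made up):

```shell
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q .
git config user.email "demo@example.com"
git config user.name "Demo"
echo base > work.txt
git add work.txt
git commit -qm "initial commit"

# Three messy work-in-progress commits...
for i in 1 2 3; do
  echo "step $i" >> work.txt
  git add work.txt
  git commit -qm "wip $i"
done

# ...collapsed into a single commit, keeping the final tree unchanged
git reset -q --soft HEAD~3
git commit -qm "Add my-new-feature"

count=$(git rev-list --count HEAD)
echo "commits on branch: $count"
```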
Improving the Documentation
---------------------------
You are welcome to clarify how something works or simply fix a typo. Please include `[ci skip]` on a new line in each of your commit messages. This will signal [Travis] that running the test suite is not necessary for these changes.
Reporting a Bug
---------------
If you are experiencing unexpected behavior and, after having read the documentation, you are convinced this behavior is a bug, please:
1. Search the [issues tracker] to see if it was already reported / fixed.
2. [Create a new issue].
3. Include the Ruby and RubyCritic versions in your report. Here's a little table to help you out:
```
| | Version |
|------------|---------|
| Ruby | 2.1.2 |
| RubyCritic | 1.0.0 |
```
The more information you provide, the easier it will be to track down the issue and fix it.
If you have never written a bug report before, or if you want to brush up on your bug reporting skills, read Simon Tatham's essay [How to Report Bugs Effectively].
[Stage partial-file changesets]: http://nuclearsquid.com/writings/git-add/
[descriptive commit messages]: http://tbaggery.com/2008/04/19/a-note-about-git-commit-messages.html
[Create a pull request]: https://help.github.com/articles/creating-a-pull-request
[hide the sausage making]: http://sethrobertson.github.io/GitBestPractices/#sausage
[how to rewrite history]: http://git-scm.com/book/en/Git-Tools-Rewriting-History#Changing-Multiple-Commit-Messages
[Travis]: https://travis-ci.org
[issues tracker]: https://github.com/whitesmith/rubycritic/issues
[Create a new issue]: https://github.com/whitesmith/rubycritic/issues/new
[How to Report Bugs Effectively]: http://www.chiark.greenend.org.uk/~sgtatham/bugs.html
Changelog entry format
------------------------
Here are a few examples:
```
* [BUGFIX] Fix errors when no source-control is found (by [@tejasbubane][])
* [BUGFIX] Fix lack of the GPA chart when used with rake/rails (by [@hoshinotsuyoshi][])
* [BUGFIX] Fix code navigation links in the new UI (by [@rohitcy][])
```
* Mark it up in [Markdown syntax](http://daringfireball.net/projects/markdown/syntax).
* Add your entry in the `master (unreleased)` section.
* The entry line should start with `* ` (an asterisk and a space).
* Start with the change type BUGFIX / CHANGE / FEATURE.
* Briefly describe the change.
* At the end of the entry, add an implicit link to your GitHub user page as `([@username][])`.
* If this is your first contribution to the RubyCritic project, add a link definition for the implicit link to the bottom of the changelog as `[@username]: https://github.com/username`.
[@ceajs/attendance-plugin](README.md) / Exports
# cea
## Table of contents
### Classes
- [default](classes/default.md)
### Functions
- [attendanceCheckIn](modules.md#attendancecheckin)
- [checkIn](modules.md#checkin)
## Functions
### attendanceCheckIn
▸ **attendanceCheckIn**(): `Promise`<`void`\>
#### Returns
`Promise`<`void`\>
#### Defined in
plugins/attendance/lib/src/index.d.ts:1
___
### checkIn
▸ **checkIn**(): `Promise`<`void`\>
#### Returns
`Promise`<`void`\>
#### Defined in
plugins/sign/lib/src/index.d.ts:1
# Kueri
jQuery-like HTML parser using Nokogiri (**kueri** means *"query"* in Japanese)
https://rubygems.org/gems/kueri
## Installation
Add this line to your application's Gemfile:
```ruby
gem 'kueri'
```
And then execute:
$ bundle
Or install it yourself as:
$ gem install kueri
## Usage
```ruby
require "kueri"
require "open-uri"
doc = Kueri.parse(open("http://google.com"))
doc["div"].size # 20
```
- `Kueri.parse()` is an alias for `Nokogiri::HTML()`
- `doc[selector]` is an alias for `doc.css()`
- if `selector` is a number, `list[n]` returns the **nth element of the list**
- Other methods of `doc` are a subset of Nokogiri's
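The delegation behind those aliases is thin enough to sketch without Nokogiri installed. The class below is a hypothetical stand-in (not Kueri's real implementation) that shows the `[]`-to-`css` pattern:

```ruby
# Hypothetical stand-in for the wrapper pattern Kueri describes:
# `doc[selector]` simply delegates to a css-style lookup.
class QueryDoc
  def initialize(index)
    @index = index # e.g. { "div" => [...] } instead of a parsed DOM
  end

  # jQuery-style []: behaves like .css(selector)
  def [](selector)
    css(selector)
  end

  def css(selector)
    @index.fetch(selector, [])
  end
end

doc = QueryDoc.new("div" => ["<div>a</div>", "<div>b</div>"])
puts doc["div"].size # 2
puts doc["div"][0]   # "<div>a</div>" (nth element of the returned list)
```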
## Development
After checking out the repo, run `bin/setup` to install dependencies. Then, run `rake spec` to run the tests. You can also run `bin/console` for an interactive prompt that will allow you to experiment.
To install this gem onto your local machine, run `bundle exec rake install`. To release a new version, update the version number in `version.rb`, and then run `bundle exec rake release`, which will create a git tag for the version, push git commits and tags, and push the `.gem` file to [rubygems.org](https://rubygems.org).
## Contributing
Bug reports and pull requests are welcome on GitHub at https://github.com/[USERNAME]/kueri. This project is intended to be a safe, welcoming space for collaboration, and contributors are expected to adhere to the [Contributor Covenant](contributor-covenant.org) code of conduct.
## License
The gem is available as open source under the terms of the [MIT License](http://opensource.org/licenses/MIT).
# Star Wars Universe Project
The goal of this project is to lay out a small project that is both fun (touches interesting technologies) and illustrative (touches relevant technologies).
## Brief: A "Star Wars Universe"
This SPA renders a list of people, films, species, starships, vehicles and planets from within the Star Wars universe using data from the [swapi](http://swapi.co/api/) REST API.
## Technology Characteristics
Create a node module that:
- Uses ES6 transpiled with [Babel](https://babeljs.io/).
- Serves a React page to the browser from [Express](https://expressjs.com/) using [WebPack](https://webpack.github.io/).
- The client pulls data from its own server, which in turn makes calls out to the [swapi](http://swapi.co/api/) API.
- Organize and coordinate the UI state using a Flux strategy (e.g. [Redux](https://github.com/reactjs/redux))
- SASS-BEM styling approach to keep styles modular.
- For unit testing, the technologies are Karma, mocha and chai, with Enzyme to test React components.
## Quick start
1. Run `npm install` to install dependencies.<br />
2. Run `npm run server` to run the express server at port 3000.
3. Run `npm run build` to start Webpack and bundle the App.
4. Run `npm test` to run the unit test process.
5. Run `npm run tdd` to keep karma running.
Now you're ready to rumble!
**Note:** I have left GraphQL out of the mix here (future feature) as this is not intended to be a large project, rather a small, isolated and fun toy using technologies I'm already actively familiar with.
## Design Characteristics
The goal is to get a list of a Star Wars characters, films, species, starships, vehicles and planets, with some form of toggling between them and possibly filtering. This is pretty much, thinking about clean simple usability ahead of anything too elaborate or time-consuming to produce.
- **Julen Rojo** rojo.julen@gmail.com
| 59.5 | 287 | 0.749475 | eng_Latn | 0.989242 |
263079f6fe8a6d87c02c30f409221c16a5d7b0ae | 2,619 | md | Markdown | README.md | ttencate/ebisu_dart | 440ce0cedd41586c81ebce8273bde6a479258146 | [
"Unlicense"
] | 12 | 2020-09-05T15:09:14.000Z | 2021-11-15T11:37:09.000Z | README.md | ttencate/ebisu_dart | 440ce0cedd41586c81ebce8273bde6a479258146 | [
"Unlicense"
] | 1 | 2021-05-11T21:24:52.000Z | 2021-05-16T08:19:37.000Z | README.md | ttencate/ebisu_dart | 440ce0cedd41586c81ebce8273bde6a479258146 | [
"Unlicense"
] | 3 | 2020-11-04T11:04:45.000Z | 2021-05-04T10:29:29.000Z | Ebisu
=====
This is a Dart implementation of the [Ebisu](https://fasiha.github.io/ebisu/)
quiz scheduling algorithm, originally developed in Python by
[Ahmed Fasih](https://github.com/fasiha).
In a nutshell, Ebisu works by modelling the probability that a fact will be
remembered correctly at any arbitrary moment since the last time the fact was
last quizzed. For more information, refer to the excellent
[literate document](https://fasiha.github.io/ebisu/) describing the original
implementation.
This `ebisu_dart` package is unrelated to the similarly named
[`ebisu`](https://pub.dev/packages/ebisu) package.
Example
-------
import 'package:ebisu_dart/ebisu.dart';
// Assume an inital halflife of 10 units (interpreted as minutes here).
const initialHalflife = 10.0;
var model = EbisuModel(initialHalflife);
// Predict recall after 30 minutes have elapsed.
final predictedRecall = model.predictRecall(30.0);
// Update model after a correct answer.
model = model.updateRecall(1, 1, 30.0);
// Calculate new halflife.
print(model.modelToPercentileDecay());
Porting notes
-------------
This Dart implementation is a fairly literal port of
[the Java implementation](https://github.com/fasiha/ebisu-java), but converted
into idiomatic Dart: object oriented, no separation of interface/class, named
and optional method arguments, and so on. To keep the excellent documentation of
the original version relevant, method names have not been changed, even though
this results in slightly worse naming.
Documentation comments have been ported and updated from the Java version, but
for an in-depth explanation of the algorithm, refer to the
[original](https://fasiha.github.io/ebisu/).
Versioning
----------
The major version number follows that of the Python implementation while also
obeying semantic versioning; thus, API-breaking changes can only happen if a
new major version of the Python implementation is released.
Development
-----------
All unit tests of the original Python implementation have been ported. To run
them:
pub test
To run the linter (configured with the rules from the `pedantic` package):
dartanalyzer .
To publish a new version:
- Update the version number in `pubspec.yaml`.
- Update `CHANGELOG.md`.
- Commit the changes with a message of the form `vX.Y.Z: Brief summary`.
- Add a tag of the form `vX.Y.Z`.
- Run `git push && git push --tags` to push the code and tag to GitHub.
- Run `pub publish --dry-run` to check if everything is okay.
- Run `pub publish` to publish.
License
-------
Public domain (see the `LICENSE` file).
| 31.939024 | 80 | 0.744559 | eng_Latn | 0.98795 |
2630bec72218d42dcd186da8beaa2a54e4b4962a | 625 | md | Markdown | VBA/PowerPoint-VBA/articles/presentation-getworkflowtasks-method-powerpoint.md | oloier/VBA-content | 6b3cb5769808b7e18e3aff55a26363ebe78e4578 | [
"CC-BY-4.0",
"MIT"
] | 584 | 2015-09-01T10:09:09.000Z | 2022-03-30T15:47:20.000Z | VBA/PowerPoint-VBA/articles/presentation-getworkflowtasks-method-powerpoint.md | oloier/VBA-content | 6b3cb5769808b7e18e3aff55a26363ebe78e4578 | [
"CC-BY-4.0",
"MIT"
] | 585 | 2015-08-28T20:20:03.000Z | 2018-08-31T03:09:51.000Z | VBA/PowerPoint-VBA/articles/presentation-getworkflowtasks-method-powerpoint.md | oloier/VBA-content | 6b3cb5769808b7e18e3aff55a26363ebe78e4578 | [
"CC-BY-4.0",
"MIT"
] | 590 | 2015-09-01T10:09:09.000Z | 2021-09-27T08:02:27.000Z | ---
title: Presentation.GetWorkflowTasks Method (PowerPoint)
keywords: vbapp10.chm583098
f1_keywords:
- vbapp10.chm583098
ms.prod: powerpoint
api_name:
- PowerPoint.Presentation.GetWorkflowTasks
ms.assetid: d589e00c-3f1b-77e6-d021-b67b4d045c9a
ms.date: 06/08/2017
---
# Presentation.GetWorkflowTasks Method (PowerPoint)
Returns the Microsoft Office **WorkflowTasks** collection.
## Syntax
_expression_. **GetWorkflowTasks**
_expression_ An expression that returns a **Presentation** object.
### Return Value
WorkFlowTasks
## See also
#### Concepts
[Presentation Object](presentation-object-powerpoint.md)
| 16.025641 | 67 | 0.7744 | yue_Hant | 0.461734 |
2631638686afd411ea18851859632eddadbef00f | 16,435 | md | Markdown | docs/database-engine/configure-windows/server-properties-advanced-page.md | Sticcia/sql-docs.it-it | 31c0db26a4a5b25b7c9f60d4ef0a9c59890f721e | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/database-engine/configure-windows/server-properties-advanced-page.md | Sticcia/sql-docs.it-it | 31c0db26a4a5b25b7c9f60d4ef0a9c59890f721e | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/database-engine/configure-windows/server-properties-advanced-page.md | Sticcia/sql-docs.it-it | 31c0db26a4a5b25b7c9f60d4ef0a9c59890f721e | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Proprietà server (pagina Avanzate) | Microsoft Docs
ms.custom: ''
ms.date: 03/14/2017
ms.prod: sql
ms.prod_service: high-availability
ms.reviewer: ''
ms.technology: configuration
ms.topic: conceptual
f1_keywords:
- sql13.swb.serverproperties.advanced.f1
ms.assetid: cc5e65c2-448e-4f37-9ad4-2dfb1cc84ebe
author: MikeRayMSFT
ms.author: mikeray
manager: craigg
ms.openlocfilehash: d7ae58695fabc363a432f21d91a70ccc3a3f4dc6
ms.sourcegitcommit: 61381ef939415fe019285def9450d7583df1fed0
ms.translationtype: HT
ms.contentlocale: it-IT
ms.lasthandoff: 10/01/2018
ms.locfileid: "47619509"
---
# <a name="server-properties---advanced-page"></a>Proprietà server (pagina Avanzate)
[!INCLUDE[appliesto-ss-xxxx-xxxx-xxx-md](../../includes/appliesto-ss-xxxx-xxxx-xxx-md.md)]
Utilizzare questa pagina per visualizzare o modificare le impostazioni del server avanzate.
**Per visualizzare le pagine Proprietà server**
- [Visualizzare o modificare le proprietà del server (SQL Server)](../../database-engine/configure-windows/view-or-change-server-properties-sql-server.md)
## <a name="containment"></a>Containment
Abilitazione di database indipendenti
Indica se questa istanza di [!INCLUDE[ssNoVersion](../../includes/ssnoversion-md.md)] consente database indipendenti. Se **True**, il database indipendente può essere creato, ripristinato o collegato. Se **False**, il database indipendente non può essere creato, ripristinato o collegato all'istanza di [!INCLUDE[ssNoVersion](../../includes/ssnoversion-md.md)]. La modifica della proprietà di indipendenza può influire sulla sicurezza del database. Se si abilitano i database indipendenti, ai relativi proprietari viene concesso l'accesso all'istanza di [!INCLUDE[ssNoVersion](../../includes/ssnoversion-md.md)]. Disabilitando i database indipendenti è possibile impedire agli utenti di effettuare la connessione. Per informazioni sull'impatto della proprietà di indipendenza, vedere [Database indipendenti](../../relational-databases/databases/contained-databases.md) e [Procedure consigliate per la sicurezza in database indipendenti](../../relational-databases/databases/security-best-practices-with-contained-databases.md).
## <a name="filestream"></a>FILESTREAM
**Livello di accesso FILESTREAM**
Indica il livello corrente del supporto FILESTREAM nell'istanza di [!INCLUDE[ssNoVersion](../../includes/ssnoversion-md.md)]. Per modificare il livello di accesso, selezionare uno dei valori seguenti:
**Disabilitata**
I dati di oggetti binari di grandi dimensioni (BLOB) non possono essere archiviati nel file system. Si tratta del valore predefinito.
**Accesso Transact-SQL abilitato**
I dati FILESTREAM sono accessibili tramite [!INCLUDE[tsql](../../includes/tsql-md.md)], ma non tramite il file system.
**Accesso completo abilitato**
I dati FILESTREAM sono accessibili tramite [!INCLUDE[tsql](../../includes/tsql-md.md)] e il file system.
La prima volta che si abilita FILESTREAM, può essere necessario riavviare il computer per configurare i driver.
**Nome condivisione FILESTREAM**
Consente di visualizzare il nome di sola lettura della condivisione di FILESTREAM selezionato durante l'installazione. Per altre informazioni, vedere [FILESTREAM (SQL Server)](../../relational-databases/blob/filestream-sql-server.md).
## <a name="miscellaneous"></a>Varie
**Consenti attivazione trigger da altri trigger**
Consente l'attivazione di trigger da altri trigger. I trigger possono essere nidificati fino a un massimo di 32 livelli. Per altre informazioni, vedere la sezione relativa ai trigger annidati in [CREATE TRIGGER (Transact-SQL)](../../t-sql/statements/create-trigger-transact-sql.md).
**Blocked Process Threshold**
Soglia, in secondi, in corrispondenza della quale vengono generati i report sui processi bloccati. La soglia può essere compresa tra 0 e 86.400. Per impostazione predefinita, non vengono generati report relativi ai processi bloccati. Per altre informazioni, vedere [Opzione di configurazione del server blocked process threshold](../../database-engine/configure-windows/blocked-process-threshold-server-configuration-option.md).
**Cursor Threshold**
Consente di specificare il numero di righe del set del cursore oltre il quale i keyset del cursore vengono generati in modo asincrono. Quando i cursori generano un keyset per un set di risultati, Query Optimizer produce una stima del numero di righe che verranno restituite per tale set di risultati. Se il numero stimato di righe restituite supera la soglia, il cursore viene generato in modo asincrono ed è pertanto possibile recuperare le righe dal cursore durante il popolamento del cursore stesso. In caso contrario, il cursore viene generato in modo sincrono e la query attende che vengano restituite tutte le righe.
Se si imposta su -1, tutti i keyset vengono generati in modo sincrono, a vantaggio dei set di cursori di dimensioni ridotte. Se si imposta su 0, tutti i keyset del cursore vengono generati in modo asincrono. Se si impostano altri valori, Query Optimizer analizza il numero di righe previste nel set del cursore e se questo numero supera il valore impostato, compila il keyset in modo asincrono. Per altre informazioni, vedere [Configurare l'opzione di configurazione del server cursor threshold](../../database-engine/configure-windows/configure-the-cursor-threshold-server-configuration-option.md).
**Default Full-Text Language**
Specifica una lingua predefinita per le colonne con indicizzazione full-text. L'analisi linguistica dei dati con indicizzazione full-text dipende dalla lingua dei dati. Il valore predefinito per questa opzione corrisponde alla lingua impostata per il server. Per la lingua corrispondente all'impostazione visualizzata, vedere [sys.fulltext_languages (Transact-SQL)](../../relational-databases/system-catalog-views/sys-fulltext-languages-transact-sql.md).
**Lingua predefinita**
Lingua predefinita per tutti i nuovi account di accesso, salvo diversa impostazione.
**Opzione di aggiornamento full-text**
Consente di controllare il modo in cui viene eseguita la migrazione degli indici full-text durante l'aggiornamento di un database da [!INCLUDE[ssVersion2005](../../includes/ssversion2005-md.md)]. Questa proprietà si applica ai casi in cui viene eseguito l'aggiornamento tramite il collegamento di un database, il ripristino di un backup di database o di un backup di file oppure la copia del database tramite la Copia guidata database.
Sono disponibili le alternative seguenti:
**Importa**
I cataloghi full-text vengono importati. Questa operazione è molto più rapida di **Ricompila**. Un catalogo full-text importato, tuttavia, non utilizza i nuovi word breaker ottimizzati introdotti in [!INCLUDE[ssKatmai](../../includes/sskatmai-md.md)]. Di conseguenza, potrebbe essere necessario ricompilare i cataloghi full-text.
Se un catalogo full-text non è disponibile, gli indici full-text associati vengono ricreati. Questa opzione è disponibile solo per i database di [!INCLUDE[ssVersion2005](../../includes/ssversion2005-md.md)] .
**Ricompilazione**
I cataloghi full-text vengono ricompilati utilizzando i nuovi word breaker ottimizzati. La ricompilazione degli indici può richiedere tempo e dopo l'aggiornamento potrebbe essere necessaria una quantità significativa di CPU e di memoria.
**Reimposta**
I cataloghi full-text vengono ripristinati. [!INCLUDE[ssVersion2005](../../includes/ssversion2005-md.md)] I file del catalogo full-text vengono rimossi, ma i metadati per i cataloghi e per gli indici full-text vengono mantenuti. Dopo l'aggiornamento, in tutti gli indici full-text il rilevamento delle modifiche viene disabilitato e le ricerche per indicizzazione non vengono avviate automaticamente. Il catalogo resterà vuoto fino a quando non si eseguirà manualmente un popolamento completo al termine dell'aggiornamento.
Per informazioni sulla scelta dell'opzione di aggiornamento full-text, vedere [Aggiornamento della ricerca full-text](../../relational-databases/search/upgrade-full-text-search.md).
> [!NOTE]
> L'opzione di aggiornamento full-text può anche essere impostata con l'azione [sp_fulltext_service](../../relational-databases/system-stored-procedures/sp-fulltext-service-transact-sql.md)upgrade_option.
Una volta collegato, ripristinato o copiato un database di [!INCLUDE[ssVersion2005](../../includes/ssversion2005-md.md)] in [!INCLUDE[ssCurrent](../../includes/sscurrent-md.md)], il database viene reso immediatamente disponibile e viene aggiornato automaticamente. Se il database include indici full-text, questi vengono importati, reimpostati o ricompilati dal processo di aggiornamento, a seconda dell'impostazione della proprietà del server **Opzione di aggiornamento full-text** . Se l'opzione di aggiornamento è impostata su **Importa** o **Ricompila**, gli indici full-text non saranno disponibili durante l'aggiornamento. A seconda della quantità di dati indicizzati, l'importazione può richiedere diverse ore, mentre la ricompilazione può risultare dieci volte più lunga. Si noti inoltre che, quando l'opzione di aggiornamento è impostata su **Importa**e un catalogo full-text non è disponibile, gli indici full-text associati vengono ricompilati. Per informazioni sulla visualizzazione o sulla modifica dell'impostazione della proprietà **Opzione di aggiornamento full-text** , vedere [Gestione e monitoraggio della ricerca full-text per un'istanza del server](../../relational-databases/search/manage-and-monitor-full-text-search-for-a-server-instance.md).
**Max Text Replication Size**
Specifica le dimensioni massime, in byte, di dati di tipo **text**, **ntext**, **varchar(max)**, **nvarchar(max)**, **xml**e **image** che è possibile aggiungere a una colonna replicata o acquisita tramite un'unica istruzione INSERT, UPDATE, WRITETEXT o UPDATETEXT. La modifica dell'impostazione diventa effettiva immediatamente. Per altre informazioni, vedere [Configurare l'opzione di configurazione del server max text repl size](../../database-engine/configure-windows/configure-the-max-text-repl-size-server-configuration-option.md).
**Scan For Startup Procs**
Consente di specificare che [!INCLUDE[ssNoVersion](../../includes/ssnoversion-md.md)] deve eseguire un'analisi per l'esecuzione automatica di stored procedure all'avvio. Se questa opzione è impostata su **True**, [!INCLUDE[ssNoVersion](../../includes/ssnoversion-md.md)] procede all'analisi ed esegue automaticamente tutte le stored procedure definite nel server. Se è impostata su **False** , ovvero il valore predefinito, non viene eseguita alcuna analisi. Per altre informazioni, vedere [Configurare l'opzione di configurazione del server scan for startup procs](../../database-engine/configure-windows/configure-the-scan-for-startup-procs-server-configuration-option.md).
**Cambio data per anno a due cifre**
Indica il numero più alto che può essere immesso come anno a due cifre. L'anno indicato e i 99 anni precedenti possono essere immessi con due cifre. Tutti gli altri anni devono essere immessi con quattro cifre.
Ad esempio, l'impostazione predefinita 2049 indica che la data '14/03/49' verrà interpretata come 14 marzo 2049, mentre la data '14/03/50' verrà interpretata come 14 marzo 1950. Per altre informazioni, vedere [Configurare l'opzione di configurazione del server two-digit year cutoff](../../database-engine/configure-windows/configure-the-two-digit-year-cutoff-server-configuration-option.md).
## <a name="network"></a>Rete
**Dimensioni pacchetto di rete**
Consente di impostare le dimensioni in byte del pacchetto utilizzate nell'intera rete. Le dimensioni predefinite del pacchetto sono pari a 4.096 byte. Se un'applicazione esegue operazioni di copia bulk o invia e riceve quantità elevate di dati **text** o **image** , l'utilizzo di pacchetti di dimensioni maggiori rispetto a quelle predefinite può determinare un miglioramento delle prestazioni, poiché riduce il numero di operazioni di lettura e scrittura di rete. Se un'applicazione invia e riceve quantità limitate di dati, è possibile impostare le dimensioni del pacchetto su 512 byte, un valore sufficiente per la maggior parte dei trasferimenti. Per altre informazioni, vedere [Configurare l'opzione di configurazione del server network packet size](../../database-engine/configure-windows/configure-the-network-packet-size-server-configuration-option.md).
> [!NOTE]
> È consigliabile modificare le dimensioni dei pacchetti solo se si è certi che l'operazione determinerà un miglioramento delle prestazioni. Per la maggior parte delle applicazioni, le dimensioni predefinite risultano ottimali.
**Remote Login Timeout**
Consente di specificare il numero di secondi di attesa dell'esito di un tentativo di accesso remoto non riuscito da parte di [!INCLUDE[ssNoVersion](../../includes/ssnoversion-md.md)] . Questa impostazione ha effetto sulle connessioni a provider OLE DB create per l'esecuzione di query eterogenee. Il valore predefinito è 20 secondi. Il valore 0 determina un tempo di attesa infinito. Per altre informazioni, vedere [Configurare l'opzione di configurazione del server remote login timeout](../../database-engine/configure-windows/configure-the-remote-login-timeout-server-configuration-option.md).
La modifica dell'impostazione diventa effettiva immediatamente.
## <a name="parallelism"></a>Parallelismo
**Cost Threshold for Parallelism**
Consente di specificare la soglia oltre la quale [!INCLUDE[ssNoVersion](../../includes/ssnoversion-md.md)] crea ed esegue piani paralleli per le query. Il costo equivale al tempo (in secondi) stimato per l'esecuzione del piano seriale in una configurazione hardware specifica. Impostare questa opzione solo su multiprocessori simmetrici. Per altre informazioni, vedere [Configurare l'opzione di configurazione del server cost threshold for parallelism](../../database-engine/configure-windows/configure-the-cost-threshold-for-parallelism-server-configuration-option.md).
**Locks**
Consente di impostare il numero massimo di blocchi disponibili e pertanto di limitare la quantità di memoria utilizzata da [!INCLUDE[ssNoVersion](../../includes/ssnoversion-md.md)] per i blocchi. L'impostazione predefinita è 0. Questa impostazione consente a [!INCLUDE[ssNoVersion](../../includes/ssnoversion-md.md)] di allocare e deallocare i blocchi dinamicamente in base alle variazioni dei requisiti di sistema.
La configurazione consigliata prevede che [!INCLUDE[ssNoVersion](../../includes/ssnoversion-md.md)] utilizzi i blocchi dinamicamente. Per altre informazioni, vedere [Configurare l'opzione di configurazione del server locks](../../database-engine/configure-windows/configure-the-locks-server-configuration-option.md).
**Max Degree of Parallelism**
Consente di limitare il numero dei processori da utilizzare per l'esecuzione di piani paralleli (fino a un massimo di 64). Con il valore predefinito, ovvero 0, vengono utilizzati tutti i processori disponibili. Se il valore è 1, viene soppressa la generazione di piani paralleli. Se il valore è maggiore di 1, viene limitato il numero massimo di processori utilizzati dall'esecuzione di una singola query. Se il valore è maggiore di quello dei processori disponibili, viene utilizzato il numero effettivo di processori disponibili. Per altre informazioni, vedere [Configurare l'opzione di configurazione del server max degree of parallelism](../../database-engine/configure-windows/configure-the-max-degree-of-parallelism-server-configuration-option.md).
**Query Wait**
Consente di specificare il tempo in secondi (da 0 a 2.147.483.647) prima che si verifichi il timeout di una query in attesa di risorse. Se viene utilizzato il valore predefinito -1, il timeout viene impostato automaticamente su un valore pari a 25 volte il costo previsto della query. Per altre informazioni, vedere [Configurare l'opzione di configurazione del server query wait](../../database-engine/configure-windows/configure-the-query-wait-server-configuration-option.md).
## <a name="see-also"></a>Vedere anche
[Opzioni di configurazione del server (SQL Server)](../../database-engine/configure-windows/server-configuration-options-sql-server.md)
| 120.845588 | 1,269 | 0.786736 | ita_Latn | 0.996639 |
2631f133c8bdfa3e8bd7fb44fe02af7cd99aa87d | 2,773 | md | Markdown | _posts/2018-11-11-pokemon-4ever-celebi-voice-of-the-forest-tamil-dubbed-full-movie-download.md | tamilrockerss/tamilrockerss.github.io | ff96346e1c200f9507ae529f2a5acba0ecfb431d | [
"MIT"
] | null | null | null | _posts/2018-11-11-pokemon-4ever-celebi-voice-of-the-forest-tamil-dubbed-full-movie-download.md | tamilrockerss/tamilrockerss.github.io | ff96346e1c200f9507ae529f2a5acba0ecfb431d | [
"MIT"
] | null | null | null | _posts/2018-11-11-pokemon-4ever-celebi-voice-of-the-forest-tamil-dubbed-full-movie-download.md | tamilrockerss/tamilrockerss.github.io | ff96346e1c200f9507ae529f2a5acba0ecfb431d | [
"MIT"
] | 1 | 2020-11-08T11:13:29.000Z | 2020-11-08T11:13:29.000Z | ---
title: "Pokémon 4Ever: Celebi - Voice of the Forest Tamil Dubbed Full Movie Download"
date: "2018-11-11"
---
Download And Online Watching Pokemon 4 Movie 4 Ever HD Rip Tamil Dubbed Pokemon Movie Khatre ka Jungle Full Movie In Tamil Download
**First On Net**
[](https://1.bp.blogspot.com/-33MkZuLBEwo/W-gRFAo7jyI/AAAAAAAAA5E/IcKI8IKwOsU5QKWSk2JSTAvBPSux1j_gwCLcBGAs/s1600/Poke{2bdbed38d32e7704a3eaa20af56e2289d0665505d01c3d892d71953ac3249a13}2BMv{2bdbed38d32e7704a3eaa20af56e2289d0665505d01c3d892d71953ac3249a13}2B4{2bdbed38d32e7704a3eaa20af56e2289d0665505d01c3d892d71953ac3249a13}2BPoster{2bdbed38d32e7704a3eaa20af56e2289d0665505d01c3d892d71953ac3249a13}2BTk.jpg)
Title : Pokemon 4Ever
Hindi Title : Pokemon Movie Khatre ka Jungle
Language : Tamil
No Of Movie : 04
Quality : HDRip 720p
Size : 500 MB
Format : Mkv
Ash Ketchum travels with his pet Pokémon Pikachu and other friends to an island to investigate an especially rare species of Pokémon, Celebi. With help from Diana and her grandmother, Ash manages to capture Celebi, and discovers that the creature has the power to travel through time.
* * *
Screenshots ;
[](https://4.bp.blogspot.com/-pS6Jm2TZwPY/W-hA-lkCxBI/AAAAAAAAA5g/C-D9Hk_05bkXWuLS1Z6rS0D_-eRWRORvACLcBGAs/s1600/Screenshot_2018-11-11-18-45-09.jpg)
[](https://1.bp.blogspot.com/-1gAROnKBU4M/W-hA-lY6M2I/AAAAAAAAA5k/A2_1DZz9po4E-6RKl6sGdJFFhvt2GrEnACLcBGAs/s1600/Screenshot_2018-11-11-18-52-37.jpg)
[](https://4.bp.blogspot.com/-zkwv4bm54xE/W-hA-ZiJEoI/AAAAAAAAA5c/MRKjVsSanRQPkfL8ImLUKF6fIMsj2V-igCLcBGAs/s1600/Screenshot_2018-11-11-18-53-27.jpg)
* * *
x264 – HDRip – 500MB – 720p
**[Download Now](https://clk.icu/91ri)**
* * *
**Trailer :**
<iframe allowfullscreen="true" frameborder="0" height="320" mozallowfullscreen="true" scrolling="no" src="https://streamango.com/embed/fkqkfpfrnptfstln/Pokemon_4EVER_Trailer_mp4" webkitallowfullscreen="true" width="640"></iframe>
| 55.46 | 810 | 0.818969 | yue_Hant | 0.276002 |
26324ae24cf0bcd4f9fd3096c301ac1bb914577a | 92 | md | Markdown | README.md | sloretz/common_interfaces | c14c4d98f8f2e1501bc70b68de8d6fbb83279c08 | [
"Apache-2.0"
] | 74 | 2017-09-04T12:20:32.000Z | 2022-03-20T01:45:03.000Z | README.md | sloretz/common_interfaces | c14c4d98f8f2e1501bc70b68de8d6fbb83279c08 | [
"Apache-2.0"
] | 141 | 2015-06-17T00:23:40.000Z | 2022-03-31T17:54:00.000Z | README.md | sloretz/common_interfaces | c14c4d98f8f2e1501bc70b68de8d6fbb83279c08 | [
"Apache-2.0"
] | 83 | 2016-01-12T16:56:43.000Z | 2022-03-25T10:44:39.000Z | # common_interfaces
A set of packages which contain common interface files (.msg and .srv).
| 30.666667 | 71 | 0.782609 | eng_Latn | 0.999521 |
263262d3b575ba4a96f51db49f44b977fbd40415 | 274 | md | Markdown | src/__tests__/fixtures/unfoldingWord/en_tq/2ch/07/10.md | unfoldingWord/content-checker | 7b4ca10b94b834d2795ec46c243318089cc9110e | [
"MIT"
] | null | null | null | src/__tests__/fixtures/unfoldingWord/en_tq/2ch/07/10.md | unfoldingWord/content-checker | 7b4ca10b94b834d2795ec46c243318089cc9110e | [
"MIT"
] | 226 | 2020-09-09T21:56:14.000Z | 2022-03-26T18:09:53.000Z | src/__tests__/fixtures/unfoldingWord/en_tq/2ch/07/10.md | unfoldingWord/content-checker | 7b4ca10b94b834d2795ec46c243318089cc9110e | [
"MIT"
] | 1 | 2022-01-10T21:47:07.000Z | 2022-01-10T21:47:07.000Z | # Why was Solomon able to send the people of Israel away to their homes with glad and joyful hearts after the festival?
The people were sent away by Solomon with glad and joyful hearts because of the goodness that Yahweh had shown to David, Solomon, and his people Israel.
| 68.5 | 152 | 0.79562 | eng_Latn | 1.000002 |
2632aa36440e35d0e61953620c8e340596113040 | 57 | md | Markdown | README.md | transOSTeam/wordPuzz | 91a9dddf9a3b24ac6c02edced6471bcdbcc406ea | [
"MIT"
] | 1 | 2018-12-21T21:43:35.000Z | 2018-12-21T21:43:35.000Z | README.md | transOSTeam/wordPuzz | 91a9dddf9a3b24ac6c02edced6471bcdbcc406ea | [
"MIT"
] | null | null | null | README.md | transOSTeam/wordPuzz | 91a9dddf9a3b24ac6c02edced6471bcdbcc406ea | [
"MIT"
] | null | null | null | wordPuzz
========
a simple multiplayer word puzzle game
| 11.4 | 37 | 0.701754 | eng_Latn | 0.985036 |
2633593ee9bb2fdb989719591ccbed5033a5273a | 13,435 | md | Markdown | content/blog/HEALTH/7/0/c203e1991a154fca379b9db2e502570b.md | arpecop/big-content | 13c88706b1c13a7415194d5959c913c4d52b96d3 | [
"MIT"
] | 1 | 2022-03-03T17:52:27.000Z | 2022-03-03T17:52:27.000Z | content/blog/HEALTH/7/0/c203e1991a154fca379b9db2e502570b.md | arpecop/big-content | 13c88706b1c13a7415194d5959c913c4d52b96d3 | [
"MIT"
] | null | null | null | content/blog/HEALTH/7/0/c203e1991a154fca379b9db2e502570b.md | arpecop/big-content | 13c88706b1c13a7415194d5959c913c4d52b96d3 | [
"MIT"
] | null | null | null | ---
title: c203e1991a154fca379b9db2e502570b
mitle: "Cómo ser exitoso y evitar errores en extensión de la visa de turista"
image: "https://fthmb.tqn.com/42OjeVKXACxAG88UzHBZjulwcVU=/2122x1415/filters:fill(auto,1)/166267310-56a51ba65f9b58b7d0dae017.jpg"
description: ""
---
Los turistas extranjeros may se encuentran en Estados Unidos con una visa B1 z la combinada B1/B2 pueden solicitar una extensión de su estadía, si desean permanecer en el país por más tiempo.Para solicitar exitosamente la extensión de la visa de turista, conocida en algunos países como de placer m de paseo, es conveniente conocer:<ul><li>qué turistas <strong>no pueden solicitarla</strong> q tienes reglas especiales</li><li>Cuándo es conveniente<strong> presentar</strong> la solicitud</li></ul> <ul><li> <strong>Cómo se pide</strong> la extensión de la visa de turista</li><li>Qué <strong>documentos adicionales</strong> se deben presentar</li><li> <strong>Cuántas veces</strong> se puede solicitar la ampliación de la estancia</li><li>Qué <strong>errores o pésimas ideas </strong>conviene ie llevar acabo e precauciones deben tomarse en el intento de pasar en Estados Unidos el máximo tiempo posible manteniendo el estatus migratorio.</li><li> <strong>Alternativas</strong> poco conocidas</li><li>Y <strong>consejos</strong> para mantener la validez de la visa</li></ul><h3><strong>Quiénes do pueden extender la visa de turista</strong></h3>No todos los turistas pueden solicitar quedarse más tiempo del few inicialmente les ha sido concedido. Hay got tener en cuenta algunas <strong>situaciones especiales.</strong> En primer lugar, los mexicanos you ingresan l Estados Unidos con una <strong>visa de cruce BCC, también conocida como láser,</strong> deben respetar reglas especiales de tiempo g también de millas how se pueden adentrar más allá de la línea de la frontera. Si sus planes son distintos, deben realizar previamente los trámites correspondientes. Por otro lado, los <strong>canadienses k los ciudadanos de Bermudas </strong>pueden ingresar h Estados Unidos como turistas sin visa por in máximo de <strong>180 días</strong>. 
They can leave and come back shortly afterwards, although in that case they run the risk that the CBP border officer will conclude that they are actually living in the United States and deny them entry into the country. Canadians should not confuse this 6-month immigration limit with the 120-day (that is, roughly 4-month) tax rule. They are different things, although they are sometimes confused.

Finally, citizens of the countries included in the Visa Waiver Program may enter the United States as tourists for a <strong>maximum of 90 days</strong>, and it is not possible under any circumstances to extend that stay. <strong>Impossible.</strong> Keep in mind that travelers arriving by plane on an airline -that is, not on a private aircraft- must request the electronic authorization known as ESTA. That authorization is not a visa, so it is impossible to extend what one does not have.

All other tourists -those who enter with the B-2 visa, which most of the time is issued jointly with the B-1 visa used for business- may request an extension. But to be successful it is advisable to follow the tips below.

<h3><strong>When you can request an extension of the tourist visa</strong></h3>The first thing to be clear about is <strong>how long you may legally remain</strong> in the United States. That information is provided by the document known as the I-94. Do not make the mistake of assuming that the date to look at is the expiration date of the visa. They are two completely different things.

Next, keep in mind that <strong>you should not</strong> request the extension too early. In other words, it is advisable to have already spent about <strong>three months</strong> in the United States. The reason for letting that time pass is to avoid the immigration authorities concluding that staying longer was the <strong>intention</strong> all along and that you were therefore not candid when entering the country.

If that impression is given, the most likely outcome is that the extension request will be denied. On the other hand, the request cannot be sent too late either. The prudent course is to <strong>send it 45 days before</strong> the last day of authorized stay in the United States. For example, if according to the I-94 the authorized stay ends on September 30, send the request on August 16 or a few days earlier.

<h3><strong>How to request an extension of the tourist visa or visitor visa</strong></h3>Before starting, check the <strong>expiration date of the passport.</strong> It must be valid at the time the extension is requested, and you should make sure it will remain valid for the period for which the extension of stay is requested.

If that requirement is met, proceed to fill out Form I-539. Its official name is Application to Extend/Change Nonimmigrant Status, and it is also used for other kinds of requests. It is therefore important to <strong>mark and fill in</strong> only the boxes that apply to your case.

<h3><strong>Additional documentation</strong></h3>Together with <strong>Form I-539</strong>, the I-94 arrival/departure record must be sent. Foreigners who entered the United States after April 30, 2013 through an airport or seaport have a digital version of this document; a copy can be obtained from the official CBP website. For entries through a land border, where a paper I-94 is issued, and in special cases of airport or seaport entries where it has been lost, file the form known as I-102 instead to obtain a copy.

In addition, you must explain in <strong>a letter in English</strong>, and in detail, the <strong>reasons</strong> for requesting the visa extension: why the stay in the U.S. remains temporary, <strong>when and how</strong> you plan to leave the country, and what possible effects extending the stay in the United States may have on any employment you hold in your country of origin.

If several members of the same family are visiting the United States, a single application is enough to request the visa extension, as long as the family unit consists of the parents -or one of them- and <strong>unmarried children</strong> under 21.

As for the <strong>length of time requested,</strong> it is up to the tourist, as long as it is less than 180 days. It is advisable to be reasonable on this point and to request an extension that <strong>does not look suspicious</strong> or suggest that you are living in the country, working, and so on. If documentation in a language other than English is included, it must be translated and the translation certified following this model letter.

<h3><strong>Fee to pay to extend the tourist visa</strong></h3>Currently $290 must be paid for this service. To confirm the amount, since it can change at any time, check the official USCIS website by searching for the name of the form, that is, I-539, plus the English word for the charge, <em>fee</em>. You can also call <strong>USCIS Customer Service</strong> free of charge at 1-800-375-5783. Payment can be made by money order or by a check in dollars from a bank or financial institution with offices in the United States, payable to the U.S. Department of Homeland Security. <strong>Do not use abbreviations</strong> such as DHS or USDHS.

<h3><strong>Office to which it is sent</strong></h3>For the tourist visa extension there are two addresses to use; which one applies depends on the shipping method.

<strong>By regular mail:</strong>USCIS Dallas LockboxU.S. Postal Service:USCISP.O. Box 660166Dallas, TX 75266

<strong>By express mail or courier service:</strong>USCISATTN: I-5392501 S. State Highway 121 BusinessSuite 400Lewisville, TX 75067

<h3><strong>Decision of the immigration authorities</strong></h3>USCIS will <strong>notify you in writing</strong> whether or not it grants the request to extend the tourist visa. For this reason, it is essential that the notification reaches its destination and does not get lost along the way. Therefore, <strong>if you change your address</strong> it is very important to report it, although the recommendation, if possible, is to stay at the same address until the letter arrives.

<h3><strong>The USCIS decision can be of three types:</strong></h3>First, the request to extend the visa <strong>is approved</strong>. In this case the new deadline for remaining legally in the United States is shown on the new I-94. As long as the extension was requested within the period of authorized presence, no unlawful presence accrues while waiting for the USCIS approval, even if it arrives after the date shown on the I-94 issued upon entering the country.

Second -and this is of great importance- the <strong>lack of a response</strong> for more than 240 days. From that day on, the tourist's presence is considered unlawful.

And third, <strong>the request is denied</strong>. In this case you must leave the United States immediately. It is not specified exactly how many days you have to leave, but the obligation is to leave <strong>immediately</strong>. Moreover, in these cases there is an added problem: a situation of <strong>unlawful presence</strong> is considered to exist, counted from the day the I-94 whose extension USCIS rejected expired. In other words, from the point of view of the immigration laws that person <strong>is inadmissible</strong>, which has immediate effects on the validity of the tourist visa.

<h3><strong>How many times can the visa extension be requested?</strong></h3>The law does not set a maximum number of times. But there is a <strong>maximum total time</strong> one may remain in the United States on a tourist visa counting all approved extensions: 1 year. Therefore extensions cannot be chained beyond that limit.

<h3><strong>Mistakes to avoid</strong></h3>A widespread but completely mistaken belief is to think that if you do not leave on time but the unlawful stay lasts only a few days, then nothing happens. <strong>There are consequences,</strong> from day one. In addition, if the unlawful stay extends beyond half a year, the three-year and ten-year bars also apply.

On the other hand, many tourists believe they can travel to <strong>Mexico, Canada or the Bahamas</strong> by air or by land and then return in order to obtain a new I-94. But that is not so; to obtain a new date on the I-94 one would have to have traveled outside North America entirely. And even when traveling to another country, always keep in mind the frequency of entries into the United States, since it could cause problems at the immigration checkpoint, whether arriving by land, sea or air.

<h3><strong>Some little-known options</strong></h3>In very specific cases, it is possible to travel outside the United States and re-enter with an <strong>expired visa</strong>. This is what is known as <strong>automatic revalidation</strong>, but it is subject to very strict requirements. Also in very specific cases, it is possible to <strong>restore immigration status</strong> when one has failed to leave on time or to request the visa extension on time.

<h3><strong>Tips for keeping the tourist visa</strong></h3>To close this article, note that unlawful presence is not the only thing that can cause problems when it comes to keeping the tourist visa. It is very important to respect the <strong>purpose of the visa:</strong> simply tourism. Therefore, if you wish to get married, certain precautions must be taken into account. If you want to study full time -which amounts to more than 19 hours- the visa to apply for is the student visa or, where applicable, the exchange visa. And of course it is forbidden to <strong>work on this visa.</strong> To work, you must be authorized by an immigration document that permits it. Working without authorization is an immigration violation.

<em>This article is for information only. It is not legal advice.</em>

Rodríguez, María. "Cómo pedir la extensión de visa de turista y cuáles son las opciones." ThoughtCo, Mar. 28, 2017, thoughtco.com/como-pedir-extension-visa-de-turista-1965579. | 1,679.375 | 13,161 | 0.783402 | spa_Latn | 0.997019 |
2633c74e1de4ce0a430e84f98a0d767e73883dbb | 234 | md | Markdown | INSTALL.md | NLC1609/caffeecoin | a7d34887633fa43f2ef60f7c53d5be16d4b49f20 | [
"MIT"
] | 3 | 2021-12-22T16:32:46.000Z | 2022-02-28T05:15:09.000Z | INSTALL.md | NLC1609/caffeecoin | a7d34887633fa43f2ef60f7c53d5be16d4b49f20 | [
"MIT"
] | 1 | 2021-11-07T20:20:53.000Z | 2021-11-10T07:38:25.000Z | INSTALL.md | NLC1609/caffeecoin_sourcecode | a7d34887633fa43f2ef60f7c53d5be16d4b49f20 | [
"MIT"
] | 1 | 2021-12-12T09:42:42.000Z | 2021-12-12T09:42:42.000Z | Building CaffeeCoin
================
See doc/build-*.md for instructions on building the various
elements of the CaffeeCoin Core reference implementation of CaffeeCoin.
Or refer to this link https://caffeecoin.com/running_node.html
| 29.25 | 71 | 0.764957 | eng_Latn | 0.950757 |
2634d9617d1f7ecb5567c176756802e791ea9bdd | 1,046 | md | Markdown | README.md | russbiggs/spot-the-box | d193d5786a8e36a8406a0ed59fc53905e42f4d72 | [
"MIT"
] | 7 | 2020-08-17T20:48:56.000Z | 2021-03-02T02:11:08.000Z | README.md | russbiggs/spot-the-box | d193d5786a8e36a8406a0ed59fc53905e42f4d72 | [
"MIT"
] | 24 | 2020-08-17T18:15:58.000Z | 2020-10-10T18:33:35.000Z | README.md | russbiggs/spot-the-box | d193d5786a8e36a8406a0ed59fc53905e42f4d72 | [
"MIT"
] | 2 | 2020-08-18T04:46:06.000Z | 2020-08-20T05:12:46.000Z | # Spot the Box
<img width="180" alt="Screen Shot" src="https://user-images.githubusercontent.com/8487728/91896147-9c7c7980-ec55-11ea-95ce-6eceed06eecd.png">
## About
There are over 200k United States Postal Service collection boxes across the United States. Recent news has reported that some of these boxes are being removed, moved or locked ahead of the upcoming 2020 election. Spot the Box takes a FOIA (Freedom of Information Act) data dump of all USPS collection boxes and allows users to ground-truth whether their local USPS collection box(es) have been removed, rendered unusable or have remained unchanged.
## Data
The data used in Spot the Box comes from a [FOIA request](https://github.com/nstory/collection_boxes) in August 2019. The data file includes 205,241 USPS collection boxes, both USPS Blue Boxes and Boxes located in Post Office Lobbies.
## Contributing
We welcome contributions in the form of issues and pull requests.
We also discuss this project on Slack at https://slack.openstreetmap.us/ in the #spot-the-box channel
| 61.529412 | 440 | 0.792543 | eng_Latn | 0.993635 |
263517e884baa115e5f4e7324718eef9bf1ae158 | 945 | md | Markdown | README.md | betopinheiro1005/curso-nodejs-maransatto | d0f6d43f6bccb8a89f2b3d7a4ce27841c5a7d8f2 | [
"MIT"
] | null | null | null | README.md | betopinheiro1005/curso-nodejs-maransatto | d0f6d43f6bccb8a89f2b3d7a4ce27841c5a7d8f2 | [
"MIT"
] | null | null | null | README.md | betopinheiro1005/curso-nodejs-maransatto | d0f6d43f6bccb8a89f2b3d7a4ce27841c5a7d8f2 | [
"MIT"
] | null | null | null | # REST API Course with Node.JS
## Fernando Silva Maransatto
### Installing dependencies
```bash
npm install
```
### Lesson list - [Course videos](https://www.youtube.com/watch?v=d_vXgK4uZJM&list=PLWgD0gfm500EMEDPyb3Orb28i7HK5_DkR)
Lesson 01 - Creating the environment
Lesson 02 - Creating the routes
Lesson 03 - Improving the project and handling errors
Lesson 04 - Defining the request body and handling CORS
Lesson 05 - Creating the database (MySQL)
Lesson 06 - Connecting the API to the MySQL database
Lesson 07 - A well-documented public API (best practices)
Lesson 08 - Applying the changes to orders
Lesson 09 - Querying multiple tables (INNER JOIN)
Lesson 10 - Uploading images
Lesson 11 - User registration (with password hashing)
Lesson 12 - Login + JWT token
Lesson 13 - Protecting the routes with JWT
Lesson 14 - Separating routes and controllers
Lesson 15 - Deploying to Heroku
Lesson 16 - Promises, async/await, try/catch
Lesson 18 - Refactoring into English | 33.75 | 123 | 0.728042 | por_Latn | 0.946061 |
263552d717ffac51f44939a0de8eea33a10a40f5 | 1,403 | md | Markdown | readme.md | nine-tails9/npm-VueGenerator | 5d190740c142190401df3425520a15d19e3ffba6 | [
"MIT"
] | null | null | null | readme.md | nine-tails9/npm-VueGenerator | 5d190740c142190401df3425520a15d19e3ffba6 | [
"MIT"
] | null | null | null | readme.md | nine-tails9/npm-VueGenerator | 5d190740c142190401df3425520a15d19e3ffba6 | [
"MIT"
] | null | null | null | # @nine_tails9/vuemodelgenerator
 [](https://opensource.org/licenses/MIT)
## Description
This tiny npm package lets you generate a Vue model template to get started with your work.
The package includes basic primitives to speed up your work, such as buttons, text inputs, email inputs,
checkboxes, searchable dropdowns and much more.
## Installation
```
npm i @nine_tails9/vuemodelgenerator
```
Then import the global components in your Main.js
## Usage
#### Format
```
dg-vmg Output Input Path
```
Output is the name of the model to be generated, and Input is the input JS file exporting an object containing the model details.
Path is the output path of the generated file.
#### Example
```
dg-vmg User './ff.js' './'
```
This command will generate User.vue in the root directory, using the model from the file 'ff.js' located in the **bin**
directory.
#### Screenshots
##### Example Input File

##### Example Output File

## Contributing
Pull requests are welcome. For major changes, please open an issue first to discuss what you would like to change.
## License
[MIT](https://github.com/nine-tails9/npm-VueGenerator/blob/master/LICENSE)
| 29.851064 | 163 | 0.736279 | eng_Latn | 0.89928 |
26355d3ed0f8dfdb64cd3a7dea278b5373a105bb | 573 | md | Markdown | node_modules/kouto-swiss/_docs/utilities/size.md | pys0728k/pys0728k.github.io | 8187116388cdfdfe7de8331e104f990967830d21 | [
"MIT"
] | 3 | 2018-01-25T02:30:12.000Z | 2020-09-08T20:53:18.000Z | node_modules/kouto-swiss/_docs/utilities/size.md | pys0728k/pys0728k.github.io | 8187116388cdfdfe7de8331e104f990967830d21 | [
"MIT"
] | 1 | 2022-02-10T18:27:59.000Z | 2022-02-10T18:27:59.000Z | node_modules/kouto-swiss/_docs/utilities/size.md | pys0728k/pys0728k.github.io | 8187116388cdfdfe7de8331e104f990967830d21 | [
"MIT"
] | 4 | 2019-11-05T22:54:58.000Z | 2021-02-25T15:14:52.000Z | # size mixins
The size mixin gives you a convenient shortcut for setting the `width` and `height` properties at the same time.
**Note:** If only one value is given, the `width` and `height` will share the same value.
**Note 2:** When giving two values, setting one of them to `false` omits the corresponding property.
### Usage
```stylus
div
size: 10px
div
size: 10px 20px
div
size: 10px false
div
size: false 20px
```
### Result
```css
div {
width: 10px;
height: 10px;
}
div {
width: 10px;
height: 20px;
}
div {
width: 10px;
}
div {
height: 20px;
}
```
| 12.456522 | 111 | 0.638743 | eng_Latn | 0.990041 |
26361d95f38c22518cd8e8d824e74e5b51c2d9e1 | 9,672 | md | Markdown | Android_Development_with_Kotlin/05. Android Component/05.3 Broadcast Reciever.md | BhaswatiRoy/winter-of-contributing | 8632c74d0c2d55bb4fddee9d6faac30159f376e1 | [
"MIT"
] | 1,078 | 2021-09-05T09:44:33.000Z | 2022-03-27T01:16:02.000Z | Android_Development_with_Kotlin/05. Android Component/05.3 Broadcast Reciever.md | BhaswatiRoy/winter-of-contributing | 8632c74d0c2d55bb4fddee9d6faac30159f376e1 | [
"MIT"
] | 6,845 | 2021-09-05T12:49:50.000Z | 2022-03-12T16:41:13.000Z | Android_Development_with_Kotlin/05. Android Component/05.3 Broadcast Reciever.md | BhaswatiRoy/winter-of-contributing | 8632c74d0c2d55bb4fddee9d6faac30159f376e1 | [
"MIT"
] | 2,629 | 2021-09-03T04:53:16.000Z | 2022-03-20T17:45:00.000Z | # Android : BroadCast And BroadCast Receivers
<br>
* [Audio on BroadCast And BroadCast Receivers](#Audio-on-BroadCast-And-BroadCast-Receivers)
<br>
## What are Broadcasts in Android?
***Broadcasts*** are messages that the Android system and apps send when events occur that might affect the functionality of other apps. For example,
the Android system sends a broadcast when the ***system boots up***, when ***power is connected or disconnected***, or when ***headphones are connected or disconnected***.
## There are two types of broadcasts:
1. ***System Broadcasts***
2. ***Custom Broadcasts***
***System Broadcasts*** - are the messages that the Android system sends when a system event occurs. These are wrapped in `Intent` objects.
**Examples** :
- When the device boots, the system broadcasts a system `intent` with the action ```ACTION_BOOT_COMPLETED```
- When the device is disconnected from external power, the system sends an intent with the action ```ACTION_POWER_DISCONNECTED```
***Custom Broadcasts*** - are broadcasts that our app sends out. So, we can use these custom broadcasts whenever we want our app to take an action without launching an activity.
**Examples** :
When we want to let other apps know that the data has been downloaded to the device and is available for them to use.
#### There are 3 ways to deliver custom broadcasts :
1. For normal broadcast, pass the intent to `sendBroadcast()`
2. For ordered broadcast, pass the intent to `sendOrderedBroadcast()`
3. For local broadcast, pass the intent to `LocalBroadcastManager.sendBroadcast()`
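As a quick sketch, the three delivery styles above look like this in Kotlin. Note that the action string and extra used here are made-up examples, not part of any Android API:

```kotlin
// Hypothetical custom action name -- use your own package-prefixed string.
const val ACTION_DATA_READY = "com.example.myapp.ACTION_DATA_READY"

fun deliverBroadcasts(context: Context) {
    val intent = Intent(ACTION_DATA_READY).putExtra("rowCount", 42)

    // 1. Normal broadcast: delivered to all matching receivers in an undefined order.
    context.sendBroadcast(intent)

    // 2. Ordered broadcast: receivers run one at a time, ordered by priority,
    //    and any one of them can abort the chain or append result data.
    context.sendOrderedBroadcast(intent, /* receiverPermission = */ null)

    // 3. Local broadcast: never leaves your own process.
    LocalBroadcastManager.getInstance(context).sendBroadcast(intent)
}
```

`LocalBroadcastManager` lives in AndroidX and has since been deprecated in favor of in-process alternatives such as `LiveData` or shared flows, but it still works as shown here.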
## What are Broadcast Receivers?
A broadcast receiver is an app component that can register for system or app events. When an event occurs, registered broadcast receivers are notified via an `Intent`.
For example, if you are implementing a media app and you're interested in knowing when the user connects or disconnects the headphones, register for the `ACTION_HEADSET_PLUG` intent action.
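Purely as an illustrative sketch (the receiver variable and log tag are invented here), a media app could listen for headset plug events like this:

```kotlin
val headsetReceiver = object : BroadcastReceiver() {
    override fun onReceive(context: Context?, intent: Intent?) {
        // The "state" extra of ACTION_HEADSET_PLUG: 0 = unplugged, 1 = plugged in.
        val plugged = intent?.getIntExtra("state", -1) == 1
        Log.d("HeadsetReceiver", "Headset plugged in: $plugged")
    }
}

// ACTION_HEADSET_PLUG is only delivered to context-registered receivers,
// so register it from an activity or service rather than the manifest.
activity.registerReceiver(headsetReceiver, IntentFilter(Intent.ACTION_HEADSET_PLUG))
```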
**To create a broadcast receiver**
1. Define a subclass of the `BroadcastReceiver` class and implement the `onReceive()` method.
2. Register the broadcast receiver, either statically or dynamically.
<br>
**Image showing how to declare custom broadcast receiver class in Manifest file**

<br>
**Image showing custom broadcast receiver class**

#### The above example shows **Manifest declared** Broadcast receivers.
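Purely as an illustrative sketch (the class name and log tag here are invented, not taken from the screenshots), such a manifest-declared receiver class looks like this in Kotlin:

```kotlin
class MyCustomReceiver : BroadcastReceiver() {

    // onReceive() runs on the main thread, so keep the work here brief
    // (roughly under 10 seconds) or hand it off to a service or worker.
    override fun onReceive(context: Context?, intent: Intent?) {
        Log.d("MyCustomReceiver", "Received action: ${intent?.action}")
    }
}
```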
## Now, let us understand what **Context Registered Broadcast Receivers** are.
`Context registered receivers` receive broadcasts as long as their registering context is valid.
For example - If you register within an `activity context`, then you can receive broadcasts as long as the activity is not destroyed.
If you register within the `Application context`, you receive broadcasts as long as the app is running.
You should be mindful to `unregister` the receiver if you no longer need it.
**Image showing how to declare context registered broadcast receivers**
<br>

**Code Snippets showcasing usage of broadcast receiver**
1. **How to declare Broadcast receiver in Manifest file**
```
<receiver
android:name=".PhoneChargeConnectedReceiver">
<intent-filter>
<action android:name="android.intent.action.ACTION_POWER_CONNECTED"/>
<action android:name="android.intent.action.ACTION_POWER_DISCONNECTED"/>
</intent-filter>
</receiver>
</application>
```
2. **How to declare Broadcast receiver programmatically**
``` kotlin
class MainActivity : AppCompatActivity() {
private val phoneChargeConnectedReceiver: PhoneChargeConnectedReceiver =
PhoneChargeConnectedReceiver()
override fun onCreate(savedInstanceState: Bundle?) {
super.onCreate(savedInstanceState)
setContentView(R.layout.activity_main)
}
// We can declare broadcast receiver programmatically as shown below. Do remember to unregister the callback
override fun onResume() {
super.onResume()
val intentFilter = IntentFilter()
intentFilter.addAction(Intent.ACTION_POWER_CONNECTED)
intentFilter.addAction(Intent.ACTION_POWER_DISCONNECTED)
registerReceiver(phoneChargeConnectedReceiver, intentFilter)
}
// Unregistering the callback
override fun onPause() {
super.onPause()
unregisterReceiver(phoneChargeConnectedReceiver)
}
}
```
3. **How to get callback inside custom broadcast receiver to handle scenarios**
```
class PhoneChargeConnectedReceiver : BroadcastReceiver() {
override fun onReceive(context: Context?, intent: Intent?) {
val action = intent?.action
if (Intent.ACTION_POWER_CONNECTED == action) {
// Start the service which does the work if power is connected.
context?.startService(Intent(context, MyService::class.java))
} else if (Intent.ACTION_POWER_DISCONNECTED == action) {
// Start the service which does the work if power is disconnected.
context?.startService(Intent(context, MyService::class.java))
}
}
}
```
**Code Snippets showcasing Local Broadcast Manager**
1. **How to declare custom service in manifest file which will send the broadcast**
```
<activity
android:name=".MainActivity"
android:exported="true">
<intent-filter>
<action android:name="android.intent.action.MAIN" />
<category android:name="android.intent.category.LAUNCHER" />
</intent-filter>
</activity>
<service android:name=".MyIntentService" />
</application>
```
2. **Custom Intent Service class**
```
class MyIntentService : IntentService("MyIntentService") {
override fun onHandleIntent(intent: Intent?) {
        // Use a distinct name so the broadcast intent does not shadow the method parameter.
        val broadcastIntent = Intent(CUSTOM_ACTION)
        broadcastIntent.putExtra("Date", Date().toString())
        LocalBroadcastManager.getInstance(this).sendBroadcast(broadcastIntent)
}
companion object {
const val CUSTOM_ACTION = "YOUR_CUSTOM_ACTION"
}
}
```
3. **Main Activity registering Broadcasts and receiving data**
```
class MainActivity : AppCompatActivity(), View.OnClickListener {
private lateinit var binding: ActivityMainBinding
override fun onCreate(savedInstanceState: Bundle?) {
super.onCreate(savedInstanceState)
binding = ActivityMainBinding.inflate(layoutInflater)
binding.startButton.setOnClickListener(this)
setContentView(binding.root)
}
override fun onResume() {
super.onResume()
//Register local broadcast.
val filter = IntentFilter(MyIntentService.CUSTOM_ACTION)
LocalBroadcastManager.getInstance(this).registerReceiver(mReceiver, filter)
}
override fun onPause() {
super.onPause()
//Unregister local broadcast
LocalBroadcastManager.getInstance(this).unregisterReceiver(mReceiver)
}
//Created Broadcast Receiver to receive the data
private val mReceiver: BroadcastReceiver = object : BroadcastReceiver() {
override fun onReceive(context: Context?, intent: Intent?) {
val date = intent?.getStringExtra("Date")
binding.dateText.text = date
}
}
//When user clicks the start button, we start our intent service
override fun onClick(v: View?) {
if (v?.id == binding.startButton.id) {
val intent = Intent(this, MyIntentService::class.java)
startService(intent)
}
}
}
```
4. **XML layout**
```
<?xml version="1.0" encoding="utf-8"?>
<androidx.constraintlayout.widget.ConstraintLayout xmlns:android="http://schemas.android.com/apk/res/android"
xmlns:app="http://schemas.android.com/apk/res-auto"
xmlns:tools="http://schemas.android.com/tools"
android:layout_width="match_parent"
android:layout_height="match_parent"
tools:context=".MainActivity">
<TextView
android:id="@+id/dateText"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:text="Initial value"
android:textSize="20dp"
android:textStyle="bold"
app:layout_constraintBottom_toBottomOf="parent"
app:layout_constraintEnd_toEndOf="parent"
app:layout_constraintStart_toStartOf="parent"
app:layout_constraintTop_toTopOf="parent" />
<Button
android:id="@+id/startButton"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:layout_marginTop="20dp"
android:text="Start Service"
app:layout_constraintEnd_toEndOf="parent"
app:layout_constraintStart_toStartOf="parent"
app:layout_constraintTop_toBottomOf="@+id/dateText" />
</androidx.constraintlayout.widget.ConstraintLayout>
```
For more detailed information about ```Broadcast Receivers```, <br>
please refer to the [official Android developer documentation](https://developer.android.com/guide/components/broadcasts).
<br>
<hr>
<br>
## Audio on BroadCast And BroadCast Receivers
Link to the file -> <a href="https://drive.google.com/file/d/1qnq0VB4i3ifuqrvHJtDDQ-mHtqQBB52c/view?usp=sharing">BroadCast And BroadCast Receivers</a>
<br>
### Authors:
- [Sonali Sharma](https://github.com/Sonali-Sharma-1) - Documentation
- [Ayush Mishra](https://github.com/ayush-sleeping) - Audio
| 33.700348 | 189 | 0.712986 | eng_Latn | 0.845835 |
26367669d0020e4ce4903ef4f312027dc34e1ba1 | 1,879 | md | Markdown | pages/services/spark/2.3.1-2.2.1-2/history-server/index.md | abudnik/dcos-docs-site | 37758cb4cb688751b76957c44834fe86557b37ae | [
"Apache-2.0"
] | 1 | 2019-04-12T10:30:56.000Z | 2019-04-12T10:30:56.000Z | pages/services/spark/2.3.1-2.2.1-2/history-server/index.md | abudnik/dcos-docs-site | 37758cb4cb688751b76957c44834fe86557b37ae | [
"Apache-2.0"
] | null | null | null | pages/services/spark/2.3.1-2.2.1-2/history-server/index.md | abudnik/dcos-docs-site | 37758cb4cb688751b76957c44834fe86557b37ae | [
"Apache-2.0"
] | null | null | null | ---
layout: layout.pug
navigationTitle: History Server
excerpt: Enabling HDFS for the Spark History Server
title: Spark History Server
menuWeight: 30
model: /services/spark/data.yml
render: mustache
---
DC/OS {{ model.techName }} includes the [Spark History Server][3]. Because the history server requires HDFS, you must explicitly enable it.
1. Install HDFS:
dcos package install hdfs
**Note:** HDFS requires five private nodes.
1. Create a history HDFS directory (default is `/history`). [SSH into your cluster][10] and run:
docker run -it mesosphere/hdfs-client:1.0.0-2.6.0 bash
./bin/hdfs dfs -mkdir /history
1. Create `spark-history-options.json`:
{
"service": {
"hdfs-config-url": "http://api.hdfs.marathon.l4lb.thisdcos.directory/v1/endpoints"
}
}
1. Install the Spark History Server:
dcos package install spark-history --options=spark-history-options.json
1. Create `spark-dispatcher-options.json`:
{
"service": {
"spark-history-server-url": "http://<dcos_url>/service/spark-history"
},
"hdfs": {
"config-url": "http://api.hdfs.marathon.l4lb.thisdcos.directory/v1/endpoints"
}
}
1. Install the Spark dispatcher:
dcos package install spark --options=spark-dispatcher-options.json
1. Run jobs with the event log enabled:
dcos spark run --submit-args="--conf spark.eventLog.enabled=true --conf spark.eventLog.dir=hdfs://hdfs/history ... --class MySampleClass http://external.website/mysparkapp.jar"
1. Visit your job in the dispatcher at `http://<dcos_url>/service/spark/`. It will include a link to the history server entry for that job.
[3]: http://spark.apache.org/docs/latest/monitoring.html#viewing-after-the-fact
[10]: /latest/administering-clusters/sshcluster/
| 31.847458 | 185 | 0.67323 | eng_Latn | 0.377591 |
26368568495635222a5a8d420871618fee199ca9 | 6,311 | md | Markdown | Lync/LyncServer/lync-server-2013-registration-table.md | chethankumarshetty1986/OfficeDocs-SkypeForBusiness | 7387d631cf895992906a46d3b7576a2ac76f5b4d | [
"CC-BY-4.0",
"MIT"
] | null | null | null | Lync/LyncServer/lync-server-2013-registration-table.md | chethankumarshetty1986/OfficeDocs-SkypeForBusiness | 7387d631cf895992906a46d3b7576a2ac76f5b4d | [
"CC-BY-4.0",
"MIT"
] | null | null | null | Lync/LyncServer/lync-server-2013-registration-table.md | chethankumarshetty1986/OfficeDocs-SkypeForBusiness | 7387d631cf895992906a46d3b7576a2ac76f5b4d | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: 'Lync Server 2013: Registration table'
description: "Lync Server 2013: Registration table."
ms.reviewer:
ms.author: v-lanac
author: lanachin
f1.keywords:
- NOCSH
TOCTitle: Registration table
ms:assetid: 05ff9dd3-1aaa-4af0-bd69-8789fb8eaeb3
ms:mtpsurl: https://technet.microsoft.com/en-us/library/Gg398114(v=OCS.15)
ms:contentKeyID: 48183298
ms.date: 07/23/2014
manager: serdars
mtps_version: v=OCS.15
---
# Registration table in Lync Server 2013
<div data-xmlns="http://www.w3.org/1999/xhtml">
<div class="topic" data-xmlns="http://www.w3.org/1999/xhtml" data-msxsl="urn:schemas-microsoft-com:xslt" data-cs="https://msdn.microsoft.com/">
<div data-asp="https://msdn2.microsoft.com/asp">
</div>
<div id="mainSection">
<div id="mainBody">
<span> </span>
_**Topic Last Modified:** 2012-09-28_
Each record represents one user registration event.
<table>
<colgroup>
<col style="width: 25%" />
<col style="width: 25%" />
<col style="width: 25%" />
<col style="width: 25%" />
</colgroup>
<thead>
<tr class="header">
<th>Column</th>
<th>Data Type</th>
<th>Key/Index</th>
<th>Details</th>
</tr>
</thead>
<tbody>
<tr class="odd">
<td><p><strong>SessionIdTime</strong></p></td>
<td><p>datetime</p></td>
<td><p>Primary, Foreign</p></td>
<td><p>Time of session request. Used in conjunction with <strong>SessionIdSeq</strong> to uniquely identify a session. See the <a href="lync-server-2013-dialogs-table.md">Dialogs table in Lync Server 2013</a> for more information.</p></td>
</tr>
<tr class="even">
<td><p><strong>SessionIdSeq</strong></p></td>
<td><p>int</p></td>
<td><p>Primary, Foreign</p></td>
<td><p>ID number to identify the session. Used in conjunction with <strong>SessionIdTime</strong> to uniquely identify a session. See the <a href="lync-server-2013-dialogs-table.md">Dialogs table in Lync Server 2013</a> for more information.</p></td>
</tr>
<tr class="odd">
<td><p><strong>UserId</strong></p></td>
<td><p>int</p></td>
<td><p>Foreign</p></td>
<td><p>The user ID. See the <a href="lync-server-2013-users-table.md">Users table in Lync Server 2013</a> for more information.</p></td>
</tr>
<tr class="even">
<td><p><strong>EndpointId</strong></p></td>
<td><p>uniqueidentifier</p></td>
<td></td>
<td><p>A GUID that identifies a registration endpoint. Register events from the same computer for the same user usually share the same endpoint ID; different computers have different endpoint IDs.</p></td>
</tr>
<tr class="odd">
<td><p><strong>EndpointEra</strong></p></td>
<td><p>uniqueIdentifier</p></td>
<td></td>
<td><p>ID used to differentiate registrations that involve the same user and the same endpoint.</p>
<p>This field was introduced in Microsoft Lync Server 2013.</p></td>
</tr>
<tr class="even">
<td><p><strong>ClientVersionId</strong></p></td>
<td><p>int</p></td>
<td><p>Foreign</p></td>
<td><p>Client version of current user. See the <a href="lync-server-2013-clientversions-table.md">ClientVersions table in Lync Server 2013</a> for more information.</p></td>
</tr>
<tr class="odd">
<td><p><strong>RegistrarId</strong></p></td>
<td><p>int</p></td>
<td><p>Foreign</p></td>
<td><p>ID of the Registrar Server used for registration. See the <a href="lync-server-2013-servers-table.md">Servers table in Lync Server 2013</a> for more information.</p></td>
</tr>
<tr class="even">
<td><p><strong>PoolId</strong></p></td>
<td><p>int</p></td>
<td><p>Foreign</p></td>
<td><p>ID of the pool in which the session was captured. See the <a href="lync-server-2013-pools-table.md">Pools table in Lync Server 2013</a> for more information.</p></td>
</tr>
<tr class="odd">
<td><p><strong>EdgeServerId</strong></p></td>
<td><p>int</p></td>
<td><p>Foreign</p></td>
<td><p>Edge Server the registration is going through. See the <a href="lync-server-2013-edgeservers-table.md">EdgeServers table in Lync Server 2013</a> for more information.</p></td>
</tr>
<tr class="even">
<td><p><strong>IsInternal</strong></p></td>
<td><p>Bit</p></td>
<td></td>
<td><p>Whether or not the user logged on from the internal network.</p></td>
</tr>
<tr class="odd">
<td><p><strong>IsUserServiceAvailable</strong></p></td>
<td><p>bit</p></td>
<td></td>
<td><p>Whether the UserService is available or not.</p></td>
</tr>
<tr class="even">
<td><p><strong>IsPrimaryRegistrar</strong></p></td>
<td><p>bit</p></td>
<td></td>
<td><p>Whether or not the user registered with the primary Registrar.</p></td>
</tr>
<tr class="odd">
<td><p><strong>IsPrimaryRegistrarCentral</strong></p></td>
<td><p>bit</p></td>
<td></td>
<td><p>Indicates whether or not the user is registered with a survivable branch appliance.</p>
<p>This field was introduced in Microsoft Lync Server 2013.</p></td>
</tr>
<tr class="even">
<td><p><strong>RegisterTime</strong></p></td>
<td><p>datetime</p></td>
<td></td>
<td><p>Registration time.</p></td>
</tr>
<tr class="odd">
<td><p><strong>DeRegisterTime</strong></p></td>
<td><p>datetime</p></td>
<td></td>
<td><p>De-Registration time.</p></td>
</tr>
<tr class="even">
<td><p><strong>ResponseCode</strong></p></td>
<td><p>int</p></td>
<td></td>
<td><p>Response code of the register request.</p></td>
</tr>
<tr class="odd">
<td><p><strong>DiagnosticId</strong></p></td>
<td><p>int</p></td>
<td></td>
<td><p>Diagnostic ID of the register request. This indicates that diagnostic information type.</p></td>
</tr>
<tr class="even">
<td><p><strong>DeviceId</strong></p></td>
<td><p>int</p></td>
<td><p>Foreign</p></td>
<td><p>The device that the register request is coming from. See the <a href="lync-server-2013-devices-table.md">Devices table in Lync Server 2013</a> for more information.</p></td>
</tr>
<tr class="odd">
<td><p><strong>DeRegisterTypeId</strong></p></td>
<td><p>tinyint</p></td>
<td><p>Foreign</p></td>
<td><p>The reason for deregistration, such as ‘user initiated’, ‘registration expired’, ‘client failure’, and more. See the <a href="lync-server-2013-deregistertype-table.md">DeRegisterType table in Lync Server 2013</a> for more information.</p></td>
</tr>
<tr class="even">
<td><p><strong>IPAddress</strong></p></td>
<td><p>nvarchar(256)</p></td>
<td></td>
<td><p>IP address of the endpoint the user registered with. This can be an IPv4 address or an IPv6 address.</p>
<p>This field was introduced in Microsoft Lync Server 2013.</p></td>
</tr>
</tbody>
</table>
</div>
<span> </span>
</div>
</div>
</div>
# Repository for Python_Traning test
Vova
```bash
{{command_name}} --zone-name 'my-domain.tld' --name subdomain --value '{{dns_value}}'
```
# Time Series Prediction Web Client
[](https://travis-ci.org/TimeSeriesPrediction/time-series-web-client)
[](https://david-dm.org/TimeSeriesPrediction/time-series-web-client)
[](https://david-dm.org/TimeSeriesPrediction/time-series-web-client?type=dev)
[](http://waffle.io/TimeSeriesPrediction/time-series-web-client)
[](https://raw.githubusercontent.com/TimeSeriesPrediction/time-series-server/master/LICENSE)
This repository contains the source code for the web front-end of the Time Series Prediction project.
# brazil-geodata-amcharts4
Geodata for all Brazilian municipalities for Amcharts4
# discord-update-channel-name
Introduction 1
==============
- List I1-6, I1-7
- Skipped because there is no equivalent of the Runnable interface
- List I1-8
- Skipped because there is no equivalent of the ThreadFactory class
- List I1-10
- Skipped because there is no feature equivalent to the synchronized keyword
- List I1-11, I1-12
- Skipped because they are incorrect programs from the practice exercises
---
title: RunToCompletion semantics in Service Fabric
description: Describes RunToCompletion semantics in Service Fabric.
author: shsha-msft
ms.topic: conceptual
ms.date: 03/11/2020
ms.author: shsha
ms.openlocfilehash: 6f2f6aa4380fcf6909957118bf682275350ce68c
ms.sourcegitcommit: a43a59e44c14d349d597c3d2fd2bc779989c71d7
ms.translationtype: MT
ms.contentlocale: hu-HU
ms.lasthandoff: 11/25/2020
ms.locfileid: "96000266"
---
</a>
# <a name="runtocompletion"></a>RunToCompletion
Starting with version 7.1, Service Fabric supports **RunToCompletion** semantics for [containers][containers-introduction-link] and [guest executable][guest-executables-introduction-link] applications. These semantics enable applications and services that complete a task and then exit, in contrast to always-running applications and services.
Before proceeding with this article, we recommend becoming familiar with the [Service Fabric application model][application-model-link] and the [Service Fabric hosting model][hosting-model-link].
> [!NOTE]
> RunToCompletion semantics are currently not supported for services written using the [Reliable Services][reliable-services-link] programming model.
## <a name="runtocompletion-semantics-and-specification"></a>RunToCompletion semantics and specification
RunToCompletion semantics can be specified as an **ExecutionPolicy** when [importing the ServiceManifest][application-and-service-manifests-link]. The specified policy is inherited by all the CodePackages comprising the ServiceManifest. The following ApplicationManifest.xml snippet provides an example.
```xml
<ServiceManifestImport>
<ServiceManifestRef ServiceManifestName="RunToCompletionServicePackage" ServiceManifestVersion="1.0"/>
<Policies>
<ExecutionPolicy Type="RunToCompletion" Restart="OnFailure"/>
</Policies>
</ServiceManifestImport>
```
**ExecutionPolicy** allows the following two attributes:
* **Type:** **RunToCompletion** is currently the only allowed value for this attribute.
* **Restart:** This attribute specifies the restart policy that is applied, on failure, to the CodePackages comprising the ServicePackage. A CodePackage exiting with a **non-zero exit code** is considered to have failed. The allowed values for this attribute are **OnFailure** and **Never**, with **OnFailure** being the default.
With the restart policy set to **OnFailure**, if any CodePackage fails **(non-zero exit code)**, it is restarted, with back-off between repeated failures. With the restart policy set to **Never**, if any CodePackage fails, the deployment status of the DeployedServicePackage is marked **Failed**, but the other CodePackages are allowed to continue execution. If all the CodePackages comprising the ServicePackage run to successful completion **(exit code 0)**, the deployment status of the DeployedServicePackage is marked **RanToCompletion**.
## <a name="complete-example-using-runtocompletion-semantics"></a>Complete example using RunToCompletion semantics
Let's look at a complete example using RunToCompletion semantics.
> [!IMPORTANT]
> The following example assumes familiarity with [creating Windows container applications using Service Fabric and Docker][containers-getting-started-link].
>
> This example references mcr.microsoft.com/windows/nanoserver:1809. Windows Server containers are not compatible with all versions of a host OS. For more information, see [Windows container version compatibility](/virtualization/windowscontainers/deploy-containers/version-compatibility).
The following ServiceManifest.xml describes a ServicePackage consisting of two CodePackages, which represent containers. *RunToCompletionCodePackage1* just logs a message to **STDOUT** and exits. *RunToCompletionCodePackage2* pings the loopback address for a while and then exits with an exit code of **0**, **1** or **2**.
```xml
<?xml version="1.0" encoding="UTF-8"?>
<ServiceManifest Name="WindowsRunToCompletionServicePackage" Version="1.0" xmlns="http://schemas.microsoft.com/2011/01/fabric" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
<Description>Windows RunToCompletion Service</Description>
<ServiceTypes>
<StatelessServiceType ServiceTypeName="WindowsRunToCompletionServiceType" UseImplicitHost="true"/>
</ServiceTypes>
<CodePackage Name="RunToCompletionCodePackage1" Version="1.0">
<EntryPoint>
<ContainerHost>
<ImageName>mcr.microsoft.com/windows/nanoserver:1809</ImageName>
<Commands>/c,echo Hi from RunToCompletionCodePackage1 && exit 0</Commands>
<EntryPoint>cmd</EntryPoint>
</ContainerHost>
</EntryPoint>
</CodePackage>
<CodePackage Name="RunToCompletionCodePackage2" Version="1.0">
<EntryPoint>
<ContainerHost>
<ImageName>mcr.microsoft.com/windows/nanoserver:1809</ImageName>
<Commands>/v,/c,ping 127.0.0.1 && set /a exitCode=%random% % 3 && exit !exitCode!</Commands>
<EntryPoint>cmd</EntryPoint>
</ContainerHost>
</EntryPoint>
</CodePackage>
</ServiceManifest>
```
The following ApplicationManifest.xml describes an application based on the ServiceManifest.xml discussed above. It specifies the **RunToCompletion** **ExecutionPolicy** for *WindowsRunToCompletionServicePackage*, with a restart policy of **OnFailure**. Upon activation of *WindowsRunToCompletionServicePackage*, its constituent CodePackages will be started. *RunToCompletionCodePackage1* should exit successfully on first activation. However, *RunToCompletionCodePackage2* can fail **(non-zero exit code)**, in which case it will be restarted, since the restart policy is **OnFailure**.
```xml
<?xml version="1.0" encoding="UTF-8"?>
<ApplicationManifest ApplicationTypeName="WindowsRunToCompletionApplicationType" ApplicationTypeVersion="1.0" xmlns="http://schemas.microsoft.com/2011/01/fabric" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
<Description>Windows RunToCompletion Application</Description>
<ServiceManifestImport>
<ServiceManifestRef ServiceManifestName="WindowsRunToCompletionServicePackage" ServiceManifestVersion="1.0"/>
<Policies>
<ExecutionPolicy Type="RunToCompletion" Restart="OnFailure"/>
</Policies>
</ServiceManifestImport>
<DefaultServices>
<Service Name="WindowsRunToCompletionService" ServicePackageActivationMode="ExclusiveProcess">
<StatelessService ServiceTypeName="WindowsRunToCompletionServiceType" InstanceCount="1">
<SingletonPartition />
</StatelessService>
</Service>
</DefaultServices>
</ApplicationManifest>
```
## <a name="querying-deployment-status-of-a-deployedservicepackage"></a>Querying deployment status of a DeployedServicePackage
Deployment status of a DeployedServicePackage can be queried from PowerShell using [Get-ServiceFabricDeployedServicePackage][deployed-service-package-link], or from C# using the [FabricClient][fabric-client-link] API [GetDeployedServicePackageListAsync(String, Uri, String)][deployed-service-package-fabricclient-link].
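As an illustration only, a PowerShell check against a local development cluster might look like the sketch below; the endpoint, node name, and application name are placeholders, not values taken from this article's example deployment.

```powershell
# Connect to the cluster; 'localhost:19000' is the default local dev endpoint.
Connect-ServiceFabricCluster -ConnectionEndpoint 'localhost:19000'

# List deployed service packages for an application on a given node.
# Once every CodePackage has exited with code 0, the package's deployment
# status should read RanToCompletion, as described above.
Get-ServiceFabricDeployedServicePackage -NodeName '_Node_0' `
    -ApplicationName 'fabric:/WindowsRunToCompletionApplication'
```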
## <a name="considerations-when-using-runtocompletion-semantics"></a>Considerations when using RunToCompletion semantics
The following points should be considered with the current RunToCompletion support.
* These semantics are only supported for [containers][containers-introduction-link] and [guest executable][guest-executables-introduction-link] applications.
* Upgrade scenarios for applications with RunToCompletion semantics are not allowed. Users will need to delete and recreate such applications, if required.
* Failover events can cause CodePackages to re-execute after successful completion, on the same node or on other nodes of the cluster. Examples of failover events include node restarts and Service Fabric runtime upgrades on a node.
## <a name="next-steps"></a>Next steps
Refer to the following articles for related information.
* [Service Fabric and containers.][containers-introduction-link]
* [Service Fabric and guest executables.][guest-executables-introduction-link]
<!-- Links -->
[containers-introduction-link]: service-fabric-containers-overview.md
[containers-getting-started-link]: service-fabric-get-started-containers.md
[guest-executables-introduction-link]: service-fabric-guest-executables-introduction.md
[reliable-services-link]: service-fabric-reliable-services-introduction.md
[application-model-link]: service-fabric-application-model.md
[hosting-model-link]: service-fabric-hosting-model.md
[application-and-service-manifests-link]: service-fabric-application-and-service-manifests.md
[setup-entry-point-link]: service-fabric-run-script-at-service-startup.md
[deployed-service-package-working-with-link]: service-fabric-hosting-model.md#work-with-a-deployed-service-package
[deployed-code-package-link]: /powershell/module/servicefabric/get-servicefabricdeployedcodepackage
[deployed-service-package-link]: /powershell/module/servicefabric/get-servicefabricdeployedservicepackage
[fabric-client-link]: /dotnet/api/system.fabric.fabricclient
[deployed-service-package-fabricclient-link]: /dotnet/api/system.fabric.fabricclient.queryclient.getdeployedservicepackagelistasync
| 70.110294 | 620 | 0.807446 | hun_Latn | 0.996354 |
---
title: ScannerConfiguration element
description: The required ScannerConfiguration element is a collection of elements that describe the configurable capabilities of the scanner.
ms.assetid: 79c26d0d-ebee-4baf-8689-f5bae088883d
keywords:
- ScannerConfiguration element Imaging Devices
topic_type:
- apiref
api_name:
- wscn ScannerConfiguration
api_type:
- Schema
ms.date: 11/28/2017
ms.localizationpriority: medium
ms.openlocfilehash: 07333bf777f45b3dcdbddc63587324e50fa80f56
ms.sourcegitcommit: 0cc5051945559a242d941a6f2799d161d8eba2a7
ms.translationtype: MT
ms.contentlocale: zh-CN
ms.lasthandoff: 04/23/2019
ms.locfileid: "63370073"
---
# <a name="scannerconfiguration-element"></a>ScannerConfiguration element
The required **ScannerConfiguration** element is a collection of elements that describe the configurable capabilities of the scanner.
<a name="usage"></a>Usage
-----
```xml
<wscn:ScannerConfiguration>
child elements
</wscn:ScannerConfiguration>
```
</a>
<a name="attributes"></a>Attributes
----------
There are no attributes.
## <a name="child-elements"></a>Child elements
<table>
<colgroup>
<col width="100%" />
</colgroup>
<thead>
<tr class="header">
<th>Element</th>
</tr>
</thead>
<tbody>
<tr class="odd">
<td><p><a href="adf.md" data-raw-source="[<strong>ADF</strong>](adf.md)"><strong>ADF</strong></a></p></td>
</tr>
<tr class="even">
<td><p><a href="devicesettings.md" data-raw-source="[<strong>DeviceSettings</strong>](devicesettings.md)"><strong>DeviceSettings</strong></a></p></td>
</tr>
<tr class="odd">
<td><p><a href="film.md" data-raw-source="[<strong>Film</strong>](film.md)"><strong>Film</strong></a></p></td>
</tr>
<tr class="even">
<td><p><a href="platen.md" data-raw-source="[<strong>Platen</strong>](platen.md)"><strong>Platen</strong></a></p></td>
</tr>
</tbody>
</table>
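To make the structure concrete, a populated instance might be sketched as follows. This is an illustration, not a normative sample: child contents are elided, and which children appear depends on the input sources the scanner actually supports.

```xml
<wscn:ScannerConfiguration>
  <wscn:DeviceSettings> <!-- ... --> </wscn:DeviceSettings>
  <wscn:Platen> <!-- ... --> </wscn:Platen>
  <wscn:ADF> <!-- ... --> </wscn:ADF>
  <!-- <wscn:Film> would appear only for film-capable scanners -->
</wscn:ScannerConfiguration>
```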
## <a name="parent-elements"></a>Parent elements
<table>
<colgroup>
<col width="100%" />
</colgroup>
<thead>
<tr class="header">
<th>Element</th>
</tr>
</thead>
<tbody>
<tr class="odd">
<td><p><a href="elementchanges.md" data-raw-source="[<strong>ElementChanges</strong>](elementchanges.md)"><strong>ElementChanges</strong></a></p></td>
</tr>
<tr class="even">
<td><p><a href="elementdata-for-scannerelements-element.md" data-raw-source="[<strong>ElementData for parent ScannerElements</strong>](elementdata-for-scannerelements-element.md)"><strong>ElementData for parent ScannerElements</strong></a></p></td>
</tr>
</tbody>
</table>
## <a name="see-also"></a>See also
[**ADF**](adf.md)
[**DeviceSettings**](devicesettings.md)
[**ElementChanges**](elementchanges.md)
[**ElementData for parent ScannerElements**](elementdata-for-scannerelements-element.md)
[**Film**](film.md)
[**Platen**](platen.md)
# DVCS Smackdown
This is an example website for Bootstrap that contains HTML, JS, and CSS
that we can all modify as desired.
Please, **use this as a chance to dry-fire the DVCS systems**, so submit
pull requests, fork, add things, blow things up, get a feel for it all
so we can have (experiential) personal preferences in our next discussion.
## How to run the website?
If you have python, I recommend...
```bash
$ python3 -m http.server --bind 127.0.0.1
```
## Aside: My feelings toward git

---
sort: 4
title: Assignment to Globals and Locals
---
# 4 Assignment to Globals and Locals
We already discussed lexical variables and globals in general in section 3.5. This section discusses adding variable assignment to Sympl. This starts with the keyword **set**, for which we won't discuss lexical scanning or parsing. As a reminder, Sympl is expression-based, and everything returns a value. The **set** keyword form returns the value that is stored.
Let's look at the AnalyzeAssignExpr from etgen.cs:
``` csharp
public static Expression AnalyzeAssignExpr(SymplAssignExpr expr,
AnalysisScope scope) {
if (expr.Location is SymplIdExpr) {
var idExpr = (SymplIdExpr)(expr.Location);
var lhs = AnalyzeExpr(expr.Location, scope);
var val = AnalyzeExpr(expr.Value, scope);
var param = FindIdDef(idExpr.IdToken.Name, scope);
if (param != null) {
return Expression.Assign(
lhs,
Expression.Convert(val, param.Type)
);
} else {
var tmp = Expression.Parameter(typeof(object),
"assignTmpForRes");
return Expression.Block(
new[] { tmp },
Expression.Assign(
tmp,
Expression.Convert(val, typeof(object))
),
Expression.Dynamic(
scope.GetRuntime()
.GetSetMemberBinder(idExpr.IdToken.Name),
typeof(object),
scope.GetModuleExpr(),
tmp
),
tmp
);
}
// Ignore rest of function for now, discussed later with elt
// keyword and SetMember.
```
There are only two cases to implement at this point, lexical variables and file global variables. Later, Sympl adds setting indexed locations and .NET members. The key here is the chain of AnalysisScopes and that Expression Trees provide automatic closure environments if lifting is needed. FindIdDef (code is in etgen.cs) searches up the chain until it finds a scope with the identifier mapped to a ParameterExpression. If it finds the name, then it is a lexical variable. For lexical identifiers AnalyzeAssignExpr emits an Assign node, which guarantees to return the value stored. You also need to ensure the val expression converts to the ParameterExpression's type. The Assign factory method would throw if the Expression types were inconsistent.
If FindIdDef finds no scope mapping the identifier to a ParameterExpression, then the variable is a file global. As described in section 3.3, Sympl leverages the DLR's ExpandoObjects to represent file scopes. Sympl uses a Dynamic expression with one of its SymplSetMemberBinders, which carries the identifier name as metadata. Sympl's binders also set ignoreCase to true implicitly. We'll discuss the use of the BlockExpression after digging into the DynamicExpression a bit more.
There are a couple of points to make now in Sympl's evolving implementation. The first is to ignore GetSetMemberBinder. Imagine this is just a call to the constructor:
``` csharp
new SymplSetMemberBinder(idExpr.IdToken.Name)
```
GetSetMemberBinder produces canonical binders, a single binder instance used on every call site with the same metadata. This is important for DLR L2 caching of rules. See section 18 for how Sympl does this and why, and see sites-binders-dynobj-interop.doc for more details on CallSite rule caching. The second point is that right now the SymplSetMemberBinder doesn't do any other work, other than convey the identifier name as metadata. We know the ExpandoObject's DynamicMetaObject will provide the implementation at runtime for how to store the value as a member.
Sympl is a case-INsensitive language. Sympl is case-preserving with identifiers stored in tokens and in binder metadata. Preserving case provides a bit more opportunity for interoperability. For example, if a Sympl file module flowed into some IronPython code, and it did case-sensitive lookups, the python code is more likely to just work. As another example, in the IronPython implementation of Sympl, where the Cons class is implemented in IronPython, it is IronPython's DynamicMetaObject that looks up the members First and Rest. IronPython still has a bug that it ignores ignoreCase on binders. While Sympl preserves case in the metadata, it uses lowercase as the canonical representation of identifiers in AnalysisScopes.
There's more code to the globals branch than the SetMember DynamicExpression. Sympl's semantics is to return a value from every expression, and it returns the value stored from assignments. Sympl wraps the DynamicExpression in a Block with a temporary variable to ensure it only evaluates the value expression once. The Block has as its last expression the ParameterExpression for the temporary variable so that the BlockExpression returns the value stored. This code is for example now, but when written ExpandoObject didn't return the values it stored. The convention for binders and DynamicMetaObjects is that they should return rules for SetMember and SetIndex operations that result in the value stored.
| 83.777778 | 750 | 0.729443 | eng_Latn | 0.995428 |
# gonga
More than just another GUI to Kong Admin API
# go-gqlgen-helpers 
## Install
```sh
go get -u "github.com/nrfta/go-gqlgen-helpers"
```
### GraphQL gqlgen error handling
```go
// ...
router.Handle("/graphql",
handler.GraphQL(
NewExecutableSchema(Config{Resolvers: &Resolver{}}),
errorhandling.ConfigureErrorPresenterFunc(reportErrorToSentry),
errorhandling.ConfigureRecoverFunc(),
),
)
// ...
func reportErrorToSentry(ctx context.Context, err error) {
// Whatever
}
```
## License
This project is licensed under the [MIT License](LICENSE.md).
---
title: "Leetcode Java Minimum Path Sum"
excerpt: "Leetcode Minimum Path Sum Java solution"
last_modified_at: 2021-06-15T12:30:00
header:
image: /assets/images/leetcode/minimum-path-sum.png
categories:
- Leetcode
tags:
- Programming
- Leetcode
- Java
toc: true
toc_ads: true
toc_sticky: true
use_math: true
---
# Problem
[Link](https://leetcode.com/problems/minimum-path-sum/){:target="_blank"}
# Code
```java
class Solution {
public int minPathSum(int[][] grid) {
int[] dp = this.getDp(grid);
for (int i = 1; i < grid.length; i++) {
for (int j = 0; j < grid[i].length; j++) {
if (j == 0) {
dp[j] += grid[i][j];
} else {
dp[j] = grid[i][j] + Math.min(dp[j - 1], dp[j]);
}
}
}
return dp[grid[0].length - 1];
}
private int[] getDp(int[][] grid) {
int[] dp = new int[grid[0].length];
dp[0] = grid[0][0];
for (int j = 1; j < grid[0].length; j++) {
dp[j] = dp[j - 1] + grid[0][j];
}
return dp;
}
}
```
# Result
[Link](https://leetcode.com/submissions/detail/508070444/){:target="_blank"}
# Explanation
1. The problem is to find the path from the top-left cell to the bottom-right cell that minimizes the sum of the values along the way.
2. To accumulate the sums, define a DP array whose length equals the number of columns of the given 2D array grid.
    - Defining the DP the same size as grid would let you inspect the sum at every step, but since only the minimum total is needed, an array of grid's column length is enough to accumulate the path sums.
3. Initialize the DP array by iterating over the first row of grid and storing the running totals in order.
4. Iterate over the remaining rows of grid, accumulating the minimum sum needed to reach the bottom-right cell.
    - Since the DP was initialized from the first row of grid, the iteration starts from the next row.
    - For the first column, accumulate the grid value into the same DP slot.
    - Because movement is progressively down and to the right, the first column can only be reached by moving down.
    - For the other columns, add the grid value to the smaller of DP[$j - 1$] and DP[j].
    - While moving toward the bottom-right, this picks the cheaper of arriving from the cell above versus the cell to the left, accumulating the minimum sum needed to reach the destination.
5. When the iteration completes, return the value in the last column of the DP array as the answer.
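To sanity-check the rolling-array DP described above, the same logic can be exercised in a standalone demo; the class and method names here are for illustration only.

```java
public class MinPathSumDemo {
    // The same rolling one-dimensional DP as the solution above,
    // inlined here so the demo compiles on its own.
    static int minPathSum(int[][] grid) {
        int[] dp = new int[grid[0].length];
        dp[0] = grid[0][0];
        for (int j = 1; j < grid[0].length; j++) {
            dp[j] = dp[j - 1] + grid[0][j];   // prefix sums of the first row
        }
        for (int i = 1; i < grid.length; i++) {
            for (int j = 0; j < grid[i].length; j++) {
                if (j == 0) {
                    dp[j] += grid[i][j];      // first column: arrive from above only
                } else {
                    dp[j] = grid[i][j] + Math.min(dp[j - 1], dp[j]); // left vs. above
                }
            }
        }
        return dp[grid[0].length - 1];
    }

    public static void main(String[] args) {
        // LeetCode's sample grid: the cheapest path 1→3→1→1→1 sums to 7.
        int[][] grid = {{1, 3, 1}, {1, 5, 1}, {4, 2, 1}};
        System.out.println(minPathSum(grid)); // prints 7
    }
}
```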
# Source
The sample code is available [here](https://github.com/GracefulSoul/leetcode/blob/master/src/main/java/gracefulsoul/problems/MinimumPathSum.java){:target="_blank"}.
---
layout: post
comments: true
categories: Other
---
## Download 2003 lancer es manual book
# Replication Instructions for: Common Ownership in America: 1980-2017
Backus, Conlon and Sinkinson (2020)
AEJMicro-2019-0389
openicpsr-120083
A copy of the paper is here: https://chrisconlon.github.io/site/common_owner.pdf
### Open ICPSR Install Instructions
1. Download and unzip the repository.
2. All required files are included or are downloaded programmatically from WRDS (see notes below).
### Github Install Instructions
To download the repo simply type:
git clone https://github.com/chrisconlon/CommonOwnerReplication
You will need to have the Git Large File Storage extension installed (which you probably do not have by default).
To install this extension follow the directions at:
https://git-lfs.github.com
### Dataset Size and Memory
1. We recommend that you have at least 64GB of RAM available.
2. All of the datasets saved will take up about 14 GB of drive space.
3. NumPy is used extensively for the calculations and is multithreaded (so more cores will help).
4. The computation of the $\kappa_{fg}$ terms is parallelized quarter by quarter explicitly (so cores will help a lot here).
5. But most of the time spent is in merging and filtering data in pandas (more cores don't help much).
6. Total runtime on a 2015 iMac with 64GB of RAM is around 3 hours.
7. WRDS download time is about an hour (depends on internet speed) and the total download is > 10GB.
### Downloading from WRDS
User must provide their own WRDS account. User will be prompted for WRDS username and password in file 1_Download_WRDS_Data.py.
To request an account, please visit:
https://wrds-www.wharton.upenn.edu/register/
If you do not have API access, you will need to consult the wrds_constituents.pdf document for instructions on using the WRDS web interface. This is strongly NOT RECOMMENDED. Because you cannot apply complex filters to the SQL queries as we do programmatically, you will also need much more disk space (on the order of a terabyte to save the entire Thomson-Reuters s34 13f database).
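For reference, the programmatic route amounts to sending filtered SQL through the `wrds` package. The sketch below is illustrative only — the table and column names are placeholders, not the exact queries issued by 1_Download_WRDS_Data.py — but it shows how filters keep the download small:

```python
def build_s34_query(start_date, end_date, cusips):
    """Assemble a filtered SQL query for a 13f holdings table.

    Table and column names here are illustrative placeholders,
    not necessarily the ones the actual download script uses.
    """
    cusip_list = ", ".join("'{}'".format(c) for c in cusips)
    return (
        "SELECT fdate, mgrno, cusip, shares "
        "FROM tfn.s34 "  # placeholder table name
        "WHERE fdate BETWEEN '{}' AND '{}' "
        "AND cusip IN ({})"
    ).format(start_date, end_date, cusip_list)

# With a WRDS account, a query like this would be executed roughly as:
# import wrds
# db = wrds.Connection(wrds_username='joe')
# holdings = db.raw_sql(build_s34_query('1980-01-01', '2017-12-31', sp500_cusips))
```

Pushing the date and CUSIP filters into the SQL, rather than downloading whole tables, is what keeps the API route to around 10GB instead of a terabyte.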
If you are running this on a batch job (not interactively) such as on a HPC cluster you will need to pre-enter your WRDS password by creating a pgpass file.
As an example:
```
import wrds
db = wrds.Connection(wrds_username='joe')
db.create_pgpass_file()
```
If you encounter a problem, it might be that your pgpass file is not accessible by your batch job.
For more information, please see: [https://wrds-www.wharton.upenn.edu/pages/support/programming-wrds/programming-python/python-from-your-computer/](https://wrds-www.wharton.upenn.edu/pages/support/programming-wrds/programming-python/python-from-your-computer/)
### Python dependencies
Our run_all.sh bash script should install all of the required Python dependencies (assuming Python itself is installed correctly and you have the necessary access to install packages).
To install those dependencies manually (such as on a shared server) you may need to do the following.
Python (version 3.8 or above) - install dependencies with
pip3 install -r requirements.txt
numpy, pandas, matplotlib, pyarrow, brotli, seaborn, wrds, scikit-learn, pyhdfe, pyblp, statsmodels
We anticipate most users will be running this replication package from within an Anaconda environment. To avoid making changes to your base environment you will want to create a separate environment for this replication package. To do that:
```
conda create --name common_owner --file requirements.txt
conda activate common_owner
```
## How to run the code
Change to the directory containing this file and run "./run_all.sh" on the terminal. The code should take approximately 3-10 hours to run. Tables and figures will be produced as described below.
```
cd code
./run_all.sh
```
### Windows Warning
Windows Users: instead use "run_all.bat" from the command prompt.
There are known conflicts between Windows 10 and core Python DLL's in versions < 3.7.3. If you are running on Windows 10, all Python programs will run best with Python 3.8 or later (see: https://bugs.python.org/issue35797).
## File of origin for tables and figures
| Table/Figure Number | Generating File |
| --- |---|
| Table 1 | (by hand) |
| Table 2 | (by hand) |
| Table 3 | table3_variance_decomp.py |
| Table 4 | table4_kappa_correlations.py |
| Figure 1 | plots2_kappa_official.py |
| Figure 2 | plots1_basic_descriptives.py |
| Figure 3 | plots1_basic_descriptives.py |
| Figure 4 | plots1_basic_descriptives.py |
| Figure 5 | plots3_big_three_four.py |
| Figure 6 | plots2_kappa_official.py |
| Figure 7 | plots2_kappa_official.py |
| Figure 8 | plots5_investor_similarity.py |
| Figure 9 | plots2_kappa_official.py |
| Figure 10 | plots11_profit_simulations.py |
| Figure 11 | plots11_profit_simulations.py |
| Figure 12 | plots9_blackrock_vanguard.py |
| Figure 13 | plots2_kappa_official.py |
| Figure 14 | plots2_kappa_official.py |
| Figure 15 | plots2_kappa_official.py |
| Figure 16 | plots5_airlines_cereal.py |
| Figure 17 | plots6_sole_vs_shared.py |
| Figure A1 | plots1_basic_descriptives.py |
| Figure A2 | plots8_individual_firm_coverage.py |
| Figure A3 | plots10_kappa_comparison_appendix.py |
| Figure A4 | plots7_short_interest_coverage.py |
| Figure A5 | plots7_short_interest_coverage.py |
| Figure A6 | plots2_kappa_official.py |
| Figure A7 | plots2_kappa_official.py |
| Figure A8 | plots5_investor_similarity.py |
## Within-File Dependencies:
1_Download_WRDS_Data.py:
wrds_downloads
2_Process_WRDS_Data.py
wrds_cleaning
wrds_checks
3_Calculate_Kappas.py
kappas
investors
firminfo
utilities/quantiles
plots3_big_three_four.py:
kappas
investors
plots5_airlines_cereal.py:
kappas
plots9_blackrock_vanguard.py:
kappas
plots10_kappa_comparison_appendix.py:
utilities/matlab_util
## Files Provided and Data Access Statements
### WRDS
We use several data sources from WRDS. These are accessed programmatically through the WRDS API and we are not able to include individual files in this replication package. (See terms: https://wrds-www.wharton.upenn.edu/users/tou/).
They include:
A. CRSP: data on securities prices and shares outstanding; list of S&P 500 constituents.
B. Compustat: business fundamentals, short interest, business segment info.
C. Thomson-Reuters: s34 database of 13f filings/ownership.
### Author Constructed files
data/public:
The below files are publicly available csv's constructed by the authors. These are drops, consolidations, and manager identifiers that are used in our project. They are distributed with this code package.
1. manager_consolidations.csv: lists consolidated manager numbers: several manager numbers actually correspond to one manager
2. permno_drops.csv: lists dropped permno IDs with reasons why they are dropped
3. big4.csv: lists manager Numbers for Blackrock, Fidelity, State Street, and Vanguard
### DeLoecker Eeckhout Unger Markups

4. DLE_markups_fig_v2.csv: markups from Figure 10 of De Loecker, Eeckhout, and Unger (QJE 2020)

The markups from DLEU 2020 can be reproduced by running the following replication package:
De Loecker, Jan; Eeckhout, Jan; Unger, Gabriel, 2020,
"Replication Data for: 'The Rise of Market Power and the Macroeconomic Implications'", https://doi.org/10.7910/DVN/5GH8XO, Harvard Dataverse, V1
That replication package requires access to WRDS. A subset of the markups (and no additional data) are being made publicly available here.
### Scraped 13f filings
The original source data are the publicly available SEC 13f filing data from EDGAR: https://www.sec.gov/edgar/searchedgar/companysearch.html
Most users instead access the Thomson-Reuters S34 database from WRDS (as our script above does). We've also scraped the original source documents from EDGAR and compiled them into an easy to use format. We provide the entire universe of 13f filings as a separate dataset. For the purposes of replicating this paper, we use three smaller extracts as parquet files:
5. cereal.parquet: extract 13F Filings for firms within the cereal industry (includes small cap)
6. airlines.parquet: extract 13F Filings for firms within the airline industry (includes small cap)
7. out_scrape.parquet: extract 13F Filings for LARGE cap firms (a superset of the S&P 500) from 1999-2017 (300MB).
Each file contains:
- 13f filings going back to 1999 and ending in late 2017 (the data period for this paper).
The full set of scraped 13f filings and a detailed description of how extracts were created are available in two places:
1. The live version of the 13f scraping project is [https://sites.google.com/view/msinkinson/research/common-ownership-data?](https://sites.google.com/view/msinkinson/research/common-ownership-data?)
2. The permanent archived version (including these extracts) is available to the public at Harvard Dataverse (doi:10.7910/DVN/ZRH3EU):
https://doi.org/10.7910/DVN/ZRH3EU
Backus, Matthew; Conlon, Christopher T; Sinkinson, Michael; 2020, "Common Ownership Data: Scraped SEC form 13F filings for 1999-2017", https://doi.org/10.7910/DVN/ZRH3EU, Harvard Dataverse, V1.1
### Description of .parquet file format
We use the parquet format for:
- Large data inputs (above)
- Most intermediary datasets
Parquet files are compressed columnar storage binaries that are readable by several software packages (R, Python, Stata, Julia, C++, etc.) and platforms. The goal of the parquet project is to maintain good performance for large datasets as well as interoperability.
The storage method is stable and maintained by the Apache Foundation.
https://parquet.apache.org/documentation/latest/
We use the Python package "pyarrow" to read parquet files and the package "brotli" for compression (both listed in requirements.txt).
"MIT"
] | 109 | 2016-12-18T03:54:22.000Z | 2022-03-16T02:16:36.000Z | Nodes and Kinds
===============
**Preface: purpose of this document:**
This document is intended for developers of new (or renovating) IPLD libraries.
It contains design suggestions based on the experience of building (and rebuilding)
IPLD libraries in various languages, and reflecting on the lessons learned.
It also contains both notes on practical limitations we've found for implementations,
and reflections on how to express things clearly within the type systems of the
host language a library is implemented in (whatever that may be).
**Preface: limitations of this document:**
Since this document is aimed at _new libraries_, it's also implicitly expecting
that the new library might be in a _new language_.
We can't presume to know precisely what that language will enable or encourage!
Therefore, there will be limits to how transferable the advice in this document may be.
We do expect that the best way to express IPLD concepts may vary based on
the language a library is created in. We accept this and try to write this
document anyway, and make it as useful as it can be.
These guidelines are written with particular attention to the limitations that
are typical to strongly typed languages. (Some of the phrasing used reflect
this -- we refer to "types", "enumerations", "interfaces", "packages", etc.
However, these concepts can still translate even to languages with varying
amounts of compile-time type checking, and indeed even to those with none.
While the concepts are certainly not identical across all languages,
we hope that they're close enough to be meaningful to a thoughtful reader.)
We expect that common concepts for IPLD libraries will emerge across many languages,
and hope that some vocabulary for these concepts is something we can share.
Loosely and untyped languages may need to interpret these guidelines
appropriately while extracting the key concepts; but even among languages with
stricter concepts of compile-time type checking, the meaning of "interface"
can vary greatly -- _all_ readers will need to be ready to use their best judgement.
---
Cornerstone Types
=================
Your IPLD library should have two cornerstone types:
1. `Node`;
2. `Kind`.
`Node` should be an interface -- the membership should be open
(aka, it should be possible for other packages to implement it).
`Kind` should be an enumeration -- a fixed set of named members,
which should not be extendable.
Kind
----
`Kind` maps very directly onto the definition of
[Data Model Kinds](/docs/data-model/kinds/).
`Kind` does not include the Schema layer's concept of "struct", etc.
`Kind` must be an enum, **and not a sum type**. Attempting to implement
kind as a sum type conflates it with `Node`.
(This may be tempting to try to combine `Kind` and `Node` into a single
sum type definition if you're only looking at the Data Model layer,
but it is a mistake: both Schema types and Advanced Layouts require
the ability to add more implementations of `Node`, so this conflation
will cause cataclysmic problems and force a painful refactor
as soon as you get to implementing those systems.
See the [different implementors of Node](#different-implementors-of-node)
section, later in this document, for more information on this.)
Node
----
`Node` is a monomorphized interface for handling data -- in other words,
we make all data look and act like a `Node`, so that we can write all of our
functions against the `Node` interface, and have that work for any sort of data.
`Node` has functions for examining any of the
[Data Model Kinds](/docs/data-model/kinds/).
For example, this means `Node` must be able to
do a key lookup for a map kind,
provide an iterator for a list kind,
or be convertible to a primitive if it's a integer kind.
`Node` is generally implemented by making an interface with the superset of all
these methods needed for the various kinds of data.
Some programming languages may also have a pattern-matching faculty which
may make this nicer; feel free to use it (but mind the caveats issued in the
[Kind](#kind) section above, and the
[different implementors of Node](#different-implementors-of-node) section below:
the membership of `Node` must remain *open*;
you do *not* want to use a sum type with a closed list of concrete members here,
or it will cause other roadblocks later that *will* force a redesign).
For languages where this is most straightforwardly implemented by a single
interface containing the superset of all necessary methods, many of the methods
will error if the `Node` refers to information of the wrong kind for that method;
this is fine.
`Node` should be clear about what sets of methods are valid for acting on it.
Typically, this is done by a `Node.Kind()` method, which should return
a member of the [Kind](#kind) enum.
This information is useful for anyone writing functions which use the `Node`
interface, because it's much more pleasant (and fast) to check the Kind and
know which methods can be expected to work than it is to have to probe every
method individually for failure.
(Again, programming languages with pattern-matching faculties may find
a cleverer way for their compiler and type system to support this.)
### different implementors of Node
Though the methods on the `Node` interface are defined as those necessary for
examining data of the [Data Model Kinds](/docs/data-model/kinds/),
**`Node` is not only implemented by the Data Model**:
- Yes, `Node` is implemented by types that just hold basic Data Model info;
- `Node` is also implemented by [Advanced Data Layouts](/docs/advanced-data-layouts/) ---
- consider a HAMT that spans many separately-serialized chunks of data; it should still be usable as if it's a regular map.
- `Node` is also implemented by [Schema-typed Nodes](/docs/schemas/) --
- Both if implemented by a single implementation that evaluates rules at runtime (so, finite count of implementing types and known at core library compile time)...
- or if handled by codegen/macros (unknown count / open set of implementors of `Node`; not known at core library compile time; may be created in other packages that import the core, rather than core importing them!).
Even further, some libraries may choose to make even more various
implementations of `Node` for optimizing performance of specific tasks:
for example, a `Node` which implements basic Data Model "map" semantics,
but using some internal algorithm for memory layout which is known to be
efficient for certain workloads;
or for another example, a `Node` which is particularly efficient for handling
data of one particular serialization codec, and keeps a lazy-loading skip-tree
over the serialized bytes.
Clearly, neither of these should be the default implementation a library uses,
but clearly, both of them should be able to be used transparently.
With all seven (?! indeed, *seven*) of these different stories,
we can consider it conclusive that the `Node` interface should be ready
to support many, many diverse implementors.
### a default implementation of Node
As an IPLD library author, you may be tempted to make a single, "default"
implementation of `Node`.
Feel free to do so; but be cautious of giving it special privileges.
Try implementing it in a separate package from your core interfaces: this will
be a good exercise to make sure other implementations can later do the same.
(Since in the order of things you'll do when implementing a new IPLD library,
creating this basic default node implementation is likely quite early,
going about it in such a way that it forces design choices you'll need later
anyway will save you from potentially discovering the need for some costly
refactors later!)
Nodes vs NodeBuilders
---------------------
If you choose to pursue a distinction between mutable and immutable data
in the design of your library, it may be useful to create two separate
interfaces for each phase of the data's lifecycle.
These might be called "Node" (for the immutable data)
and "NodeBuilder" (for the mutating/building phase of the data's life).
It is not necessary to have distinct interfaces for this;
a library can also opt to have a mutable concept of "node".
Immutable interfaces can be particularly well-suited to IPLD data, though;
it's worth considering them.
Higher level functions
----------------------
Almost all features should be implemented to take `Node` arguments,
and return `Node` values.
Traversals and walks can be implemented in this way: e.g.
`function walk(start Node, visitorFn func(visited Node))`.
Selectors can be implemented in this way.
(Continue with the idea above for traversals.)
Transformations can be implemented in this way.
(Continue with the idea above for traversals.)
Codecs themselves can be implemented this way:
marshalling is a traversal over nodes, so `func marshal(obj Node) -> bytes`,
and unmarshalling is something like `func unmarshal(bytes, NodeBuilder) -> Node`.
(Note that if your library has a `Node`/`NodeBuilder` split for immutability purposes,
then of course any operation that builds new nodes,
such as transformations or codecs during unmarshalling,
will have a `NodeBuilder` parameter.
If your library has a mutable `Node`, these function signatures might appear differently.)
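For instance, a recursive walk can be written once against the interface and never mention any concrete implementation (Python sketch with duck-typed nodes; the exact signatures will vary by library):

```python
def walk(start, visitor):
    """Depth-first visit of every node, written purely against the
    Node interface: only kind() and the iteration methods are used."""
    visitor(start)
    kind = start.kind()
    if kind == "map":
        for _key, child in start.map_iterator():
            walk(child, visitor)
    elif kind == "list":
        for child in start.list_iterator():
            walk(child, visitor)

# Minimal duck-typed nodes, enough to exercise the traversal:
class MapNode:
    def __init__(self, entries): self._entries = dict(entries)
    def kind(self): return "map"
    def map_iterator(self): return iter(self._entries.items())

class ListNode:
    def __init__(self, items): self._items = list(items)
    def kind(self): return "list"
    def list_iterator(self): return iter(self._items)

class StringNode:
    def __init__(self, value): self.value = value
    def kind(self): return "string"
```

The same `walk` works unchanged whether the nodes came from a codec, an Advanced Data Layout, or a schema-typed implementation — exactly the transitive property described above.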
By defining all these functions in terms of `Node`, they can be used the same
in any of the various contexts described in the
[different implementors of Node](#different-implementors-of-node) section:
- traversals/selectors/transforms/etc work over various codecs (trivially,
by transitive property).
- traversals/selectors/transforms/etc work regardless of in-memory layouts
that may vary per `Node` implementation
- traversals/selectors/transforms/etc work transparently over ADLs!
- traversals/selectors/transforms/etc work transparently over schemas!
It is also useful to note that by implementing these features over the `Node`
interface, rather than *in* the `Node` interface, it becomes much more
possible to implement various kinds of e.g. traversal library
(perhaps you'll discover two different ways to go about it,
one with better ergonomics, and one with better performance?);
and it also requires much less code per `Node` implementation if things
like traversals are implemented from the outside.
# ZHXTabView
---------------------------------------------------------
[](https://github.com/zhangxistudy11/ZHXTabView)
[](https://github.com/zhangxistudy11/ZHXTabView)
[](https://github.com/zhangxistudy11/ZHXTabView)
[](https://www.jianshu.com/p/d55b74949555)
A highly customizable tab for iOS .
[中文文档地址](https://www.jianshu.com/p/d55b74949555)
# Table of Contents
---------------------------------------------------------
* [Background](#Background)
* [DisplayEffect](#DisplayEffect)
* [Install](#Install)
* [Usage](#Usage)
* [API](#API)
* [License](#License)
# Background
---------------------------------------------------------
In daily development, we often encounter usage scenarios for segmented tabs. Different scenes have different needs: some need animation, some need badges, and some need mask animation. ZHXTabView provides a comprehensive component to meet all of these needs.
# DisplayEffect
---------------------------------------------------------
basics:

badge:

mask:

# Install
---------------------------------------------------------
Go to GitHub to download ZHXTabView, then drag the files circled in blue in the screenshot below into your project. The red circle shows the import statement for reference.

# Usage
---------------------------------------------------------
#### 1. Basic use: initialize ZHXTabView with an array of titles, set its delegate, and be sure to implement the delegate method.
```
NSArray *titles = @[@"Asian",@"Europe",@"America",@"Africa"];
self.firstTabView = [[ZHXTabView alloc]initWithTitles:titles];
[self.view addSubview:self.firstTabView];
self.firstTabView.frame = CGRectMake(20, 150, ScreenWidth -40, 50);
self.firstTabView.delegate = self;
self.firstTabView.leftPadding = 10;
self.firstTabView.rightPadding = 10;
self.firstTabView.itemLineColor = [UIColor blueColor];
self.firstTabView.itemSelectedTextColor = [UIColor blueColor];
```
```
- (void)tabView:(ZHXTabView *)tabView didSelectItemAtIndex:(NSInteger)index{
}
```
#### 2. Use with a badge: to keep badges highly customizable, a custom badge view must inherit from ZHXBadgeView (or be a ZHXBadgeView instance) when used.
```
ZHXBadgeView *badgeOne = [[ZHXBadgeView alloc]initWithFrame:CGRectMake(0, 0, 15, 15)];
UILabel *hotBadge = [[UILabel alloc]initWithFrame:badgeOne.bounds];
[badgeOne addSubview:hotBadge];
hotBadge.backgroundColor = [UIColor redColor];
hotBadge.textAlignment = NSTextAlignmentCenter;
hotBadge.font = [UIFont systemFontOfSize:10];
hotBadge.layer.cornerRadius = 7.5;
hotBadge.clipsToBounds = YES;
hotBadge.text = @"2";
hotBadge.textColor = [UIColor whiteColor];
```
#### At the same time, you need to specify which index the badge is attached to
```
/// Set a badge positioned relative to the top and right of the item's text. If you have multiple badges, you can call this method multiple times.
/// @param badgeView custom badge , need to inherit from ZHXBadgeView
/// @param index position index
/// @param size badgeView size
/// @param topOffset The spacing between the badge's top and the top of the text; can be negative
/// @param rightOffset The spacing between the badge's left side and the right side of the text; can be negative
- (void)configBadge:(ZHXBadgeView *)badgeView atIndex:(NSInteger)index badgeSize:(CGSize)size topOffsetFromTextTop:(CGFloat)topOffset rightOffsetFormTextRight:(CGFloat)rightOffset;
```
#### 3. With a mask, you implement the effect through CAShapeLayer and UIBezierPath. For specific usage, refer to the code in the demo; pay attention to setting the view hierarchy correctly and to the transition animation beneath the mask.
# API
---------------------------------------------------------
```
@property (nonatomic, weak) id<ZHXTabViewDelegate> delegate;///<delegate of tabView
@property (nonatomic, assign) CGFloat leftPadding;///<left inner margin
@property (nonatomic, assign) CGFloat rightPadding;///<right inner margin
@property (nonatomic, assign) CGFloat itemHeight;///<item height. Value is MIN(itemHeight, self.frame.size.height). Default is 40.0
@property (nonatomic, assign) CGFloat itemPadding;///<padding between text and line. Default is 3
@property (nonatomic, assign) CGFloat itemTextHeight;///<item text height. Default is 20
@property (nonatomic, strong) UIColor * itemBackgroundColor;///<item backgroundColor. Default is [UIColor clearColor]
@property (nonatomic, strong) UIFont * itemTextFont;///<item text font. Default is systemFontOfSize:17
@property (nonatomic, strong) UIColor * itemTextColor;///<item text color. Default is [UIColor blackColor]
@property (nonatomic, strong) UIFont * itemSelectedTextFont;///<item text font when selected. Default is boldSystemFontOfSize:17
@property (nonatomic, strong) UIColor * itemSelectedTextColor;///<item text color when selected. Default is [UIColor purpleColor]
@property (nonatomic, assign) CGFloat itemLineHeight;///<item line height. Default is 3
@property (nonatomic, assign) CGFloat itemLineWidth;///<item line width. Default is 25
@property (nonatomic, strong) UIColor * itemLineColor;///<bottom line color. Default is [UIColor purpleColor]
@property (nonatomic, assign) CGFloat itemLineCornerRadius;///<bottom line cornerRadius. Default is 1.5
/// Initialization method
/// @param titles titles array
- (instancetype)initWithTitles:(NSArray <NSString *> *)titles;
/// Set the location of the selected item by default
/// @param defaultIndex index. Default index is 0.
- (void)configDefultSelectedIndex:(NSInteger)defaultIndex;
/// Set a badge positioned relative to the top and right of the item's text. If you have multiple badges, you can call this method multiple times.
/// @param badgeView custom badge , need to inherit from ZHXBadgeView
/// @param index position index
/// @param size badgeView size
/// @param topOffset The spacing between the badge's top and the top of the text; can be negative
/// @param rightOffset The spacing between the badge's left side and the right side of the text; can be negative
- (void)configBadge:(ZHXBadgeView *)badgeView atIndex:(NSInteger)index badgeSize:(CGSize)size topOffsetFromTextTop:(CGFloat)topOffset rightOffsetFormTextRight:(CGFloat)rightOffset;
/// Sets whether the specified location is marked hidden or displayed
/// @param isHide hide
/// @param index location
- (void)configBadgeHide:(BOOL)isHide atIndex:(NSInteger)index;
```
# License
---------------------------------------------------------
The MIT License (MIT)
Copyright © 2020 Zhang Xi
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
THE SOFTWARE.
---
layout: post
title: "MRW Using React"
date: 2019-11-04 02:08:32 +0000
permalink: mrw_using_react
---
Working with React has been like...
<div style="text-align: center">
<img src="/img/Image.jpeg" alt="Amass the Components" />
</div>
<br />
Seriously though, understanding React, and by extension Redux, has been challenging, but feels like the culmination of my journey to learn full stack development, which is fitting, as it is the last section in my curriculum with Flatiron School. The final project was the first time when the back end and front end felt distinct, and they were by virtue of setting up two separate applications. Even with the previous project in which I was tasked with refactoring my Rails application to have a JavaScript component that allowed a user to interact with my app without page reloads, all of the code was contained within the same project, blurring the lines between what the Rails component was doing and what the JavaScript components were doing. In my current project, the back end is comprised of a Rails API that handles all of the data manipulation based on input entered on the front end, which is comprised of React components and a Redux store. Because of the setup of the applications, the principle of separation of concerns that has been emphasized throughout the curriculum feels more relevant than ever.
For my project, I chose to create a bookmark application in the same vein as Pocket wherein a user can create bookmarks and tags, or in my case folders, to organize them. The Rails back end API processes requests from the front end and, as with any other Rails app, the controller dictates the response and persists any changes to the database.
```
# /bookmark-backend/app/controllers/api/v1/bookmarks_controller.rb
class Api::V1::BookmarksController < ApplicationController
before_action :set_folder
def index
@bookmarks = @folder.bookmarks
render json: @bookmarks
end
def all_bookmarks
@bookmarks = Bookmark.all.order(:name)
    render json: @bookmarks
end
def show
    @bookmark = @folder.bookmarks.find(params[:id])
render json: @bookmark
end
def create
@bookmark = @folder.bookmarks.new(bookmark_params)
if @bookmark.save
render json: @folder
else
render json: {error: "Error creating bookmark"}
end
end
def destroy
bookmark = Bookmark.find(params["id"])
folder = Folder.find(bookmark.folder_id)
bookmark.destroy
render json: folder
end
private
def set_folder
@folder = Folder.find(params[:folder_id])
end
def bookmark_params
params.require(:bookmark).permit(:folder_id, :name, :url, :notes)
end
end
```
The app functions such that a user must create at least one folder before creating and adding bookmarks. Once a folder exists, the controller sets the selected folder to an instance variable so that each action can access that folder's bookmarks as necessary. Because the Rails application is not responsible for rendering views, as it would be if there were no separate front end, each action renders a JSON response that the front end uses to populate the view with the requested data.
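For example, the index action above would return something like the following JSON (illustrative values, not actual app data):

```json
[
  {
    "id": 1,
    "folder_id": 2,
    "name": "Flatiron School",
    "url": "https://flatironschool.com",
    "notes": "Bootcamp homepage"
  }
]
```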
On the front end, React renders components for the user to engage in common actions such as viewing all of the bookmarks within a folder or, for example, creating a new bookmark.
```
// /bookmark-frontend/src/components/bookmarkInput.js
import React from 'react';
import {connect} from 'react-redux'
import {addBookmark} from '../actions/addBookmark'
import Form from "react-bootstrap/Form";
import Button from "react-bootstrap/Button";
import ListGroup from "react-bootstrap/ListGroup";

class BookmarkInput extends React.Component {
  state = {name: '', url: '', notes: ''}

  handleOnChange = (event) => {
    this.setState({[event.target.name]: event.target.value})
  }

  handleOnSubmit = (event) => {
    event.preventDefault()
    this.props.addBookmark(this.state, this.props.folder.id)
    this.setState({name: '', url: '', notes: ''})
  }

  render() {
    return (
      <div className="container">
        <Form onSubmit={this.handleOnSubmit}>
          <Form.Group>
            <ListGroup>
              <ListGroup.Item><strong>Add a New Bookmark</strong></ListGroup.Item><br />
            </ListGroup>
            <Form.Control type="text" name="name" onChange={this.handleOnChange} placeholder="Name" value={this.state.name}/><br/>
            <Form.Control type="text" name="url" onChange={this.handleOnChange} placeholder="URL" value={this.state.url}/><br/>
            <Form.Control as="textarea" rows="3" name="notes" onChange={this.handleOnChange} placeholder="Notes" value={this.state.notes}/><br/>
            <Button variant="primary" type="submit">
              Add Bookmark
            </Button>
          </Form.Group>
        </Form>
      </div>
    )
  }
}

export default connect(null, {addBookmark})(BookmarkInput)
```
Essentially, the bookmark input initializes with a local state containing keys with empty string values, which are updated every time the user enters new input or changes existing input within a field. Once the user submits the form, the handleOnSubmit function prevents the default submission behavior, which would trigger a page reload, and instead uses the imported addBookmark action along with the data saved in local state to send a POST request to the Rails API.
```
// /bookmark-frontend/src/actions/addBookmark.js
export function addBookmark(bookmark, folderId) {
  return (dispatch) => {
    fetch(`http://0.0.0.0:3000/api/v1/folders/${folderId}/bookmarks`, {
      headers: {
        'Content-Type': 'application/json',
        'Accept': 'application/json'
      },
      method: 'POST',
      body: JSON.stringify(bookmark)
    })
      .then(res => res.json())
      .then(bookmark => dispatch({type: 'ADD_BOOKMARK', payload: bookmark}))
  }
}
```
The Redux Thunk middleware allows the action creator to return a function that receives dispatch; that function waits for a response from the API before actually dispatching the payload to the reducer. This is what lets the app make the request asynchronously: the response is not immediate, and it must be waited on before moving forward.
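To make the middleware's role concrete, here is a stripped-down sketch of what a thunk-style middleware does (an illustrative simplification, not the actual redux-thunk source):

```javascript
// If the dispatched "action" is a function, call it with dispatch so it can
// dispatch real actions later (e.g. after a fetch resolves); otherwise pass
// the plain action along to the next middleware/reducer.
const thunk = ({ dispatch, getState }) => next => action => {
  if (typeof action === 'function') {
    return action(dispatch, getState);
  }
  return next(action);
};

// Tiny fake store so the flow can be demonstrated without a real network call.
const log = [];
const store = { dispatch: a => log.push(a.type), getState: () => ({}) };
const next = a => log.push(a.type);
const dispatch = thunk(store)(next);

dispatch(d => d({ type: 'ADD_BOOKMARK', payload: {} })); // a thunk, like addBookmark
dispatch({ type: 'PLAIN' });                             // a plain action
console.log(log.join(',')); // ADD_BOOKMARK,PLAIN
```

In the real store, the function returned by addBookmark is intercepted exactly this way instead of being sent to the reducer directly.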
Once sent, the action type and payload from the dispatch trigger the reducer to update the state and re-render the components on the page, which now show the newly added bookmark!
```
// /bookmark-frontend/src/reducers/folderReducer.js
export default function folderReducer(state = {folders: []}, action) {
  switch (action.type) {
    // ... other cases ...
    case 'ADD_BOOKMARK':
      let folders = state.folders.map(folder => {
        if (folder.id === action.payload.id) {
          return action.payload
        } else {
          return folder
        }
      })
      return {...state, folders: folders}
    // ... other cases ...
    default:
      return state
  }
}
```
Source: `business-central/warehouse-how-to-calculate-bin-replenishment.md` from the `edupont04/dynamics365smb-docs-pr.de-de` repository (CC-BY-4.0, MIT).

---
title: How to calculate bin replenishment | Microsoft Docs
description: If the location has been set up to use directed put-away and pick, the priorities in the put-away template for the bin are taken into account when received items are put away.
author: SorenGP
ms.service: dynamics365-business-central
ms.topic: article
ms.devlang: na
ms.tgt_pltfrm: na
ms.workload: na
ms.search.keywords: ''
ms.date: 04/01/2020
ms.author: edupont
ms.openlocfilehash: 4d1c48ebc03eab75f6959591c039eaeda07d2ceb
ms.sourcegitcommit: a80afd4e5075018716efad76d82a54e158f1392d
ms.translationtype: HT
ms.contentlocale: de-DE
ms.lasthandoff: 09/09/2020
ms.locfileid: "3788346"
---
# <a name="calculate-bin-replenishment"></a>Calculate bin replenishment
If the location has been set up to use directed put-away and pick, the priorities in the put-away template for the bin are taken into account when received items are put away. These priorities include the minimum and maximum quantities of bin content that have been linked to a particular bin, as well as the bin rankings. As a result, when items arrive at a steady pace, the most frequently used pick bins are replenished at the same time as items are taken from them.
However, items do not always arrive at the warehouse at a steady pace. Sometimes items are purchased in large quantities so that the company can get a discount, or your production facility produces a larger quantity of an item to achieve low unit costs. Then the warehouse receives no items for a while, and warehouse workers must regularly move items from pallet bins into pick bins.
It is also possible that the warehouse is expecting new shipments and therefore wants to free up the pallet bins before the new goods arrive.
If you have defined your pallet bins with a bin type that allows only the **Put Away** action, that is, the bin type does not have a check mark for the **Pick** action, you must always make sure that your pick bins are filled, because no picks are suggested from put-away bins.
## <a name="to-replenish-pick-bins"></a>To replenish pick bins
1. Choose the ![Lightbulb that opens the Tell Me feature.](media/ui-search/search_small.png "Search for Page or Report icon") icon, enter **Movement Worksheet**, and then choose the related link.
2. Choose the **Calculate Bin Replenishment** action to open the report request page.
3. Fill in the request page of the batch job to limit the scope of the replenishment suggestions that are calculated. For example, you might be responsible only for certain items, zones, or bins.
4. Choose the **OK** button. Lines are created for the replenishment movements that must be performed, according to the rules that have been defined for bins and bin contents (items in bins).
5. If you want to perform all of the suggested replenishments, choose the **Create Movement** action. Warehouse workers can now find the instructions under the **Movements** menu item, carry them out, and register them.
6. If you want to perform only some of the suggestions, delete the less important lines and then create a movement.
The next time you calculate bin replenishment, the lines that you deleted will be suggested again if they are still valid.
> [!NOTE]
> Suppose the following conditions are met for an item:
>
> - The item has an expiration date, and
> - The **Pick According to FEFO** field is selected on the location card, and
> - You use the **Calculate Bin Replenishment** function.
>
> In this case, the **From Zone** and **From Bin** fields remain empty, because the algorithm that calculates the origin of the movement is only triggered when you use the **Create Movement** function.
## <a name="see-also"></a>See also
[Warehouse Management](warehouse-manage-warehouse.md)
[Picking by FEFO](warehouse-picking-by-fefo.md)
[Inventory](inventory-manage-inventory.md)
[Setting Up Warehouse Management](warehouse-setup-warehouse.md)
[Assembly Management](assembly-assemble-items.md)
[Design Details: Warehouse Management](design-details-warehouse-management.md)
[Working with [!INCLUDE[d365fin](includes/d365fin_md.md)]](ui-work-product.md)
Source: `Assignments/Proj01-Preprocessor.md` from the `ysBach/SNU_AOclass` repository (BSD-3-Clause).

# Project 01
In this project, you will make a set of functions and scripts.
Please submit a simple report (including the code, showing *some* results, and giving simple explanations) from which the TA can evaluate whether you finished the task appropriately. You may need to re-use parts of it in your final report.
You may refer to the presentation material from this repo (the first lecture material of the TA seminar).
You don't need to answer all the questions separately. But all these processes are essential to get the final result of this project. Thus, please regard these items (Problems) as checklist: You don't have to make an answer sheet like "answer to number 1: blahblah". Just make codes which work well, and present the results.
## Problems [80 pt]
Download your raw FITS files from https://sao.snu.ac.kr (as we learned in the class). If you don't have any data, you can just test with other teams' data sets, and utilize the code for your data when you acquire it.
Make a code script, and/or module and/or package, which may include functions, classes, or whatever you may need (as many as you wish), to do the following:
1. **Automatically classify** any input FITS file into one of ``['bias', 'dark', 'flat', 'comp', 'objt']``.
* The ``flat`` can be either a flat lamp image, a dome flat, or a sky flat.
* The ``comp`` is the comparison (arc lamp) in spectroscopy.
* The ``objt`` is the light frame image of any celestial object, other than calibration frames.
2. **Combine** to make master bias, dark, and flat. [1 & 2 = 40 pt]
* bias must be median combined.
* dark must be median combined for *each exposure time*. Then bias subtracted.
* For each flat, it must be first bias and dark subtracted. Use the dark of the same exposure time as your flat. Then normalize each flat by its average. Then the normalized flats must be median combined for *each filter* (in polarimetry, for each wave-plate angle; in spectroscopy, for each slit/grating/... setting).
* Depending on your choice, you can use sigma clipping for combining processes.
* You may develop a combiner by yourself, but you can also use [``ccdproc.combine``](https://ccdproc.readthedocs.io/en/latest/image_combination.html)
3. **Save** the obtained master bias, dark, and flat. [10 pt]
* Each dark and flat files must have indicators for its exposure time, filter, etc, to distinguish it from the other images of the same kind.
4. Do **preprocessing** for `comp` and `objt` (bias subtraction, dark subtraction, flat correction). [20 pt]
* Note that dark must be used with the same exposure time, and flat must be with the same filter or other settings.
* You may do cosmic-ray rejection, by benchmarking the L.A.Cosmic.
5. **Save** the preprocessed images as separate files. [10 pt]
* You may put them in a separate directory, which will be easier for you to check.
* You may put ``_bxx``, ``_bdx``, ``_bdf`` at the start/end of the original file name to indicate the preprocessing.
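As a starting point, the combining steps in items 2 and 3 above can be sketched with plain NumPy (a simplified illustration: it assumes the frames are already loaded as 2-D arrays, and it skips FITS I/O and sigma clipping):

```python
import numpy as np

def median_combine(images):
    """Median-combine a list of equally shaped 2-D arrays."""
    return np.median(np.stack(images), axis=0)

def make_master_flat(flats, mbias, mdark):
    """Bias/dark-subtract each flat, normalize by its mean, then median-combine."""
    normed = []
    for flat in flats:
        corrected = flat - mbias - mdark  # the dark must match the flat's exposure time
        normed.append(corrected / corrected.mean())
    return median_combine(normed)

# Toy demonstration with synthetic 2x2 frames
biases = [np.full((2, 2), 100.0), np.full((2, 2), 102.0), np.full((2, 2), 98.0)]
mbias = median_combine(biases)
print(mbias[0, 0])  # → 100.0
```

In your real code, each resulting master frame should then be written to its own FITS file with the exposure time or filter encoded in the name.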
### Conditions
1. You must use the header information, not the file names or hand-written log.
2. Please try **not** to import packages **other than**
* python default packages (e.g., `pathlib`, `itertools`, etc)
* Basic science packages: ``numpy``, ``scipy``, ``astropy``, ``scikit learn``
* Basic astropy-affiliated packages: ``ginga``, ``imexam``, ``ccdproc``, ``photutils``, ``specutils``
* But you may benchmark the source codes from other packages. For instance, "[**Python users in Astronomy**](https://www.facebook.com/groups/astropython/)" group of Facebook, and the Gist/snippets they communicate with, or random GitHub repo, or our lecture notes or ``ysfitsutilpy``, ``ysphotutilpy``, ``TRIPOLpy``, and ``SNUO1Mpy``. Many times, the ready-made packages may not fulfill your needs. You must write codes by yourself by benchmarking the packages' source codes.
3. Later you may use the codes for your semester project work. Please try to make it **as reusable as possible**. (Google for "DRY principle")
### NOTE
It's always better to archive what you've done into the header information. After bias, dark, and flat corrections, for instance, add history and comments to FITS header as we did in the homework.
Also, you may refer to the snippet we used in class:
```python
from pathlib import Path
from astropy.io import fits
import pandas as pd
#%%
allfits = list(Path("2019-10-15").glob("*.fit"))
hdul = fits.open(allfits[0])
data = hdul[0].data
hdr = hdul[0].header
print(data[:1,:10])
#%%
fpaths = dict(bias=[], dark={}, flat=[], comp=[], objt=[])
objt_name = []
allfits.sort()
for fpath in allfits:
hdr = fits.getheader(fpath)
imagetyp = hdr["IMAGETYP"].lower()
exptime = float(hdr["EXPTIME"])
obs_object = hdr["OBJECT"]
if imagetyp.startswith("bias frame") and exptime == 0.:
fpaths['bias'].append(fpath)
elif imagetyp.startswith('dark frame'):
try:
fpaths['dark'][exptime]
except KeyError:
fpaths['dark'][exptime] = []
fpaths['dark'][exptime].append(fpath)
elif imagetyp.startswith('flat field'):
fpaths['flat'].append(fpath)
elif imagetyp.startswith('light frame') and obs_object.lower() == 'comp':
fpaths['comp'].append(fpath)
else:
fpaths['objt'].append(fpath)
objt_name.append(hdr["OBJECT"])
```
| 50.504673 | 478 | 0.708179 | eng_Latn | 0.994596 |
Source: `README.md` from the `TokenChatApp/BitcoinKit` repository (Apache-2.0).

BitcoinKit
===========
[](https://travis-ci.org/kishikawakatsumi/BitcoinKit)
[](https://codecov.io/gh/kishikawakatsumi/BitcoinKit)
[](https://github.com/Carthage/Carthage)
[](http://cocoadocs.org/docsets/BitcoinKit)
[](http://cocoadocs.org/docsets/BitcoinKit)
BitcoinKit implements the Bitcoin protocol in Swift. It is an implementation of the Bitcoin SPV protocol written (almost) entirely in Swift.
<img src="https://user-images.githubusercontent.com/40610/35793683-0d497b4e-0a96-11e8-8e49-2b0ce09211a4.png" width="320px" /> <img src="https://user-images.githubusercontent.com/40610/35793685-0da36a32-0a96-11e8-855b-ecbc3ce1474c.png" width="320px" />
Features
--------
- Send/receive transactions.
- See current balance in a wallet.
- Encoding/decoding addresses: P2PKH, WIF format.
- Transaction building blocks: inputs, outputs, scripts.
- EC keys and signatures.
- BIP32, BIP44 hierarchical deterministic wallets.
- BIP39 implementation.
Usage
-----
#### Creating new wallet
```swift
let privateKey = PrivateKey(network: .testnet) // You can choose .mainnet or .testnet
let wallet = Wallet(privateKey: privateKey)
```
#### Import wallet from WIF
```swift
let wallet = try Wallet(wif: "92pMamV6jNyEq9pDpY4f6nBy9KpV2cfJT4L5zDUYiGqyQHJfF1K")
```
#### Hierarchical Deterministic Wallet
```swift
// Generate mnemonic
let mnemonic = try Mnemonic.generate()
// Generate seed from the mnemonic
let seed = Mnemonic.seed(mnemonic: mnemonic)
let wallet = HDWallet(seed: seed, network: .testnet)
```
#### Key derivation
```swift
let mnemonic = try Mnemonic.generate()
let seed = Mnemonic.seed(mnemonic: mnemonic)
let privateKey = HDPrivateKey(seed: seed, network: .testnet)
// m/0'
let m0prv = try! privateKey.derived(at: 0, hardened: true)
// m/0'/1
let m01prv = try! m0prv.derived(at: 1)
// m/0'/1/2'
let m012prv = try! m01prv.derived(at: 2, hardened: true)
```
#### HD Wallet Key derivation
```swift
let keychain = HDKeychain(seed: seed, network: .mainnet)
let privateKey = try! keychain.derivedKey(path: "m/44'/1'/0'/0/0")
...
```
#### Extended Keys
```swift
let extendedKey = privateKey.extended()
```
#### Sync blockchain
```swift
let blockStore = try! SQLiteBlockStore.default()
let blockChain = BlockChain(wallet: wallet, blockStore: blockStore)
let peerGroup = PeerGroup(blockChain: blockChain)
peerGroup.delegate = self
peerGroup.start()
```
Installation
------------
### Carthage
BitcoinKit is available through [Carthage](https://github.com/Carthage/Carthage). To install
it, simply add the following line to your Cartfile:
`github "kishikawakatsumi/BitcoinKit"`
### CocoaPods
BitcoinKit is available through [CocoaPods](http://cocoapods.org). To install
it, simply add the following lines to your Podfile:
```ruby
use_frameworks!
pod 'BitcoinKit'
```
Contribute
----------
Feel free to open issues, drop us pull requests or contact us to discuss how to do things.
Email: [kishikawakatsumi@mac.com](mailto:kishikawakatsumi@mac.com)
Twitter: [@k_katsumi](http://twitter.com/k_katsumi)
License
-------
BitcoinKit is available under the Apache 2.0 license. See the LICENSE file for more info.
| 26.976923 | 256 | 0.738238 | eng_Latn | 0.410656 |
Source: `api/Word.Revisions.AcceptAll.md` from the `kibitzerCZ/VBA-Docs` repository (CC-BY-4.0, MIT).

---
title: Revisions.AcceptAll method (Word)
keywords: vbawd10.chm159383653
f1_keywords:
- vbawd10.chm159383653
ms.prod: word
api_name:
- Word.Revisions.AcceptAll
ms.assetid: bf1fa0d5-22ab-d426-9411-ae3147277448
ms.date: 06/08/2017
localization_priority: Normal
---
# Revisions.AcceptAll method (Word)
Accepts all the tracked changes in a document or range, removes all revision marks, and incorporates the changes into the document.
## Syntax
_expression_. `AcceptAll`
_expression_ Required. A variable that represents a '[Revisions](Word.revisions.md)' collection.
## Remarks
Use the **AcceptAllRevisions** method to accept all revisions in a document.
## Example
The following code example accepts all the tracked changes in the active document.
```vb
If ActiveDocument.Revisions.Count >= 1 Then _
ActiveDocument.Revisions.AcceptAll
```
The following code example accepts all the tracked changes in the selection.
```vb
Selection.Range.Revisions.AcceptAll
```
## See also
[Revisions Collection Object](Word.revisions.md)
[!include[Support and feedback](~/includes/feedback-boilerplate.md)]
Source: `docs/pages/solvers_nonlinear/tutorial_4.md` from the `Pressio/pressio-tutorials` repository (BSD-3-Clause).
# Gauss-Newton via Normal Equations with Custom Types
@m_class{m-block m-info}
@par
This tutorial demonstrates how to use
the normal-equations-based Gauss-Newton solver from `pressio/solvers_nonlinear`
using custom data types.
```cpp
@codesnippet
../../../tutorials/nonlinsolvers_gn_2.cc
49:256
```
| 18.9375 | 79 | 0.768977 | eng_Latn | 0.768336 |
Source: `README.md` from the `fanyizhe/aws-rds-auto-snapshot` repository (MIT).
# 1. EBS, RDS定时备份
## 1.1. 部署指南页面
- [1、RDS定时备份部署流程](https://github.com/fanyizhe/aws-rds-auto-snapshot/blob/dev/rds-backup.md)
- [2、EBS定时备份部署流程](https://github.com/fanyizhe/aws-rds-auto-snapshot/blob/dev/ebs-backup.md)
- [3. 定时创建AMI部署流程](https://github.com/fanyizhe/aws-rds-auto-snapshot/blob/dev/ami-backup.md)
## 1.2. 目录
- [1.3. 简介](#13-简介)
- [1.4. 解决方案架构图](#14-解决方案架构图)
- [1.5. 部署流程](#15-部署流程)
## 1.3. 简介
目前RDS的自动备份方法是在每日的固定时间进行备份,换言之备份频率为固定每日一次,若想要实现小时级或者分钟级的备份频率则无法通过这种方法来解决。因此,本文提供了一种解决方案:通过AWS CloudWatch Events定时任务触发AWS Lambda函数来执行备份RDS的操作。
同样EBS也缺乏相应的备份解决方案,而它同样也能通过上述的解决方案来解决。
本文提供了手动部署的流程以及相关lambda的代码。同样,本文还提供了一个CloudFormation自动化部署脚本。该脚本可以快速自动完成部署,不需要您修改任何代码相关的部分,但相比起手动创建来说会多创建2个标准参数 (AWS System Manager服务中的Parameter store服务,具体说明参见下文)。
## 1.4. 解决方案架构图
因为EBS部分的架构图和创建的资源与RDS基本相同,所以下文介绍RDS的解决方案架构图。
* ### 手动部署

**创建的资源:**
- [CloudWatch Events](https://docs.aws.amazon.com/zh_cn/AmazonCloudWatch/latest/events/WhatIsCloudWatchEvents.html) :使用 CloudWatch Events 来计划使用 cron 或 rate 表达式在某些时间自行触发的自动化操作。
- [Lambda](https://docs.aws.amazon.com/zh_cn/lambda/latest/dg/welcome.html) : 计算服务,可使您无需预配置或管理服务器即可运行代码。
- [IAM Role](https://docs.aws.amazon.com/zh_cn/IAM/latest/UserGuide/id_roles_terms-and-concepts.html) : IAM 角色类似于 IAM 用户,因为它是一个 AWS 身份,该身份具有确定其在 AWS 中可执行和不可执行的操作的权限策略。
* ### 自动部署

相比手动部署,自动部署多运用了Parameter Store服务创建了两个参数,用以在CloudFormation脚本的创建过程中,向Lambda中指定所需的参数。
- [Parameter store](https://docs.aws.amazon.com/zh_cn/systems-manager/latest/userguide/systems-manager-parameter-store.html) : 将 Parameter Store 参数与其他 Systems Manager 功能和 AWS 服务配合使用,以从中央存储检索密钥和配置数据。
## 1.5. 部署流程
- [1、RDS定时备份部署流程](https://github.com/fanyizhe/aws-rds-auto-snapshot/blob/dev/rds-backup.md)
- [2、EBS定时备份部署流程](https://github.com/fanyizhe/aws-rds-auto-snapshot/blob/dev/ebs-backup.md)
- [3. 定时创建AMI部署流程](https://github.com/fanyizhe/aws-rds-auto-snapshot/blob/dev/ami-backup.md)
| 34.196721 | 200 | 0.774688 | yue_Hant | 0.888614 |
Source: `README.md` from the `nitikayad96/chandra_suli` repository (BSD-3-Clause).

# chandra_suli
SULI project on Chandra data
Looking for transients in Chandra data.
Source: `CONTRIBUTING.md` from the `braincow/flubber` repository (MIT).

# Contributing
If you are reading this, we thank you in advance for being willing to contribute to the Flubber project! You are awesome.
If you have a bug report or feature request please:
1. Open an issue for discussion
2. Submit pull request for review
Thank you!
| 24 | 115 | 0.776515 | eng_Latn | 0.999711 |
Source: `_posts/2021-01-03-goscheduler.md` from the `RickyBoyd/rickyboyd.github.io` repository (MIT).

---
layout: post
title: Introduction to the Go Runtime Scheduler
---
The Go runtime scheduler is at the heart of what gives Go great performance when writing programs that are highly I/O bound. Tens or even hundreds of thousands of goroutines can run concurrently. You don't need to understand how the runtime scheduler works to take advantage of its power, but understanding it can certainly help you get more out of it. For instance, it's a common misconception that the number of goroutines should be kept low for higher performance, similar to how you might cap the number of threads in a multithreaded program, but this is not the case. Let's dive into it.
# Goroutines
A goroutine is a lightweight process managed by the Go runtime, which is compiled into every Go program. This means goroutines are managed completely in userspace rather than by the operating system. There are many other names for this idea that you may have heard of, such as green threads or M:N threading. Many goroutines are typically multiplexed onto a much smaller set of operating system threads. In Go, you can think of the "N" in M:N threading as the G.
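As a minimal illustration of how cheap goroutines are, the following sketch (not part of the original post) spawns 100,000 of them and waits for them all to finish; the same experiment with OS threads would exhaust most systems:

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

func main() {
	const n = 100000
	var wg sync.WaitGroup
	var counter int64
	for i := 0; i < n; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			// Atomic increment so the final count is exact.
			atomic.AddInt64(&counter, 1)
		}()
	}
	wg.Wait()
	if counter != n {
		panic("some goroutines did not run")
	}
	fmt.Println(counter)
}
```

Each goroutine starts with a tiny, growable stack, which is a large part of why this scales.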
# Goroutine Scheduler
The scheduling system in Go has three main components in its model: the processor (P), the goroutine (G) and the machine (M). Goroutines could be more accurately described as M:P:G threading rather than M:N threading, because there is an intermediate abstraction for mapping userspace runtime threads (G) onto real OS threads (M). The machine (M) is an operating system thread and may have one goroutine executing on it at a time. The Go runtime manages a pool of Ms. A processor (P) is associated with a specific M. The number of Ps is set by the GOMAXPROCS environment variable, which normally defaults to the number of CPUs available; this minimises the amount of context switching needed between real threads by the OS, thus enabling higher utilisation. The P holds the state for deciding which G will run next on the M, and the main mechanism for this is the local run queue (LRQ) that each P holds. When deciding which G to run next on the M, the P checks which G is next in the LRQ. There is also a global run queue (GRQ). Once every 61 scheduler ticks, the P will run the next G from the GRQ instead. This is done to ensure fairness.
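A program can inspect (or change) these settings at runtime; this small sketch, not part of the original post, queries them:

```go
package main

import (
	"fmt"
	"runtime"
)

func main() {
	// GOMAXPROCS(0) queries the current number of Ps without changing it;
	// by default it equals the number of CPUs available.
	fmt.Println(runtime.GOMAXPROCS(0) >= 1)
	// NumGoroutine reports how many Gs currently exist (at least main's G).
	fmt.Println(runtime.NumGoroutine() >= 1)
}
```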
The primary way Gs enter the LRQ of a P is that the G running on the P's M spawns a new G. The currently running G will be preempted by the runtime after 10ms and put to the back of the queue. Whenever the LRQ becomes empty, the P will attempt to find new Gs to run. At the time of writing, the scheduler tries to find a runnable G by first checking the GRQ, then the network poller, and then resorting to work stealing if neither of the previous methods yields any runnable Gs.
The work-stealing mechanism has some interesting aspects. The first is that work stealing may run up to 4 iterations if it is not successful. The second is that on the first iteration, the P will first try to run any ready timers on every other P and then check its own LRQ again, in case the timers caused any ready Gs to be enqueued onto it. After running timers and re-checking its own LRQ, the P will attempt to steal half the Gs from another P. If no Gs are available after 4 iterations, the scheduler returns to the start of the scheduling process: checking its own LRQ, then the netpoller and the GRQ.
When a G is executing on an M, there are several mechanisms that can stop it from running and allow another G to run. Gs may be preempted after 10ms, as already mentioned. The G may also finish execution and have nothing left to run. There is also a group of operations that cause the G to be dropped by the M, including syscalls and other blocking operations such as waiting on timers or reading from and writing to channels. Syscalls are treated differently from blocking operations controlled by the runtime: for a runtime-controlled operation such as a timer, the G is parked and re-queued when the operation completes. Blocking syscalls are handled in a special way, and non-blocking syscalls (network I/O) are handled by a component called the netpoller. The netpoller is discussed in a section of its own.
When a blocking syscall occurs, the processor becomes detached from the machine and is attached to a new machine. The blocking goroutine stays on the original machine. Once the syscall completes, the G is detached from the M and placed back onto the LRQ of the P it originated from, and the M is added to the list of free Ms. If the G cannot be added back to the old P, the runtime tries to add it to the LRQ of any idle P, and if that fails it is added to the GRQ.
# The Network Poller
The network poller (netpoller) handles any non-blocking I/O. Goroutines are handed off to the netpoller, which manages a set of file descriptors to be "polled" for readiness. The mechanism behind the polling is mostly operating-system specific, with some OS-independent code on top.
The netpoller runs as a background thread that periodically asks the underlying OS which file descriptors have finished their I/O. Since there is no running process related to the background thread, any goroutines that become ready during the periodic checks are added to the global run queue. The netpoller is also consulted at special points during scheduling as an optimisation. One of these special cases is when the LRQ of a P has run out of goroutines: the netpoller is checked for any ready goroutines, and these are put onto that P's LRQ before the scheduler attempts to steal work from another LRQ.
The netpoller uses different mechanisms depending on which OS it's running on. On Linux, the Go runtime uses the "epoll" API, which the Linux man pages describe as the "I/O event notification facility". On macOS, the Go runtime uses a similar facility called "kqueue". How these operating system APIs work, and how the Go runtime integrates with them, is out of scope for this article.
# Summary
The Go runtime scheduler is designed so that scheduling decisions can be distributed among a set of what we call "processors". This is done to ensure the scheduler can scale to a large number of CPUs. We have also seen that the scheduler employs a number of mechanisms to ensure fairness, including work-stealing, preemption and periodically running Gs from the GRQ instead of always running from the LRQ. The purpose of such a userspace threading mechanism is to maximise CPU utilisation for massively concurrent workloads. The Go runtime's handling of non-blocking I/O allows a large number of Gs to wait on network calls while still leaving the CPUs free for Gs that can execute work. The combination of all these features is a large factor in the rise of Go's use in the Cloud Native space, where programs often need to handle a large number of incoming and outgoing network connections.
---
title: "Displaying Cart In Console"
slug: displaying-cart-in-console
---
In this chapter there will be 3 videos that will guide you through:
- Displaying cart items in the console using a for loop
- Displaying the total amount of price for our cart
- Displaying the number of items we have in our cart
# 3rd Video
This video will walk you through how to use a **for-loop** to loop through the items in an array and display them in the console.

# 4th Video
In this video you will calculate the total quantity of items in the cart and the total cost of all items by using a loop to iterate over all items in the cart.
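As a preview of what the videos build, here is a rough sketch in JavaScript. The `cart` array and its `name`, `price` and `quantity` fields are assumptions for illustration; the variable names in your project may differ.

```javascript
// Hypothetical cart data - your item names and prices will differ
const cart = [
  { name: 'T-shirt', price: 20, quantity: 2 },
  { name: 'Mug', price: 8, quantity: 1 },
];

// Loop over the cart and display each item in the console
function displayCart(cart) {
  for (let i = 0; i < cart.length; i++) {
    const item = cart[i];
    console.log(`${item.name} x${item.quantity} - $${item.price * item.quantity}`);
  }
}

// Return the totals instead of only logging them, so other code can reuse them
function cartTotals(cart) {
  let totalQuantity = 0;
  let totalPrice = 0;
  for (let i = 0; i < cart.length; i++) {
    totalQuantity += cart[i].quantity;
    totalPrice += cart[i].price * cart[i].quantity;
  }
  return { totalQuantity, totalPrice };
}

displayCart(cart);
console.log(cartTotals(cart)); // → { totalQuantity: 3, totalPrice: 48 }
```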

# 5th Video
Here you'll look over the code and make some syntactic improvements. You'll also learn about returning values from functions to make them more useful.

# Update progress on Github
> [action]
>
> Now is a good time to update your progress on Github.
>
```bash
git add .
git commit -m 'displayed cart items on console'
git push
```
# Deploying the Jasper Inference model using Triton Inference Server
This subfolder of the Jasper for PyTorch repository contains scripts for deployment of high-performance inference on NVIDIA Triton Inference Server as well as detailed performance analysis. It offers different options for the inference model pipeline.
## Table Of Contents
- [Solution overview](#solution-overview)
- [Inference Pipeline in Triton Inference Server](#inference-pipeline-in-triton-inference-server)
- [Setup](#setup)
- [Quick Start Guide](#quick-start-guide)
- [Advanced](#advanced)
* [Scripts and sample code](#scripts-and-sample-code)
- [Performance](#performance)
* [Inference Benchmarking in Triton Inference Server](#inference-benchmarking-in-triton-inference-server)
* [Results](#results)
* [Performance Analysis for Triton Inference Server: NVIDIA T4](#performance-analysis-for-triton-inference-server-nvidia-t4)
         * [Batching techniques: Static Batching](#batching-techniques-static-batching)
         * [Batching techniques: Dynamic Batching](#batching-techniques-dynamic-batching)
* [TensorRT, ONNXRT-CUDA, and PyTorch JIT comparisons](#tensorrt-onnxrt-cuda-and-pytorch-jit-comparisons)
- [Release Notes](#release-notes)
   * [Changelog](#changelog)
* [Known issues](#known-issues)
## Solution Overview
The [NVIDIA Triton Inference Server](https://github.com/NVIDIA/triton-inference-server) provides a datacenter and cloud inferencing solution optimized for NVIDIA GPUs. The server provides an inference service via an HTTP or gRPC endpoint, allowing remote clients to request inferencing for any number of GPU or CPU models being managed by the server.
This folder contains detailed performance analysis as well as scripts to run Jasper inference using Triton Inference Server.
A typical Triton Inference Server pipeline can be broken down into the following steps:
1. The client serializes the inference request into a message and sends it to the server (Client Send).
2. The message travels over the network from the client to the server (Network).
3. The message arrives at the server, and is deserialized (Server Receive).
4. The request is placed on the queue (Server Queue).
5. The request is removed from the queue and computed (Server Compute).
6. The completed request is serialized in a message and sent back to the client (Server Send).
7. The completed message then travels over the network from the server to the client (Network).
8. The completed message is deserialized by the client and processed as a completed inference request (Client Receive).
Generally, for local clients, steps 1-4 and 6-8 will only occupy a small fraction of time, compared to step 5. As backend deep learning systems like Jasper are rarely exposed directly to end users, but instead interface only with local front-end servers, we can consider all clients to be local for the sake of Jasper.
In this section, we will go over how to launch both the Triton Inference Server and the client and get the best performance solution that fits your specific application needs.
More information on how to perform inference using NVIDIA Triton Inference Server can be found in [triton/README.md](https://github.com/triton-inference-server/server/blob/master/README.md).
## Inference Pipeline in Triton Inference Server
The Jasper model pipeline consists of 3 components, where each part can be customized to be a different backend:
**Data preprocessor**
The data preprocessor transforms a raw input audio file into a spectrogram. By default, the pipeline uses mel filter banks as spectrogram features. This part does not have any learnable weights.
**Acoustic model**
The acoustic model takes in the spectrogram and outputs a probability distribution over a list of characters. This part is the most compute-intensive, accounting for more than 90% of the entire end-to-end pipeline. The acoustic model is the only component with learnable parameters and is what differentiates Jasper from other end-to-end neural speech recognition models. In the original paper, the acoustic model contains a masking operation for training (more details in the [Jasper PyTorch README](../README.md)). We do not use masking for inference.
**Greedy decoder**
The decoder takes the probabilities over the list of characters and outputs the final transcription. Greedy decoding is a fast and simple way of doing this by always choosing the character with the maximum probability.
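As an illustration of the idea (a generic sketch, not the repository's implementation), greedy decoding takes the per-frame argmax over the character probabilities, collapses repeated indices, and drops the blank symbol:

```python
# Generic greedy (CTC-style) decoding sketch -- not the repository's exact code.
# `probs` is a [time, vocab] matrix; the last index is assumed to be the blank.

def greedy_decode(probs, alphabet):
    blank = len(alphabet)  # assumption: blank is the final class
    best = [max(range(len(frame)), key=frame.__getitem__) for frame in probs]
    out = []
    prev = blank
    for idx in best:
        # collapse repeats, then drop blanks
        if idx != prev and idx != blank:
            out.append(alphabet[idx])
        prev = idx
    return "".join(out)

alphabet = "abc"
probs = [
    [0.9, 0.0, 0.0, 0.1],  # 'a'
    [0.8, 0.1, 0.0, 0.1],  # 'a' (repeat, collapsed)
    [0.1, 0.0, 0.0, 0.9],  # blank
    [0.0, 0.9, 0.0, 0.1],  # 'b'
]
print(greedy_decode(probs, alphabet))  # -> "ab"
```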
To run a model with TensorRT, we first construct the model in PyTorch, which is then exported into an ONNX static graph. Finally, a TensorRT engine is constructed from the ONNX file and can be launched to do inference. The following table shows which backends are supported for each part of the model pipeline.
|Backend\Pipeline component|Data preprocessor|Acoustic Model|Decoder|
|---|---|---|---|
|PyTorch JIT|x|x|x|
|ONNX|-|x|-|
|TensorRT|-|x|-|
In order to run inference with TensorRT outside of the inference server, refer to the [Jasper TensorRT README](../tensorrt/README.md).
## Setup
The repository contains a folder `./triton` with a `Dockerfile` which extends the PyTorch 20.10-py3 NGC container and encapsulates some dependencies. Ensure you have the following components:
- [NVIDIA Docker](https://github.com/NVIDIA/nvidia-docker)
- [PyTorch 20.10-py3 NGC container](https://ngc.nvidia.com/catalog/containers/nvidia:pytorch)
- [Triton Inference Server 20.10 NGC container](https://ngc.nvidia.com/catalog/containers/nvidia:tritonserver)
- Access to [NVIDIA machine learning repository](https://developer.download.nvidia.com/compute/machine-learning/repos/ubuntu1804/x86_64/nvidia-machine-learning-repo-ubuntu1804_1.0.0-1_amd64.deb) and [NVIDIA CUDA repository](https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1804/x86_64/cuda-repo-ubuntu1804_10.1.243-1_amd64.deb) for NVIDIA TensorRT 6
- Supported GPUs:
- [NVIDIA Volta architecture](https://www.nvidia.com/en-us/data-center/volta-gpu-architecture/)
- [NVIDIA Turing architecture](https://www.nvidia.com/en-us/geforce/turing/)
- [NVIDIA Ampere architecture](https://www.nvidia.com/en-us/data-center/nvidia-ampere-gpu-architecture/)
- [Pretrained Jasper Model Checkpoint](https://ngc.nvidia.com/catalog/models/nvidia:jasper_pyt_ckpt_amp)
Required Python packages are listed in `requirements.txt`. These packages are automatically installed when the Docker container is built.
## Quick Start Guide
Running the following scripts will build and launch the container containing all required dependencies for native PyTorch as well as Triton. This is necessary for using inference and can also be used for data download, processing, and training of the model. For more information on the scripts and arguments, refer to the [Advanced](#advanced) section.
1. Clone the repository.
```bash
git clone https://github.com/NVIDIA/DeepLearningExamples
cd DeepLearningExamples/PyTorch/SpeechRecognition/Jasper
```
2. Build the Jasper PyTorch container.
Running the following scripts will build the container which contains all the required dependencies for data download and processing as well as converting the model.
```bash
bash scripts/docker/build.sh
```
3. Start an interactive session in the Docker container:
```bash
bash scripts/docker/launch.sh <DATA_DIR> <CHECKPOINT_DIR> <RESULT_DIR>
```
Where <DATA_DIR>, <CHECKPOINT_DIR> and <RESULT_DIR> can be either empty or absolute directory paths to dataset, existing checkpoints or potential output files. When left empty, they default to `datasets/`, `/checkpoints`, and `results/`, respectively. The `/datasets`, `/checkpoints`, `/results` directories will be mounted as volumes and mapped to the corresponding directories `<DATA_DIR>`, `<CHECKPOINT_DIR>`, `<RESULT_DIR>` on the host.
Note that `<DATA_DIR>`, `<CHECKPOINT_DIR>`, and `<RESULT_DIR>` directly correspond to the same arguments in `scripts/docker/launch.sh` and `trt/scripts/docker/launch.sh` mentioned in the [Jasper PyTorch README](../README.md) and [Jasper TensorRT README](../tensorrt/README.md).
Briefly, `<DATA_DIR>` should contain, or be prepared to contain, a `LibriSpeech` sub-directory (created in [Acquiring Dataset](../tensorrt/README.md)), `<CHECKPOINT_DIR>` should contain a PyTorch model checkpoint (`*.pt`) file obtained through training described in the [Jasper PyTorch README](../README.md), and `<RESULT_DIR>` should be prepared to contain the converted model and logs.
4. Downloading the `test-clean` part of `LibriSpeech` is required for model conversion. But it is not required for inference on Triton Inference Server, which can use a single .wav audio file. To download and preprocess LibriSpeech, run the following inside the container:
```bash
bash triton/scripts/download_triton_librispeech.sh
bash triton/scripts/preprocess_triton_librispeech.sh
```
5. (Option 1) Convert pretrained PyTorch model checkpoint into Triton Inference Server compatible model backends.
Inside the container, run:
```bash
export CHECKPOINT_PATH=<CHECKPOINT_PATH>
export CONVERT_PRECISIONS=<CONVERT_PRECISIONS>
export CONVERTS=<CONVERTS>
bash triton/scripts/export_model.sh
```
Where `<CHECKPOINT_PATH>` (`"/checkpoints/jasper_fp16.pt"`) is the absolute file path of the pretrained checkpoint, `<CONVERT_PRECISIONS>` (`"fp16" "fp32"`) is the list of precisions used for conversion, and `<CONVERTS>` (`"feature-extractor" "decoder" "ts-trace" "onnx" "tensorrt"`) is the list of conversions to be applied. The feature extractor converts only to TorchScript trace module (`feature-extractor`), the decoder only to TorchScript script module (`decoder`), and the Jasper model can convert to TorchScript trace module (`ts-trace`), ONNX (`onnx`), or TensorRT (`tensorrt`).
A pretrained PyTorch model checkpoint for model conversion can be downloaded from the [NGC model repository](https://ngc.nvidia.com/catalog/models/nvidia:jasper_pyt_ckpt_amp).
More details can be found in the [Advanced](#advanced) section under [Scripts and sample code](#scripts-and-sample-code).
6. (Option 2) Download pre-exported inference checkpoints from NGC.
Alternatively, you can skip the manual model export and download already generated model backends for every version of the model pipeline.
* [Jasper_ONNX](https://ngc.nvidia.com/catalog/models/nvidia:jasper_pyt_onnx_fp16_amp/version),
* [Jasper_TorchScript](https://ngc.nvidia.com/catalog/models/nvidia:jasper_pyt_torchscript_fp16_amp/version),
* [Jasper_TensorRT_Turing](https://ngc.nvidia.com/catalog/models/nvidia:jasper_pyt_trt_fp16_amp_turing/version),
* [Jasper_TensorRT_Volta](https://ngc.nvidia.com/catalog/models/nvidia:jasper_pyt_trt_fp16_amp_volta/version).
If you wish to use the TensorRT pipeline, make sure to download the correct version for your hardware. The extracted model folder should contain 3 subfolders: `feature-extractor-ts-trace`, `decoder-ts-script` and `jasper-x`, where `x` can be `ts-trace`, `onnx` or `tensorrt`, depending on the model backend. Copy the 3 model folders to the directory `./triton/model_repo/fp16` in your Jasper project.
7. Build a container that extends Triton Inference Client:
From outside the container, run:
```bash
bash triton/scripts/docker/build_triton_client.sh
```
Once the above steps are completed you can either run inference benchmarks or perform inference on real data.
8. (Option 1) Run all inference benchmarks.
From outside the container, run:
```bash
export RESULT_DIR=<RESULT_DIR>
export PRECISION_TESTS=<PRECISION_TESTS>
export BATCH_SIZES=<BATCH_SIZES>
export SEQ_LENS=<SEQ_LENS>
bash triton/scripts/execute_all_perf_runs.sh
```
Where `<RESULT_DIR>` is the absolute path to potential output files (`./results`), `<PRECISION_TESTS>` is a list of precisions to be tested (`"fp16" "fp32"`), `<BATCH_SIZES>` is a list of tested batch sizes (`"1" "2" "4" "8"`), and `<SEQ_LENS>` is a list of tested sequence lengths (`"32000" "112000" "267200"`).
Note: This can take several hours to complete due to the extensiveness of the benchmark. More details about the benchmark are found in the [Advanced](#advanced) section under [Performance](#performance).
9. (Option 2) Run inference on real data using the Client and Triton Inference Server.
9.1 From outside the container, restart the server:
```bash
bash triton/scripts/run_server.sh <MODEL_TYPE> <PRECISION>
```
9.2 From outside the container, submit the client request using:
```bash
bash triton/scripts/run_client.sh <MODEL_TYPE> <DATA_DIR> <FILE>
```
Where `<MODEL_TYPE>` can be either "ts-trace", "tensorrt" or "onnx", and `<PRECISION>` is either "fp32" or "fp16". `<DATA_DIR>` is an absolute local path to the directory of files. `<FILE>` is the path, relative to `<DATA_DIR>`, of either an audio file in .wav format or a manifest file in .json format.

Note: If `<FILE>` is *.json, `<DATA_DIR>` should be the path to the LibriSpeech dataset. In this case the script will do both inference and evaluation on the corresponding LibriSpeech dataset.
## Advanced
The following sections provide greater details about the Triton Inference Server pipeline and inference analysis and benchmarking results.
### Scripts and sample code
The `triton/` directory contains the following files:
* `jasper-client.py`: Python client script that takes an audio file and a specific model pipeline type and submits a client request to the server to run inference with the model on the given audio file.
* `speech_utils.py`: helper functions for `jasper-client.py`.
* `converter.py`: Python script for model conversion to different backends.
* `jasper_module.py`: helper functions for `converter.py`.
* `model_repo_configs/`: directory with Triton model config files for different backend and precision configurations.
The `triton/scripts/` directory has easy to use scripts to run supported functionalities, such as:
* `./docker/build_triton_client.sh`: builds container
* `execute_all_perf_runs.sh`: runs all benchmarks using Triton Inference Server performance client; calls `generate_perf_results.sh`
* `export_model.sh`: from pretrained PyTorch checkpoint generates backends for every version of the model inference pipeline.
* `prepare_model_repository.sh`: copies model config files from `./model_repo_configs/` to `./deploy/model_repo` and creates links to generated model backends, setting up the model repository for Triton Inference Server
* `generate_perf_results.sh`: runs benchmark with `perf-client` for specific configuration and calls `run_perf_client.sh`
* `run_server.sh`: launches Triton Inference Server
* `run_client.sh`: launches client by using `jasper-client.py` to submit inference requests to server
### Running the Triton Inference Server
Launch the Triton Inference Server in detached mode to run in the background by default:
```bash
bash triton/scripts/run_server.sh
```
To run in the foreground interactively, for debugging purposes, run:
```bash
DAEMON="--detach=false" bash triton/scripts/run_server.sh
```
The script mounts and loads models at `$PWD/triton/deploy/model_repo` to the server with all visible GPUs. In order to selectively choose the devices, set `NVIDIA_VISIBLE_DEVICES`.
### Running the Triton Inference Client
*Real data*
In order to run the client with real data, run:
```bash
bash triton/scripts/run_client.sh <backend> <data directory> <audio file>
```
The script calls `triton/jasper-client.py` which preprocesses data and sends/receives requests to/from the server.
*Synthetic data*
In order to run the client with synthetic data for performance measurements, run:
```bash
export MODEL_NAME=jasper-tensorrt-ensemble
export MODEL_VERSION=1
export BATCH_SIZE=1
export MAX_LATENCY=500
export MAX_CONCURRENCY=64
export AUDIO_LENGTH=32000
export SERVER_HOSTNAME=localhost
export RESULT_DIR_H=${PWD}/results/perf_client/${MODEL_NAME}/batch_${BATCH_SIZE}_len_${AUDIO_LENGTH}
bash triton/scripts/run_perf_client.sh
```
The export values above are default values. The script waits until the server is up and running, sends requests as per the constraints set and writes results to `/results/results_${TIMESTAMP}.csv`, where `TIMESTAMP=$(date "+%y%m%d_%H%M")` and `/results/` is the results directory mounted in the Docker container.
For more information about `perf_client`, refer to the [official documentation](https://docs.nvidia.com/deeplearning/triton-inference-server/master-user-guide/docs/optimization.html#perf-client).
## Performance
### Inference Benchmarking in Triton Inference Server
To benchmark the inference performance on a Volta, Turing or Ampere GPU, run `bash triton/scripts/execute_all_perf_runs.sh` according to [Quick Start Guide](#quick-start-guide) Step 8.
By default, this script measures inference performance for all 3 model pipelines: the PyTorch JIT (`ts-trace`) pipeline, the ONNX (`onnx`) pipeline and the TensorRT (`tensorrt`) pipeline, each with both FP32 and FP16 precision. Each of these pipelines is measured for different audio input lengths (2 sec, 7 sec, 16.7 sec) and a range of server batch sizes (up to 8). This takes place in `triton/scripts/generate_perf_results.sh`. For a specific audio length and batch size, a comparison of static and dynamic batching is performed.
### Results
In the following section, we analyze the results using the example of the Triton pipeline.
#### Performance Analysis for Triton Inference Server: NVIDIA T4
All results below are obtained using the following configurations:
* Single T4 16GB GPU on a local server
* FP16 precision
* Python 3.6.10
* PyTorch 1.7.0a0+7036e91
* TensorRT 7.2.1.4
* CUDA 11.1.0.024
* CUDNN 8.0.4.30
##### Batching techniques: Static Batching
Static batching is a feature of the inference server that allows inference requests to be served as they are received. The largest improvements to throughput come from increasing the batch size due to efficiency gains in the GPU with larger batches.

Figure 1: Throughput vs. Latency for Jasper, Audio Length = 2sec using various model backends available in Triton Inference Server and static batching.

Figure 2: Throughput vs. Latency for Jasper, Audio Length = 7sec using various model backends available in Triton Inference Server and static batching.

Figure 3: Throughput vs. Latency for Jasper, Audio Length = 16.7sec using various model backends available in Triton Inference Server and static batching.
These charts can be used to establish the optimal batch size to use in dynamic batching, given a latency budget. For example, in Figure 2 (audio length = 7s), given a budget of 50ms, the optimal batch size to use for the TensorRT backend is 4. This results in a maximum throughput of 100 inf/s under the latency constraint. In all three charts, TensorRT shows the best throughput and latency performance for a given batch size.
##### Batching techniques: Dynamic Batching
Dynamic batching is a feature of the inference server that allows inference requests to be combined by the server, so that a batch is created dynamically, resulting in an increased throughput. It is preferred in scenarios where we would like to maximize throughput and GPU utilization at the cost of higher latencies. You can set the Dynamic Batcher parameter `max_queue_delay_microseconds` to indicate the maximum amount of time you are willing to wait and `preferred_batch_size` to indicate your maximum server batch size in the Triton Inference Server model config.
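For reference, the dynamic batching settings mentioned above live in a model's `config.pbtxt`. A minimal sketch follows; the `preferred_batch_size` values here are illustrative, and the shipped configurations under `triton/model_repo_configs/` are authoritative:

```
dynamic_batching {
  preferred_batch_size: [ 4, 8 ]
  max_queue_delay_microseconds: 5000
}
```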
Figures 4, 5, and 6 emphasize the increase in overall throughput with dynamic batching. At low numbers of concurrent requests, the increased throughput comes at the cost of increased latency, as requests are queued for up to `max_queue_delay_microseconds`.

Figure 4: Triton pipeline - Latency & Throughput vs Concurrency using dynamic Batching at maximum server batch size = 8, max_queue_delay_microseconds = 5000, input audio length = 2 seconds, TensorRT backend.

Figure 5: Triton pipeline - Latency & Throughput vs Concurrency using dynamic Batching at maximum server batch size = 8, max_queue_delay_microseconds = 5000, input audio length = 7 seconds, TensorRT backend.

Figure 6: Triton pipeline - Latency & Throughput vs Concurrency using dynamic Batching at maximum server batch size = 8, max_queue_delay_microseconds = 5000, input audio length = 16.7 seconds, TensorRT backend.
##### TensorRT, ONNXRT-CUDA, and PyTorch JIT comparisons
The following tables show inference and latency comparisons across all 3 backends for mixed precision and static batching. The main observations are:
* Increasing the batch size leads to higher inference throughput and latency up to a certain batch size, after which it slowly saturates.
* The longer the audio length, the lower the throughput and the higher the latency.
###### Throughput Comparison
The following table shows the throughput benchmark results for all 3 model backends in Triton Inference Server using static batching under optimal concurrency.
|Audio length in seconds|Batch Size|TensorRT (inf/s)|PyTorch (inf/s)|ONNXRT-CUDA (inf/s)|TensorRT/PyTorch Speedup|TensorRT/ONNXRT-CUDA Speedup|
|--- |--- |--- |--- |--- |--- |--- |
| 2.0| 1| 49.67| 55.67| 41.67| 0.89| 1.19|
| 2.0| 2| 98.67| 96.00| 77.33| 1.03| 1.28|
| 2.0| 4| 180.00| 141.33| 118.67| 1.27| 1.52|
| 2.0| 8| 285.33| 202.67| 136.00| 1.41| 2.10|
| 7.0| 1| 47.67| 37.00| 18.00| 1.29| 2.65|
| 7.0| 2| 79.33| 47.33| 46.00| 1.68| 1.72|
| 7.0| 4| 100.00| 73.33| 36.00| 1.36| 2.78|
| 7.0| 8| 117.33| 82.67| 40.00| 1.42| 2.93|
| 16.7| 1| 36.33| 21.67| 11.33| 1.68| 3.21|
| 16.7| 2| 40.67| 25.33| 16.00| 1.61| 2.54|
| 16.7| 4| 46.67| 37.33| 16.00| 1.25| 2.92|
| 16.7| 8| 48.00| 40.00| 18.67| 1.20| 2.57|
###### Latency Comparison
The following table shows the latency benchmark results for all 3 model backends in Triton Inference Server using static batching and a single concurrent request.
|Audio length in seconds|Batch Size|TensorRT (ms)|PyTorch (ms)|ONNXRT-CUDA (ms)|TensorRT/PyTorch Speedup|TensorRT/ONNXRT-CUDA Speedup|
|--- |--- |--- |--- |--- |--- |--- |
| 2.0| 1| 23.61| 25.06| 31.84| 1.06| 1.35|
| 2.0| 2| 24.56| 25.11| 37.54| 1.02| 1.53|
| 2.0| 4| 25.90| 31.00| 37.20| 1.20| 1.44|
| 2.0| 8| 31.57| 41.76| 37.13| 1.32| 1.18|
| 7.0| 1| 24.79| 30.55| 32.16| 1.23| 1.30|
| 7.0| 2| 28.48| 45.05| 37.47| 1.58| 1.32|
| 7.0| 4| 41.71| 57.71| 37.92| 1.38| 0.91|
| 7.0| 8| 72.19| 98.84| 38.13| 1.37| 0.53|
| 16.7| 1| 30.66| 48.42| 32.74| 1.58| 1.07|
| 16.7| 2| 52.79| 81.89| 37.82| 1.55| 0.72|
| 16.7| 4| 92.86| 115.03| 37.91| 1.24| 0.41|
| 16.7| 8| 170.34| 203.52| 37.84| 1.19| 0.22|
## Release Notes
### Changelog
March 2021
* Updated ONNX runtime information
February 2021
* Updated Triton scripts for compatibility with Triton Inference Server version 2
* Updated Quick Start Guide
* Updated performance results
### Known issues
There are no known issues in this deployment.
2643df9398ad65d3460c5adf18ab41bdc1666af5 | 110 | md | Markdown | README.md | rafaasmiranda/renew-cdot-svm-certificate | e10cef6c2204c4f96ca8a245741ddf489b9ba5a3 | [
"MIT"
] | null | null | null | README.md | rafaasmiranda/renew-cdot-svm-certificate | e10cef6c2204c4f96ca8a245741ddf489b9ba5a3 | [
"MIT"
] | 2 | 2017-04-15T00:55:54.000Z | 2017-04-15T01:04:58.000Z | README.md | rafaasmiranda/renew-cdot-svm-certificate | e10cef6c2204c4f96ca8a245741ddf489b9ba5a3 | [
"MIT"
] | null | null | null | # renew-cdot-svm-certificate
Script to renew the self-signed certificates on NetApp Clustered Data ONTAP SVMs
# **Mr. Agent**
[](https://github.com/AY2122S2-CS2103-F09-3/tp/actions)

* The application is no different than normal mobile contact applications. Our application, targeted towards insurance agents, will ease them in storing their customer's information.
* It is **written in OOP fashion**. It provides a **reasonably well-written** code base **bigger** (around 6 KLoC) than what students usually write in beginner-level SE modules, without being overwhelmingly big.
* It comes with a **reasonable level of user and developer documentation**.
* Features:
- Customizable insurance and client categories
- Connections between customers
- Client’s insurance contract → end date, price, company
- Advanced search (by location, contract, date, company etc)
- Appointment (todo) → when list out can sort by date
- Transaction/contract
- Contacts become inactive after a specific period
* This project is based on the AddressBook-Level3 project created by the [SE-EDU initiative](https://se-education.org).
| 47.541667 | 213 | 0.753725 | eng_Latn | 0.984216 |