<p style="text-align: right; direction: rtl; float: right; clear: both;"> Granted, we learned not to say "yuck" about functions, but what is going on here? </p> <p style="text-align: right; direction: rtl; float: right; clear: both;"> Unlike regular functions, calling a generator does not return a value immediately.<br> Instead of a value, it returns a kind of cursor, like in a file, which you can picture as an arrow pointing at the first line of the function.<br> Let's store that cursor in a variable: </p>
our_generator = silly_generator()
week05/3_Generators.ipynb
PythonFreeCourse/Notebooks
mit
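The notebook refers to a `silly_generator` function defined in an earlier cell that is not shown in this excerpt. Based on the text (it starts with `a = 1`, later reaches `b = a + 1`, and is said to produce the values 1, 2, and [1, 2, 3]), a plausible reconstruction might look like this — a sketch inferred from the surrounding text, not necessarily the exact original:

```python
def silly_generator():
    a = 1
    yield a          # first value: 1
    b = a + 1
    yield b          # second value: 2
    yield [1, 2, 3]  # third value: a whole list
```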
<p style="text-align: right; direction: rtl; float: right; clear: both;"> Following the call to <code>silly_generator</code>, we got a cursor that currently points at the line <code>a = 1</code>.<br> The professional term for this cursor is a <dfn>generator iterator</dfn>. </p> <img src="images/silly_generator1.png?v=1" width="300px" style="display: block; margin-left: auto; margin-right: auto;" alt="The body of the silly_generator function, with an arrow pointing at its first line: a = 1"> <p style="text-align: right; direction: rtl; float: right; clear: both;"> After running the line <code dir="ltr">our_generator = silly_generator()</code>, that cursor is stored in a variable named <var>our_generator</var>.<br> This is a great time to ask the generator to return a value.<br> We do that with Python's built-in <code>next</code> function: </p>
next_value = next(our_generator)
print(next_value)
<p style="text-align: right; direction: rtl; float: right; clear: both;"> To understand what just happened, we need to understand two important things about generators:<br> </p> <ol style="text-align: right; direction: rtl; float: right; clear: both;"> <li>Calling <code>next</code> is like pressing "Play": it makes the cursor run until it reaches a line that returns a value.</li> <li>The keyword <code>yield</code> is similar to the keyword <code>return</code>: it stops the cursor's run, and returns the value that follows it.</li> </ol> <p style="text-align: right; direction: rtl; float: right; clear: both;"> So we had a cursor pointing at the first line. We pressed Play, and it ran the code until it reached a point where a value is returned.<br> The difference between a function and a generator is that <mark>when we return a value with <code>yield</code>, we "freeze" the state in which we left the function.</mark><br> Just like pressing "Pause".<br> When we call <code>next</code> the next time, the function will resume running from the exact place where we left the cursor, with the same variable values.<br> Right now the cursor points at the line <code>b = a + 1</code>, waiting for someone to call <code>next</code> again so the function can continue running: </p>
print(next(our_generator))
<p style="text-align: right; direction: rtl; float: right; clear: both;"> Let's summarize what has happened so far: </p> <ol style="text-align: right; direction: rtl; float: right; clear: both;"> <li>We defined a function named <var>silly_generator</var> that is supposed to return the values <samp>1</samp>, <samp>2</samp> and <samp dir="ltr">[1, 2, 3]</samp>. We called it the "<em>generator function</em>".</li> <li>By calling the generator function, we created a "cursor" (generator iterator) named <var>our_generator</var> that points at the first line of the function.</li> <li>By calling <code>next</code> on the generator iterator, we ran the cursor until the generator returned a value.</li> <li>We learned that generators return values mainly via <code>yield</code>, which returns a value and saves the state at which the function stopped.</li> <li>We called <code>next</code> on the generator iterator again, and saw that it resumes from the point where the generator stopped running last time.</li> </ol> <div class="align-center" style="display: flex; text-align: right; direction: rtl; clear: both;"> <div style="display: flex; width: 10%; float: right; clear: both;"> <img src="images/exercise.svg" style="height: 50px !important;" alt="Exercise"> </div> <div style="width: 70%"> <p style="text-align: right; direction: rtl; float: right; clear: both;"> Can you predict what will happen if we call <code>next(our_generator)</code> again? </p> </div> <div style="display: flex; width: 20%; border-right: 0.1rem solid #A5A5A5; padding: 1rem 2rem;"> <p style="text-align: center; direction: rtl; justify-content: center; align-items: center; clear: both;"> <strong>Important!</strong><br> Solve before you continue! </p> </div> </div> <p style="text-align: right; direction: rtl; float: right; clear: both;"> Let's try: </p>
print(next(our_generator))
<p style="text-align: right; direction: rtl; float: right; clear: both;"> Great! Everything went as expected.<br> But what does the future hold?<br> The next time we ask the function for a value, our cursor will run onward and will not hit a <code>yield</code>.<br> In that case, we'll get a <var>StopIteration</var> error, which tells us that <code>next</code> failed to extract the next value from the generator. </p>
print(next(our_generator))
<p style="text-align: right; direction: rtl; float: right; clear: both;"> Of course, there is no reason to panic.<br> In this case it isn't even something bad: we simply exhausted all the values of our generator iterator.<br> The generator function still exists!<br> We can create another generator iterator if we like, and get all its values in the same way: </p>
our_generator = silly_generator()
print(next(our_generator))
print(next(our_generator))
print(next(our_generator))
<p style="text-align: right; direction: rtl; float: right; clear: both;"> But when you think about it, it's a bit ridiculous.<br> Every time we want the next value, we'll have to write <code>next</code>?<br> There must be a better way! </p> <span style="text-align: right; direction: rtl; float: right; clear: both;">Every generator is also an iterable</span> <span style="text-align: right; direction: rtl; float: right; clear: both;">for</span> <p style="text-align: right; direction: rtl; float: right; clear: both;"> In fact, there is more than one good way to get all the values that come out of a given generator.<br> As an introduction, we'll drop a fact here that will surely not leave you indifferent: the generator iterator is... iterable! Surprise of the year, I know!<br> True, we can't access its items by position, but we can definitely iterate over them with a <code>for</code> loop, for example: </p>
our_generator = silly_generator()
for item in our_generator:
    print(item)
<p style="text-align: right; direction: rtl; float: right; clear: both;"> What's happening here?<br> We ask the <code>for</code> loop to iterate over our generator iterator.<br> The <code>for</code> does the work for us automatically: </p> <ol style="text-align: right; direction: rtl; float: right; clear: both;"> <li>It asks the generator iterator for the next item using <code>next</code>.</li> <li>It assigns the item it received from the generator to <var>item</var>.</li> <li>It executes the loop body once for the item stored in <var>item</var>.</li> <li>It returns to the top of the loop and tries to fetch the next item using <code>next</code>. This goes on until the generator iterator runs out of items.</li> </ol> <p style="text-align: right; direction: rtl; float: right; clear: both;"> Note that the facts we learned about that "cursor" apply here as well.<br> Running the loop again on the same cursor will not print any more items, since the cursor now points at the end of the generator function: </p>
for item in our_generator:
    print(item)
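The four steps the `for` loop performs can be sketched explicitly with `next` and `StopIteration` handling. Roughly, a `for` loop over an iterator is equivalent to this `while` loop (shown here with a small stand-in generator):

```python
def count_to_three():
    # A small stand-in generator for demonstration
    yield 1
    yield 2
    yield 3


iterator = count_to_three()
while True:
    try:
        item = next(iterator)  # step 1: ask for the next item
    except StopIteration:      # no more items: the loop ends
        break
    print(item)                # steps 2-3: bind the item and run the body
```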
<p style="text-align: right; direction: rtl; float: right; clear: both;"> Luckily, <code>for</code> loops know how to handle the <code>StopIteration</code> error themselves, so no such error will pop up in this case. </p> <span style="text-align: right; direction: rtl; float: right; clear: both;">Type conversion</span> <p style="text-align: right; direction: rtl; float: right; clear: both;"> Another way, for example, is to convert the generator iterator to another type that is also iterable: </p>
our_generator = silly_generator()
items = list(our_generator)
print(items)
<p style="text-align: right; direction: rtl; float: right; clear: both;"> In the code above, we used the <code>list</code> function, which knows how to convert iterable values to lists.<br> Note that what we learned about the "cursor" applies to conversions as well: </p>
print(list(our_generator))
<span style="text-align: right; direction: rtl; float: right; clear: both;">Practical uses</span> <span style="text-align: right; direction: rtl; float: right; clear: both;">Saving memory</span> <p style="text-align: right; direction: rtl; float: right; clear: both;"> Let's write a regular function that takes a whole number, and returns a list of all the whole numbers from 0 up to that number (sound familiar?): </p>
def my_range(upper_limit):
    numbers = []
    current_number = 0
    while current_number < upper_limit:
        numbers.append(current_number)
        current_number = current_number + 1
    return numbers


for number in my_range(1000):
    print(number)
<p style="text-align: right; direction: rtl; float: right; clear: both;"> In this function we build a new list of numbers, containing all the numbers between 0 and the number passed to the <var>upper_limit</var> parameter.<br> But there is a serious problem: running the function consumes a lot of resources!<br> If we pass 1,000 as the argument, we'll have to hold a list containing 1,000 different items, and if we pass a number that is too large, we might run out of memory. </p> <p style="text-align: right; direction: rtl; float: right; clear: both;"> But what reason do we have to hold the whole list of numbers in memory?<br> Unless a clear need for that arises, it may be better to hold only one number in memory at a time, and return it immediately using a generator: </p>
def my_range(upper_limit):
    current_number = 0
    while current_number < upper_limit:
        yield current_number
        current_number = current_number + 1


our_generator = my_range(1000)
for number in our_generator:
    print(number)
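The memory difference is easy to observe with `sys.getsizeof`: the list version grows with the input, while the generator iterator stays the same size regardless. This is a rough illustration (`getsizeof` measures only the object itself, not everything it references), with both versions restated here so the comparison is self-contained:

```python
import sys


def my_range_list(upper_limit):
    # List version: holds every number in memory at once
    return list(range(upper_limit))


def my_range_gen(upper_limit):
    # Generator version: holds a single number at a time
    current_number = 0
    while current_number < upper_limit:
        yield current_number
        current_number += 1


print(sys.getsizeof(my_range_list(1_000_000)))  # grows with the input
print(sys.getsizeof(my_range_gen(1_000_000)))   # small and constant
```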
<p style="text-align: right; direction: rtl; float: right; clear: both;"> Notice how much more elegant this version is!<br> Each time, we simply send the value of a single number (<var>current_number</var>) out.<br> When the next value is requested from the generator iterator, the generator function resumes working from the point where it stopped:<br> it increments the current number, checks that it is below <var>upper_limit</var>, and sends it out as well.<br> This way, <code>my_range(1000)</code> does not return a list of results, but a generator iterator that returns one value at a time.<br> So we never hold 1,000 numbers in memory at once. </p> <div class="align-center" style="display: flex; text-align: right; direction: rtl; clear: both;"> <div style="display: flex; width: 10%; float: right; clear: both;"> <img src="images/exercise.svg" style="height: 50px !important;" alt="Exercise"> </div> <div style="width: 70%"> <p style="text-align: right; direction: rtl; float: right; clear: both;"> Before you is a function that takes a list, and for each number in the list returns its squared value.<br> This is a somewhat wasteful version that uses a lot of memory. Can you convert it into a generator? </p> </div> <div style="display: flex; width: 20%; border-right: 0.1rem solid #A5A5A5; padding: 1rem 2rem;"> <p style="text-align: center; direction: rtl; justify-content: center; align-items: center; clear: both;"> <strong>Important!</strong><br> Solve before you continue! </p> </div> </div>
def square_numbers(numbers):
    squared_numbers = []
    for number in numbers:
        squared_numbers.append(number ** 2)
    return squared_numbers


for number in square_numbers(my_range(1000)):
    print(number)
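One possible generator version of `square_numbers`, offered as a suggested solution to the exercise above (try it yourself first):

```python
def square_numbers(numbers):
    # Yield each square as soon as it is computed,
    # instead of collecting all of them in a list first
    for number in numbers:
        yield number ** 2


for number in square_numbers(range(5)):
    print(number)
```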
<span style="text-align: right; direction: rtl; float: right; clear: both;">Partial answers</span> <p style="text-align: right; direction: rtl; float: right; clear: both;"> Sometimes we'll have to perform a long computation that takes a very long time to complete.<br> In such a case, we can use a generator to receive part of the result in real time,<br> whereas with a "regular" function we would have to wait until the entire computation finishes. </p> <p style="text-align: right; direction: rtl; float: right; clear: both;"> A Pythagorean triple, for example, is a triple of positive whole numbers, $a$, $b$ and $c$, that satisfy the requirement $a^2 + b^2 = c^2$.<br> In other words, for three numbers we pick to count as a Pythagorean triple,<br> the sum of the square of the first number and the square of the second number must equal the square of the third number. </p> <p style="text-align: right; direction: rtl; float: right; clear: both;"> Here are some examples of Pythagorean triples: </p> <ul style="text-align: right; direction: rtl; float: right; clear: both;"> <li>$(3, 4, 5)$, since $9 + 16 = 25$.<br> 9 is 3 squared, 16 is 4 squared and 25 is 5 squared. </li> <li>$(5, 12, 13)$, since $25 + 144 = 169$.</li> <li>$(8, 15, 17)$, since $64 + 225 = 289$.</li> </ul> <p style="text-align: right; direction: rtl; float: right; clear: both;"> Let's try to find all the Pythagorean triples below 10,000 with code that goes over all the possible triples: </p>
def find_pythagorean_triples(upper_bound=10_000):
    pythagorean_triples = []
    for c in range(3, upper_bound):
        for b in range(2, c):
            for a in range(1, b):
                if a ** 2 + b ** 2 == c ** 2:
                    pythagorean_triples.append((a, b, c))
    return pythagorean_triples


for triple in find_pythagorean_triples():
    print(triple)
<div class="align-center" style="display: flex; text-align: right; direction: rtl;"> <div style="display: flex; width: 10%; float: right; "> <img src="images/warning.png" style="height: 50px !important;" alt="Warning!"> </div> <div style="width: 90%"> <p style="text-align: right; direction: rtl;"> Running the previous cell will get the notebook stuck (computing the result will take a long time).<br> To be able to run the next cells, press <samp>00</samp> after running the cell, and choose <em>Restart</em>.<br> Don't worry: the restart applies only to the notebook, not to your computer. </p> </div> </div> <p style="text-align: right; direction: rtl; float: right; clear: both;"> Oh, how long running this code takes... 😴<br> If only we had received at least <em>some</em> of the results by the time this code finished!<br> Let's turn to generators for help: </p>
def find_pythagorean_triples(upper_bound=10_000):
    for c in range(3, upper_bound):
        for b in range(2, c):
            for a in range(1, b):
                if a ** 2 + b ** 2 == c ** 2:
                    yield a, b, c


for triple in find_pythagorean_triples():
    print(triple)
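Since `find_pythagorean_triples` is now a generator, we can also take just the first few triples and stop early, for example with `itertools.islice` — the rest of the search is simply never performed. The generator is restated here so the snippet stands on its own:

```python
from itertools import islice


def find_pythagorean_triples(upper_bound=10_000):
    for c in range(3, upper_bound):
        for b in range(2, c):
            for a in range(1, b):
                if a ** 2 + b ** 2 == c ** 2:
                    yield a, b, c


# Take only the first three triples; the search stops right after the third
for triple in islice(find_pythagorean_triples(), 3):
    print(triple)
```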
<p style="text-align: right; direction: rtl; float: right; clear: both;"> How did that happen? We got the answer within a fraction of a second!<br> Well, that's not quite accurate: we got some of the answers. Note that the code keeps printing :)<br> </p> <p style="text-align: right; direction: rtl; float: right; clear: both;"> As a reminder, the generator sends the result out immediately once it finds a single triple,<br> and the <code>for</code> receives each triple from the generator iterator the moment it is found.<br> As soon as the <code>for</code> receives a triple, it executes the loop body for that triple, and only then asks the generator for the next item.<br> Because of the nature of generators, the code in the last cell prints each triple the moment it is found, instead of waiting until all the triples are found. </p> <span style="text-align: right; direction: rtl; float: right; clear: both;">Intermediate exercise: wild numbers</span> <p style="text-align: right; direction: rtl; float: right; clear: both;"> "Integer factorization" is a problem whose solution takes a long time to compute on modern computers. </p> <p style="text-align: right; direction: rtl; float: right; clear: both;"> Write a function that takes a positive whole number $n$, and returns a collection of numbers whose product (the result of multiplying them together) is $n$.<br> For example, the number 1,386 is the product of the collection of numbers $2 \cdot 3 \cdot 3 \cdot 7 \cdot 11$.<br> Every number in this collection must be prime.<br> As a reminder: a prime number is a number that has no divisors other than itself and 1. </p> <p style="text-align: right; direction: rtl; float: right; clear: both;"> Assume the number received is not prime.<br> What is the advantage of a generator over a regular function that does the same thing? </p> <p style="text-align: right; direction: rtl; float: right; clear: both;"> Hint: <span style="background: black;">If you try dividing the number by 2, then by 3 (and so on), you will eventually reach a prime divisor of the number.</span><br> Big hint: <span style="background: black;">Each time you find a divisor of the number, divide the number by that divisor, and restart the search. When should you stop?</span> </p> <span style="text-align: right; direction: rtl; float: right; clear: both;">Infinite collections</span> <p style="text-align: right; direction: rtl; float: right; clear: both;"> For certain problems, we want to be able to return infinitely many results.<br> As an example of an infinite sequence, take the Fibonacci sequence, in which each item is the sum of the two items before it:<br> $1, 1, 2, 3, 5, 8, \ldots$ </p> <p style="text-align: right; direction: rtl; float: right; clear: both;"> Let's implement a function that returns the Fibonacci sequence.<br> A regular function has no way of returning an infinite number of items, so we'll have to decide on the maximum number of items to return: </p>
def fibonacci(max_items):
    a = 1
    b = 1
    numbers = [1, 1]
    while len(numbers) < max_items:
        a, b = b, a + b  # Unpacking
        numbers.append(b)
    return numbers


for number in fibonacci(10):
    print(number)
<p style="text-align: right; direction: rtl; float: right; clear: both;"> Generators, in contrast, don't have to have a defined end.<br> We'll use <code>while True</code>, which always holds, so that eventually we always reach the <code>yield</code>: </p>
def fibonacci():
    a = 1
    b = 1
    while True:  # Always holds
        yield a
        a, b = b, a + b


generator_iterator = fibonacci()
for number in range(10):
    print(next(generator_iterator))

# We can very easily ask for the next 10 items in the sequence
for number in range(10):
    print(next(generator_iterator))
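Another convenient way to take a bounded number of items from an infinite generator is `itertools.islice`, which requests at most n items and then stops, so the loop always terminates:

```python
from itertools import islice


def fibonacci():
    a = 1
    b = 1
    while True:
        yield a
        a, b = b, a + b


# islice takes at most 10 items, so the for loop terminates
for number in islice(fibonacci(), 10):
    print(number)
```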
<div class="align-center" style="display: flex; text-align: right; direction: rtl;"> <div style="display: flex; width: 10%; float: right; "> <img src="images/warning.png" style="height: 50px !important;" alt="Warning!"> </div> <div style="width: 90%"> <p style="text-align: right; direction: rtl;"> Infinite generators can easily cause infinite loops, even with <code>for</code> loops.<br> Note the careful handling in the examples above.<br> Running a <code>for</code> loop directly over the generator iterator would have put us into an infinite loop. </p> </div> </div> <div class="align-center" style="display: flex; text-align: right; direction: rtl; clear: both;"> <div style="display: flex; width: 10%; float: right; clear: both;"> <img src="images/exercise.svg" style="height: 50px !important;" alt="Exercise"> </div> <div style="width: 70%"> <p style="text-align: right; direction: rtl; float: right; clear: both;"> Write a generator that returns all the whole numbers greater than 0. </p> </div> <div style="display: flex; width: 20%; border-right: 0.1rem solid #A5A5A5; padding: 1rem 2rem;"> <p style="text-align: center; direction: rtl; justify-content: center; align-items: center; clear: both;"> <strong>Important!</strong><br> Solve before you continue! </p> </div> </div> <span style="text-align: right; direction: rtl; float: right; clear: both;">Multiple generator iterators</span> <p style="text-align: right; direction: rtl; float: right; clear: both;"> Let's define a simple generator that returns the items <samp>1</samp>, <samp>2</samp> and <samp>3</samp>: </p>
def simple_generator():
    yield 1
    yield 2
    yield 3
<p style="text-align: right; direction: rtl; float: right; clear: both;"> We'll create two different generator iterators ("cursors") that point at the first line of the generator above: </p>
first_gen = simple_generator()
second_gen = simple_generator()
<p style="text-align: right; direction: rtl; float: right; clear: both;"> In this regard, it's important to understand that each of the generator iterators is a separate "arrow" pointing at the first line of <var>simple_generator</var>.<br> If we ask each of them to return a value, we'll get 1 from both, and in each of the two generator iterators that imaginary arrow will move on to wait at the second line: </p>
print(next(first_gen))
print(next(second_gen))
<p style="text-align: right; direction: rtl; float: right; clear: both;"> We can advance <var>first_gen</var>, for example, to the end of the function: </p>
print(next(first_gen))
print(next(first_gen))
<p style="text-align: right; direction: rtl; float: right; clear: both;"> But <var>second_gen</var> is a separate arrow, still pointing at the second line of the generator function.<br> If we ask it for the next value, it will continue the journey from the value <samp>2</samp>:<br> </p>
print(next(second_gen))
Getting the data

Here you can download the SVHN dataset. Run the cell below and it'll download to your machine.
from urllib.request import urlretrieve
from os.path import isfile, isdir
from tqdm import tqdm

data_dir = 'data/'

if not isdir(data_dir):
    raise Exception("Data directory doesn't exist!")

class DLProgress(tqdm):
    last_block = 0

    def hook(self, block_num=1, block_size=1, total_size=None):
        self.total = total_size
        self.update((block_num - self.last_block) * block_size)
        self.last_block = block_num

if not isfile(data_dir + "train_32x32.mat"):
    with DLProgress(unit='B', unit_scale=True, miniters=1, desc='SVHN Training Set') as pbar:
        urlretrieve(
            'http://ufldl.stanford.edu/housenumbers/train_32x32.mat',
            data_dir + 'train_32x32.mat',
            pbar.hook)

if not isfile(data_dir + "test_32x32.mat"):
    with DLProgress(unit='B', unit_scale=True, miniters=1, desc='SVHN Testing Set') as pbar:
        urlretrieve(
            'http://ufldl.stanford.edu/housenumbers/test_32x32.mat',
            data_dir + 'test_32x32.mat',
            pbar.hook)
dcgan-svhn/DCGAN_Exercises.ipynb
kitu2007/dl_class
mit
These SVHN files are .mat files typically used with Matlab. However, we can load them in with scipy.io.loadmat which we imported above.
trainset = loadmat(data_dir + 'train_32x32.mat')
testset = loadmat(data_dir + 'test_32x32.mat')
Here I'm showing a small sample of the images. Each of these is 32x32 with 3 color channels (RGB). These are the real images we'll pass to the discriminator and what the generator will eventually fake.
idx = np.random.randint(0, trainset['X'].shape[3], size=36)
fig, axes = plt.subplots(6, 6, sharex=True, sharey=True, figsize=(5, 5))
for ii, ax in zip(idx, axes.flatten()):
    ax.imshow(trainset['X'][:, :, :, ii], aspect='equal')
    ax.xaxis.set_visible(False)
    ax.yaxis.set_visible(False)
plt.subplots_adjust(wspace=0, hspace=0)
Here we need to do a bit of preprocessing and getting the images into a form where we can pass batches to the network. First off, we need to rescale the images to a range of -1 to 1, since the output of our generator is also in that range. We also have a set of test and validation images which could be used if we're trying to identify the numbers in the images.
def scale(x, feature_range=(-1, 1)):
    # scale to (0, 1)
    x = ((x - x.min()) / (255 - x.min()))

    # scale to feature_range
    min, max = feature_range
    x = x * (max - min) + min
    return x


class Dataset:
    def __init__(self, train, test, val_frac=0.5, shuffle=False, scale_func=None):
        split_idx = int(len(test['y']) * (1 - val_frac))
        self.test_x, self.valid_x = test['X'][:, :, :, :split_idx], test['X'][:, :, :, split_idx:]
        self.test_y, self.valid_y = test['y'][:split_idx], test['y'][split_idx:]
        self.train_x, self.train_y = train['X'], train['y']

        self.train_x = np.rollaxis(self.train_x, 3)
        self.valid_x = np.rollaxis(self.valid_x, 3)
        self.test_x = np.rollaxis(self.test_x, 3)

        if scale_func is None:
            self.scaler = scale
        else:
            self.scaler = scale_func
        self.shuffle = shuffle

    def batches(self, batch_size):
        if self.shuffle:
            idx = np.arange(len(self.train_x))
            np.random.shuffle(idx)
            self.train_x = self.train_x[idx]
            self.train_y = self.train_y[idx]

        n_batches = len(self.train_y) // batch_size
        for ii in range(0, len(self.train_y), batch_size):
            x = self.train_x[ii:ii + batch_size]
            y = self.train_y[ii:ii + batch_size]

            yield self.scaler(x), y
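A quick sanity check of the scaling logic above, written here for plain Python numbers rather than numpy arrays: pixel value 0 should map to -1, 255 to 1, and the midpoint to roughly 0. `scale_scalar` and its `x_min` parameter are illustrative names, not part of the notebook's code:

```python
def scale_scalar(x, x_min=0, feature_range=(-1, 1)):
    # Same arithmetic as scale() above, for a single pixel value
    # (assumes the batch minimum is x_min; real images use x.min())
    x = (x - x_min) / (255 - x_min)  # scale to (0, 1)
    lo, hi = feature_range
    return x * (hi - lo) + lo        # scale to feature_range


print(scale_scalar(0))      # -1.0
print(scale_scalar(255))    # 1.0
print(scale_scalar(127.5))  # 0.0
```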
Network Inputs

Here, just creating some placeholders like normal.
def model_inputs(real_dim, z_dim):
    inputs_real = tf.placeholder(tf.float32, (None, *real_dim), name='input_real')
    inputs_z = tf.placeholder(tf.float32, (None, z_dim), name='input_z')

    return inputs_real, inputs_z
Generator

Here you'll build the generator network. The input will be our noise vector z as before. Also as before, the output will be a $tanh$ output, but this time with size 32x32 which is the size of our SVHN images.

What's new here is we'll use convolutional layers to create our new images. The first layer is a fully connected layer which is reshaped into a deep and narrow layer, something like 4x4x1024 as in the original DCGAN paper. Then we use batch normalization and a leaky ReLU activation. Next is a transposed convolution where typically you'd halve the depth and double the width and height of the previous layer. Again, we use batch normalization and leaky ReLU. For each of these layers, the general scheme is convolution > batch norm > leaky ReLU.

You keep stacking layers up like this until you get the final transposed convolution layer with shape 32x32x3. Below is the architecture used in the original DCGAN paper:

Note that the final layer here is 64x64x3, while for our SVHN dataset, we only want it to be 32x32x3.

Exercise: Build the transposed convolutional network for the generator in the function below. Be sure to use leaky ReLUs on all the layers except for the last tanh layer, as well as batch normalization on all the transposed convolutional layers except the last one.
def generator(z, output_dim, reuse=False, alpha=0.2, training=True):
    with tf.variable_scope('generator', reuse=reuse):
        # First fully connected layer
        x =

        # Output layer, 32x32x3
        logits =

        out = tf.tanh(logits)

        return out
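The shape progression the text describes (a deep, narrow start whose width and height double, and whose depth halves, at each stride-2 transposed convolution) can be checked with simple arithmetic. With 'same' padding and stride 2, the spatial size doubles per layer, so starting from 4x4x512 (an assumed starting depth for the 32x32 case, smaller than the paper's 4x4x1024 for 64x64), three layers reach 32x32x3. `deconv_shapes` is a hypothetical helper for this arithmetic, not part of the exercise code:

```python
def deconv_shapes(start=(4, 4, 512), layers=3, final_depth=3):
    # Each stride-2, 'same'-padded transposed conv doubles height/width
    # and halves depth; the last layer outputs final_depth channels (RGB)
    h, w, d = start
    shapes = [start]
    for i in range(layers):
        h, w = h * 2, w * 2
        d = final_depth if i == layers - 1 else d // 2
        shapes.append((h, w, d))
    return shapes


for shape in deconv_shapes():
    print(shape)  # (4, 4, 512) -> (8, 8, 256) -> (16, 16, 128) -> (32, 32, 3)
```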
Discriminator

Here you'll build the discriminator. This is basically just a convolutional classifier like you've built before. The input to the discriminator are 32x32x3 tensors/images. You'll want a few convolutional layers, then a fully connected layer for the output. As before, we want a sigmoid output, and you'll need to return the logits as well. For the depths of the convolutional layers I suggest starting with 16, 32, 64 filters in the first layer, then double the depth as you add layers.

Note that in the DCGAN paper, they did all the downsampling using only strided convolutional layers with no maxpool layers.

You'll also want to use batch normalization with tf.layers.batch_normalization on each layer except the first convolutional and output layers. Again, each layer should look something like convolution > batch norm > leaky ReLU.

Note: in this project, your batch normalization layers will always use batch statistics. (That is, always set training to True.) That's because we are only interested in using the discriminator to help train the generator. However, if you wanted to use the discriminator for inference later, then you would need to set the training parameter appropriately.

Exercise: Build the convolutional network for the discriminator. The input is 32x32x3 images, the output is a sigmoid plus the logits. Again, use Leaky ReLU activations and batch normalization on all the layers except the first.
def discriminator(x, reuse=False, alpha=0.2):
    with tf.variable_scope('discriminator', reuse=reuse):
        # Input layer is 32x32x3
        x = None       # TODO: strided convolutions with batch norm and leaky ReLU
        logits = None  # TODO: flatten and apply a fully connected layer with one output
        out = None     # TODO: sigmoid of the logits

        return out, logits
dcgan-svhn/DCGAN_Exercises.ipynb
kitu2007/dl_class
mit
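Before filling in the layers, it helps to confirm the downsampling arithmetic the discriminator relies on. With 'same' padding, a stride-2 convolution maps width w to ceil(w/2), so three strided convolutions take the 32x32 input down to 4x4 before flattening for the logits. A dependency-free check (the number of layers here is an illustrative assumption):

```python
import math

def strided_conv_out(size, stride=2):
    # With 'same' padding, a strided convolution maps width w to ceil(w / s).
    return math.ceil(size / stride)

sizes = [32]
for _ in range(3):
    sizes.append(strided_conv_out(sizes[-1]))

print(sizes)  # 32 -> 16 -> 8 -> 4, ready to flatten for the fully connected output
```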
Model Loss Calculating the loss like before, nothing new here.
def model_loss(input_real, input_z, output_dim, alpha=0.2):
    """
    Get the loss for the discriminator and generator
    :param input_real: Images from the real dataset
    :param input_z: Z input
    :param output_dim: The number of channels in the output image
    :return: A tuple of (discriminator loss, generator loss)
    """
    g_model = generator(input_z, output_dim, alpha=alpha)
    d_model_real, d_logits_real = discriminator(input_real, alpha=alpha)
    d_model_fake, d_logits_fake = discriminator(g_model, reuse=True, alpha=alpha)

    d_loss_real = tf.reduce_mean(
        tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_real,
                                                labels=tf.ones_like(d_model_real)))
    d_loss_fake = tf.reduce_mean(
        tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_fake,
                                                labels=tf.zeros_like(d_model_fake)))
    g_loss = tf.reduce_mean(
        tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_fake,
                                                labels=tf.ones_like(d_model_fake)))

    d_loss = d_loss_real + d_loss_fake

    return d_loss, g_loss
dcgan-svhn/DCGAN_Exercises.ipynb
kitu2007/dl_class
mit
Optimizers Not much new here, but notice how the train operations are wrapped in a with tf.control_dependencies block so the batch normalization layers can update their population statistics.
def model_opt(d_loss, g_loss, learning_rate, beta1):
    """
    Get optimization operations
    :param d_loss: Discriminator loss Tensor
    :param g_loss: Generator loss Tensor
    :param learning_rate: Learning Rate Placeholder
    :param beta1: The exponential decay rate for the 1st moment in the optimizer
    :return: A tuple of (discriminator training operation, generator training operation)
    """
    # Get weights and biases to update
    t_vars = tf.trainable_variables()
    d_vars = [var for var in t_vars if var.name.startswith('discriminator')]
    g_vars = [var for var in t_vars if var.name.startswith('generator')]

    # Optimize
    with tf.control_dependencies(tf.get_collection(tf.GraphKeys.UPDATE_OPS)):
        d_train_opt = tf.train.AdamOptimizer(learning_rate, beta1=beta1).minimize(d_loss, var_list=d_vars)
        g_train_opt = tf.train.AdamOptimizer(learning_rate, beta1=beta1).minimize(g_loss, var_list=g_vars)

    return d_train_opt, g_train_opt
dcgan-svhn/DCGAN_Exercises.ipynb
kitu2007/dl_class
mit
Building the model Here we can use the functions we defined about to build the model as a class. This will make it easier to move the network around in our code since the nodes and operations in the graph are packaged in one object.
class GAN:
    def __init__(self, real_size, z_size, learning_rate, alpha=0.2, beta1=0.5):
        tf.reset_default_graph()

        self.input_real, self.input_z = model_inputs(real_size, z_size)

        # Pass the alpha argument through instead of hardcoding 0.2
        self.d_loss, self.g_loss = model_loss(self.input_real, self.input_z,
                                              real_size[2], alpha=alpha)

        self.d_opt, self.g_opt = model_opt(self.d_loss, self.g_loss, learning_rate, beta1)
dcgan-svhn/DCGAN_Exercises.ipynb
kitu2007/dl_class
mit
Here is a function for displaying generated images.
def view_samples(epoch, samples, nrows, ncols, figsize=(5,5)):
    fig, axes = plt.subplots(figsize=figsize, nrows=nrows, ncols=ncols,
                             sharey=True, sharex=True)
    for ax, img in zip(axes.flatten(), samples[epoch]):
        ax.axis('off')
        # Rescale the image from the tanh range to [0, 255] for display
        img = ((img - img.min()) * 255 / (img.max() - img.min())).astype(np.uint8)
        ax.set_adjustable('box-forced')
        im = ax.imshow(img, aspect='equal')
    plt.subplots_adjust(wspace=0, hspace=0)
    return fig, axes
dcgan-svhn/DCGAN_Exercises.ipynb
kitu2007/dl_class
mit
And another function we can use to train our network. Notice that when we call generator to create the samples to display, we set training to False. That's so the batch normalization layers will use the population statistics rather than the batch statistics. Also notice that we set the net.input_real placeholder when we run the generator's optimizer. The generator doesn't actually use it, but we'd get an error without it because of the tf.control_dependencies block we created in model_opt.
def train(net, dataset, epochs, batch_size, print_every=10, show_every=100, figsize=(5,5)):
    saver = tf.train.Saver()
    sample_z = np.random.uniform(-1, 1, size=(72, z_size))

    samples, losses = [], []
    steps = 0

    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        for e in range(epochs):
            for x, y in dataset.batches(batch_size):
                steps += 1

                # Sample random noise for G
                batch_z = np.random.uniform(-1, 1, size=(batch_size, z_size))

                # Run optimizers
                _ = sess.run(net.d_opt, feed_dict={net.input_real: x, net.input_z: batch_z})
                _ = sess.run(net.g_opt, feed_dict={net.input_z: batch_z, net.input_real: x})

                if steps % print_every == 0:
                    # Get the losses and print them out
                    train_loss_d = net.d_loss.eval({net.input_z: batch_z, net.input_real: x})
                    train_loss_g = net.g_loss.eval({net.input_z: batch_z})

                    print("Epoch {}/{}...".format(e+1, epochs),
                          "Discriminator Loss: {:.4f}...".format(train_loss_d),
                          "Generator Loss: {:.4f}".format(train_loss_g))
                    # Save losses to view after training
                    losses.append((train_loss_d, train_loss_g))

                if steps % show_every == 0:
                    gen_samples = sess.run(
                        generator(net.input_z, 3, reuse=True, training=False),
                        feed_dict={net.input_z: sample_z})
                    samples.append(gen_samples)
                    _ = view_samples(-1, samples, 6, 12, figsize=figsize)
                    plt.show()

        saver.save(sess, './checkpoints/generator.ckpt')

    with open('samples.pkl', 'wb') as f:
        pkl.dump(samples, f)

    return losses, samples
dcgan-svhn/DCGAN_Exercises.ipynb
kitu2007/dl_class
mit
Hyperparameters GANs are very sensitive to hyperparameters. A lot of experimentation goes into finding the best hyperparameters such that the generator and discriminator don't overpower each other. Try out your own hyperparameters or read the DCGAN paper to see what worked for them. Exercise: Find hyperparameters to train this GAN. The values found in the DCGAN paper work well, or you can experiment on your own. In general, you want the discriminator loss to be around 0.3; this means it is correctly classifying images as fake or real about 50% of the time.
real_size = (32,32,3)
z_size = 100
learning_rate = 0.001
batch_size = 64
epochs = 1
alpha = 0.01
beta1 = 0.9

# Create the network
net = GAN(real_size, z_size, learning_rate, alpha=alpha, beta1=beta1)

# Load the data and train the network here
dataset = Dataset(trainset, testset)

losses, samples = train(net, dataset, epochs, batch_size, figsize=(10,5))

fig, ax = plt.subplots()
losses = np.array(losses)
plt.plot(losses.T[0], label='Discriminator', alpha=0.5)
plt.plot(losses.T[1], label='Generator', alpha=0.5)
plt.title("Training Losses")
plt.legend()

_ = view_samples(-1, samples, 6, 12, figsize=(10,5))
dcgan-svhn/DCGAN_Exercises.ipynb
kitu2007/dl_class
mit
Setting up the Code Before we can plot our Dirichlet distributions, we need to do three things: Generate a set of x-y coordinates over our equilateral triangle Map the x-y coordinates to the 2-simplex coordinate space Compute $\text{Dir}(\boldsymbol{\alpha})$ for each point
def xy2bc(xy, tol=1.e-3):
    '''Converts 2D Cartesian coordinates to barycentric.'''
    s = [(corners[i] - midpoints[i]).dot(xy - midpoints[i]) / 0.75
         for i in range(3)]
    return np.clip(s, tol, 1.0 - tol)
examples/python/Dirichlet distribution.ipynb
iaja/scalaLDAvis
apache-2.0
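As a quick sanity check of xy2bc, the centroid of the equilateral triangle should map to the barycentric point (1/3, 1/3, 1/3), and each corner should map to a unit vector. A self-contained version of the same projection (re-deriving corners and midpoints, which in the notebook are defined in an earlier cell):

```python
import math

# Corners of the equilateral triangle used for the 2-simplex.
corners = [(0.0, 0.0), (1.0, 0.0), (0.5, math.sqrt(0.75))]
# midpoints[i] is the midpoint of the edge opposite corner i.
midpoints = [tuple((corners[(i + 1) % 3][d] + corners[(i + 2) % 3][d]) / 2
                   for d in range(2)) for i in range(3)]

def xy_to_barycentric(xy):
    # Same projection as xy2bc, without the numerical clipping.
    return [sum((corners[i][d] - midpoints[i][d]) * (xy[d] - midpoints[i][d])
                for d in range(2)) / 0.75
            for i in range(3)]

centroid = (0.5, math.sqrt(0.75) / 3)
print(xy_to_barycentric(centroid))  # ~ [1/3, 1/3, 1/3]
print(xy_to_barycentric(corners[0]))  # ~ [1, 0, 0]
```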
Gamma: $\Gamma \left( z \right) = \int\limits_0^\infty {x^{z - 1} e^{ - x} dx}$ $ \text{Dir}\left(\boldsymbol{\alpha}\right)\rightarrow \mathrm{p}\left(\boldsymbol{\theta}\mid\boldsymbol{\alpha}\right)=\frac{\Gamma\left(\sum_{i=1}^{K}\alpha_{i}\right)}{\prod_{i=1}^{K}\Gamma\left(\alpha_{i}\right)}\prod_{i=1}^{K}\theta_{i}^{\alpha_{i}-1} \ K\geq2\ \text{number of categories} \ \alpha_{1},\ldots,\alpha_{K}\ \text{concentration parameters, where}\ \alpha_{i}>0 $
class Dirichlet(object):
    def __init__(self, alpha):
        self._alpha = np.array(alpha)
        # Normalizing constant: Gamma(sum(alpha)) / prod(Gamma(alpha_i))
        self._coef = gamma(np.sum(self._alpha)) / reduce(mul, [gamma(a) for a in self._alpha])

    def pdf(self, x):
        '''Returns pdf value for `x`.'''
        return self._coef * reduce(mul, [xx ** (aa - 1)
                                         for (xx, aa) in zip(x, self._alpha)])

def draw_pdf_contours(dist, nlevels=200, subdiv=8, **kwargs):
    refiner = tri.UniformTriRefiner(triangle)
    trimesh = refiner.refine_triangulation(subdiv=subdiv)
    pvals = [dist.pdf(xy2bc(xy)) for xy in zip(trimesh.x, trimesh.y)]

    plt.tricontourf(trimesh, pvals, nlevels, **kwargs)
    plt.axis('equal')
    plt.xlim(0, 1)
    plt.ylim(0, 0.75**0.5)
    plt.axis('off')

draw_pdf_contours(Dirichlet([1, 1, 1]))
draw_pdf_contours(Dirichlet([0.999, 0.999, 0.999]))
draw_pdf_contours(Dirichlet([5, 5, 5]))
draw_pdf_contours(Dirichlet([1, 2, 3]))
draw_pdf_contours(Dirichlet([3, 2, 1]))
draw_pdf_contours(Dirichlet([2, 3, 1]))
examples/python/Dirichlet distribution.ipynb
iaja/scalaLDAvis
apache-2.0
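The pdf formula above is easy to spot-check: for α = (1, 1, 1) the distribution is uniform over the 2-simplex, so the density should be exactly Γ(3) = 2 at every point. A standard-library-only version of the coefficient and pdf computation, mirroring the class above:

```python
import math
from functools import reduce
from operator import mul

def dirichlet_pdf(theta, alpha):
    # Normalizing constant Gamma(sum(alpha)) / prod(Gamma(alpha_i)),
    # then the product of theta_i^(alpha_i - 1).
    coef = math.gamma(sum(alpha)) / reduce(mul, (math.gamma(a) for a in alpha))
    return coef * reduce(mul, (t ** (a - 1) for t, a in zip(theta, alpha)))

# Dir(1, 1, 1) is uniform: density 2 everywhere on the simplex.
print(dirichlet_pdf((0.2, 0.3, 0.5), (1, 1, 1)))  # 2.0
print(dirichlet_pdf((1/3, 1/3, 1/3), (1, 1, 1)))  # 2.0
```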
Download and Explore the Data
# Download the dataset
!wget -nv -O ../data/PierceCricketData.csv https://ibm.box.com/shared/static/fjbsu8qbwm1n5zsw90q6xzfo4ptlsw96.csv

df = pd.read_csv("../data/PierceCricketData.csv")
df.head()
dl_tf_BDU/1.Intro_TF/ML0120EN-1.2-Exercise-LinearRegression.ipynb
santipuch590/deeplearning-tf
mit
<h6> Plot the Data Points </h6>
%matplotlib inline

x_data, y_data = (df["Chirps"].values, df["Temp"].values)

# Plot the data points
plt.plot(x_data, y_data, 'ro')
# Label the axes
plt.xlabel("# Chirps per 15 sec")
plt.ylabel("Temp in Fahrenheit")
dl_tf_BDU/1.Intro_TF/ML0120EN-1.2-Exercise-LinearRegression.ipynb
santipuch590/deeplearning-tf
mit
Looking at the scatter plot we can see that there is a linear relationship between the data points connecting chirps to temperature, and an optimal way to capture this knowledge is to fit a line that best describes the data, following the linear equation: #### Ypred = m X + c We have to estimate the values of the slope "m" and the intercept "c" to fit a line, where X is the "Chirps" and Ypred is the "Predicted Temperature" in this case. Create a Data Flow Graph using TensorFlow Model the above equation by assigning arbitrary values of your choice for the slope "m" and intercept "c", which can predict the temperature "Ypred" given chirps "X" as input, for example m=3 and c=2. Also, create a placeholder for the actual temperature "Y", which we will need during optimization to estimate the actual values of the slope and intercept.
# Create placeholders and variables along with the linear model.
m = tf.Variable(3, dtype=tf.float32)
c = tf.Variable(2, dtype=tf.float32)
x = tf.placeholder(dtype=tf.float32, shape=x_data.size)
y = tf.placeholder(dtype=tf.float32, shape=y_data.size)

# Linear model
y_pred = m * x + c
dl_tf_BDU/1.Intro_TF/ML0120EN-1.2-Exercise-LinearRegression.ipynb
santipuch590/deeplearning-tf
mit
<div align="right">
<a href="#createvar" class="btn btn-default" data-toggle="collapse">Click here for the solution</a>
</div>
<div id="createvar" class="collapse">
```
X = tf.placeholder(tf.float32, shape=(x_data.size))
Y = tf.placeholder(tf.float32, shape=(y_data.size))

# tf.Variable call creates a single updatable copy in the memory and efficiently updates
# the copy to reflect any changes in the variable values throughout the scope of the tensorflow session
m = tf.Variable(3.0)
c = tf.Variable(2.0)

# Construct a Model
Ypred = tf.add(tf.multiply(X, m), c)
```
</div>
Create and Run a Session to Visualize the Predicted Line from above Graph <h6> Feel free to change the values of "m" and "c" in the future to check how the initial position of the line changes </h6>
# Create a session and initialize variables
session = tf.Session()
session.run(tf.global_variables_initializer())

# Get predictions with the initial parameter values
y_vals = session.run(y_pred, feed_dict={x: x_data})

# Your code goes here
plt.plot(x_data, y_vals, label='Predicted')
plt.scatter(x_data, y_data, color='red', label='GT')
dl_tf_BDU/1.Intro_TF/ML0120EN-1.2-Exercise-LinearRegression.ipynb
santipuch590/deeplearning-tf
mit
<div align="right">
<a href="#matmul1" class="btn btn-default" data-toggle="collapse">Click here for the solution</a>
</div>
<div id="matmul1" class="collapse">
```
pred = session.run(Ypred, feed_dict={X: x_data})

# Plot the initial prediction against the data points
plt.plot(x_data, pred)
plt.plot(x_data, y_data, 'ro')
# Label the axes
plt.xlabel("# Chirps per 15 sec")
plt.ylabel("Temp in Fahrenheit")
```
</div>
Define a Graph for Loss Function The essence of estimating the values of "m" and "c" lies in minimizing the difference between the predicted "Ypred" and actual "Y" temperature values, which is defined in the form of the mean squared error loss function. $$ loss = \frac{1}{n}\sum_{i=1}^n{[Ypred_i - {Y}_i]^2} $$ Note: There are also other ways to model the loss function based on the distance metric between predicted and actual temperature values. For this exercise the mean squared error criterion is used.
# Scale both terms by a normalization factor to keep the squared error numerically small
nf = 0.1
loss = tf.reduce_mean(tf.squared_difference(y_pred * nf, y * nf))
dl_tf_BDU/1.Intro_TF/ML0120EN-1.2-Exercise-LinearRegression.ipynb
santipuch590/deeplearning-tf
mit
<div align="right">
<a href="#matmul12" class="btn btn-default" data-toggle="collapse">Click here for the solution</a>
</div>
<div id="matmul12" class="collapse">
```
# normalization factor
nf = 1e-1

# setting up the loss function
loss = tf.reduce_mean(tf.squared_difference(Ypred*nf, Y*nf))
```
</div>
Define an Optimization Graph to Minimize the Loss and Train the Model
# Your code goes here
optimizer = tf.train.GradientDescentOptimizer(0.01)
train_op = optimizer.minimize(loss)
dl_tf_BDU/1.Intro_TF/ML0120EN-1.2-Exercise-LinearRegression.ipynb
santipuch590/deeplearning-tf
mit
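The gradient descent update that `GradientDescentOptimizer.minimize` performs on this loss can be sketched in plain Python: repeatedly nudge m and c down the gradient of the mean squared error. This is only an illustration of the update rule on synthetic data (the learning rate and step count are arbitrary choices), not the notebook's TensorFlow pipeline:

```python
def fit_line(xs, ys, lr=0.01, steps=5000):
    m, c = 0.0, 0.0
    n = len(xs)
    for _ in range(steps):
        # Gradients of mean((m*x + c - y)^2) with respect to m and c.
        grad_m = sum(2 * (m * x + c - y) * x for x, y in zip(xs, ys)) / n
        grad_c = sum(2 * (m * x + c - y) for x, y in zip(xs, ys)) / n
        m -= lr * grad_m
        c -= lr * grad_c
    return m, c

# Synthetic data drawn exactly from the line y = 3x + 2.
xs = [0, 1, 2, 3, 4, 5]
ys = [3 * x + 2 for x in xs]
m, c = fit_line(xs, ys)
print(round(m, 3), round(c, 3))  # ~3.0 ~2.0
```

With noise-free data the recovered slope and intercept converge to the true values; on the cricket data they will instead settle at the least-squares fit.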
<div align="right">
<a href="#matmul13" class="btn btn-default" data-toggle="collapse">Click here for the solution</a>
</div>
<div id="matmul13" class="collapse">
```
optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.01)
#optimizer = tf.train.AdagradOptimizer(0.01)

# pass the loss function that optimizer should optimize on.
train = optimizer.minimize(loss)
```
</div>
Initialize all the variables again
session.run(tf.global_variables_initializer())
dl_tf_BDU/1.Intro_TF/ML0120EN-1.2-Exercise-LinearRegression.ipynb
santipuch590/deeplearning-tf
mit
Run a session to train and predict the values of "m" and "c" for different training steps, storing the losses at each step Get the predicted m and c values by running a session that trains the linear model. Also collect the loss at different steps to print and plot.
convergenceTolerance = 0.0001
previous_m = np.inf
previous_c = np.inf

steps = {}
steps['m'] = []
steps['c'] = []

losses = []

for k in range(10000):
    ########## Your Code goes Here ###########
    _, _l, _m, _c = session.run([train_op, loss, m, c], feed_dict={x: x_data, y: y_data})

    steps['m'].append(_m)
    steps['c'].append(_c)
    losses.append(_l)

    # Stop once both parameters have stopped changing appreciably.
    # Note: combining the two absolute differences with `or` would compare
    # a truthy float to the tolerance, so each must be checked separately.
    if np.abs(previous_m - _m) <= convergenceTolerance and np.abs(previous_c - _c) <= convergenceTolerance:
        print("Finished by Convergence Criterion")
        print(k)
        print(_l)
        break

    previous_m = _m
    previous_c = _c
dl_tf_BDU/1.Intro_TF/ML0120EN-1.2-Exercise-LinearRegression.ipynb
santipuch590/deeplearning-tf
mit
<div align="right"> <a href="#matmul18" class="btn btn-default" data-toggle="collapse">Click here for the solution</a> </div> <div id="matmul18" class="collapse"> ``` # run a session to train , get m and c values with loss function _, _m , _c,_l = session.run([train, m, c,loss],feed_dict={X:x_data,Y:y_data}) ``` </div> Print the loss function
# Your Code Goes Here
plt.plot(losses)
dl_tf_BDU/1.Intro_TF/ML0120EN-1.2-Exercise-LinearRegression.ipynb
santipuch590/deeplearning-tf
mit
<div align="right"> <a href="#matmul199" class="btn btn-default" data-toggle="collapse">Click here for the solution</a> </div> <div id="matmul199" class="collapse"> ``` plt.plot(losses[:]) ``` </div>
y_vals_pred = y_pred.eval(session=session, feed_dict={x: x_data})

plt.scatter(x_data, y_vals_pred, marker='x', color='blue', label='Predicted')
plt.scatter(x_data, y_data, label='GT', color='red')
plt.legend()
plt.ylabel('Temperature (Fahrenheit)')
plt.xlabel('# Chirps per 15 s')

session.close()
dl_tf_BDU/1.Intro_TF/ML0120EN-1.2-Exercise-LinearRegression.ipynb
santipuch590/deeplearning-tf
mit
Efficient serving <table class="tfo-notebook-buttons" align="left"> <td> <a target="_blank" href="https://www.tensorflow.org/recommenders/examples/efficient_serving"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a> </td> <td> <a target="_blank" href="https://colab.research.google.com/github/tensorflow/recommenders/blob/main/docs/examples/efficient_serving.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a> </td> <td> <a target="_blank" href="https://github.com/tensorflow/recommenders/blob/main/docs/examples/efficient_serving.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a> </td> <td> <a href="https://storage.googleapis.com/tensorflow_docs/recommenders/docs/examples/efficient_serving.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a> </td> </table> Retrieval models are often built to surface a handful of top candidates out of millions or even hundreds of millions of candidates. To be able to react to the user's context and behaviour, they need to be able to do this on the fly, in a matter of milliseconds. Approximate nearest neighbour search (ANN) is the technology that makes this possible. In this tutorial, we'll show how to use ScaNN - a state of the art nearest neighbour retrieval package - to seamlessly scale TFRS retrieval to millions of items. What is ScaNN? ScaNN is a library from Google Research that performs dense vector similarity search at large scale. Given a database of candidate embeddings, ScaNN indexes these embeddings in a manner that allows them to be rapidly searched at inference time. ScaNN uses state of the art vector compression techniques and carefully implemented algorithms to achieve the best speed-accuracy tradeoff. It can greatly outperform brute force search while sacrificing little in terms of accuracy. 
Building a ScaNN-powered model To try out ScaNN in TFRS, we'll build a simple MovieLens retrieval model, just as we did in the basic retrieval tutorial. If you have followed that tutorial, this section will be familiar and can safely be skipped. To start, install TFRS and TensorFlow Datasets:
!pip install -q tensorflow-recommenders
!pip install -q --upgrade tensorflow-datasets
docs/examples/efficient_serving.ipynb
tensorflow/recommenders
apache-2.0
We also need to install scann: it's an optional dependency of TFRS, and so needs to be installed separately.
!pip install -q scann
docs/examples/efficient_serving.ipynb
tensorflow/recommenders
apache-2.0
Set up all the necessary imports.
from typing import Dict, Text

import os
import pprint
import tempfile

import numpy as np
import tensorflow as tf
import tensorflow_datasets as tfds

import tensorflow_recommenders as tfrs
docs/examples/efficient_serving.ipynb
tensorflow/recommenders
apache-2.0
And load the data:
# Load the MovieLens 100K data.
ratings = tfds.load(
    "movielens/100k-ratings",
    split="train"
)

# Get the ratings data.
ratings = (ratings
           # Retain only the fields we need.
           .map(lambda x: {"user_id": x["user_id"], "movie_title": x["movie_title"]})
           # Cache for efficiency.
           .cache(tempfile.NamedTemporaryFile().name)
)

# Get the movies data.
movies = tfds.load("movielens/100k-movies", split="train")
movies = (movies
          # Retain only the fields we need.
          .map(lambda x: x["movie_title"])
          # Cache for efficiency.
          .cache(tempfile.NamedTemporaryFile().name))
docs/examples/efficient_serving.ipynb
tensorflow/recommenders
apache-2.0
Before we can build a model, we need to set up the user and movie vocabularies:
user_ids = ratings.map(lambda x: x["user_id"])

unique_movie_titles = np.unique(np.concatenate(list(movies.batch(1000))))
unique_user_ids = np.unique(np.concatenate(list(user_ids.batch(1000))))
docs/examples/efficient_serving.ipynb
tensorflow/recommenders
apache-2.0
We'll also set up the training and test sets:
tf.random.set_seed(42)
shuffled = ratings.shuffle(100_000, seed=42, reshuffle_each_iteration=False)

train = shuffled.take(80_000)
test = shuffled.skip(80_000).take(20_000)
docs/examples/efficient_serving.ipynb
tensorflow/recommenders
apache-2.0
Model definition Just as in the basic retrieval tutorial, we build a simple two-tower model.
class MovielensModel(tfrs.Model):

  def __init__(self):
    super().__init__()

    embedding_dimension = 32

    # Set up a model for representing movies.
    self.movie_model = tf.keras.Sequential([
      tf.keras.layers.StringLookup(
        vocabulary=unique_movie_titles, mask_token=None),
      # We add an additional embedding to account for unknown tokens.
      tf.keras.layers.Embedding(len(unique_movie_titles) + 1, embedding_dimension)
    ])

    # Set up a model for representing users.
    self.user_model = tf.keras.Sequential([
      tf.keras.layers.StringLookup(
        vocabulary=unique_user_ids, mask_token=None),
      # We add an additional embedding to account for unknown tokens.
      tf.keras.layers.Embedding(len(unique_user_ids) + 1, embedding_dimension)
    ])

    # Set up a task to optimize the model and compute metrics.
    self.task = tfrs.tasks.Retrieval(
      metrics=tfrs.metrics.FactorizedTopK(
        candidates=movies.batch(128).cache().map(self.movie_model)
      )
    )

  def compute_loss(self, features: Dict[Text, tf.Tensor], training=False) -> tf.Tensor:
    # We pick out the user features and pass them into the user model.
    user_embeddings = self.user_model(features["user_id"])
    # And pick out the movie features and pass them into the movie model,
    # getting embeddings back.
    positive_movie_embeddings = self.movie_model(features["movie_title"])

    # The task computes the loss and the metrics.
    return self.task(user_embeddings, positive_movie_embeddings, compute_metrics=not training)
docs/examples/efficient_serving.ipynb
tensorflow/recommenders
apache-2.0
Fitting and evaluation A TFRS model is just a Keras model. We can compile it:
model = MovielensModel()
model.compile(optimizer=tf.keras.optimizers.Adagrad(learning_rate=0.1))
docs/examples/efficient_serving.ipynb
tensorflow/recommenders
apache-2.0
Estimate it:
model.fit(train.batch(8192), epochs=3)
docs/examples/efficient_serving.ipynb
tensorflow/recommenders
apache-2.0
And evaluate it.
model.evaluate(test.batch(8192), return_dict=True)
docs/examples/efficient_serving.ipynb
tensorflow/recommenders
apache-2.0
Approximate prediction The most straightforward way of retrieving top candidates in response to a query is to do it via brute force: compute user-movie scores for all possible movies, sort them, and pick a couple of top recommendations. In TFRS, this is accomplished via the BruteForce layer:
brute_force = tfrs.layers.factorized_top_k.BruteForce(model.user_model)
brute_force.index_from_dataset(
    movies.batch(128).map(lambda title: (title, model.movie_model(title)))
)
docs/examples/efficient_serving.ipynb
tensorflow/recommenders
apache-2.0
Once created and populated with candidates (via the index_from_dataset method), we can call it to get predictions out:
# Get predictions for user 42.
_, titles = brute_force(np.array(["42"]), k=3)

print(f"Top recommendations: {titles[0]}")
docs/examples/efficient_serving.ipynb
tensorflow/recommenders
apache-2.0
On a small dataset of under 1000 movies, this is very fast:
%timeit _, titles = brute_force(np.array(["42"]), k=3)
docs/examples/efficient_serving.ipynb
tensorflow/recommenders
apache-2.0
But what happens if we have more candidates - millions instead of thousands? We can simulate this by indexing all of our movies multiple times:
# Construct a dataset of movies that's 1,000 times larger. We
# do this by adding several million dummy movie titles to the dataset.
lots_of_movies = tf.data.Dataset.concatenate(
    movies.batch(4096),
    movies.batch(4096).repeat(1_000).map(lambda x: tf.zeros_like(x))
)

# We also add lots of dummy embeddings by randomly perturbing
# the estimated embeddings for real movies.
lots_of_movies_embeddings = tf.data.Dataset.concatenate(
    movies.batch(4096).map(model.movie_model),
    movies.batch(4096).repeat(1_000)
      .map(lambda x: model.movie_model(x))
      .map(lambda x: x * tf.random.uniform(tf.shape(x)))
)
docs/examples/efficient_serving.ipynb
tensorflow/recommenders
apache-2.0
We can build a BruteForce index on this larger dataset:
brute_force_lots = tfrs.layers.factorized_top_k.BruteForce()
brute_force_lots.index_from_dataset(
    tf.data.Dataset.zip((lots_of_movies, lots_of_movies_embeddings))
)
docs/examples/efficient_serving.ipynb
tensorflow/recommenders
apache-2.0
The recommendations are still the same
_, titles = brute_force_lots(model.user_model(np.array(["42"])), k=3)

print(f"Top recommendations: {titles[0]}")
docs/examples/efficient_serving.ipynb
tensorflow/recommenders
apache-2.0
But they take much longer. With a candidate set of 1 million movies, brute force prediction becomes quite slow:
%timeit _, titles = brute_force_lots(model.user_model(np.array(["42"])), k=3)
docs/examples/efficient_serving.ipynb
tensorflow/recommenders
apache-2.0
As the number of candidates grows, the amount of time needed grows linearly: with 10 million candidates, serving top candidates would take 250 milliseconds. This is clearly too slow for a live service. This is where approximate mechanisms come in. Using ScaNN in TFRS is accomplished via the tfrs.layers.factorized_top_k.ScaNN layer. It follows the same interface as the other top k layers:
scann = tfrs.layers.factorized_top_k.ScaNN(num_reordering_candidates=100)
scann.index_from_dataset(
    tf.data.Dataset.zip((lots_of_movies, lots_of_movies_embeddings))
)
docs/examples/efficient_serving.ipynb
tensorflow/recommenders
apache-2.0
The recommendations are (approximately!) the same
_, titles = scann(model.user_model(np.array(["42"])), k=3)

print(f"Top recommendations: {titles[0]}")
docs/examples/efficient_serving.ipynb
tensorflow/recommenders
apache-2.0
But they are much, much faster to compute:
%timeit _, titles = scann(model.user_model(np.array(["42"])), k=3)
docs/examples/efficient_serving.ipynb
tensorflow/recommenders
apache-2.0
In this case, we can retrieve the top 3 movies out of a set of ~1 million in around 2 milliseconds: 15 times faster than by computing the best candidates via brute force. The advantage of approximate methods grows even larger for larger datasets. Evaluating the approximation When using approximate top K retrieval mechanisms (such as ScaNN), speed of retrieval often comes at the expense of accuracy. To understand this trade-off, it's important to measure the model's evaluation metrics when using ScaNN, and to compare them with the baseline. Fortunately, TFRS makes this easy. We simply override the metrics on the retrieval task with metrics using ScaNN, re-compile the model, and run evaluation. To make the comparison, let's first run baseline results. We still need to override our metrics to make sure they are using the enlarged candidate set rather than the original set of movies:
# Override the existing streaming candidate source.
model.task.factorized_metrics = tfrs.metrics.FactorizedTopK(
    candidates=lots_of_movies_embeddings
)
# Need to recompile the model for the changes to take effect.
model.compile()

%time baseline_result = model.evaluate(test.batch(8192), return_dict=True, verbose=False)
docs/examples/efficient_serving.ipynb
tensorflow/recommenders
apache-2.0
We can do the same using ScaNN:
model.task.factorized_metrics = tfrs.metrics.FactorizedTopK(
    candidates=scann
)
model.compile()

# We can use a much bigger batch size here because ScaNN evaluation
# is more memory efficient.
%time scann_result = model.evaluate(test.batch(8192), return_dict=True, verbose=False)
docs/examples/efficient_serving.ipynb
tensorflow/recommenders
apache-2.0
ScaNN based evaluation is much, much quicker: it's over ten times faster! This advantage is going to grow even larger for bigger datasets, and so for large datasets it may be prudent to always run ScaNN-based evaluation to improve model development velocity. But how about the results? Fortunately, in this case the results are almost the same:
print(f"Brute force top-100 accuracy: {baseline_result['factorized_top_k/top_100_categorical_accuracy']:.2f}")
print(f"ScaNN top-100 accuracy: {scann_result['factorized_top_k/top_100_categorical_accuracy']:.2f}")
docs/examples/efficient_serving.ipynb
tensorflow/recommenders
apache-2.0
This suggests that on this artificial dataset, there is little loss from the approximation. In general, all approximate methods exhibit speed-accuracy tradeoffs. To understand this in more depth you can check out Erik Bernhardsson's ANN benchmarks. Deploying the approximate model The ScaNN-based model is fully integrated into TensorFlow models, and serving it is as easy as serving any other TensorFlow model. We can save it as a SavedModel object
# We re-index the ScaNN layer to include the user embeddings in the same model.
# This way we can give the saved model raw features and get valid predictions
# back.
scann = tfrs.layers.factorized_top_k.ScaNN(model.user_model, num_reordering_candidates=1000)
scann.index_from_dataset(
    tf.data.Dataset.zip((lots_of_movies, lots_of_movies_embeddings))
)

# Need to call it to set the shapes.
_ = scann(np.array(["42"]))

with tempfile.TemporaryDirectory() as tmp:
  path = os.path.join(tmp, "model")
  tf.saved_model.save(
      scann,
      path,
      options=tf.saved_model.SaveOptions(namespace_whitelist=["Scann"])
  )
  loaded = tf.saved_model.load(path)
docs/examples/efficient_serving.ipynb
tensorflow/recommenders
apache-2.0
and then load it and serve, getting exactly the same results back:
_, titles = loaded(tf.constant(["42"]))

print(f"Top recommendations: {titles[0][:3]}")
docs/examples/efficient_serving.ipynb
tensorflow/recommenders
apache-2.0
The resulting model can be served in any Python service that has TensorFlow and ScaNN installed. It can also be served using a customized version of TensorFlow Serving, available as a Docker container on Docker Hub. You can also build the image yourself from the Dockerfile. Tuning ScaNN Now let's look into tuning our ScaNN layer to get a better performance/accuracy tradeoff. In order to do this effectively, we first need to measure our baseline performance and accuracy. From above, we already have a measurement of our model's latency for processing a single (non-batched) query (although note that a fair amount of this latency is from non-ScaNN components of the model). Now we need to investigate ScaNN's accuracy, which we measure through recall. A recall@k of x% means that if we use brute force to retrieve the true top k neighbors, and compare those results to using ScaNN to also retrieve the top k neighbors, x% of ScaNN's results are in the true brute force results. Let's compute the recall for the current ScaNN searcher. First, we need to generate the brute force, ground truth top-k:
# Process queries in groups of 1000; processing them all at once with brute force
# may lead to out-of-memory errors, because processing a batch of q queries against
# a size-n dataset takes O(nq) space with brute force.
titles_ground_truth = tf.concat([
  brute_force_lots(queries, k=10)[1] for queries in
  test.batch(1000).map(lambda x: model.user_model(x["user_id"]))
], axis=0)
docs/examples/efficient_serving.ipynb
tensorflow/recommenders
apache-2.0
Our variable titles_ground_truth now contains the top-10 movie recommendations returned by brute-force retrieval. Now we can compute the same recommendations when using ScaNN:
# Get all user_id's as a 1d tensor of strings
test_flat = np.concatenate(
    list(test.map(lambda x: x["user_id"]).batch(1000).as_numpy_iterator()), axis=0)

# ScaNN is much more memory efficient and has no problem processing the whole
# batch of 20000 queries at once.
_, titles = scann(test_flat, k=10)
docs/examples/efficient_serving.ipynb
tensorflow/recommenders
apache-2.0
Next, we define our function that computes recall. For each query, it counts how many results are in the intersection of the brute force and the ScaNN results and divides this by the number of brute force results. The average of this quantity over all queries is our recall.
def compute_recall(ground_truth, approx_results):
    return np.mean([
        len(np.intersect1d(truth, approx)) / len(truth)
        for truth, approx in zip(ground_truth, approx_results)
    ])
docs/examples/efficient_serving.ipynb
tensorflow/recommenders
apache-2.0
This gives us baseline recall@10 with the current ScaNN config:
print(f"Recall: {compute_recall(titles_ground_truth, titles):.3f}")
docs/examples/efficient_serving.ipynb
tensorflow/recommenders
apache-2.0
We can also measure the baseline latency:
%timeit -n 1000 scann(np.array(["42"]), k=10)
docs/examples/efficient_serving.ipynb
tensorflow/recommenders
apache-2.0
Let's see if we can do better!

To do this, we need a model of how ScaNN's tuning knobs affect performance. Our current model uses ScaNN's tree-AH algorithm. This algorithm partitions the database of embeddings (the "tree") and then scores the most promising of these partitions using AH, which is a highly optimized approximate distance computation routine.

The default parameters for TensorFlow Recommenders' ScaNN Keras layer set num_leaves=100 and num_leaves_to_search=10. This means our database is partitioned into 100 disjoint subsets, and the 10 most promising of these partitions are scored with AH. This means 10/100=10% of the dataset is being searched with AH.

If we have, say, num_leaves=1000 and num_leaves_to_search=100, we would also be searching 10% of the database with AH. However, in comparison to the previous setting, the 10% we search will contain higher-quality candidates, because a higher num_leaves allows us to make finer-grained decisions about what parts of the dataset are worth searching.

It's no surprise then that with num_leaves=1000 and num_leaves_to_search=100 we get significantly higher recall:
scann2 = tfrs.layers.factorized_top_k.ScaNN(
    model.user_model,
    num_leaves=1000,
    num_leaves_to_search=100,
    num_reordering_candidates=1000)
scann2.index_from_dataset(
    tf.data.Dataset.zip((lots_of_movies, lots_of_movies_embeddings))
)

_, titles2 = scann2(test_flat, k=10)

print(f"Recall: {compute_recall(titles_ground_truth, titles2):.3f}")
docs/examples/efficient_serving.ipynb
tensorflow/recommenders
apache-2.0
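ScaNN's actual tree-AH implementation is highly optimized, but the partition-then-score idea behind num_leaves and num_leaves_to_search can be sketched in plain NumPy. This is a toy illustration with made-up data and exact dot-product scoring instead of AH; the names (`toy_tree_search`, etc.) are hypothetical and not part of any real API:

```python
import numpy as np

rng = np.random.default_rng(0)
db = rng.normal(size=(1000, 8))               # toy database of embeddings
db /= np.linalg.norm(db, axis=1, keepdims=True)

# "Tree" step: assign each vector to its most similar of num_leaves centroids
num_leaves, num_leaves_to_search = 20, 5
centroids = db[rng.choice(len(db), num_leaves, replace=False)]
assignment = np.argmax(db @ centroids.T, axis=1)

def toy_tree_search(query, k=10):
    # Pick the most promising partitions by centroid similarity...
    leaves = np.argsort(-(centroids @ query))[:num_leaves_to_search]
    candidates = np.where(np.isin(assignment, leaves))[0]
    # ...then score only those candidates; real ScaNN scores them with AH
    # (approximate distances) rather than exact dot products
    scores = db[candidates] @ query
    return candidates[np.argsort(-scores)[:k]]

result = toy_tree_search(db[0], k=10)
```

With num_leaves_to_search/num_leaves = 5/20, only about a quarter of the database is ever scored per query, which is exactly the latency lever the real layer's parameters control.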
However, as a tradeoff, our latency has also increased. This is because the partitioning step has gotten more expensive; scann picks the top 10 of 100 partitions while scann2 picks the top 100 of 1000 partitions. The latter can be more expensive because it involves looking at 10 times as many partitions.
%timeit -n 1000 scann2(np.array(["42"]), k=10)
docs/examples/efficient_serving.ipynb
tensorflow/recommenders
apache-2.0
In general, tuning ScaNN search is about picking the right tradeoffs. Each individual parameter change generally won't make search both faster and more accurate; our goal is to tune the parameters to optimally trade off between these two conflicting goals. In our case, scann2 significantly improved recall over scann at some cost in latency. Can we dial back some other knobs to cut down on latency, while preserving most of our recall advantage? Let's try searching 70/1000=7% of the dataset with AH, and only rescoring the final 400 candidates:
scann3 = tfrs.layers.factorized_top_k.ScaNN(
    model.user_model,
    num_leaves=1000,
    num_leaves_to_search=70,
    num_reordering_candidates=400)
scann3.index_from_dataset(
    tf.data.Dataset.zip((lots_of_movies, lots_of_movies_embeddings))
)

_, titles3 = scann3(test_flat, k=10)

print(f"Recall: {compute_recall(titles_ground_truth, titles3):.3f}")
docs/examples/efficient_serving.ipynb
tensorflow/recommenders
apache-2.0
scann3 delivers about a 3% absolute recall gain over scann while also delivering lower latency:
%timeit -n 1000 scann3(np.array(["42"]), k=10)
docs/examples/efficient_serving.ipynb
tensorflow/recommenders
apache-2.0
ๅค‰ๆ•ฐๅใจใƒ‡ใƒผใ‚ฟใฎๅ†…ๅฎนใƒกใƒข CENSUS: ๅธ‚ๅŒบ็”บๆ‘ใ‚ณใƒผใƒ‰(9ๆก) P: ๆˆ็ด„ไพกๆ ผ S: ๅฐ‚ๆœ‰้ข็ฉ L: ๅœŸๅœฐ้ข็ฉ R: ้ƒจๅฑ‹ๆ•ฐ RW: ๅ‰้ข้“่ทฏๅน…ๅ“ก CY: ๅปบ็ฏ‰ๅนด A: ๅปบ็ฏ‰ๅพŒๅนดๆ•ฐ(ๆˆ็ด„ๆ™‚) TS: ๆœ€ๅฏ„้ง…ใพใงใฎ่ท้›ข TT: ๆฑไบฌ้ง…ใพใงใฎๆ™‚้–“ ACC: ใ‚ฟใƒผใƒŸใƒŠใƒซ้ง…ใพใงใฎๆ™‚้–“ WOOD: ๆœจ้€ ใƒ€ใƒŸใƒผ SOUTH: ๅ—ๅ‘ใใƒ€ใƒŸใƒผ RSD: ไฝๅฑ…็ณปๅœฐๅŸŸใƒ€ใƒŸใƒผ CMD: ๅ•†ๆฅญ็ณปๅœฐๅŸŸใƒ€ใƒŸใƒผ IDD: ๅทฅๆฅญ็ณปๅœฐๅŸŸใƒ€ใƒŸใƒผ FAR: ๅปบใบใ„็އ FLR: ๅฎน็ฉ็އ TDQ: ๆˆ็ด„ๆ™‚็‚น(ๅ››ๅŠๆœŸ) X: ็ทฏๅบฆ Y: ็ตŒๅบฆ CITY_CODE: ๅธ‚ๅŒบ็”บๆ‘ใ‚ณใƒผใƒ‰(5ๆก) CITY_NAME: ๅธ‚ๅŒบ็”บๆ‘ๅ BLOCK: ๅœฐๅŸŸใƒ–ใƒญใƒƒใ‚ฏๅ
data = pd.read_csv("TokyoSingle.csv")
data = data.dropna()

# Map 5-digit city codes to ward names
city_names = {
    13101: '01千代田区', 13102: '02中央区', 13103: '03港区',
    13104: '04新宿区', 13105: '05文京区', 13106: '06台東区',
    13107: '07墨田区', 13108: '08江東区', 13109: '09品川区',
    13110: '10目黒区', 13111: '11大田区', 13112: '12世田谷区',
    13113: '13渋谷区', 13114: '14中野区', 13115: '15杉並区',
    13116: '16豊島区', 13117: '17北区', 13118: '18荒川区',
    13119: '19板橋区', 13120: '20練馬区', 13121: '21足立区',
    13122: '22葛飾区', 13123: '23江戸川区',
}
CITY_NAME = data['CITY_CODE'].map(city_names)

# Make Japanese block names by grouping the wards into three areas
block_names = {}
for code in (13101, 13102, 13103, 13104, 13109, 13110, 13111, 13112, 13113):
    block_names[code] = '01都心・城南'
for code in (13105, 13106, 13114, 13115, 13116, 13117, 13119, 13120):
    block_names[code] = '02城西・城北'
for code in (13107, 13108, 13118, 13121, 13122, 13123):
    block_names[code] = '03城東'
BLOCK = data['CITY_CODE'].map(block_names)

names = list(data.columns) + ['CITY_NAME', 'BLOCK']
data = pd.concat((data, CITY_NAME, BLOCK), axis=1)
data.columns = names
ไธๅ‹•็”ฃ/model2_1.ipynb
NlGG/Projects
mit
ๅธ‚ๅŒบ็”บๆ‘ๅˆฅใฎไปถๆ•ฐใ‚’้›†่จˆ
print(data['CITY_NAME'].value_counts())

vars = ['P', 'S', 'L', 'R', 'RW', 'A', 'TS', 'TT', 'WOOD', 'SOUTH', 'CMD', 'IDD', 'FAR', 'X', 'Y']
eq = fml_build(vars)
y, X = dmatrices(eq, data=data, return_type='dataframe')
CITY_NAME = pd.get_dummies(data['CITY_NAME'])
TDQ = pd.get_dummies(data['TDQ'])
X = pd.concat((X, CITY_NAME, TDQ), axis=1)
datas = pd.concat((y, X), axis=1)
datas = datas[datas['12世田谷区'] == 1][0:5000]


class CAR(Chain):
    def __init__(self, unit1, unit2, unit3, col_num):
        self.unit1 = unit1
        self.unit2 = unit2
        self.unit3 = unit3
        super(CAR, self).__init__(
            l1=L.Linear(col_num, unit1),
            l2=L.Linear(self.unit1, self.unit1),
            l3=L.Linear(self.unit1, self.unit2),
            l4=L.Linear(self.unit2, self.unit3),
            l5=L.Linear(self.unit3, self.unit3),
            l6=L.Linear(self.unit3, 1),
        )

    def __call__(self, x, y):
        fv = self.fwd(x, y)
        loss = F.mean_squared_error(fv, y)
        return loss

    def fwd(self, x, y):
        h1 = F.sigmoid(self.l1(x))
        h2 = F.sigmoid(self.l2(h1))
        h3 = F.sigmoid(self.l3(h2))
        h4 = F.sigmoid(self.l4(h3))
        h5 = F.sigmoid(self.l5(h4))
        h6 = self.l6(h5)
        return h6


class DLmodel(object):
    def __init__(self, data, vars, bs=200, n=1000):
        self.vars = vars
        eq = fml_build(vars)
        y, X = dmatrices(eq, data=datas, return_type='dataframe')
        self.y_in = y[:-n]
        self.X_in = X[:-n]
        self.y_ex = y[-n:]
        self.X_ex = X[-n:]
        self.logy_in = np.log(self.y_in)
        self.logy_ex = np.log(self.y_ex)
        self.bs = bs

    def DL(self, ite=100, bs=200, add=False):
        y_in = np.array(self.y_in, dtype='float32')
        X_in = np.array(self.X_in, dtype='float32')
        y = Variable(y_in)
        x = Variable(X_in)
        num, col_num = X_in.shape
        if add is False:
            self.model1 = CAR(13, 13, 3, col_num)
        optimizer = optimizers.Adam()
        optimizer.setup(self.model1)
        loss_val = 100000000
        for j in range(ite + 10000):
            sffindx = np.random.permutation(num)
            for i in range(0, num, bs):
                x = Variable(X_in[sffindx[i:(i+bs) if (i+bs) < num else num]])
                y = Variable(y_in[sffindx[i:(i+bs) if (i+bs) < num else num]])
                self.model1.zerograds()
                loss = self.model1(x, y)
                loss.backward()
                optimizer.update()
                if loss_val >= loss.data:
                    loss_val = loss.data
            if j > ite:
                if loss_val >= loss.data:
                    loss_val = loss.data
                print('epoch:', j)
                print('train mean loss={}'.format(loss_val))
                print(' - - - - - - - - - ')
                break
            if j % 1000 == 0:
                print('epoch:', j)
                print('train mean loss={}'.format(loss_val))
                print(' - - - - - - - - - ')

    def predict(self):
        y_ex = np.array(self.y_ex, dtype='float32').reshape(len(self.y_ex))
        X_ex = np.array(self.X_ex, dtype='float32')
        X_ex = Variable(X_ex)
        resid_pred = self.model1.fwd(X_ex, X_ex).data
        print(resid_pred[:10])
        self.pred = resid_pred
        self.error = np.array(y_ex - self.pred.reshape(len(self.pred),))[0]

    def compare(self):
        plt.hist(self.error)


vars = ['P', 'S', 'L', 'R', 'RW', 'A', 'TS', 'TT', 'WOOD', 'SOUTH', 'CMD', 'IDD', 'FAR']
#vars += vars + list(TDQ.columns)
model = DLmodel(datas, vars)
model.DL(ite=20000, bs=200)
model.DL(ite=20000, bs=200, add=True)
model.predict()
ไธๅ‹•็”ฃ/model2_1.ipynb
NlGG/Projects
mit
้’ใŒOLSใฎ่ชคๅทฎใ€็ท‘ใŒOLSใจๆทฑๅฑคๅญฆ็ฟ’ใ‚’็ต„ใฟๅˆใ‚ใ›ใŸ่ชคๅทฎใ€‚
model.compare()

print(np.mean(model.error1))
print(np.mean(model.error2))
print(np.mean(np.abs(model.error1)))
print(np.mean(np.abs(model.error2)))
print(max(np.abs(model.error1)))
print(max(np.abs(model.error2)))
print(np.var(model.error1))
print(np.var(model.error2))

fig = plt.figure()
ax = fig.add_subplot(111)
errors = [model.error1, model.error2]
bp = ax.boxplot(errors)
plt.grid()
plt.ylim([-5000, 5000])
plt.title('分布の箱ひげ図')
plt.show()

X = model.X_ex['X'].values
Y = model.X_ex['Y'].values
e = model.error2

from mpl_toolkits.mplot3d import Axes3D

# Scatter plot of the errors over latitude/longitude
fig = plt.figure()
ax = Axes3D(fig)
ax.scatter3D(X, Y, e)
plt.show()

# Plot the surface. The triangles in parameter space determine which x, y, z
# points are connected by an edge.
fig = plt.figure()
ax = fig.add_subplot(1, 2, 1, projection='3d')
ax.plot_trisurf(X, Y, e)
ax.set_zlim(-1, 1)
plt.show()
ไธๅ‹•็”ฃ/model2_1.ipynb
NlGG/Projects
mit
Sawtooth map

Let's examine a simple system where each element is derived from the previous one by the following rule:

$$x_{n+1}=\{2 x_n\}$$

where the operator $\{\cdot\}$ means taking the fractional part of a number.
from math import trunc

assert trunc(1.5) == 1.0
assert trunc(12.59) == 12.0
assert trunc(1) == 1.0

sawtooth = lambda x: round(2*x - trunc(2*x), 8)
sawtooth_borders = [pd.DataFrame([(0, 0), (0.5, 1)]),
                    pd.DataFrame([(0.5, 0), (1, 1)])]

for x0 in [0.4, 0.41, 0.42, 0.43]:
    cobweb_plot(take(iterator(sawtooth, x0=x0), n=30, skip=500), [0, 1], *sawtooth_borders)
Dynamical chaos.ipynb
schmooser/physics
mit
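The defining property of this map — sensitive dependence on initial conditions — can also be seen without any plotting: two orbits that start $10^{-9}$ apart separate at a rate of roughly 2 per step. A small self-contained sketch (using `int()` for the integer part, independent of the helpers above):

```python
def sawtooth(x):
    # fractional part of 2*x
    return 2*x - int(2*x)

def orbit(x0, n):
    # the orbit x0, f(x0), f(f(x0)), ...
    xs = [x0]
    for _ in range(n):
        xs.append(sawtooth(xs[-1]))
    return xs

a = orbit(0.4, 25)
b = orbit(0.4 + 1e-9, 25)

# the gap between the two orbits roughly doubles on every step,
# so a 1e-9 perturbation grows to about 1e-3 after 20 iterations
gap0 = abs(a[0] - b[0])
gap20 = abs(a[20] - b[20])
```

This exponential separation (Lyapunov exponent $\ln 2$) is why the cobweb plots above look so different for nearly identical starting points.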
Logistic map

Here we have a very simple equation:

$$x_{n+1} = k x_n (1 - x_n)$$

where $k$ is some fixed constant.
logistic = lambda k, x: k*x*(1-x)
limits = [0, 1]

for k in [2.8, 3.2, 3.5, 3.9]:
    l = lambda x: logistic(k, x)
    cobweb_plot(take(iterator(l, x0=0.1), n=30, skip=500), limits, 'plot', boundary(l, limits))

# let's plot the bifurcation diagram
dots = []
for k in linspace(2.5, 4, 0.001):
    for dot in set(take(iterator(lambda x: logistic(k, x), x0=0.5), n=50, skip=500)):
        dots.append((k, dot))

df = pd.DataFrame(dots, columns=('k', 'xs'))
df.plot(x='k', y='xs', kind='scatter', style='.', figsize=(16, 12), s=1,
        xlim=[2.5, 4], ylim=[0, 1])
Dynamical chaos.ipynb
schmooser/physics
mit
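The period-doubling cascade visible in the bifurcation diagram can also be checked numerically: after discarding a transient, the orbit settles onto 1, 2, or 4 distinct values depending on $k$. A minimal sketch, independent of the plotting helpers:

```python
def logistic(k, x):
    return k * x * (1 - x)

def attractor(k, x0=0.5, skip=1000, n=16):
    # discard the transient, then collect the values the orbit cycles through
    x = x0
    for _ in range(skip):
        x = logistic(k, x)
    seen = set()
    for _ in range(n):
        x = logistic(k, x)
        seen.add(round(x, 6))
    return sorted(seen)

# k=2.8: stable fixed point at 1 - 1/k; k=3.2: 2-cycle; k=3.5: 4-cycle
cycle_lengths = {k: len(attractor(k)) for k in (2.8, 3.2, 3.5)}
```

For $k = 3.9$ the same function returns many distinct values: the orbit never settles into a short cycle, which is the chaotic band on the right of the diagram.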
Define A Function, f(x)
# Define a function that outputs x multiplied by a random number
# drawn from a normal distribution
def f(x):
    return x * np.random.normal(size=1)[0]
mathematics/argmin_and_argmax.ipynb
tpin3694/tpin3694.github.io
mit
Create Some Values Of x
# Create some values of x
xs = [1, 2, 3, 4, 5, 6]
mathematics/argmin_and_argmax.ipynb
tpin3694/tpin3694.github.io
mit
Find The Argmin Of f(x)
# Define argmin, which:
def argmin(f, xs):
    # applies f on all the x's,
    data = [f(x) for x in xs]
    # finds the index of the smallest output of f(x),
    index_of_min = data.index(min(data))
    # and returns the x that produced that output
    return xs[index_of_min]

# Run the argmin function
argmin(f, xs)
mathematics/argmin_and_argmax.ipynb
tpin3694/tpin3694.github.io
mit
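The notebook's title mentions argmax as well; a symmetric definition (an addition for completeness, not in the original) just swaps `min` for `max`:

```python
# Define argmax, mirroring the argmin above
def argmax(f, xs):
    # Applies f on all the x's
    data = [f(x) for x in xs]
    # Finds the index of the largest output of f(x)
    index_of_max = data.index(max(data))
    # Returns the x that produced that output
    return xs[index_of_max]

# With a deterministic function the result is predictable:
# x*x over [-3, -1, 0, 2, 3] is largest at x = -3 (ties go to the first maximizer)
best = argmax(lambda x: x * x, [-3, -1, 0, 2, 3])
```

Note that with the random `f` above, argmin and argmax give different answers on every run, which is what the check below illustrates.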
Check Our Results
print('x', '|', 'f(x)')
print('--------------')
for x in xs:
    print(x, '|', f(x))
mathematics/argmin_and_argmax.ipynb
tpin3694/tpin3694.github.io
mit
We use experimental absorption coefficients of crystalline silicon and assume that the absorptivity follows the Beer-Lambert law. We also assume that every absorbed photon is converted into an electron. In this case, having a 100% reflector on the back side of the silicon can be thought of as doubling the thickness of the silicon substrate.

Calculation
%matplotlib inline
from pypvcell.photocurrent import conv_abs_to_qe, calc_jsc
from pypvcell.illumination import Illumination
from pypvcell.spectrum import Spectrum
import numpy as np
import matplotlib.pyplot as plt

abs_file = "/Users/kanhua/Dropbox/Programming/pypvcell/legacy/si_alpha.csv"
si_alpha = np.loadtxt(abs_file, delimiter=',')
si_alpha_sp = Spectrum(si_alpha[:, 0], si_alpha[:, 1], 'm')

layer_t = np.logspace(-8, -3, num=100)
jsc_baseline = np.zeros(layer_t.shape)
jsc_full_r = np.zeros(layer_t.shape)
it = np.nditer(layer_t, flags=['f_index'])
ill = Illumination("AM1.5g")


def filter_spec(ill):
    ill_a = ill.get_spectrum(to_x_unit='eV', to_photon_flux=True)
    ill_a = ill_a[:, ill_a[0, :] > 1.1]
    ill_a = ill_a[:, ill_a[0, :] < 1.42]
    ill_a[1, 1] = 0
    return Spectrum(ill_a[0, :], ill_a[1, :], 'eV', y_unit='m**-2',
                    is_spec_density=True, is_photon_flux=False)

#ill=filter_spec(ill)

while not it.finished:
    t = it[0]  # thickness of the Si layer
    qe = conv_abs_to_qe(si_alpha_sp, t)
    jsc_baseline[it.index] = calc_jsc(ill, qe)
    # Assume 100% reflection on the back side, essentially doubling the
    # thickness of silicon
    qe_full_r = conv_abs_to_qe(si_alpha_sp, t*2)
    jsc_full_r[it.index] = calc_jsc(ill, qe_full_r)
    it.iternext()
it.reset()
legacy/Enhancement in Jsc of back reflector.ipynb
kanhua/pypvcell
apache-2.0
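The qualitative result follows directly from the Beer-Lambert absorptivity $A(t) = 1 - e^{-\alpha t}$: doubling the path length helps a lot when $\alpha t \ll 1$ and almost not at all when $\alpha t \gg 1$. A quick sketch with a single illustrative absorption coefficient (the value of `alpha` here is made up for illustration, not taken from the si_alpha data):

```python
import numpy as np

alpha = 1e4                               # 1/m, hypothetical absorption coefficient
t = np.logspace(-8, -3, num=50)           # substrate thicknesses in meters

single_pass = 1 - np.exp(-alpha * t)      # absorptivity without a mirror
double_pass = 1 - np.exp(-alpha * 2 * t)  # a perfect back mirror doubles the path

# algebraically gain = 1 + exp(-alpha*t): it approaches 2 in the thin
# (optically transparent) limit and 1 once the substrate is opaque
gain = double_pass / single_pass
```

This single-exponential picture already reproduces the shape of the curves computed below with the full wavelength-dependent data.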
Photocurrent with and without the back reflector
plt.semilogx(layer_t*1e6, jsc_baseline, label="Si")
plt.semilogx(layer_t*1e6, jsc_full_r, label="Si+100% mirror")
plt.xlabel("thickness of Si substrate (um)")
plt.ylabel("Jsc (A/m^2)")
plt.legend(loc="best")
legacy/Enhancement in Jsc of back reflector.ipynb
kanhua/pypvcell
apache-2.0
Normalize the Jsc(Si+mirror) by Jsc(Si only)
plt.semilogx(layer_t*1e6, jsc_full_r/jsc_baseline)
plt.xlabel("thickness of Si substrate (um)")
plt.ylabel("Normalized Jsc enhancement")
plt.savefig("jsc_enhancement.pdf")
plt.show()
legacy/Enhancement in Jsc of back reflector.ipynb
kanhua/pypvcell
apache-2.0
We can see that the back reflector can be very effective when the silicon substrate is thin (< 1 um). Silicon substrates thicker than about 10 um do not benefit much from this structure. This is the reason that photonic or plasmonic structures are useful for thin-film or ultra-thin-film silicon cells, but not for conventional bulk crystalline silicon cells.
# more detailed investigation
plt.semilogx(layer_t*1e6, jsc_full_r/jsc_baseline)
plt.xlabel("thickness of Si substrate (um)")
plt.ylabel("Jsc enhancement (2x)")
plt.xlim([100, 1000])
plt.ylim([1.0, 1.5])
plt.show()
legacy/Enhancement in Jsc of back reflector.ipynb
kanhua/pypvcell
apache-2.0
More audacious assumption Assume that somehow we have a novel reflector that can increase the optical absorption length by 10 times.
while not it.finished:
    t = it[0]  # thickness of the Si layer
    qe = conv_abs_to_qe(si_alpha_sp, t)
    jsc_baseline[it.index] = calc_jsc(Illumination("AM1.5g"), qe)
    # Assume a novel reflector that increases the optical absorption length
    # by 10 times
    qe_full_r = conv_abs_to_qe(si_alpha_sp, t*10)
    jsc_full_r[it.index] = calc_jsc(Illumination("AM1.5g"), qe_full_r)
    it.iternext()
it.reset()

plt.semilogx(layer_t*1e6, jsc_full_r/jsc_baseline)
plt.xlabel("thickness of Si substrate (um)")
plt.ylabel("Jsc enhancement (10x)")
plt.show()
legacy/Enhancement in Jsc of back reflector.ipynb
kanhua/pypvcell
apache-2.0
repo_path is the path to a clone of swcarpentry/make-novice
repo_path = os.path.join(
    home,
    'Dropbox/spikes/make-novice',
)
assert os.path.exists(repo_path)
content/posts/makefile-tutorial/makefile_tutorial_0.ipynb
dm-wyncode/zipped-code
mit
paths are the paths to child directories in a clone of swcarpentry/make-novice
paths = (
    'code',
    'data',
)
paths = (
    code,
    data,
) = [os.path.join(repo_path, path) for path in paths]
assert all(os.path.exists(path) for path in paths)
content/posts/makefile-tutorial/makefile_tutorial_0.ipynb
dm-wyncode/zipped-code
mit
Begin the tutorial. Use the run magic to execute the Python script wordcount.py. Variables prefixed with $ are expanded to the values of the corresponding Python variables in this notebook.
run $code/wordcount.py $data/books/isles.txt $repo_path/isles.dat
content/posts/makefile-tutorial/makefile_tutorial_0.ipynb
dm-wyncode/zipped-code
mit
Use a shell command to examine the first 5 lines of the output file from running wordcount.py.
!head -5 $repo_path/isles.dat
content/posts/makefile-tutorial/makefile_tutorial_0.ipynb
dm-wyncode/zipped-code
mit
We can see that the file consists of one row per word. Each row shows the word itself, the number of occurrences of that word, and the number of occurrences as a percentage of the total number of words in the text file.
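The script itself isn't shown here, but the row format described above — word, count, percentage of the total — is easy to sketch. This is a hypothetical re-implementation for illustration, not the actual wordcount.py:

```python
from collections import Counter

def word_counts(text):
    # one row per word: the word, its count, and its share of all words,
    # most frequent first
    words = text.lower().split()
    counts = Counter(words)
    total = sum(counts.values())
    return [(w, c, 100.0 * c / total) for w, c in counts.most_common()]

rows = word_counts("the eagle of the ninth")
```

The real script also writes these rows to a .dat file, which is the output we inspected with head above.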
run $code/wordcount.py $data/books/abyss.txt $repo_path/abyss.dat

!head -5 $repo_path/abyss.dat
content/posts/makefile-tutorial/makefile_tutorial_0.ipynb
dm-wyncode/zipped-code
mit