markdown | code | output | license | path | repo_name |
|---|---|---|---|---|---|
Notice how this notebook has an empty code cell at the end: | show_plain_md('test_files/hello_world.ipynb') | ```python
#meta:show_steps=start,train
print('hello world')
```
hello world
```python
```
| Apache-2.0 | nbs/mdx.ipynb | outerbounds/nbdoc |
With `RmEmptyCode` these empty code cells are stripped from the markdown: | c, _ = run_preprocessor([RmEmptyCode], 'test_files/hello_world.ipynb', display_results=True)
assert len(re.findall('```python',c)) == 1 | ```python
#meta:show_steps=start,train
print('hello world')
```
<CodeOutputBlock lang="python">
```
hello world
```
</CodeOutputBlock>
| Apache-2.0 | nbs/mdx.ipynb | outerbounds/nbdoc |
Truncate Metaflow Output - | #export
class MetaflowTruncate(Preprocessor):
"""Remove the preamble and timestamp from Metaflow output."""
_re_pre = re.compile(r'([\s\S]*Metaflow[\s\S]*Validating[\s\S]+The graph[\s\S]+)(\n[\s\S]+Workflow starting[\s\S]+)')
_re_time = re.compile('\d{4}-\d{2}-\d{2}\s\d{2}\:\d{2}\:\d{2}.\d{3}')
def... | _____no_output_____ | Apache-2.0 | nbs/mdx.ipynb | outerbounds/nbdoc |
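To see what the timestamp-stripping part does in isolation, here is a hedged standalone sketch (not the library's exact code) of a pattern like `_re_time` above, applied with `re.sub`:

```python
import re

# A pattern like `_re_time` above: matches log timestamps of the
# form "2022-02-15 14:01:14.810" (dot escaped here for strictness).
_re_time = re.compile(r'\d{4}-\d{2}-\d{2}\s\d{2}:\d{2}:\d{2}\.\d{3}')

def strip_timestamps(text: str) -> str:
    """Remove log timestamps from every line of `text`."""
    return _re_time.sub('', text)

print(strip_timestamps('2022-02-15 14:01:14.810 Task is starting.'))
```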
When you run a Metaflow flow, you are presented with a fair amount of boilerplate before the job starts running that is not necessary to show in the documentation: | show_plain_md('test_files/run_flow.ipynb') | ```python
#meta:show_steps=start
!python myflow.py run
```
[35m[1mMetaflow 2.5.3[0m[35m[22m executing [0m[31m[1mMyFlow[0m[35m[22m[0m[35m[22m for [0m[31m[1muser:hamel[0m[35m[22m[K[0m[35m[22m[0m
[35m[22mValidating your flow...[K[0m[35m[22m[0m
[32m[1m The graph looks good!... | Apache-2.0 | nbs/mdx.ipynb | outerbounds/nbdoc |
We don't need to see the beginning part that validates the graph, and we don't need the time-stamps either. We can remove these with the `MetaflowTruncate` preprocessor: | c, _ = run_preprocessor([MetaflowTruncate], 'test_files/run_flow.ipynb', display_results=True)
assert 'Validating your flow...' not in c | ```python
#meta:show_steps=start
!python myflow.py run
```
<CodeOutputBlock lang="python">
```
[35m [0m[1mWorkflow starting (run-id 1647304124981100):[0m
[35m [0m[32m[1647304124981100/start/1 (pid 41951)] [0m[1mTask is starting.[0m
[35m [0m[32m[1647304124981100/start/1 (pid 41951)] [0m[22mthis ... | Apache-2.0 | nbs/mdx.ipynb | outerbounds/nbdoc |
Turn Metadata into Cell Tags - | #export
class UpdateTags(Preprocessor):
"""
Create cell tags based upon comment `#cell_meta:tags=<tag>`
"""
def preprocess_cell(self, cell, resources, index):
root = cell.metadata.get('nbdoc', {})
tags = root.get('tags', root.get('tag')) # allow the singular also
if tags: ce... | _____no_output_____ | Apache-2.0 | nbs/mdx.ipynb | outerbounds/nbdoc |
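As a rough illustration of the comment-to-tag mapping (a simplified sketch, not the preprocessor's actual implementation), a `#cell_meta:tags=<tag>` comment can be parsed like this:

```python
import re

# Accept both `tags=` and the singular `tag=`, as the preprocessor does.
_re_tags = re.compile(r'^#cell_meta:tags?=(\S+)', re.MULTILINE)

def extract_tags(source: str) -> list:
    """Return the list of tags declared in a cell's source, if any."""
    m = _re_tags.search(source)
    return m.group(1).split(',') if m else []

print(extract_tags("#cell_meta:tags=remove_output\nprint('hi')"))
```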
Consider this Python notebook prior to processing. The comments can be used to configure the visibility of cells: - `cell_meta:tags=remove_output` removes just the output - `cell_meta:tags=remove_input` removes just the input - `cell_meta:tags=remove_cell` removes both the input and the output. Note that you can use ... | show_plain_md('test_files/visibility.ipynb') | # Configuring Cell Visibility
#### Cell with the comment `#cell_meta:tag=remove_output`
```
#cell_meta:tag=remove_output
print('the output is removed, so you can only see the print statement.')
```
the output is removed, so you can only see the print statement.
#### Cell with the comment `#cell_meta:tag=remov... | Apache-2.0 | nbs/mdx.ipynb | outerbounds/nbdoc |
`UpdateTags` is meant to be used with `InjectMeta` and `TagRemovePreprocessor` to configure the visibility of cells in rendered docs. Here you can see what the notebook looks like after pre-processing: | # Configure an exporter from scratch
_test_file = 'test_files/visibility.ipynb'
c = Config()
c.TagRemovePreprocessor.remove_cell_tags = ("remove_cell",)
c.TagRemovePreprocessor.remove_all_outputs_tags = ('remove_output',)
c.TagRemovePreprocessor.remove_input_tags = ('remove_input',)
c.MarkdownExporter.preprocessors = [... | # Configuring Cell Visibility
#### Cell with the comment `#cell_meta:tag=remove_output`
```
#cell_meta:tag=remove_output
print('the output is removed, so you can only see the print statement.')
```
#### Cell with the comment `#cell_meta:tag=remove_input`
hello, you cannot see the code that created me.
#### C... | Apache-2.0 | nbs/mdx.ipynb | outerbounds/nbdoc |
Selecting Metaflow Steps In Output - | #export
class MetaflowSelectSteps(Preprocessor):
"""
Hide Metaflow steps in output based on cell metadata.
"""
re_step = r'.*\d+/{0}/\d+\s\(pid\s\d+\).*'
def preprocess_cell(self, cell, resources, index):
root = cell.metadata.get('nbdoc', {})
steps = root.get('show_steps', root.... | _____no_output_____ | Apache-2.0 | nbs/mdx.ipynb | outerbounds/nbdoc |
`MetaflowSelectSteps` is meant to be used with `InjectMeta` to only show specific steps in the output logs from Metaflow. For example, if you want to show only the `start` and `train` steps of your flow, you would annotate your cell with the following pattern: `cell_meta:show_steps=`. Note that `show_step` and `show_ste... | c, _ = run_preprocessor([InjectMeta, MetaflowSelectSteps],
'test_files/run_flow_showstep.ipynb',
display_results=True)
assert 'end' not in c | ```
#cell_meta:show_steps=start,train
!python myflow.py run
```
<CodeOutputBlock lang="">
```
...
[35m2022-02-15 14:01:14.810 [0m[32m[1644962474801237/start/1 (pid 46758)] [0m[1mTask is starting.[0m
[35m2022-02-15 14:01:15.433 [0m[32m[1644962474801237/start/1 (pid 46758)] [0m[22mthis is the start[... | Apache-2.0 | nbs/mdx.ipynb | outerbounds/nbdoc |
Hide Specific Lines of Output With Keywords - | #export
class FilterOutput(Preprocessor):
"""
Hide Output Based on Keywords.
"""
def preprocess_cell(self, cell, resources, index):
root = cell.metadata.get('nbdoc', {})
words = root.get('filter_words', root.get('filter_word'))
if 'outputs' in cell and words:
_re = f"... | _____no_output_____ | Apache-2.0 | nbs/mdx.ipynb | outerbounds/nbdoc |
If we want to exclude output with certain keywords, we can use the `meta:filter_words` comment. For example, if we wanted to ignore all output that contains the text `FutureWarning` or `MultiIndex`, we can use the comment `meta:filter_words=FutureWarning,MultiIndex`. Consider this output below: | show_plain_md('test_files/strip_out.ipynb') | ```python
#meta:filter_words=FutureWarning,MultiIndex
#meta:show_steps=end
!python serialize_xgb_dmatrix.py run
```
/Users/hamel/opt/anaconda3/lib/python3.9/site-packages/xgboost/compat.py:36: FutureWarning: pandas.Int64Index is deprecated and will be removed from pandas in a future version. Use pandas.Index with ... | Apache-2.0 | nbs/mdx.ipynb | outerbounds/nbdoc |
Notice how the lines containing the terms `FutureWarning` or `MultiIndex` are stripped out: | c, _ = run_preprocessor([InjectMeta, FilterOutput],
'test_files/strip_out.ipynb',
display_results=True)
assert 'FutureWarning:' not in c and 'from pandas import MultiIndex, Int64Index' not in c | ```python
#meta:filter_words=FutureWarning,MultiIndex
#meta:show_steps=end
!python serialize_xgb_dmatrix.py run
```
<CodeOutputBlock lang="python">
```
[35m[1mMetaflow 2.5.3[0m[35m[22m executing [0m[31m[1mSerializeXGBDataFlow[0m[35m[22m[0m[35m[22m for [0m[31m[1muser:hamel[0m[35m[22m[K[0m[35m[2... | Apache-2.0 | nbs/mdx.ipynb | outerbounds/nbdoc |
Hide Specific Lines of Code - | #export
class HideInputLines(Preprocessor):
"""
Hide lines of code in code cells with the comment `#meta_hide_line` at the end of a line of code.
"""
tok = '#meta_hide_line'
def preprocess_cell(self, cell, resources, index):
if cell.cell_type == 'code':
if self.tok in cell.s... | _____no_output_____ | Apache-2.0 | nbs/mdx.ipynb | outerbounds/nbdoc |
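The effect of the `#meta_hide_line` token can be sketched with a few lines of plain Python (a simplified version, assuming the token appears at the end of a line):

```python
TOK = '#meta_hide_line'

def hide_lines(source: str) -> str:
    """Drop every source line that ends with the hide token."""
    return '\n'.join(
        l for l in source.splitlines() if not l.rstrip().endswith(TOK)
    )

src = 'def show():\n    a = 2\n    b = 3 #meta_hide_line'
print(hide_lines(src))
```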
You can use the special comment `meta_hide_line` to hide a specific line of code in a code cell. This is what the code looks like before: | show_plain_md('test_files/hide_lines.ipynb') | ```python
def show():
a = 2
b = 3 #meta_hide_line
```
| Apache-2.0 | nbs/mdx.ipynb | outerbounds/nbdoc |
and after: | c, _ = run_preprocessor([InjectMeta, HideInputLines],
'test_files/hide_lines.ipynb',
display_results=True)
#hide
_res = """```python
def show():
a = 2
```"""
assert _res in c | _____no_output_____ | Apache-2.0 | nbs/mdx.ipynb | outerbounds/nbdoc |
Handle Scripts With `%%writefile` - | #export
class WriteTitle(Preprocessor):
"""Modify the code-fence with the filename upon %%writefile cell magic."""
pattern = r'(^[\S\s]*%%writefile\s)(\S+)\n'
def preprocess_cell(self, cell, resources, index):
m = re.match(self.pattern, cell.source)
if m:
filename = m.group... | _____no_output_____ | Apache-2.0 | nbs/mdx.ipynb | outerbounds/nbdoc |
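The `pattern` shown above captures the target filename from a `%%writefile` cell; here is a quick standalone check of how the two groups split the cell source:

```python
import re

# Same pattern as in WriteTitle: group 1 is everything up to and
# including the magic, group 2 is the filename.
pattern = r'(^[\S\s]*%%writefile\s)(\S+)\n'

m = re.match(pattern, '%%writefile myflow.py\nfrom metaflow import FlowSpec\n')
print(m.group(2))
```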
`WriteTitle` creates the proper code-fence with a title in the situation where the `%%writefile` magic is used. For example, here are contents before pre-processing: | show_plain_md('test_files/writefile.ipynb') | A test notebook
```python
%%writefile myflow.py
from metaflow import FlowSpec, step
class MyFlow(FlowSpec):
@step
def start(self):
print('this is the start')
self.next(self.train)
@step
def train(self):
print('the train step')
self.next(self.end)
@st... | Apache-2.0 | nbs/mdx.ipynb | outerbounds/nbdoc |
When we use `WriteTitle`, you will see the code-fence will change appropriately: | c, _ = run_preprocessor([WriteTitle], 'test_files/writefile.ipynb', display_results=True)
assert '```py title="myflow.py"' in c and '```txt title="hello.txt"' in c | A test notebook
```py title="myflow.py"
%%writefile myflow.py
from metaflow import FlowSpec, step
class MyFlow(FlowSpec):
@step
def start(self):
print('this is the start')
self.next(self.train)
@step
def train(self):
print('the train step')
self.next(self.end... | Apache-2.0 | nbs/mdx.ipynb | outerbounds/nbdoc |
Clean Flags and Magics - | #export
_tst_flags = get_config()['tst_flags'].split('|')
class CleanFlags(Preprocessor):
"""A preprocessor to remove Flags"""
patterns = [re.compile(r'^#\s*{0}\s*'.format(f), re.MULTILINE) for f in _tst_flags]
def preprocess_cell(self, cell, resources, index):
if cell.cell_type == 'code':
... | _____no_output_____ | Apache-2.0 | nbs/mdx.ipynb | outerbounds/nbdoc |
`CleanMagics` strips magic cell commands `%%` so they do not appear in rendered markdown files: | c, _ = run_preprocessor([WriteTitle, CleanMagics], 'test_files/writefile.ipynb', display_results=True)
assert '%%' not in c | A test notebook
```py title="myflow.py"
from metaflow import FlowSpec, step
class MyFlow(FlowSpec):
@step
def start(self):
print('this is the start')
self.next(self.train)
@step
def train(self):
print('the train step')
self.next(self.end)
@step
d... | Apache-2.0 | nbs/mdx.ipynb | outerbounds/nbdoc |
Here is how `CleanMagics` works on the file with the Metaflow log outputs from earlier; we can see that the `cell_meta` comments are gone: | c, _ = run_preprocessor([InjectMeta, MetaflowSelectSteps, CleanMagics],
'test_files/run_flow_showstep.ipynb', display_results=True)
#hide
c, _ = run_preprocessor([WriteTitle, CleanMagics], 'test_files/hello_world.ipynb')
assert '#cell_meta' not in c | _____no_output_____ | Apache-2.0 | nbs/mdx.ipynb | outerbounds/nbdoc |
Formatting Code With Black - | #export
black_mode = Mode()
class Black(Preprocessor):
"""Format code that has a cell tag `black`"""
def preprocess_cell(self, cell, resources, index):
tags = cell.metadata.get('tags', [])
if cell.cell_type == 'code' and 'black' in tags:
cell.source = format_str(src_contents=cell.so... | _____no_output_____ | Apache-2.0 | nbs/mdx.ipynb | outerbounds/nbdoc |
`Black` is a preprocessor that will format cells that have the cell tag `black` with [Python black](https://github.com/psf/black) code formatting. You can apply tags via the notebook interface or with a comment `meta:tag=black`. This is how cell formatting looks before [black](https://github.com/psf/black) formatting: | show_plain_md('test_files/black.ipynb') | Format with black
```python
#meta:tag=black
j = [1,
2,
3
]
```
```python
%%writefile black_test.py
#meta:tag=black
def very_important_function(template: str, *variables, file: os.PathLike, engine: str, header: bool = True, debug: bool = False):
"""Applies `variables` to the `template` and writes to ... | Apache-2.0 | nbs/mdx.ipynb | outerbounds/nbdoc |
After black is applied, the code looks like this: | c, _ = run_preprocessor([InjectMeta, UpdateTags, CleanMagics, Black], 'test_files/black.ipynb', display_results=True)
assert '[1, 2, 3]' in c
assert 'very_important_function(\n template: str,' in c | Format with black
```python
j = [1, 2, 3]
```
```python
def very_important_function(
template: str,
*variables,
file: os.PathLike,
engine: str,
header: bool = True,
debug: bool = False
):
"""Applies `variables` to the `template` and writes to `file`."""
with open(file, "w") as f:
... | Apache-2.0 | nbs/mdx.ipynb | outerbounds/nbdoc |
Show File Contents - | #export
class CatFiles(Preprocessor):
"""Cat arbitrary files with %cat"""
pattern = '^\s*!'
def preprocess_cell(self, cell, resources, index):
if cell.cell_type == 'code' and re.search(self.pattern, cell.source):
cell.metadata.magics_language = 'bash'
cell.source = re.su... | _____no_output_____ | Apache-2.0 | nbs/mdx.ipynb | outerbounds/nbdoc |
Format Shell Commands - | #export
class BashIdentify(Preprocessor):
"""A preprocessor to identify bash commands and mark them appropriately"""
pattern = re.compile('^\s*!', flags=re.MULTILINE)
def preprocess_cell(self, cell, resources, index):
if cell.cell_type == 'code' and self.pattern.search(cell.source):
... | _____no_output_____ | Apache-2.0 | nbs/mdx.ipynb | outerbounds/nbdoc |
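A minimal sketch of the transformation (assuming the cell source is plain text; the real preprocessor also updates cell metadata):

```python
import re

# Same pattern as BashIdentify: a line starting with optional
# whitespace and a bang marks a shell command.
pattern = re.compile(r'^\s*!', flags=re.MULTILINE)

def to_bash(source: str):
    """Return (source, language), stripping the bang for shell cells."""
    if pattern.search(source):
        return pattern.sub('', source), 'bash'
    return source, 'python'

print(to_bash('!python myflow.py run'))
```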
When we issue a shell command in a notebook with `!`, we need to change the code-fence from `python` to `bash` and remove the `!`: | c, _ = run_preprocessor([MetaflowTruncate, CleanMagics, BashIdentify], 'test_files/run_flow.ipynb', display_results=True)
assert "```bash" in c and '!python' not in c | ```bash
python myflow.py run
```
<CodeOutputBlock lang="bash">
```
[35m [0m[1mWorkflow starting (run-id 1647304124981100):[0m
[35m [0m[32m[1647304124981100/start/1 (pid 41951)] [0m[1mTask is starting.[0m
[35m [0m[32m[1647304124981100/start/1 (pid 41951)] [0m[22mthis is the start[0m
[35m ... | Apache-2.0 | nbs/mdx.ipynb | outerbounds/nbdoc |
Remove `ShowDoc` Input Cells - | #export
_re_showdoc = re.compile(r'^ShowDoc', re.MULTILINE)
def _isShowDoc(cell):
"Return True if cell contains ShowDoc."
if cell['cell_type'] == 'code':
if _re_showdoc.search(cell.source): return True
else: return False
class CleanShowDoc(Preprocessor):
"""Ensure that ShowDoc output gets cl... | ```python
from fastcore.all import test_eq
from nbdoc.showdoc import ShowDoc
```
<DocSection type="function" name="test_eq" module="fastcore.test" link="https://github.com/fastcore/tree/masterhttps://github.com/fastai/fastcore/tree/master/fastcore/test.py#L34">
<SigArgSection>
<SigArg name="a" /><SigArg name="b" />
<... | Apache-2.0 | nbs/mdx.ipynb | outerbounds/nbdoc |
Composing Preprocessors Into A Pipeline - Let's see how you can compose all of these preprocessors together to process notebooks appropriately: | #export
def get_mdx_exporter(template_file='ob.tpl'):
"""A mdx notebook exporter which composes many pre-processors together."""
c = Config()
c.TagRemovePreprocessor.remove_cell_tags = ("remove_cell", "hide")
c.TagRemovePreprocessor.remove_all_outputs_tags = ("remove_output", "remove_outputs", "hide_out... | _____no_output_____ | Apache-2.0 | nbs/mdx.ipynb | outerbounds/nbdoc |
`get_mdx_exporter` combines all of the previous preprocessors, along with the built in `TagRemovePreprocessor` to allow for hiding cell inputs/outputs based on cell tags. Here is an example of markdown generated from a notebook with the default preprocessing: | show_plain_md('test_files/example_input.ipynb') | ---
title: my hello page title
description: my hello page description
hide_table_of_contents: true
---
## This is a test notebook
This is a shell command:
```python
! echo hello
```
hello
We are writing a python script to disk:
```python
%%writefile myflow.py
from metaflow import FlowSpec, step
class MyFl... | Apache-2.0 | nbs/mdx.ipynb | outerbounds/nbdoc |
Here is the same notebook, but with all of the preprocessors that we defined in this module. Additionally, we hide the input of the last cell which prints `hello, you should not see the print statement...` by using the built in `TagRemovePreprocessor`: | exp = get_mdx_exporter()
print(exp.from_filename('test_files/example_input.ipynb')[0]) | ---
title: my hello page title
description: my hello page description
hide_table_of_contents: true
---
<!-- WARNING: THIS FILE WAS AUTOGENERATED! DO NOT EDIT! Instead, edit the notebook w/the location & name as this file. -->
## This is a test notebook
This is a shell command:
```bash
echo hello
```
<CodeOutputB... | Apache-2.0 | nbs/mdx.ipynb | outerbounds/nbdoc |
Functions - Functions let you define reusable code, and help organize and simplify it. - In practice, one function usually implements one small piece of functionality - while a class implements a larger one. - Likewise, a function should not be longer than one screen. Every function in Python actually has a return value (None by default): if you do not write a return statement, Python simply does not display the None; if you do, the value after return is what gets returned. | def HJN():
print('Hello')
return 1000
b=HJN()
print(b)
HJN
def panduan(number):
if number % 2 == 0:
print('O')
else:
print('J')
panduan(number=1)
panduan(2) | O
| Apache-2.0 | 7.20.ipynb | ML00001/Python |
Defining a function: def function_name(list of parameters): do something. - The random, range, and print we used before are in fact functions or classes. If a parameter has a default value, then when you call the function you may omit that argument and the default value is used; otherwise, the value you pass in is used. | import random
def hahah():
n = random.randint(0,5)
while 1:
N = eval(input('>>'))
if n == N:
print('smart')
break
        elif n < N:
            print('too big')
        elif n > N:
            print('too small')
| _____no_output_____ | Apache-2.0 | 7.20.ipynb | ML00001/Python |
Calling a function - functionName() - the "()" is what performs the call | def H():
print('hahaha')
def B():
H()
B()
def A(f):
f()
A(B) | hahaha
| Apache-2.0 | 7.20.ipynb | ML00001/Python |
 Functions with and without return values - return gives back a value - return can give back multiple values - in general, when several functions cooperate to accomplish one task, they will have return values - of course, you can also explicitly return None. EP: | def main():
print(min(min(5,6),(51,6)))
def min(n1,n2):
a = n1
if n2 < a:
a = n2
main() | _____no_output_____ | Apache-2.0 | 7.20.ipynb | ML00001/Python |
Positional and keyword parameters - ordinary parameters - multiple parameters - default-value parameters - variable-length parameters. Ordinary parameters, multiple parameters, default-value parameters, forced (keyword-only) naming. | def U(str_):
    xiaoxie = 0
    daxie = 0
    shuzi = 0
    for i in str_:
        ASCII = ord(i)
        if 97 <= ASCII <= 122:    # lowercase letters
            xiaoxie += 1
        elif 65 <= ASCII <= 90:   # uppercase letters
            daxie += 1
        elif 48 <= ASCII <= 57:   # digits
            shuzi += 1
    return xiaoxie, daxie, shuzi
U('HJi12') | H
J
i
1
2
| Apache-2.0 | 7.20.ipynb | ML00001/Python |
Variable-length parameters - \*args: variable length; it collects however many positional arguments you pass (possibly none) - the collected values form a tuple - the name args can be changed; it is only a convention - \**kwargs: collects keyword arguments into a dict - the inputs must be key=value pairs - with name, \*args, name2, \**kwargs, the later arguments must be passed by parameter name | def TT(a,b)
def TT(*args,**kwargs):
print(kwargs)
print(args)
TT(1,2,3,4,6,a=100,b=1000)
{'key':'value'}
TT(1,2,4,5,7,8,9,)
def B(name1,nam3):
pass
B(name1=100,2)
def sum_(*args,A='sum'):
res = 0
count = 0
for i in args:
res +=i
count += 1
if A == "sum":
return r... | _____no_output_____ | Apache-2.0 | 7.20.ipynb | ML00001/Python |
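A cleaned-up minimal example of the two collectors (hypothetical function name, for illustration): `*args` gathers extra positional arguments into a tuple, while `**kwargs` gathers extra keyword arguments into a dict.

```python
def show(*args, **kwargs):
    # args is a tuple of positional arguments,
    # kwargs is a dict of keyword arguments.
    return args, kwargs

a, k = show(1, 2, 3, x=10, y=20)
print(a)
print(k)
```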
Variable scope - local variables - global variables - the globals() function returns a dict of the global variables, including all imported ones - the locals() function returns all local variables at the current position as a dict. | a = 1000
b = 10
def Y():
global a,b
a += 100
print(a)
Y()
def YY(a1):
a1 += 100
print(a1)
YY(a)
print(a) | 1200
1100
| Apache-2.0 | 7.20.ipynb | ML00001/Python |
Note: - global: must be declared before you assign to the variable - official explanation: This is because when you make an assignment to a variable in a scope, that variable becomes local to that scope and shadows any similarly named variable in the outer scope. Homework - 1 | def getPentagonalNumber(a):
Num=0
for i in range(1,a+1):
Num_1=i*(3*i-1)/2
print('%5.0f'%Num_1,end=' ')
Num+=1
if Num%10==0:
print()
getPentagonalNumber(100) | 1 5 12 22 35 51 70 92 117 145
176 210 247 287 330 376 425 477 532 590
651 715 782 852 925 1001 1080 1162 1247 1335
1426 1520 1617 1717 1820 1926 2035 2147 2262 2380
2501 2625 2752 2882 3015 3151 3290 3432 3577 3725
3876 4030 41... | Apache-2.0 | 7.20.ipynb | ML00001/Python |
- 2  | def sumDigits(n):
import math
a=n
Num=0
while a!=0:
b=a%10
a=math.floor(a/10)
Num+=b
return Num
print(sumDigits(336)) | 12
| Apache-2.0 | 7.20.ipynb | ML00001/Python |
- 3 | def sortnum(num1,num2,num3):
a=[num1,num2,num3]
a.sort()
print(a)
def start():
num1,num2,num3=map(float,input().split(','))
sortnum(num1,num2,num3)
start() | 25,2.6,98
[2.6, 25.0, 98.0]
| Apache-2.0 | 7.20.ipynb | ML00001/Python |
- 4 | def futureIn(money,rate,year):
for i in range(1,year*12+1):
money=float(1+rate/100/12)*money
if i%12==0:
print('%9d %10.2f'%((i/12),money))
def start():
money,rate,year=map(int,input().split(','))
futureIn(money,rate,year)
start() | 1000,3,30
1 1030.42
2 1061.76
3 1094.05
4 1127.33
5 1161.62
6 1196.95
7 1233.35
8 1270.87
9 1309.52
10 1349.35
11 1390.40
12 1432.69
13 1476.26
14 1521.16
15 156... | Apache-2.0 | 7.20.ipynb | ML00001/Python |
- 5 | def printChars(ch1, ch2,numberPerLine):
# print(ord(ch2))
# print(ord(ch1))
num=ord(ch2)-ord(ch1)+1
num_2=int(numberPerLine)
# print(num_2)
num_1=0
for i in range(ord(ch1),ord(ch2)+1):
print(chr(i),end=" ")
num_1+=1
if num_1%num_2==0:
print()
def start():
... | a,o,3
a b c
d e f
g h i
j k l
m n o
| Apache-2.0 | 7.20.ipynb | ML00001/Python |
- 6 | def neumber(year):
for i in range(11):
if year%4==0:
print('%20d年 %20.2f天'%(year,366))
if year%4!=0:
print('%20d年 %20.2f天'%(year,365))
year+=1
neumber(2010) | 2010年 365.00天
2011年 365.00天
2012年 366.00天
2013年 365.00天
2014年 365.00天
2015年 365.00天
2016年 366.00天
... | Apache-2.0 | 7.20.ipynb | ML00001/Python |
- 7 | # distance between two points
import math
def distance(x1,y1,x2,y2):
a=(x1-x2)**2
b=(y1-y2)**2
c=math.sqrt(a+b)
print(c)
def start():
x1,y1,x2,y2=map(float,input().split(','))
distance(x1,y1,x2,y2)
start() | 9,11,12,15
5.0
| Apache-2.0 | 7.20.ipynb | ML00001/Python |
- 8 | def Mersenne_prime(P):
print('P值 梅森素数(P^2-1)')
for i in range(1,P+1):
Number1=2**i-1
Number2=0
for j in range(2,Number1):
if Number1%j==0:
Number2+=1
if Number2==0:
print('%3d %10d'%(i,Number1))
def start():
P=int(input())
Mersen... | 31
P值 梅森素数(P^2-1)
1 1
2 3
3 7
5 31
7 127
13 8191
17 131071
19 524287
| Apache-2.0 | 7.20.ipynb | ML00001/Python |
- 9 | def time():
import time
localtime = time.asctime(time.localtime(time.time()))
print(localtime)
time() | Fri Aug 2 16:00:20 2019
| Apache-2.0 | 7.20.ipynb | ML00001/Python |
- 10 | def Roll_dice():
import random
dice1=random.randint(1,6)
dice2=random.randint(1,6)
print('Dice1:%d Dice2:%d'%(dice1,dice2))
sum = dice1+dice2
return sum
def Judge():
sum=Roll_dice()
print('点数:%d'%sum)
a=(2,3,12)
b=(7,11)
c=(4,5,6,8,9,10)
if sum in a:
print('你输了... | Dice1:4 Dice2:5
点数:9
Dice1:3 Dice2:1
点数:4
Dice1:6 Dice2:1
点数:7
你输了
| Apache-2.0 | 7.20.ipynb | ML00001/Python |
- 11 Search online for how to send an email with Python code |
def send_first_email():
"""
    # send a plain-text email
"""
    # use the email module to construct the message
    from email.mime.text import MIMEText
    # first argument: the message body
    # second argument: _subtype='plain', i.e. Content-Type: text/plain, presumably a convention
    # third argument: utf-8 encoding to ensure compatibility
msg = MIMEText("你害怕大雨吗?","plain","utf-8")
# 添加信... | _____no_output_____ | Apache-2.0 | 7.20.ipynb | ML00001/Python |
Linear Models and Gradient Descent. This is the first lesson on neural networks: we will study a very simple model, linear regression, together with an optimization algorithm, gradient descent, to optimize it. Linear regression is one of the simplest supervised-learning models, and gradient descent is the most widely used optimization algorithm in deep learning, so this is where we begin our deep-learning journey. Simple (one-variable) linear regression. A one-variable linear model is very simple: suppose we have inputs $x_i$ and targets $y_i$, where each i corresponds to one data point, and we want to build the model $$\hat{y}_i = w x_i + b$$ $\hat{y}_i$ is our prediction; we want $\hat{y}_i$ to fit the target $y_i$, that is, to find the function that fits $y_i$ with the smallest error, i.e. to minimize $$\frac{1}{n} \s... | import torch
import numpy as np
from torch.autograd import Variable
torch.manual_seed(2017)
# 读入数据 x 和 y
x_train = np.array([[3.3], [4.4], [5.5], [6.71], [6.93], [4.168],
[9.779], [6.182], [7.59], [2.167], [7.042],
[10.791], [5.313], [7.997], [3.1]], dtype=np.float32)
y_train =... | _____no_output_____ | MIT | chapter3_NN/linear-regression-gradient-descend.ipynb | kumi123/pytorch-learning |
With the steps above the model is defined. Before updating any parameters, let's first see what the model's output looks like: | plt.plot(x_train.data.numpy(), y_train.data.numpy(), 'bo', label='real')  # ground truth
plt.plot(x_train.data.numpy(), y_.data.numpy(), 'ro', label='estimated')  # model prediction
plt.legend() | _____no_output_____ | MIT | chapter3_NN/linear-regression-gradient-descend.ipynb | kumi123/pytorch-learning |
**Question: the red points are the predicted values, and they seem to line up along a straight line. Think about whether these points really do lie on one line.** Now we need to compute our loss function, namely $$\frac{1}{n} \sum_{i=1}^n(\hat{y}_i - y_i)^2$$ | # compute the loss
def get_loss(y_, y):
    return torch.mean((y_ - y) ** 2)
loss = get_loss(y_, y_train)
# print the loss to see its magnitude
print(loss) | tensor(153.3520, grad_fn=<MeanBackward1>)
| MIT | chapter3_NN/linear-regression-gradient-descend.ipynb | kumi123/pytorch-learning |
With the loss function defined, we next need the gradients of w and b. Thanks to PyTorch's automatic differentiation we do not need to compute them by hand; interested readers can derive them manually. The gradients of w and b are $$\frac{\partial}{\partial w} = \frac{2}{n} \sum_{i=1}^n x_i(w x_i + b - y_i) \\\frac{\partial}{\partial b} = \frac{2}{n} \sum_{i=1}^n (w x_i + b - y_i)$$ | # automatic differentiation
loss.backward()
# inspect the gradients of w and b
print(w.grad)
print(b.grad)
# take one parameter-update step
w.data = w.data - 1e-2 * w.grad.data
b.data = b.data - 1e-2 * b.grad.data | _____no_output_____ | MIT | chapter3_NN/linear-regression-gradient-descend.ipynb | kumi123/pytorch-learning |
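As a sanity check of the two gradient formulas above, here is a small hand computation on toy data (an illustrative check, not part of the original notebook):

```python
# dL/dw = (2/n) * sum(x_i * (w*x_i + b - y_i))
# dL/db = (2/n) * sum(w*x_i + b - y_i)
xs = [1.0, 2.0, 3.0]
ys = [2.0, 4.0, 6.0]
w, b = 0.5, 0.0
n = len(xs)

errs = [w * x + b - y for x, y in zip(xs, ys)]
grad_w = 2 / n * sum(x * e for x, e in zip(xs, errs))
grad_b = 2 / n * sum(errs)
print(grad_w, grad_b)
```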
After the parameter update, let's look at the model output again: | y_ = linear_model(x_train)
plt.plot(x_train.data.numpy(), y_train.data.numpy(), 'bo', label='real')
plt.plot(x_train.data.numpy(), y_.data.numpy(), 'ro', label='estimated')
plt.legend() | _____no_output_____ | MIT | chapter3_NN/linear-regression-gradient-descend.ipynb | kumi123/pytorch-learning |
From the plot above, after one update the red line moves below the blue line and still does not fit the blue ground truth well, so we need to run a few more updates: | for e in range(100):  # 100 updates
    y_ = linear_model(x_train)
    loss = get_loss(y_, y_train)
    w.grad.zero_()  # remember to zero the gradients
    b.grad.zero_()  # remember to zero the gradients
    loss.backward()  # w.grad.data and b.grad.data persist between iterations, so zero them beforehand; otherwise this is no longer plain gradient descent
    w.data = w.data - 1e-2 * w.grad.data  # update w
b.data = b.data - 1e-2 * b.grad.data... | _____no_output_____ | MIT | chapter3_NN/linear-regression-gradient-descend.ipynb | kumi123/pytorch-learning |
After these updates, the red predictions already fit the blue ground truth fairly well. You have now learned your first machine-learning model; keep it up and finish the small exercise below. **Exercise:** restart the notebook and run the linear regression model above, but change the number of training iterations and try different learning rates to get different results. Polynomial regression. Now we go a step further and discuss polynomial regression. What is it? Very simple: the linear model above, $$\hat{y} = w x + b$$, is a first-degree polynomial in x. Such a simple model cannot fit more complex functions, so we can use a higher-degree model, for example $$\hat{y} = w_0 + w_1 x + w_2 x^2 + w_3 x^3 + \cdots$$ In this way... | # define a polynomial target function
w_target = np.array([0.5, 3, 2.4])  # target weights
b_target = np.array([0.9])  # target bias
f_des = 'y = {:.2f} + {:.2f} * x + {:.2f} * x^2 + {:.2f} * x^3'.format(
    b_target[0], w_target[0], w_target[1], w_target[2])  # print out the formula of the function
print(f_des) | y = 0.90 + 0.50 * x + 3.00 * x^2 + 2.40 * x^3
| MIT | chapter3_NN/linear-regression-gradient-descend.ipynb | kumi123/pytorch-learning |
Let's first plot this polynomial: | # plot the curve of this function
x_sample = np.arange(-3, 3.1, 0.1)
y_sample = b_target[0] + w_target[0] * x_sample + w_target[1] * x_sample ** 2 + w_target[2] * x_sample ** 3
plt.plot(x_sample, y_sample, label='real curve')
plt.legend() | _____no_output_____ | MIT | chapter3_NN/linear-regression-gradient-descend.ipynb | kumi123/pytorch-learning |
Next we build the dataset. We need x and y, and since the target is a cubic polynomial we take $x,\ x^2, x^3$ as features. (I don't quite understand the data handling below.) | # build the data x and y
# x is a matrix of the form [x, x^2, x^3]
# y holds the function values [y]
x_train = np.stack([x_sample ** i for i in range(1, 4)], axis=1)
x_train = torch.from_numpy(x_train).float()  # convert to a float tensor
y_train = torch.from_numpy(y_sample).float().unsqueeze(1)  # convert to a float tensor
Next we define the parameters to optimize, i.e. the $w_i$ in the function above: | # define the parameters to estimate and the model; parameters must be Variables
w = Variable(torch.randn(3, 1), requires_grad=True)
b = Variable(torch.zeros(1), requires_grad=True)
# convert x and y to Variables (the data must be Variables too)
x_train = Variable(x_train)
y_train = Variable(y_train)
def multi_linear(x):
return torch.mm(x, w) + b | _____no_output_____ | MIT | chapter3_NN/linear-regression-gradient-descend.ipynb | kumi123/pytorch-learning |
We can plot the model before any updates against the true model: | # plot the model before updating
y_pred = multi_linear(x_train)
plt.plot(x_train.data.numpy()[:, 0], y_pred.data.numpy(), label='fitting curve', color='r')
plt.plot(x_train.data.numpy()[:, 0], y_sample, label='real curve', color='b')
plt.legend() | _____no_output_____ | MIT | chapter3_NN/linear-regression-gradient-descend.ipynb | kumi123/pytorch-learning |
You can see a gap between the two curves; let's compute the error between them: | # compute the loss; it is the same as for the one-variable linear model, using get_loss defined earlier
loss = get_loss(y_pred, y_train)
print(loss)
# automatic differentiation
loss.backward()
# inspect the gradients of w and b
print(w.grad)
print(b.grad)
# update the parameters (using an optimizer directly would also be easy)
w.data = w.data - 0.001 * w.grad.data
b.data = b.data - 0.001 * b.grad.data
# plot the model after one update
y_pred = multi_linear(x_train)
plt.plot(x... | _____no_output_____ | MIT | chapter3_NN/linear-regression-gradient-descend.ipynb | kumi123/pytorch-learning |
Since we only updated once, the gap between the two curves remains, so we run 100 iterations. We zero the gradients before the descent step; zeroing after the step would be wrong, since the gradients would already have been set to zero. | # run 100 parameter updates
for e in range(100):
    y_pred = multi_linear(x_train)
    loss = get_loss(y_pred, y_train)
    # zero the gradients first, then take the descent step
w.grad.data.zero_()
b.grad.data.zero_()
loss.backward()
    # update the parameters
w.data = w.data - 0.001 * w.grad.data
b.data = b.data - 0.001 * b.grad.data
| _____no_output_____ | MIT | chapter3_NN/linear-regression-gradient-descend.ipynb | kumi123/pytorch-learning |
After the updates the loss is already very small; let's plot the updated curve against the real one: | # plot the result after updating
y_pred = multi_linear(x_train)
plt.plot(x_train.data.numpy()[:, 0], y_pred.data.numpy(), label='fitting curve', color='r')
plt.plot(x_train.data.numpy()[:, 0], y_sample, label='real curve', color='b')
plt.legend() | _____no_output_____ | MIT | chapter3_NN/linear-regression-gradient-descend.ipynb | kumi123/pytorch-learning |
Data | from sklearn.datasets import fetch_kddcup99
data = fetch_kddcup99()
from pprint import pprint
print(list(data.target_names))
df = pd.DataFrame(data.data, columns=data.feature_names)
df["labels"] = data.target
df.to_csv("data/KDD99/fetch_kddcup99.csv", index=False)
path = "data/Forest_Cover/train_centered.csv"
target... | _____no_output_____ | Apache-2.0 | examples/PreProcess.ipynb | ppmdatix/rtdl |
build vocab. In this notebook we build a **vocabulary**, the set of tokens present in the training corpus, which is one of the essential preprocessing steps for training NLP deep-learning models. For the vocabulary itself we use the `Vocab` class provided in the `model.utils` module, and by combining it with the `Tokenizer` and `PadSequence` classes in the same module we see how to do preprocessing efficiently. Setup | import sys
import pickle
import itertools
import pandas as pd
from pathlib import Path
from pprint import pprint
from typing import List
from collections import Counter
from model.utils import Vocab, Tokenizer, PadSequence | _____no_output_____ | MIT | exercise/build_vocab.ipynb | aisolab/first_project_nlp |
Load dataset | data_dir = Path.cwd() / 'data'
list_of_dataset = list(data_dir.iterdir())
pprint(list_of_dataset)
tr_dataset = pd.read_csv(list_of_dataset[2], sep='\t')
tr_dataset.head() | _____no_output_____ | MIT | exercise/build_vocab.ipynb | aisolab/first_project_nlp |
Split training corpus and count each token. Using `morphs`, a member function of the `Mecab` class instance used in the earlier `eda.ipynb` notebook, **we convert the training corpus into sequences of tokens and count the frequency of each token in the training corpus.** | # write a split_fn that splits a sentence into eojeol (space-separated word units)
def split_eojeol(s: str) -> List[str]:
return s.split(' ')
training_corpus = tr_dataset['document'].apply(lambda sen: split_eojeol(sen)).tolist()
pprint(training_corpus[:5])
count_tokens = Counter(itertools.chain.from_iterable(training_corpus))
print(len(count_tokens)) | 299101
| MIT | exercise/build_vocab.ipynb | aisolab/first_project_nlp |
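The same counting-and-filtering pattern can be seen on a toy corpus (an illustrative sketch; the real corpus and `min_freq=10` are as above):

```python
from collections import Counter
import itertools

# Count token frequencies across a tiny corpus and keep only
# tokens that appear at least `min_freq` times.
corpus = [['a', 'b'], ['a', 'c'], ['a', 'b']]
counts = Counter(itertools.chain.from_iterable(corpus))

min_freq = 2
kept = sorted(t for t, c in counts.items() if c >= min_freq)
print(kept)
```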
Build vocab. We set `min_freq` and exclude tokens whose frequency in the training corpus is below `min_freq`, building the vocabulary with the `Vocab` class from the `model.utils` module. Tokens that fall below `min_freq` are treated as `unknown`. We proceed in the following order: 1. Build `list_of_tokens`, a `list` of the tokens that occur at least `min_freq` times. 2. Create the `Vocab` class instance `vo... | min_freq = 10
list_of_tokens = [token for token in count_tokens.keys() if count_tokens.get(token) >= min_freq]
list_of_tokens = sorted(list_of_tokens)
print(len(list_of_tokens))
vocab = Vocab(list_of_tokens=list_of_tokens, bos_token=None, eos_token=None, unknown_token_idx=0)
with open('data/morphs_eojeol.pkl', mode='wb... | _____no_output_____ | MIT | exercise/build_vocab.ipynb | aisolab/first_project_nlp |
How to use `Vocab`. The `Vocab` class instance `vocab` created from `list_of_tokens` has the following members: | help(Vocab)
# padding_token, unknown_token, eos_token, bos_token
print(vocab.padding_token)
print(vocab.unknown_token)
print(vocab.eos_token)
print(vocab.bos_token)
# token_to_idx
print(vocab.token_to_idx)
# idx_to_token
print(vocab.idx_to_token)
# to_indices
example_sentence = tr_dataset['document'][0]
tokenized_sente... | ['애들', '<unk>', '<unk>', '뭐', '그렇게', '<unk>', '솔까', '거기', '나오는', '귀여운', '애들이', '<unk>', '훨', '낮다.']
| MIT | exercise/build_vocab.ipynb | aisolab/first_project_nlp |
How to use `Tokenizer`. As the usage of the `Vocab` class above shows, it basically takes as input the result of the `split_morphs` function used as `split_fn`. By passing a `Vocab` instance and the `split_morphs` function used as `split_fn` as parameters, the `Tokenizer` class can be used as a unified form of preprocessing. It is implemented using `composition`. | help(Tokenizer)
tokenizer = Tokenizer(vocab, split_eojeol)
# split, transform, split_and_transform
example_sentence = tr_dataset['document'][1]
tokenized_sentence = tokenizer.split(example_sentence)
transformed_sentence = tokenizer.transform(tokenized_sentence)
print(example_sentence)
print(tokenized_sentence)
print(tr... | [5833, 0, 7002, 202, 8988, 2949, 6119, 6546]
| MIT | exercise/build_vocab.ipynb | aisolab/first_project_nlp |
Using `PadSequence` from the `model.utils` module, a padding capability can be added to the `Tokenizer` instance `tokenizer`. Padding uses the integer value that the padding token points to in the vocabulary. | padding_value = vocab.to_indices(vocab.padding_token)
print(padding_value)
pad_sequence = PadSequence(length=32, pad_val=padding_value)
tokenizer = Tokenizer(vocab, split_eojeol, pad_sequence)
# split, transform, split_and_transform
example_sentence = tr_dataset['document'][1]
tokenized_sentence = tokenizer.split(examp... | 여전히 반복되고 있는 80년대 한국 멜로 영화의 유치함.
['여전히', '반복되고', '있는', '80년대', '한국', '멜로', '영화의', '유치함.']
[5833, 0, 7002, 202, 8988, 2949, 6119, 6546, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]
| MIT | exercise/build_vocab.ipynb | aisolab/first_project_nlp |
Save `vocab` | with open('data/vocab.pkl', mode='wb') as io:
pickle.dump(vocab, io) | _____no_output_____ | MIT | exercise/build_vocab.ipynb | aisolab/first_project_nlp |
A Gentle Introduction to Forecasting in Merlion. We begin by importing Merlion's `TimeSeries` class and the data loader for the `M4` dataset. We can then divide a specific time series from this dataset into training and testing splits. | from merlion.utils import TimeSeries
from ts_datasets.forecast import M4
time_series, metadata = M4(subset="Hourly")[0]
train_data = TimeSeries.from_pd(time_series[metadata.trainval])
test_data = TimeSeries.from_pd(time_series[~metadata.trainval]) | 100%|██████████| 414/414 [00:01<00:00, 400.14it/s]
| BSD-3-Clause | examples/forecast/0_ForecastIntro.ipynb | ankitakashyap05/Merlion |
We can then initialize and train Merlion's `DefaultForecaster`, a forecasting model that balances performance with efficiency. We also obtain its predictions on the test split. | from merlion.models.defaults import DefaultForecasterConfig, DefaultForecaster
model = DefaultForecaster(DefaultForecasterConfig())
model.train(train_data=train_data)
test_pred, test_err = model.forecast(time_stamps=test_data.time_stamps) | _____no_output_____ | BSD-3-Clause | examples/forecast/0_ForecastIntro.ipynb | ankitakashyap05/Merlion |
Next, we visualize the model's predictions. | import matplotlib.pyplot as plt
fig, ax = model.plot_forecast(time_series=test_data, plot_forecast_uncertainty=True)
plt.show() | _____no_output_____ | BSD-3-Clause | examples/forecast/0_ForecastIntro.ipynb | ankitakashyap05/Merlion |
Finally, we quantitatively evaluate the model. sMAPE measures the error of the prediction on a scale of 0 to 100 (lower is better), while MSIS evaluates the quality of the 95% confidence band on a scale of 0 to 100 (lower is better). | from scipy.stats import norm
from merlion.evaluate.forecast import ForecastMetric
# Compute the sMAPE of the predictions (0 to 100, smaller is better)
smape = ForecastMetric.sMAPE.value(ground_truth=test_data, predict=test_pred)
# Compute the MSIS of the model's 95% confidence interval (0 to 100, smaller is better)
l... | sMAPE: 4.1944, MSIS: 19.1599
| BSD-3-Clause | examples/forecast/0_ForecastIntro.ipynb | ankitakashyap05/Merlion |
Input/text processing needs to be implemented at two different levels. Main level: the objective is not to eliminate/stem/lemmatize too much data/info, so as not to lose precious information. The chronological steps are also important and should be implemented gradually, so that different algorithms can use the current state of the processed... | # IN: corrected list with words and characters (pipeline-Corrected words)
# OUT: processed input sent to different pipelines for computation: EER, DER, Grammar-Semantics, CVM | _____no_output_____ | Apache-2.0 | NIU-NLU dummy codes/2_Input_processing_I_dummy_code.ipynb | Cezanne-ai/project-2021 |
Objectives: • Linguistic evaluation: addressing special characters of the language (for example, removing hyphens by separating them into 2 words); • Removing words/characters not needed; • Addressing numerical data, punctuation, and emoji. Language specificities: yes (check language specificities). Dependencies: DER/CVM/EER/Gram... | # To dos:
# 1. Addressing special characters of the language in order for all words/characters to be included into a list that can be processed further (we will leave this task for linguists to assess).
# 2. All numerical characters found (including in a special vocabulary) are marked for DER.
# 3. Emoji are marked and... | _____no_output_____ | Apache-2.0 | NIU-NLU dummy codes/2_Input_processing_I_dummy_code.ipynb | Cezanne-ai/project-2021 |
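The three to-dos above can be sketched as a tiny tagging pass; a hedged illustration only — the function name, the tag labels, and the non-ASCII emoji heuristic are hypothetical, not part of the project code:

```python
import re

def mark_tokens(tokens):
    """Split hyphenated words, and tag numeric / emoji tokens for downstream pipelines."""
    out = []
    for tok in tokens:
        for part in tok.split("-"):  # 1. hyphenated words become two separate words
            if not part:
                continue
            if re.search(r"\d", part):
                out.append((part, "NUM"))    # 2. numeric characters are marked for DER
            elif not part.isascii():
                out.append((part, "EMOJI"))  # 3. crude emoji marker: any non-ASCII token
            else:
                out.append((part, "WORD"))
    return out

print(mark_tokens(["check-in", "room", "101", "\U0001F600"]))
```

The tags could then route each token to the EER/DER/Grammar-Semantics pipelines mentioned above without deleting any information from the input.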
Use existing codes. See the code example; do not use it before adapting it to the objectives. |
# import re # library for regular expression operations
# import string # for string operations
# from nltk.corpus import stopwords # module for stop words that come with NLTK
# from nltk.stem import PorterStemmer # module for stemming
# fr... | _____no_output_____ | Apache-2.0 | NIU-NLU dummy codes/2_Input_processing_I_dummy_code.ipynb | Cezanne-ai/project-2021 |
Boolean Operations | import numpy as np
x = np.array([1, 2, 3, 4, 5])
x < 3
x > 3
x <= 3
x >= 3
x != 3
# how many values less than 6?
np.count_nonzero(x < 6)
np.sum(x < 6)
# how many values less than 6 in each row?
x = np.arange(1,10).reshape((3,3))
print(x)
np.sum( x < 5, axis=1)
# are there any values greater than 7?
np.any(x > 7)
# are t... | _____no_output_____ | MIT | 6_numpy_boolean_operations.ipynb | ssozuer/numpy |
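Beyond counting and testing, the same boolean arrays can be used as masks to select elements (boolean indexing), which the cells above stop just short of:

```python
import numpy as np

x = np.arange(1, 10).reshape((3, 3))
mask = x < 5
selected = x[mask]               # 1-D array of the elements where the mask is True
combined = x[(x > 2) & (x < 7)]  # combine conditions with & / | (bitwise), not `and`/`or`
print(selected)   # [1 2 3 4]
print(combined)   # [3 4 5 6]
```

Note that plain `and`/`or` raise an error on arrays; elementwise combination requires the bitwise operators with parentheses around each comparison.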
Word Vector - Example. This notebook sets up a word vectorization workflow on the CAB item description dataset. Setup. Import the necessary libraries and configure run settings: | import os, math, csv, re, itertools
import numpy as np
import pandas as pd
from collections import Counter
import jellyfish as jyfs
import datetime, time
import pickle
import matplotlib.pyplot as plt
import inflect
from random import * | _____no_output_____ | MIT | Code/Archive/1_PrepCorpus-Copy1.ipynb | DannyGsGit/GTC2018Talk |
Obtain Corpus. We will import the CAB item description dataset in this section. The names must be contained within a list-type object of the format: ['Sentence 1', 'Sentence 2', 'Sentence 3'] | ItemDescription = {}
ItemPartNumber = {}
i = 0
with open('../Data/Private/CAB_ItemVersion.csv') as csvfile:
reader = csv.DictReader(csvfile)
for row in reader:
ItemDescription[i] = row['ItemDescription']
ItemPartNumber[i] = row['ItemNumber']
i += 1
# Extract list of part descriptions an... | _____no_output_____ | MIT | Code/Archive/1_PrepCorpus-Copy1.ipynb | DannyGsGit/GTC2018Talk |
Text Processing. Some light text processing will be performed to remove numbers, punctuation, special characters, whitespace, and single-character words. | ItemDescriptionList[0:10]
Basics. Remove all non-letter characters, collapse excess whitespace, drop single-character words, and lower-case the text. | sentences = [re.sub('[^A-Za-z]', ' ', e) for e in ItemDescriptionList] # Remove all non-letter characters
sentences = [re.sub('\s+', ' ', e).strip().lower() for e in sentences] # Strip excess whitespace and lower case
sentences = [' '.join( [w for w in sent.split() if len(w)>1] ) for sent in sentences] # Remove single ... | _____no_output_____ | MIT | Code/Archive/1_PrepCorpus-Copy1.ipynb | DannyGsGit/GTC2018Talk |
Resolve Typos/Abbreviations. The next text-processing step is to attempt to remediate misspellings and abbreviations with string-matching patterns. The first step is to generate a list of unique words, for which we need a term-frequency list. | wordList = " ".join(sentences).split()
counts = Counter(wordList)
# Make a data frame of words, their counts, and original/new index values (not yet modified)
## Convert counter to data frame
wordcount = pd.DataFrame.from_dict(counts, orient='index').reset_index()
wordcount = wordcount.rename(columns={'index':'word', ... | _____no_output_____ | MIT | Code/Archive/1_PrepCorpus-Copy1.ipynb | DannyGsGit/GTC2018Talk |
Now we iterate down the list and examine Jaro-Winkler distances as we go. If a distance is above our threshold, we update the "new index" value to the match. | # Prep word matrices
originalWords = wordcount.copy()
condensedWords = pd.DataFrame()
scores = []
# Set distance threshold
def distanceSet(currentCount, maxCount, baseline = 0.9):
# This function sets a logarithmically increasing Jaro-Winkler distance threshold
# based on the number of occurences of a word in ... | Completed: 100
Distance Threshold: 0.9740141148948647
================================================
Completed: 200
Distance Threshold: 0.9684340036458587
================================================
Completed: 300
Distance Threshold: 0.9644319793332964
================================================
Compl... | MIT | Code/Archive/1_PrepCorpus-Copy1.ipynb | DannyGsGit/GTC2018Talk |
Now save the results to a CSV and a pickle. | condensedWords.to_csv("./tempfiles/condensedWords.csv")
condensedWords_filename = ("./tempfiles/condensedWords_final.p")
condensedWords.to_pickle(condensedWords_filename)
condensedWords = pd.read_pickle(condensedWords_filename) | _____no_output_____ | MIT | Code/Archive/1_PrepCorpus-Copy1.ipynb | DannyGsGit/GTC2018Talk |
Next, we need to re-map the part descriptions to use the new words. | # Build the replacement map using original words as indeces
wordReplacement_map = {}
for record in range(0, (len(condensedWords))):
old_word = condensedWords.iloc[record]['word']
new_word = condensedWords.iloc[record]['new_word']
wordReplacement_map[old_word] = new_word
# Apply mappings to sentences using ... | _____no_output_____ | MIT | Code/Archive/1_PrepCorpus-Copy1.ipynb | DannyGsGit/GTC2018Talk |
Append Base PNs | # String-split the base part number. PNs are of the format C11-1010-23124; we want the first part (C11)
BasePNList = [x.split("-")[0] for x in ItemPartNumberList]
# Convert numerics to text
BasePNList = [w.replace('0', 'zero') for w in BasePNList]
BasePNList = [w.replace('1', 'one') for w in BasePNList]
BasePNList = [w.replac... | _____no_output_____ | MIT | Code/Archive/1_PrepCorpus-Copy1.ipynb | DannyGsGit/GTC2018Talk |
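The chain of `replace` calls above (truncated in this excerpt) could equivalently be collapsed into a single translation table; a hedged alternative sketch covering all ten digits:

```python
# str.maketrans accepts a dict mapping single characters to replacement strings,
# so one table can do the work of ten chained .replace() calls
digit_table = str.maketrans({
    "0": "zero", "1": "one", "2": "two", "3": "three", "4": "four",
    "5": "five", "6": "six", "7": "seven", "8": "eight", "9": "nine",
})

print("C11".translate(digit_table))  # → Coneone
```

This makes a single pass over each string instead of one pass per digit.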
Drop Infrequent Terms. Filter out words below an occurrence threshold. We will apply this filter to our sentences by replacing "new_word" in our wordReplacement_map dict with a blank. | # Extract the new words into a list, then count the occurrences
newWordList = " ".join(new_sentences).split()
newCounts = Counter(newWordList)
## Convert counter to data frame and sort by occurrence
newwordcount = pd.DataFrame.from_dict(newCounts, orient='index').reset_index()
newwordcount = newwordcount.rename(columns={'i... | _____no_output_____ | MIT | Code/Archive/1_PrepCorpus-Copy1.ipynb | DannyGsGit/GTC2018Talk |
Now that we have a map of word-to-frequency, let's run back through the sentences and drop words below some occurrence threshold. | # Apply mappings to sentences using the above map.
## Initiate an empty list to catch the new sentences
final_sentences = []
## Set a threshold for occurrence
occurence_threshold = 30
## Loop through sentences and replace words
for sent in new_sentences:
# Tokenize the sentence
tokenized_sent = sent.split()
... | word count
0 seat 20169
1 cab 20146
2 brkt 20114
3 bracket 20103
4 eaton 20102
| MIT | Code/Archive/1_PrepCorpus-Copy1.ipynb | DannyGsGit/GTC2018Talk |
Subsample Frequent Words | # Extract the new words into a list, then count the occurrences
newWordList = " ".join(final_sentences).split()
newCounts = Counter(newWordList)
## Convert counter to data frame and sort by occurrence
newwordcount = pd.DataFrame.from_dict(newCounts, orient='index').reset_index()
newwordcount = newwordcount.rename(columns={... | _____no_output_____ | MIT | Code/Archive/1_PrepCorpus-Copy1.ipynb | DannyGsGit/GTC2018Talk |
Now examine the distribution of word frequencies after sub-sampling | # Extract the new words into a list, then count the occurrences
finalWordList = " ".join(subsampled_sentences).split()
finalCounts = Counter(finalWordList)
## Convert counter to data frame and sort by occurrence
newwordcount = pd.DataFrame.from_dict(finalCounts, orient='index').reset_index()
newwordcount = newwordcount.ren... | _____no_output_____ | MIT | Code/Archive/1_PrepCorpus-Copy1.ipynb | DannyGsGit/GTC2018Talk |
Skip-Grams. Next, the unique words are indexed and our sentences are skip-grammed. | # Map words to indices
word2index_map = {}
index = 0
for sent in final_sentences:
for word in sent.lower().split():
if word not in word2index_map:
word2index_map[word] = index
index += 1
index2word_map = {index: word for word, index in word2index_map.items()}
vocabulary_size = len(i... | _____no_output_____ | MIT | Code/Archive/1_PrepCorpus-Copy1.ipynb | DannyGsGit/GTC2018Talk |
Inspect the top of the dictionary: | dict(list(index2word_map.items())[0:5]) | _____no_output_____ | MIT | Code/Archive/1_PrepCorpus-Copy1.ipynb | DannyGsGit/GTC2018Talk |
We will then generate skip-grams. The skip-gram window will start with the first word in a sentence and iterate through every word to the last in a sentence. The window size is parameterized. Skip-gram pairs are appended to a list. | # Initialize the skip-gram pairs list
skip_gram_pairs = []
# Set the skip-gram window size
window_size = 2
for sent in final_sentences:
tokenized_sent = sent.split()
# Set the target index
for tgt_idx in range(0, len(tokenized_sent)):
# Set range for the sentence
max_idx = len(tokenized_se... | _____no_output_____ | MIT | Code/Archive/1_PrepCorpus-Copy1.ipynb | DannyGsGit/GTC2018Talk |
Inspect some skip-gram pairs to ensure the behavior is as expected: | skip_gram_pairs[0:12] | _____no_output_____ | MIT | Code/Archive/1_PrepCorpus-Copy1.ipynb | DannyGsGit/GTC2018Talk |
For training, we will need to provide the neural network with batches of skip-gram pairs randomly sampled from our population. The next two blocks define the "get batch" function and print out a sample. | def get_skipgram_batch(batch_size):
instance_indices = list(range(len(skip_gram_pairs)))
np.random.shuffle(instance_indices)
batch = instance_indices[:batch_size]
x = [skip_gram_pairs[i][0] for i in batch]
y = [[skip_gram_pairs[i][1]] for i in batch]
return x, y
# batch example
x_batch, y_batch ... | _____no_output_____ | MIT | Code/Archive/1_PrepCorpus-Copy1.ipynb | DannyGsGit/GTC2018Talk |
Training. The embeddings are trained in mini-batches. First, we define placeholders for the inputs and outputs of size batch_size: | batch_size = 64
embedding_dimension = 200
negative_samples = 8
n_iterations = 50000
LOG_DIR = "logs/word2vec_cab" | _____no_output_____ | MIT | Code/Archive/1_PrepCorpus-Copy1.ipynb | DannyGsGit/GTC2018Talk |
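The notebook's actual TensorFlow graph is truncated from this excerpt. As a framework-agnostic illustration of what one negative-sampling update does to a (target, context) skip-gram pair, here is a toy NumPy sketch — this is an assumption-laden stand-in, not the notebook's code, and the vocabulary size, dimension, and learning rate are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)
vocab_size, dim, lr = 50, 8, 0.1
W_in = rng.normal(scale=0.1, size=(vocab_size, dim))   # target-word embeddings
W_out = rng.normal(scale=0.1, size=(vocab_size, dim))  # context-word embeddings

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sgns_step(target, context, negatives):
    """One SGD step of skip-gram with negative sampling for a single pair."""
    for idx, label in [(context, 1.0)] + [(n, 0.0) for n in negatives]:
        u = W_out[idx].copy()
        g = sigmoid(W_in[target] @ W_out[idx]) - label  # gradient of the logistic loss
        W_out[idx] -= lr * g * W_in[target]
        W_in[target] -= lr * g * u

before = sigmoid(W_in[3] @ W_out[7])
for _ in range(200):
    negatives = [int(n) for n in rng.integers(0, vocab_size, size=8) if n != 7]
    sgns_step(3, 7, negatives)
after = sigmoid(W_in[3] @ W_out[7])
print(f"pair score before {before:.3f} -> after {after:.3f}")  # score moves toward 1
```

Each step pulls the target and context vectors together while pushing the target away from a handful of randomly sampled negatives, which is the objective the notebook's mini-batch training optimizes at scale.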