hexsha stringlengths 40 40 | size int64 5 1.04M | ext stringclasses 6 values | lang stringclasses 1 value | max_stars_repo_path stringlengths 3 344 | max_stars_repo_name stringlengths 5 125 | max_stars_repo_head_hexsha stringlengths 40 78 | max_stars_repo_licenses listlengths 1 11 | max_stars_count int64 1 368k ⌀ | max_stars_repo_stars_event_min_datetime stringlengths 24 24 ⌀ | max_stars_repo_stars_event_max_datetime stringlengths 24 24 ⌀ | max_issues_repo_path stringlengths 3 344 | max_issues_repo_name stringlengths 5 125 | max_issues_repo_head_hexsha stringlengths 40 78 | max_issues_repo_licenses listlengths 1 11 | max_issues_count int64 1 116k ⌀ | max_issues_repo_issues_event_min_datetime stringlengths 24 24 ⌀ | max_issues_repo_issues_event_max_datetime stringlengths 24 24 ⌀ | max_forks_repo_path stringlengths 3 344 | max_forks_repo_name stringlengths 5 125 | max_forks_repo_head_hexsha stringlengths 40 78 | max_forks_repo_licenses listlengths 1 11 | max_forks_count int64 1 105k ⌀ | max_forks_repo_forks_event_min_datetime stringlengths 24 24 ⌀ | max_forks_repo_forks_event_max_datetime stringlengths 24 24 ⌀ | content stringlengths 5 1.04M | avg_line_length float64 1.14 851k | max_line_length int64 1 1.03M | alphanum_fraction float64 0 1 | lid stringclasses 191 values | lid_prob float64 0.01 1 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
93e0a2267320b4b8d3f124d015a6798150a2011a | 717 | md | Markdown | README.md | dpan331/Google_Ads__Campaign-Tracking--Template-checker | 75a0c32f51faf24cb34a9fca69056c14f63b72f2 | [
"MIT"
] | null | null | null | README.md | dpan331/Google_Ads__Campaign-Tracking--Template-checker | 75a0c32f51faf24cb34a9fca69056c14f63b72f2 | [
"MIT"
] | null | null | null | README.md | dpan331/Google_Ads__Campaign-Tracking--Template-checker | 75a0c32f51faf24cb34a9fca69056c14f63b72f2 | [
"MIT"
] | null | null | null | # Google_Ads__Campaign-Tracking--Template-checker
A short JavaScript Google Ads script that helps you check the tracking template set up in each campaign of your ad account.
There is also an additional script that can be set up at MCC level and iterates over all campaigns in all of your ad accounts that meet certain conditions (here, ad accounts that have Impressions > 0 for the LAST_30_DAYS).
The third script in the repository, the MCC Campaign Tracking Template applier, helps you apply a tracking template to all campaigns of the iterated ad accounts of the MCC.
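For illustration, a minimal Google Ads script along these lines might look like the sketch below (an assumption about the approach, not the exact code shipped in this repository):

```javascript
// Hypothetical sketch of a campaign tracking-template check (not the repository's exact script).
function main() {
  var campaigns = AdsApp.campaigns().get();
  while (campaigns.hasNext()) {
    var campaign = campaigns.next();
    // Read the tracking template configured at campaign level, if any.
    var template = campaign.urls().getTrackingTemplate();
    Logger.log(campaign.getName() + ': ' + (template || 'no tracking template set'));
  }
}
```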
🚸 These scripts are not maintained, so, in time, certain operations or even the entire scripts may not be functional.
| 79.666667 | 216 | 0.804742 | eng_Latn | 0.999321 |
93e12aa8f9ea999248a9afc87731ad0c26a2caf9 | 17,704 | md | Markdown | wisdom-civilization/nan-de-shi-yan-zai-chu-fa-dai-xu/2014.4-zhi-hui-wen-ming-sheng-chan-fang-shi-ji-qi-fa-xian-guo-cheng.md | chzionland/doc | b8e9fd99210c22ca69be279428258a5430fa73c2 | [
"MIT"
] | null | null | null | wisdom-civilization/nan-de-shi-yan-zai-chu-fa-dai-xu/2014.4-zhi-hui-wen-ming-sheng-chan-fang-shi-ji-qi-fa-xian-guo-cheng.md | chzionland/doc | b8e9fd99210c22ca69be279428258a5430fa73c2 | [
"MIT"
] | null | null | null | wisdom-civilization/nan-de-shi-yan-zai-chu-fa-dai-xu/2014.4-zhi-hui-wen-ming-sheng-chan-fang-shi-ji-qi-fa-xian-guo-cheng.md | chzionland/doc | b8e9fd99210c22ca69be279428258a5430fa73c2 | [
"MIT"
] | null | null | null | # 2014.4 智慧文明生产方式及其发现过程
牟其中(在狱中以第三人称撰写)
二0一四年四月
## **一、智慧文明生产方式是社会主义社会的基石**
### 1 ……
南德试验是奉胡耀邦同志“希望四川那几个研究马克思主义的年青人,在新长征中再立新功”的批示,于1980年2月13日开始的一场以社会主义改造已经完成之后如何建立起社会主义商品生产关系为目标的试验。
34年(1980年——2014年)的南德试验获得了两个成果:(一)为中国民营企业的发展不断探索着前进的道路;(二)发现了智慧文明生产方式。
今年1月7日是南德第三次蒙难十五周年忌日,我们用《南德蒙难十五周年祭》(上篇、下篇)和《新春抒怀》的形式总结了南德43年(1971年——2014年)的悲壮试验和预告了下一阶段的试验内容。
牟其中先生认为,工业革命之所以没有在我国发生,使我国长期处于农业文明时代的最根本原因是以“普天之下莫非王土”和从汉代以来已经十分成熟的盐铁专卖制度为代表的单一的国家所有制万古长青的结果。从1840年开始的救亡图存运动的经验反复证明,要实现民族复兴非得实现多种所有制经济平等竞争的市场经济制度不可。
因此,我们自愿以血肉之躯不断为中国民营企业发展探索着前进的道路。
1982年邓小平同志在十二大开幕词的主题词是《走自己的路,建设具有中国特色的社会主义》。
1984年1月,牟其中在四川万县地区收容所的监室中写出了《论中国特色的社会主义学说和我们的历史使命》。
这是至今为止发现的第一篇把“中国特色社会主义”认定为一个理论体系,而非一句时髦政治口号的长篇论文。它比莫干山会议还早八、九个月,比巴山轮会议早一年有余。
在文中牟其中初步提出了这个理论体系的几个特征:一、社会主义商品经济的经济结构;二、中国共产党的领导地位;三、该体系是实现中华民族伟大复兴的唯一正确的指导思想。我们的历史使命就是高举这一理论大旗,去完成中华民族的伟大复兴。
在文中牟其中认为中国特色社会主义是篇大文章,当时仅仅破了题,在有些地方连题也还没有破。
中央给牟其中第二次平反之后(1984年9月)他立刻又恢复了南德试验,用薄一波同志的话说,“一边搞理论,一边搞实践”。
继续试验的结果就是发现了智慧文明生产方式,认定这是中国特色社会主义理论的经济科学基础。虽然其理论尚很不完善,但结构已经搭建起来了,已经有了一个可供继续完善的基础。
从上个世纪七十年代末至今,从托夫勒到里夫金等中外未来学家均感觉到了人类又一次站到了一个全新时代的大门之前,探头探脑,试图猜测大门后面的景象和引起这一变化的原因,写出了一批以“第三”为开头的书籍,什么第三次浪潮、第三波、第三次工业革命等等,牟其中称他们为“第三派”。
### **2** ……
牟其中认为,他们每一个人都看到了不少东西,其探索精神也应得到肯定和鼓励,但不可讳言,第三派的研究仍然停留于对表面现象的观察阶段,即使如里夫金试图对其变化的原理进行解释,他提出人类历史发展阶段变化的原因是因为交流传播手段的发展和能源的应用,例如文字的出现、印刷技术、电报、电话、磨房水车、化石能源、清洁能源等等。
这批“第三派”未来学家的共同缺陷是一致回避马克思主义,特别是历史唯物主义的结果。
从1971年牟其中在四川万县市组织马列主义学习小组,1975年春发展为马克思主义研究会以来,几十年的认真学习,特别是在从1980年开始的南德试验的反复实践比较中,产生了对历史唯物主义和劳动价值论的坚定信心,认定马克思主义,特别是其中的历史唯物主义和劳动价值论,是迄今为止,经受住了无数重大历史事件检验的历史哲学和经济哲学。
回避科学的历史哲学和经济哲学,自然不可能得出科学的结论。
恩格斯在《社会主义从空想到科学的发展》一书中,对于社会变化的终极原因,作过十分精辟、精彩的论述,可惜现在监狱不准向牟其中送一切资料——也包括马克思、恩格斯的著作——加上年深日久,记忆模糊,已无法复述了。
其大意是社会变化的终极原因,不应该去书斋里的经典中寻找,也不应该去哲学家的头脑中寻找,而应该去从身边日常生产关系的变化中寻找;当幸福化为痛苦,真理化为无稽,常识化为荒诞时,就清楚表明其赖以存在的经济基础已经在开始发生变化了。
“第三派”的未来学家们向我们预告,人类将进入人类历史的下一个阶段——即后现代社会。必然变化的根据是什么呢?是传播方式和能源形式的变化。但是,传播方式和能源形式又为什么必然变化呢?未来学家们没有继续深入下去的能力了,因为他们思维的工具库中,没有历史唯物主义和劳动价值论这两件可以解开人类社会发展之谜的利器。
南德试验的最重大成果就是发现了这一必然变化的秘密,即资本主义社会的经济基础已经开始动摇,动摇的原因是因为智能工具的出现和广泛使用,必将引发全世界生产方式的彻底变化,从而推动上层建筑的逐步改变,人类将通过这一生产方式变化的一系列程序,进入一个崭新的时代。
这种变化,大约五百年发生一次。上一次变化的标志性事件是1492年哥伦布发现新大陆,这一次变化的标志性事件是1992年中国共产党第十四次代表大会。在这次历史性的会议上,中国共产党提出了建立社会主义市场经济的改革目标。
孟夫子说,五百年必有王者兴。
南德试验发展了马克思的劳动价值论,将劳动分解为三种形态,即体力劳动、记忆劳动和智慧劳动,认为由于工具的不断进步,依次成为轮流支配人类经济活动的主角。
机械工具的广泛使用,决定了人类在自然资源之外,又学会了使用能源,二者的共同使用,推动了人类告别农业文明时代进入工业文明时代。这一变化的本质是机械工具扩大了人类体力劳动的能力,推动了欧洲文艺复兴(意大利人将其称之为五百年代)时代的到来和在此基础上发生的工业革命。
智能工具——由电脑和电脑网络组成——的使用,决定了人类在自然资源、能源资源之外又学会了使用信息资源。三者的共同使用保证了人类又将必然告别工业文明时代,进入一个全新的新时代,我们把这个新时代称之为智慧文明时代。这一变化的本质是智能工具扩大了人类记忆劳动的能力。
南德试验将智慧定义为创造新知识、新经验、为解决困难问题寻找新方法的一种能力。
马克思将资本主义生产方式——工业文明生产方式——定义为以资本为中心的一个资本不断积累和积聚的过程。
南德试验将智慧文明生产方式定义为,以智慧为中心的一个不断发展员工智慧、完善、发展员工和保证员工实现人生最大价值的过程。
### **3** ……
这是政治经济学的理论创新,是对马克思主义政治经济学的发展。我们将在后面的第二部分中,对南德试验如何发现这一全新生产方式的过程和这一新政治经济学的理论框架,作出简约的记叙和介绍。
但是,也仅仅是一个简约的理论框架而已,因为监狱对牟其中实行特别管理,不允许他与任何人保持联系,即使是诉讼代理人夏宗伟,也仅仅允许他向夏宗伟提出生活必需品的要求,决不允许送给他任何资料——包括马克思、恩格斯著作和十八大文件等等。因此,牟其中实在无法完成哪怕是一种稍稍符合学术规范、能介绍学术渊源的小册子。我们在此前给中央的报告中承诺过,自由后的重要任务之一,是尽快完成这本小册子报送中央。
自从牟其中1984年元旦写出《论中国特色的社会主义学说和我们的历史使命》,将中国特色社会主义定义为一个理论体系之后,南德试验一直在探索什么是中国特色?特别是什么是社会主义?
1987年十三大提出了社会主义初级阶段概念,以后中央一直沿用这一概念,并且不断重申我国将长期处于这一历史阶段。对于什么是社会主义却很少有人探求。正是对社会主义这一概念尚无科学的定义,致使斯大林的伪社会主义得不到有力的批判。值得注意的是,习近平同志在十八届三中全会第二次会议的讲话中指出,提出社会主义市场经济体制改革目标,是一个重大的理论创新,解决了围绕世界上其他社会主义国家长期没有解决的一个重大理论问题。
近年来媒体上介绍、探索社会主义本质的著作开始多了起来,例如西北大学华炳啸在建国六十四周年时出版了《超越自由主义——宪政社会主义思想言说》;于幼军出版了《社会主义四百年》;中宣部最近出版了《社会主义五百年》,并将其列为干部理论读物。
从上个世纪八十年代以来,邓小平、江泽民等同志不断发问,什么是社会主义?如何建设社会主义?
由于对这一重大的理论问题中央至今尚未给出一个决定性的结论,故对执政党的党建理论建设和国家治理体系及治理能力的现代化建设,造成了理论上的混乱。
希拉里讥笑我国是国家资本主义,我国老百姓私下讥讽为中国特色的资本主义。
习近平同志在十八届三中全会第二次全体会议上的讲话指出:“今天,坚持和发展中国特色社会主义,全面深化改革,有效应对前进道路上可以预见和难以预见的各种困难和风险,都会提出新的课题,迫切需要我们从理论上作出新的科学回答。”
传统理论对社会主义社会的解释是,社会主义社会是一个比资本主义社会更富裕、更自由、更平等、更民主的社会。但是,在无产阶级革命的不断实践中,逐渐演变为了一种自上而下的治理方式。目前两个概念的混乱,引发了思想路线领域一次又一次的重大争论,从而导致了执政党执政路线的左右摇摆,成为了国家治理体系和治理能力现代化的拦路虎,也阻碍了社会主义市场经济理论体系的完善和发展。
这一混乱产生的根本原因在于马克思在新的生产工具出现之前,从而在资本主义社会制度的经济基础——工业文明生产方式——被一种更先进生产方式取代之前,提出了当时由于生产力发展水平,归根到底是科学技术水平尚未发展到成为另一种新的生产方式出现的条件之前,就提出了下一个历史阶段才可能实现的历史任务,从而否定了自己在1859年《〈政治经济学批判〉序言》中的结论:“无论哪一种社会形态,在它们所能容纳的全部生产力发挥出来之前,是绝不会灭亡的;而新的更高的生产关系,在它们存在的物质条件在旧社会的胎胞里成熟之前,是绝不会出现的。所以人类始终只能提出自己能够解决的任务。”
南德试验的重大成就就是发现了一种必然取代工业文明生产方式的全新生产方式,也就是智慧文明生产方式。
### **4** ……
工业文明生产方式是以资本为中心的,所以树立其上的必然是维护资本利益的资本主义社会;智慧文明生产方式是以智慧为中心的,而智慧是劳动的一种形式,单个个体人是发生智慧的载体,因此,在智慧文明生产方式中,完善人、发展人就成为了全体社会成员为了保障社会财富增加的一致目标。马克思在《共产党宣言》中对未来社会的憧憬——“在那里,每个人的自由发展是一切人自由发展的条件”——就可以这样通过一种非常市场化的方式,通过承认人人皆具有利己本性的客观现实,以这样一种非常功利、非常世俗的方式得到实现。因为它把全社会的每一个人都动员起来了,表现为一种不可能抗拒的社会力量。
由于单个个体人或曰自然人是发生智慧的载体,在智慧文明生产方式中,智慧又是该生产方式的中心,因此,只有在这种生产方式中,人类才第一次有可能把人类的千年梦想——天赋人权、生而平等——放置于科学的基础之上。
进行真正的理论创新是十分艰难的,甚至是危险的,即使在我们这样一个以马克思主义为指导思想的政党执政的国家,要发展马克思主义,还得冒着杀头的风险。
从1980年正式开始南德试验起计算,至今已经整整33年,加上为之准备试验的一次入狱,其中牟其中共计已经三度入狱,三度与死神擦肩而过,关押时间总计已超过了20年有余。
但是,牟其中至今还健康地活着这一事实本身就可以说明,牟其中的探索并不孤单。
从文革至今,三次与死神擦肩而过,都与党内高层人士力保有关。第一次没有被执行死刑与周恩来有关;第二次没有被执行死刑与赵紫阳有关;第三次——也就是这一次——汪道涵大约2002年左右捎话给我们:“先保住性命,解决问题的条件现在还不成熟。”
可是,即使再重大的理论创新,无论逻辑多么严密,证据多么充分,又具有多么深厚的学术渊源以及多么符合学术规范,这一切并不能证明其创新必然具有真理性,它必须接受实践的反复检验,必须在实践的法庭上辩护自己存在的权利或接受否定的判决。
这就是我们为什么十八大之后不停顿地向十八大中央急促申诉,要求立即恢复湖北中行信用证案件开庭,清理洪山宾馆会议错误决定的原因。
我们自己清楚的了解,只要一公开开庭,牟总立即就会获得自由,因为构成案件的证据全部都属伪造。
只要一获得自由,牟总会抓紧每一天时间恢复南德试验,建立起智慧文明生产方式的企业模型。
如果这个模型获得成功,表现出了比工业文明生产方式企业无可比拟的优越性,其它国内外企业不需要号召,趋利的本性就会促使他们竞相效仿。如果国内外绝大部分企业都采用了这一企业模型,那就表明一种不同于资本主义生产方式的新的企业制度已经取代了资本主义生产方式的企业制度。
这将是一个五百年一遇的伟大事件;是人类历史上最壮观的一刻;人类将从此步入以智慧(劳动的一种形式)为中心的社会主义社会。
这将是中国民营经济在中国共产党改革开放思想路线指导下、保护下,对人类发展做出的最伟大贡献。
我们理解这才可以称得上是中华民族的伟大复兴,因为中国共产党的改革开放思想路线寻找到了有别于西方的另外一条农耕文明现代化的道路。这也必然是东方文明复兴的基础;也将是我国最大的、必定会使其他国家竞相效仿的软实力。
## **二、南德试验发现智慧文明生产方式的过程及其理论框架**
### 1 ……
近20年前,南德试验发表了《我们发现了一种新情况》,向社会各界报告看见了“一个新时代的桅杆”。
可惜这一重大发现与人类历史上许多重大的发现一样,一露头就被斥之为骗子,遭到唾骂,遭到打击,甚至被送上了宗教法庭和宗教裁判所,如哥白尼、布鲁诺和伽利略等人曾经遭受过的苦难一样。赫胥黎将此现象总结为:“历史给人类的告诫是,一种崭新的真理惯常的命运是始于异端,终于迷信。”
南德提出了不以资本为中心,而宣布要建立起一种以智慧为中心的生产方式,在当时,直至现在,都还被视为一种异端邪说。这就是低级下流的非法出版物《大陆首骗牟其中》为什么竟能煽动起全国几乎所有的媒体,一起对南德口诛笔伐,进行舆论围剿,一起落入了《北京地下〈万言书〉》精心策划的用“改革开放培育出了一个新生的资产阶级”来否定改革开放的陷阱;又用“其代表人物牟其中就是大陆首骗”来证明凡民营企业家都是新生资产阶级分子的政治大阴谋的深层次社会根源。
身陷舆论漩涡中心的南德只能选择沉默。与台风的中心是平静的一样,沉默是金,显示着自信的力量。索尔仁尼琴在诺贝尔颁奖词中说:“一句真话可以抵得上整个世界的份量。”
南德拥有这样“真话”:在法庭公布的案件预审材料中,记录着判决南德有罪的三个缺一不可的证据,全部都是伪证。南德需要做的,也是当时仅仅可能做的,就是对已经看见了的,正在向我们驶来的“桅杆”的新时代进行理论上的证明。
十五年与世隔绝的铁窗生涯,意外地解脱了牟其中的商旅繁琐,给予了他一个面壁十年的“达摩圣境”。只有在这样一个囚衣素食、心空性灵的环境中,牟其中才有机会对朦胧的“新时代桅杆”仔细端详,对自1971年开始的,已经进行了43年南德试验中发现的“异端现象”,进行政治经济学意义上的深入研究。
毛泽东在《实践论》中,对从感觉到理解的认识过程,作过精彩的说明,他说,感觉到了的东西并不等于理解了它,只有理解了的东西,才能够更深刻地感觉它。
南德试验对智慧文明生产方式的认识就经历了这样一个过程。
1992年飞机业务成功,美国《时代》周刊将其评为当年世界十大新闻之首。其时恰逢小平南方讲话发表,西方几十家主要媒体一起涌向南德,非得挖出邓小平在幕后如何给四川老乡牟其中“开小灶”,制造出民营企业买飞机的轰动效应,来配合南方讲话政治意义重大的“幕后新闻”不可。
面对西方媒体的“穷追猛打”,牟其中一遍一遍地介绍事实真相,反复说明没有什么政治背景。如果非得找出什么政治背景的话,就是改革开放给予了他这个昔日国有企业工人一个创造奇迹的机会。
真是越描越黑,牟其中越是证明没有背景,西方媒体越是认定一定有重大背景。《华盛顿邮报》的报道文章中说,仅有56美元资金(不知是按什么汇率换算的,300元人民币等于56美元。报道中用的就是56美元),在无资金市场的中国,一个个体户凭什么能购买四架民航飞机?狗咬人不是新闻,人咬狗才是新闻。中国一个个体户把天上的四架飞机咬住了,当然是西方媒体眼中最大的新闻。
媒体可以应付,但自己却不能“应付”自己,总得给自己一个合理的解释。1987年携仅有的2000元人民币进京,住在前门横街打磨场地下室旅馆,每日一元房费的牟其中,凭什么能在短短的五年之内,就创造出了这个被《时代》周刊列为当年世界十大新闻之首的奇迹?
从此,牟其中和他领导的南德研究院的员工们就开始了对12年(1980——1992)南德试验中全部案例的梳理。梳理中发现了一个不断反复出现的异端现象:每当南德资金告急,山穷水尽处于绝境,只有寻找出一种新的经营方法才能死里逃生时,南德总能想出超出常识的“怪招”,最后转危为安,又见财源滚滚;但一当南德资金充盈,富得流油,又会按常识去建厂房、买设备、办实业来摆脱“空手道”的嫌疑,其结果无一例外地就是损兵折将,大败亏输,再次大难临头。如是者至少反复了六次以上。
### 2 ……
当一种异端现象反复出现,背后往往隐藏着我们尚未认识到的某种规律;当常识化为荒谬,真理化为无稽,幸福化为痛苦的时候,那一定是这些概念赖以生存的大地已开始动摇,在地壳的深处,一种新的生产方式正在开始生成,它以无声无息的方式改变着社会的常识、观念和判断。
其实在南德试验中不断冒出来的“异端现象”,早在上个世纪二十年代的美国已经出现了,只不过美国人没有意识到这是一个新时代降临的信号而已。
1923年,沃尔特·迪斯尼和罗伊·迪斯尼开始在好莱坞一间车库里制作动画,这就是今天誉满全球的迪斯尼公司的处女作。1931年,工程师格哈德·费希尔在加利福尼亚州帕洛阿尔一个车库内研发金属探测器。1938年,威廉·休利特和戴维·帕卡德(当今全球电脑大公司惠普公司的两位创始人)在帕洛阿尔租下了一间车库,这里就成了美国历史上最著名的企业的诞生地。
“车库发明家”一词从此出现,但直到上世纪六七十年代才流行起来。美国史密斯学会勒梅尔森发明创新研究中心历史学家埃里克·欣茨认为:“车库企业家实际上是二战后才出现的。”
“车库企业家”的最新现象是“创客”现象的出现,所谓“创客”,就是那些能够仅凭一台电脑,利用互联网就能将自己的各种创意转变为实际产品的人。
从上世纪二十年代的“车库企业家”到今天活跃在互联网上的“创客”,也许他们自己也没有察觉到,他们正在颠覆着资本主义社会制度的根基——资本主义生产方式。
马克思经典地把资本主义生产方式概括为,一个以资本为中心的资本不断积累和集聚的过程。
很显然,无论车库企业家也好,创客也好,南德试验也好,我们的共同特征都是不以资本为中心——虽然是被迫的,因为我们大家都缺乏货币——而是以寻找出一种少花钱、最好不花钱的新方法,并以此为中心来开展经营活动的。
智慧就是对寻找新方法能力的概括或表述。所以,我们将这种新生产方式定义为智慧文明生产方式。
面对二战以后日益发展壮大的“车库企业家”现象,1990年联合国研究机构提出了“知识经济”概念,指出“人类正在步入一个以智力资源的占有、配置,知识的生产、分配、使用(消费)为最重要因素的经济时代”。
1996年联合国经济合作与发展组织在其年度报告中把知识经济定义为“以知识为基础的经济”。显然已经感觉到把这种不以资本为中心的新经济现象定义为“知识经济”太武断。但是,大约经合组织也还停留于感觉阶段,无法对“车库企业家”现象作出理论的说明。
可惜,我国忽略了联合国经合组织1996年的这个决定性的修正,囫囵吞枣的接受了此前的错误概念。“知识经济”概念一时风靡全国。
针对这一重大忽略,南德试验于1996年发表了《我们发现了一种新情况》,筹备组建“智慧量研究所”,提出了智慧文明生产方式的新概念。
但是,当时南德也仅仅只能看见正在向我们驶来的智慧文明生产方式大船的桅杆,无法对智慧文明生产方式作出理论上的论述。
对智慧文明生产方式作出理论论述的任务就这样落到了“夜行不休”面壁十年的牟其中的肩上。
自上个世纪八十年代开始,第三次浪潮、第三波、第三次工业革命等等各种新思潮层出不穷,虽然观点各异,但有一个共同的特征,就是“第三”。他们把人类社会的演进简约地表达为第一次浪潮、第二次浪潮,把目前人类社会的变化概括为第三次浪潮或曰第三次工业革命,但是对浪潮为什么会不断汹涌向前的原因,却没有给予太多的注意。里夫金也仅仅意识到和人与人之间的交流手段,例如印刷术、电报、电话和能源的应用,例如乡村水磨和化石燃料的使用等等有关。
但是,牟其中不满意这些解释,认为所有这些解释都只是注意到了变化展现出来的现象,无法解释为什么会发生变化的原因。牟其中认为,如果不能挖掘出这些变化的原因,就无法掌握其运动的规律,也就无法自觉地利用这些规律去更好、更快、更大规模地在更大范围内谋求发展。
### 3 ……
比较了各种历史哲学之后,牟其中认为还是得回到马克思的历史唯物主义和劳动价值论上来。
牟其中认为,所谓的第一、第二、第三等等不外乎是人类对劳动分工形式认识不断深化的反映。
牟其中认为,伴随科学技术的发展进步,特别是标志性的新工具的出现,劳动的三种分工形式,即体力劳动、记忆劳动和智慧劳动,依次成为人类生产活动中的主要形式,其余两种则退居到辅助地位。
当人类只能使用手工工具时,体力劳动就成为了劳动的主要形式,人类只能处于农业文明生产方式发展阶段,也只能使用一种资源,即自然资源。
由于机械工具的出现——其中包括代替扩大体力劳动的动力机械及应用机械和提高交流效率和范围的通讯工具——记忆劳动上升为劳动的主要形式,人类则可以更容易地在更大范围内以更高的效率吸取前人的知识和当代人的经验,更容易创造出解决困难问题的新方法。人类凭此进入了工业文明时代,在这个时代,人类除自然资源之外,又学会了使用能源。
万吨巨轮取代了一叶扁舟,10万吨级锻压机取代了乡村铁匠,电报、电话取代了八百里加急驿站。但建造巨轮、铺设电缆和制造锻压机都需要大量的资金。为了不断提高生产力的水平,市场经济的竞争规律逼使整个生产方式成为了一个不断以资本为中心的资本不断积累和集聚的疯狂过程。
正因为如此,资本主义生产方式才能够在不到一百年的时间里创造出的财富,超过了此前人类创造出财富的总和。
但历史不可能在此终结,仍以不可抗拒的规律继续着自己的步伐。
上个世纪中叶,一种新的工具出现了,那就是电脑。稍后,电脑网络又出现了。电脑加电脑网络组成了扩大记忆能力的智能工具。
1995年牟其中携南德卫星去华尔街寻找上市的机会。当时美国网络经济也刚刚起步,拥有卫星通讯资源的南德,成为了众多网络公司竞相追逐的对象。超乎寻常的热情把毫无这方面知识准备的牟其中弄得手足无措。听完南德纽约公司负责人的介绍之后,牟其中说:“那么,也就是说,我们可以从网络上取得一切,甚至包括面包吗?”他得到了肯定的回答。
回到北京的牟其中立刻购买了路透社终端。从此南德大楼大厅的电子大屏幕上,每时每刻都保留有不断滚动更新的2000条路透社新闻。又铺设了专门用于上网的光缆,成为了全国第二家铺设此类光缆的单位。第一家是中科院高能所。
可惜,大风起于青萍之末,以后十余年在中国掀起的滔天巨浪,几乎葬送了改革开放路线的《北京地下〈万言书〉》此时已磨刀霍霍,开始动手了。
1996年3月18日准备再赴美国的牟其中——万言书将其定性为改革开放培养出来的新生资产阶级的代表——在北京机场被“边控”,日夜被监视了。
但倔强的牟其中仍然发表了《我们发现了一种新情况》,报告“南德已经看到了一个新时代的桅杆”了。
在“达摩圣境”中参禅悟道十五年的牟其中,终于开始敷座说经了。
### 4 ……
他说,所谓第三次浪潮、第三波、第三次工业革命、车库企业家现象等等,不外乎是一种新的生产方式在旧的生产方式母体胎胞里已经发育成熟,产期将至的产前躁动而已。
牟其中认为,与工业文明生产方式诞生的原因是由于机械工具的出现一样,智慧文明生产方式诞生的原因也是由于智能工具的出现。机械工具取代和扩大的是人类的体力劳动;智能工具取代和扩大的是人类的记忆劳动。
电脑的本质是记忆。
1956年,美国教育心理学家本杰明·布鲁姆发现,美国学校的测试题95%以上是在考学生的记忆力。于是他提出了一个新的学问分类法,该分类法把学问分为知识、理解、应用、分析、综合、评估几个类别。
电脑把人的记忆从对卷轶繁浩的苦读中解放了出来,让人的精力可以集中到理解、应用、分析、综合、评估等领域,因此,更容易创造出新知识和解决问题的新方法。此前需要皓首穷经才能记住的知识,如今只需轻轻一点鼠标立刻就会呈现在你面前。
电脑网络的本质是联系。
1995年南德业务国际化的需求,产生了对与世界各地业务人员联系的巨大需求,为了实现可视电话交流的设想,甚至被一个科技骗子轻易骗走了470万元人民币。但是今天,视频交流已经成为了一件十分平常的交流方式了。
因此,今天每一个人不但可以站在此前人类生产出来的全部知识的肩膀上创造新知识,还可以同时与全世界每一个手持电脑终端的当代人分享彼此的经验,共同讨论一个问题,在激烈争论中创造碰撞出新知识和新方法的火花。
这是一幅多么壮丽的景象:一个人可以隔空与历代风流对话,动员当代全世界的每一个人与自己共同思维。一个超越了时空限制的人难道还不是一位巨人?我们想到了恩格斯在《自然辩证法导言》中“这是一个需要巨人,也必将产生巨人的时代”的期盼。牟其中认为,在这个新时代里,任何人都可能成为巨人。
此外,这些不是文艺复兴时代而是智慧文明时代的巨人,还拥有他们前辈们不可能拥有的优势——资本主义生产方式为他创造出来的资金市场。当资金如任何普通商品一样在市场上可以自由买卖的时候,它当年拥有的唯我独尊的光环也就黯然失色了。
在资金市场上,拿什么与资金这种商品交换呢?或者说拿什么去购买资金呢?资金不能与资金交换,因为这是毫无意义的同义反复。资本必须增值才能生存的本性——这种不增长必然死亡的本性是由于不可抗拒的通货膨胀决定的——逼使它只能与一个可以增值的计划进行交换。
这个增值计划可以是增加劳动力,也可以是增加新设备,但成本最低的方式是在既存条件下找到一种效率更高的方法,这其中就包括使用新技术和创造出一种新的商业模式。
寻找这种新方法的能力就是智慧,也就是我们今天举国上下天天急切呼唤,已经上升为国家战略的创新。
于是一种以智慧(寻找新方法的能力)为中心,而不再以资本为中心的生产方式破土而出的历史条件、政治条件、社会条件也就具备了。新生产方式——智慧文明生产方式——能够产生出比资本主义生产方式高出百十倍效率的秘密在于,在这个新时代里,除开自然资源、能源之外,人类又学会了使用信息这种使用一次数量便翻一番的奇特资源。
牟其中认为,南德试验的下一个目标,就是把这种全新的生产方式——智慧文明生产方式——的一个细胞:智慧文明生产方式中的企业模型设计出来,运转起来,并且向全社会示范出来。一种理论是否具有真理性,不在于是否符合学术逻辑、学术规范,而在于是否能经受住实践的检验。牟其中的下一个目标,就是要让这种生产方式公开接受全社会的实践检验。
牟其中认为,发现资本主义生产方式中的异端现象是难能可贵的,能对这些中外异端现象进行梳理,并作出政治经济学意义上的说明,更是难能可贵的。但是,正如无数人观察到过划破夜空的闪电,是因为电流引发而又无法制造出发电设备结束黑暗一样,只留下了如郭沫若在话剧《屈原》中借屈原之口对当年黑暗政治进行一番控诉却无法结束黑暗政治的无奈。南德试验的下一个抱负,就是要设计出一种全新的企业制度,把偶然发生的闪电现象,变为任何人可以操控、可以重复的电流。在这种企业中,资本主义生产方式中基于劳动与资本对立的所有权制度会发生翻天覆地的变化——当今世界,无论中外,大多实行的是这种产权制度——盈利不再是企业的唯一目标,甚至不是第一目标,企业的首要目标是培养员工的创新能力,为员工提供创新环境和承担必要的创新风险。但是,企业并不会因为放弃了把盈利作为企业唯一目标而减少利润,恰恰相反,智慧经济企业会以资本主义生产方式企业不可想象的盈利能力吸引全社会的注意力。因此,目前以追求利润为主要目标的传统企业会经此途径发展为社会企业:他人发展成为自己企业发展的条件,员工发展成为老板发展,或者说,劳动力发展成为资本发展的条件。一个人人为我、我为人人的道德规范社会将在智慧文明生产方式的基础上逐渐发展起来。
南德试验不但不断遭受到计划经济势力的打击,也遭受到自由主义朋友们的嘲笑,嘲笑牟其中为当代的唐·吉诃德。但牟其中矢志如一。他坚信历史唯物主义是经受住了长期历史检验的人类至今拥有的少数几门大学问之一;他坚信经济发展是世界变化的终极原因。只要我们能不断创造出更新的方法,总有一天可以达到各取所需的社会目标。
牟其中一生至今已经三度入狱,总计时间已超过了二十年。虽然三次面临死刑,但他仍坚持认为自己是幸运的,因为历史让他生活在了一个资本主义生产方式已经气息奄奄,社会主义生产方式已经在敲门的大历史转变时代;他还庆幸自己在年青时代就接受了历史唯物主义,又最先受到了邓小平理论光芒的照耀,从死囚牢里直接奔向了创办改革开放后第一家民营企业的改革开放最前线。
马斯洛理论把人的需求分为五个阶段,第五需求是自我价值的实现。因缘际会,牟其中虽然九死一生,但始终能有机会把自己个人的命运与中华民族伟大复兴的命运、与人类在资本主义生产方式之后的命运联系在一起,他说,人生如此,夫复何求?
他认为这就是历史给予他与他率领的南德员工们的最高奖赏。
## **三、南德试验的下一个目标——建立起智慧文明生产方式的企业模型**
### 1 ……
十八大结束了自小平去世之后爆发的那股企图复辟文化大革命路线的逆流,再一次拨正了中国历史的航向,其政治意义与粉碎“四人帮”仅伯仲之间。虽然中央有意回避,学者也故意淡化——例如吴敬琏仅将过去了的这几十年,总结为“是市场起决定性作用,或政府起决定性作用”的一场二十年大争论——但它在中国现代化进程中的关键性作用却是无法回避的,随着顾雏军案、南德案真相的披露,对小平身后爆发的这场事关中国前途与命运大搏斗性质的定性,必然再次成为中国舆论场的中心课题,也必然再次成为中央政治局的重大议题。
从2004年之后,经常闯入牟其中脑海的是鲁迅的一首小诗:“万家墨面没蒿莱,敢有歌吟动地哀,心事浩茫连广宇,于无声处听惊雷。”
“不是在沉默中爆发,就是在沉默中灭亡。”十八大的胜利召开和习近平在就任总书记之后不到一个月的时间内去深圳莲花山邓小平像前的坚毅亮相,是对小平去世后那股几乎断送了改革开放路线的逆流开始进行清理的一声惊雷。
牟其中当时的感觉与他1976年9月9日下午4时从高墙外村里大树上喇叭里听到毛泽东死讯时一样:一个时代结束了,一个新的时代开始了。
从此,牟其中对改革前景的担忧转化为了信心百倍的对第三次创业蓝图的勾勒。
我们的幸运之处在于15年的牢笼生活并没有遏制住我们对智慧文明生产方式这一种全新的经济增长方式的探索。铁窗镣铐可以锁住手脚,却无法锁住思想穿越时空的自由。梳理南德试验的经验与教训,跟踪企业界的沉浮与创新,偃仰啸歌,与千古风流对话,安步绕庭,省是非于己心,信智慧已经在握,盼明日之飞腾。
十年面壁的结果就是奉献给诸位的这一智慧文明生产方式的理论框架。
虽然自上个世纪八十年代以来直至今天的舆论场,对我们赖以生存的生产方式正在发生着的地震式变动的原因,进行过,并且仍然在热烈地进行着种种猜测与争论——从托夫勒到里夫金——,但我们自信,我们比他们更接近真理。
我们比托夫勒、比里夫金等人的幸运在于我们不是在书斋里通过观察世界得出了某种结论,而是对自己在血与火的实践中经验的总结;更大的幸运在于我们有条件、有能力通过世界范围内大规模的实践,来证明我们的结论。里夫金为了证明自己的假设,与2000多年前的孔老夫子奔波于诸侯列国之间推销自己的主张一样,里夫金为了推销自己的主张游走于欧盟官宦豪门,年前还到了我国游说。
### **2** ……
我们的下一个目标是创建世界上第一家科学的社会企业,它的宗旨不仅仅是追求企业利润,而是履行社会责任。这个社会责任是通过企业经营将员工的成功,作为企业第一目标的制度安排体现出来的。如果这种企业制度在全世界普及,人类不但可以消灭贫困,还会解决贫富差距越来越大的严重问题。
贫富差距越来越大不仅是正在威胁着越来越富裕的我国的最大社会问题,也是威胁世界未来的最严重的社会问题。据最新统计,全球有85个人拥有全世界50%的财富。今年达沃斯经济论坛的主题就是“贫富差距威胁世界未来”。
贫富差距越来越大是以资本为中心的生产方式在高速创造财富时必然产生的副产品。
为了解决这一弊端,在过去的200年里,人们在创造出现代企业制度的同时,还在努力创建非营利组织,企图通过非营利组织来弥补现代企业制度的副产品——贫富差距和环境破坏——给社会带来的危害。
但是,非营利组织(包括慈善和各种自愿者组织)所能提供的资源和整个人类对其的需求比较起来,只是杯水车薪。必须创造出一种在生产财富的同时也生产公平正义和青山绿水的企业制度来。
这不是美妙的幻想,也不是痴人说梦。工业文明生产方式中的美妙幻想和痴人说梦,到了智慧文明生产方式中就成为了顺理成章的普通现实,甚至是维持生产活动正常运转的必备条件。
化腐朽为神奇的秘密在于智慧文明生产方式有着比工业文明生产方式不可比拟的效率,和自然人本身——智慧的载体——成为了生产方式的中心这一事实。与在工业文明生产方式中只有集中和积聚更多的资本才可能获得更大的利润一样,在智慧文明生产方式中,只有完善职工、发展职工才可能获得更多的智慧,从而获得更大利润。
目前全国上上下下都在争说创新,创新甚至已经上升为了基本国策——建立创新型国家。但是,对于什么是创新?却很少有人讨论,至于如何才能培养出创新的能力,更少有人关注;对于设计出创新能力的制度安排,似乎连有资格提出这个问题的人都还没有出现。
在我们的理论中,所谓创新,就是一种创造新知识、新经验,提出解决困难问题新方法的能力。我们把这种能力称之为智慧。
智慧不是知识,不是学问,也不是聪明,它表现为一种灵感、灵机一动或顿悟。
智慧既然首先是“智”,就必然与知识、学问有极其密切的联系。一个没有任何知识的人,绝对不可能产生智慧。所以若要希望产生灵感、顿悟、灵机一动,首先就得学习,并且是“学而时习之”、“格物、致知”,坚持养成天天学习、经常学习的习惯;其次还得坚持养成经常从事体育锻炼的习惯,保持旺盛的思考问题的精力。
“慧”字从心。“彗”,意思为扫除,与“心”合,则意为净心,除去瞻前顾后、患得患失、个人利害等杂念。
大家都了解牟总有节假日去京郊名寺漫游的习惯。避开京城的喧嚣,漫步于名山古刹青山绿水之间;听晨钟暮鼓,看日出日落,人的思想能得到净化和升华。
我们是潭柘寺后山茶园的常客。茶叶不过两种姿态,沉与浮;人生不过两种命运,枯与荣;国运不外两种状态,盛与衰。
若时间稍宽,我们还会驱车五台山。记得1998年的元旦我们夜宿中台,在这里迎来了南德暴风骤雨的1998年。
五台山为我国四大佛教名山之一,供奉的是以智慧为长的文殊菩萨。中台有一条通往东台的大智路。1998年元旦,我们经过大智路,拾级而上,经过了很长很长很窄很陡的一段阶梯,登上了东台寺去看日出。
### 3 ……
牟总为什么不遗余力地去拜访大山名寺呢?除开对大自然的热爱之外,就是希望在参拜中能体会佛经中智慧的真谛。在他有限的知识中,似乎只有佛教和希腊神话把智慧尊奉到了信仰的高度。在佛学中,达到“慧”须靠“时时勤拂拭”的“不舍修行”。什么是修行呢?即我们说的实践。也就是说,要得到智慧,必须坚持“学而时习之”,吸收前人积累的知识;必须“时时勤拂拭”,扫除心灵尘埃,达到诚意正心境界;最后,还必须“不舍修行”,即不断勇敢实践。
由此可见,渊博的知识,心性的修养和勇敢的实践是产生智慧的必备条件,只有同时具备了以上三个条件的人,在某种危机与兴奋时刻,才可能产生灵感,迸发出解决难题的思维火花。
但是,仅仅停留于追求研究智慧如何产生的条件还远远不够,因为它具有极大的偶然性。就如欣赏闪电的美丽不是目的,我们必须控制它、利用它,制造出一种设备,进而形成制造这种设备的制度,使只要处于这种制度中的任何人,都可以把这种设备生产出来,从而使一闪之电成为一种任何人都可以操作、都可以重复的行为结果,而不再是神秘的灵光一现和来无踪去无影的顿悟。
即将出现在全社会面前的就是这样一种企业制度,我们将其命名为社会企业制度,用此与传统的以追求利润为第一目标的现代企业制度相区别。
自1997年冬开始,经营智慧一直是牟总的梦想。
那年的冬天牟总读到了曾任里根总统经济顾问利斯顿先生的文章《第三次工业革命》,真如醍醐灌顶。特别是他的结论:“二十年之后,世界上最强大的国家和地区不再取决于面积大小、物产丰富和人口多寡,而仅仅取决于它吸引和管理人力资本的能力。”
一个国家和地区如此,一个企业也必定如此。
但是,要拥有这种吸引和管理来无踪、去无影,与精灵无异的智慧的能力,真比登天还难。
现在回忆起来,从八十年代初期开始的整个南德试验一直都在锲而不舍地追求这一目标——在监狱中也未中断——如何能用最少的资金、最短的时间,赚更多的钱、办更多的事。
1984年《经济日报》记者王青与国务院办公厅负责落实姚依林、胡启立批示的闫京华同志一同到了四川万县市(现为重庆市万州区)。采访中牟总向王青提出了一个问题:资本是有历史的,商业资本之后是工业资本,工业资本之后是金融资本——即美国目前的经济发展阶段——金融资本之后呢?当时王青也无法回答牟总的提问。
南德1987年进京,1990年牟总在《南德视界》上发表了《人才就是资本》,算是自己对1984年向王青提问的回答。但是,什么是人才呢?牟总也回答不了,只能用一句万能的空话安慰自己:能解决实际问题的人就是人才。为了解决人才标准问题,《南德视界》一口气发表了“人才八论”,但仍然无法解决金融资本之后资本的形式问题。
从某种意义上说,正是我们对金融资本之后资本形式的不断探索被人利用,成为了此次南德浩劫的原因之一。如精灵一样的智慧与货币无关,倒与佛经教义中“空”的概念有着千丝万缕的联系。与货币无关,又可以获得巨大的财富,这不就是诈骗吗?
赫胥黎高度概括了人类智慧产生的过程:“始于异端,终于迷信。”但是,“感觉到了的东西并不等于理解了它,只有理解了的东西,才能更深刻的感觉它。”
### 4 ……
经过43年的不断探索——包括三次牢狱之灾和再一次与死神擦肩而过——我们终于抓住智慧这个精灵了。至此——中国马年新春(甲午年春节)——我们终于可以宣布,南德试验不但感觉到智慧的存在,而且已经可以理解它了。对智慧这个精灵理论框架的说明,在上面第二部分中已做出了阐述。
既然我们已经可以对智慧这个精灵的前世今生作出理论上的论证,自然也就可试图驯服它来为我们服务了。
南德社会企业集团公司——新南德的正式全称——将是一种通过吸引全球智慧和服务全球智慧,从而表现出比目前全世界其他企业——目前全世界任何企业无一例外均为资本主义生产方式企业,因为其活动都是以货币资本为中心的——不可比拟的效率,自然也就可以获得无可比拟的利润。
不可比拟的效率和无可比拟的利润必然导致企业宗旨、组织形式、管理原则、分配制度等等企业基本制度的一系列翻天覆地变化,归根到底是以产权制度为标志的生产关系的变化。
但是,这一翻天覆地的变化仅仅是一个开端,仅是社会主义社会即将降临人间的一只报春小鸟。
虽然荷兰人在十七世纪初期才发明了现代资本主义制度,他们将银行、股票交易所、信用、保险,以及有限责任公司有机地统一成一个相互贯通的金融和商业体系,由此带来了财富爆炸式的增长,但是,资本主义制度的许多基本概念,早在文艺复兴时期的意大利已经出现了(以上资料引用于美国人约翰·戈登著、祁赋翻译的《伟大的博弈——华尔街金融帝国的崛起》)。
目前的我们也只相当于文艺复兴时期那批最早发现了一种新的经济生产方式的先行者,也许至少还需要经过人类两、三百年的共同努力,才可能把我们即将建立的一种企业制度,发展成为一如今天全世界共同遵循的一套经济体系。虽然我们今天在世的任何人——包括伟伟、平川、徐国顺、谢努、牟枫、牟樱这批南德的青年近卫军——都不可能亲眼见到那一天了,但这又有什么关系呢?人类就是在这样薪火相传、生生不息的努力中,从茹毛饮血的丛林时代走过来的。
作为高举邓小平理论旗帜改革开放大军中的一个群体,作为伟大中华民族的优秀子孙,我们尽到了自己的责任,把一种比资本主义生产方式效率更高的生产方式创造出来了,并作为东方文明再度辉煌的基石,奉献给了全人类。
| 47.463807 | 535 | 0.84721 | zho_Hans | 0.576568 |
93e19885c81219844e3a3a5a1360947dd9d3e76e | 1,978 | md | Markdown | articles/defender-for-iot/concept-event-aggregation.md | youngick/azure-docs.ko-kr | b6bc928fc360216bb122e24e225a5b7b0ab51d7e | [
"CC-BY-4.0",
"MIT"
] | 1 | 2021-03-12T23:37:18.000Z | 2021-03-12T23:37:18.000Z | articles/defender-for-iot/concept-event-aggregation.md | zoonoo/azure-docs.ko-kr | 3ca44c0720204e9f9a6ef9fe2c73950aa17251c1 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/defender-for-iot/concept-event-aggregation.md | zoonoo/azure-docs.ko-kr | 3ca44c0720204e9f9a6ef9fe2c73950aa17251c1 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: 이벤트 집계(미리 보기)
description: IoT 용 Defender 보안 에이전트는 로컬 장치에서 데이터 및 시스템 이벤트를 수집 하 고 처리 및 분석을 위해 데이터를 Azure 클라우드에 보냅니다.
ms.date: 1/20/2021
ms.topic: conceptual
ms.openlocfilehash: c0280e97549009a1e4911c072a7a8ec052684b4e
ms.sourcegitcommit: f611b3f57027a21f7b229edf8a5b4f4c75f76331
ms.translationtype: MT
ms.contentlocale: ko-KR
ms.lasthandoff: 03/22/2021
ms.locfileid: "104779327"
---
# <a name="event-aggregation-preview"></a>이벤트 집계(미리 보기)
IoT 용 Defender 보안 에이전트는 로컬 장치에서 데이터 및 시스템 이벤트를 수집 하 고 처리 및 분석을 위해 데이터를 Azure 클라우드에 보냅니다. IoT 마이크로 에이전트의 Defender는 새 프로세스 및 모든 새 연결 이벤트를 포함 하 여 다양 한 유형의 장치 이벤트를 수집 합니다. 새 프로세스와 새 연결 이벤트는 장치에서 1 초 이내에 자주 발생할 수 있습니다. 이 기능은 포괄적인 보안을 위해 중요 하지만, 보안 에이전트에서 전송 하는 메시지 수가 빠르게 충족 되거나 IoT Hub 할당량 및 비용 제한을 초과할 수 있습니다. 그럼에도 불구 하 고 이러한 이벤트는 장치를 보호 하는 데 중요 한 매우 중요 한 보안 정보를 포함 합니다.
장치를 보호 하는 동안 추가 할당량 및 비용을 줄이기 위해 Defender for IoT 에이전트는 다음과 같은 유형의 이벤트를 집계 합니다.
- ProcessCreate (Linux만 해당)
- ConnectionCreate (Azure RTOS 전용)
## <a name="how-does-event-aggregation-work"></a>이벤트 집계는 어떻게 작동 하나요?
IoT 용 Defender 에이전트는 기간 또는 기간에 대 한 이벤트를 집계 합니다. 간격 기간이 지나면 에이전트는 추가 분석을 위해 집계 된 이벤트를 Azure 클라우드로 보냅니다. 집계 된 이벤트는 Azure 클라우드로 전송 될 때까지 메모리에 저장 됩니다.
에이전트가 이미 메모리에 보관 되어 있는 것과 동일한 이벤트를 수집 하는 경우 에이전트는 에이전트의 메모리 사용 공간을 줄이기 위해이 특정 이벤트의 적중 횟수를 늘립니다. 집계 기간이 지나면 에이전트는 발생 한 각 이벤트 유형의 적중 횟수를 보냅니다. 이벤트 집계는 수집 된 각 이벤트 형식의 적중 횟수에 대 한 집계 일 뿐입니다.
## <a name="process-events"></a>이벤트 처리
프로세스 이벤트는 현재 Linux 운영 체제 에서만 지원 됩니다.
*명령줄* 과 *userid* 가 동일할 때 프로세스 이벤트는 동일 하다 고 간주 됩니다 .
프로세스 이벤트의 기본 버퍼는 32 프로세스 이며, 그 후에는 버퍼가 순환 되 고 가장 오래 된 프로세스 이벤트는 새 프로세스 이벤트를 위한 공간을 만들기 위해 무시 됩니다.
## <a name="network-connection-events"></a>네트워크 연결 이벤트
네트워크 연결 이벤트는 현재 Azure RTOS 에서만 지원 됩니다.
네트워크 연결 이벤트는 *로컬 포트*, *원격 포트*, *전송 프로토콜*, *로컬 주소* 및 *원격 주소가* 동일할 때 동일 하 게 간주 됩니다.
네트워크 연결 이벤트의 기본 버퍼는 64입니다. 새 네트워크 이벤트는 다음 수집 주기까지 캐시 되지 않습니다. 캐시 크기를 늘리는 경고가 기록 됩니다.
## <a name="next-steps"></a>다음 단계
Defender에서 [IoT 보안 경고를](concept-security-alerts.md)확인 합니다.
| 41.208333 | 368 | 0.708797 | kor_Hang | 1.00001 |
93e1b0600e76f2d18848e9ebdaf534ec05a98eb9 | 2,229 | md | Markdown | TODO.md | jklymak/dolfyn | eea98fe0021886cf654e25293c385c5c3707ff8d | [
"BSD-3-Clause"
] | null | null | null | TODO.md | jklymak/dolfyn | eea98fe0021886cf654e25293c385c5c3707ff8d | [
"BSD-3-Clause"
] | null | null | null | TODO.md | jklymak/dolfyn | eea98fe0021886cf654e25293c385c5c3707ff8d | [
"BSD-3-Clause"
] | null | null | null | General
=======
Talk w/ @jmcvey3
======
- How to document the dolfyn-view data-objects (`<obj>.velds`)? Is building this even a good idea? ... I started this. I need help from @jmcvey3. Especially I think we need a page that documents the `velocity.Velocity` class. Maybe this gets added to the 'shortcuts' page?
- Add simplify/standardize functions
Testing
=======
Coverage
- Look at coverage report.
Add tests to confirm that all *scripts* work.
Documentation
=============
Create a 'contributing to DOLfYN' page.
- Email me!
- Create tasks on github 'projects'? or something like [MPL enhancement proposals (MEPs)](https://matplotlib.org/devel/MEP/index.html)?
Packaging
=========
Version++ (1.0?!)
New PyPi entry
Build a conda install
File I/O
========
Support for AWAC waves data (AST)
Support for TRDI Sentinel V instruments
Find faster solution to Nortek burst read hack
Occasional TRDI sampling frequency calculation error - calculation depends on a variable that can be haphazardly written by TRDI software (VMDAS)
Data Processing
===============
Coordinate systems:
- Support for rotating directly from 'inst' to 'principal'
ADV burst mode: need to add checks that turbulence averaging doesn't "cross bursts".
Implement Reynolds stress rotations (e.g. rotating u'w' from 'inst' to 'principal' coordinates)
This is in the `reorg-add_omat` branch. The big issue is: `orientmat` is bad (`det != 1`) after averaging data from a moving instrument.
- Do quaternions average better?
- Obviously there are some issues with doing rotations of some data based on the average orientation, but it still seems like we ought to be able to do it if it's moving slowly, right?
- Should we enforce no reverse rotations on averaged objects (unless they are fixed, e.g. principal->earth?, or other check for no motion?)?
What if I want 30-minute turbulence averages spaced 15-minutes apart?
- add `n_pad` option to `ADVBinner.__init__`, or
- Add capability for `n_fft` > `n_bin`?
What about dropping data from averaging? Is this something we should support? Via negative `n_pad`?
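A rough sketch of the arithmetic behind the `n_pad` idea above (illustration only; `n_pad` is not an existing option, and the names here are hypothetical):

```python
# 30-minute averages spaced 15 minutes apart = consecutive bins overlap by half.
fs = 16                  # sampling frequency [Hz] (example value)
n_bin = 30 * 60 * fs     # samples per averaging window
step = 15 * 60 * fs      # desired spacing between window starts
n_pad = n_bin - step     # samples shared between consecutive windows

# e.g. a hypothetical ADVBinner(n_bin, fs, n_pad=n_pad) would start a new
# window every `step` samples instead of every `n_bin` samples.
```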
ADCPs:
- Support for calculating principal heading by ensemble?
- Support for motion-correcting ADCP data
| 30.958333 | 271 | 0.732167 | eng_Latn | 0.985535 |
93e1f8fbd1fa4173c4ea617efc783116d15aaa5c | 951 | md | Markdown | _definitions/bld-habitancy.md | digitallawyer/openlegaldictionary | a318d6c73c3d8e33756d947add397dac7f25cca2 | [
"MIT"
] | 5 | 2018-08-07T21:57:01.000Z | 2022-02-26T13:29:20.000Z | _definitions/bld-habitancy.md | digitallawyer/openlegaldictionary | a318d6c73c3d8e33756d947add397dac7f25cca2 | [
"MIT"
] | 1 | 2018-08-07T22:29:07.000Z | 2018-08-07T22:45:46.000Z | _definitions/bld-habitancy.md | digitallawyer/openlegaldictionary | a318d6c73c3d8e33756d947add397dac7f25cca2 | [
"MIT"
] | 2 | 2020-12-26T17:22:04.000Z | 2021-02-12T21:35:50.000Z | ---
title: Habitancy
letter: H
permalink: "/definitions/bld-habitancy.html"
body: Settled dwelling in a given place; fixed and permanent residence there. This
  term is more comprehensive than “domicile,” for one may be domiciled in a given
  place though he does not spend the greater portion of his time there, or though
  he may be absent for long periods. It is also more comprehensive than “residence,”
  for one may reside in a given place only temporarily or for short periods on the
  occasion of repeated visits. But in neither case could he properly be called an
  “inhabitant” of that place or be said to have his “habitancy” there. See Atkinson
  v. Washington & Jefferson College, 54 W. Va. 32, 46 S. E. 253; Hairston v. Hairston,
  27 Miss. 711, 61 Am. Dec. 530; Abington v. North Bridgewater, 23 Pick. (Mass.) 170.
  And see Domicile; Residence
published_at: '2018-07-07'
source: Black's Law Dictionary 2nd Ed (1910)
layout: post
--- | 52.833333 | 87 | 0.747634 | eng_Latn | 0.997783 |
93e208acd1e273d1d3a9b395ccb01fa6c26872eb | 17 | md | Markdown | README.md | 1sharanshetty/singadana | bae08c2ee99b378a25ce2a8b0180940a7a5c94cb | [
"Unlicense"
] | null | null | null | README.md | 1sharanshetty/singadana | bae08c2ee99b378a25ce2a8b0180940a7a5c94cb | [
"Unlicense"
] | null | null | null | README.md | 1sharanshetty/singadana | bae08c2ee99b378a25ce2a8b0180940a7a5c94cb | [
"Unlicense"
] | null | null | null | # singadana
test
| 5.666667 | 11 | 0.764706 | eng_Latn | 0.377697 |
93e3039eac240cd5fbb3034f929514f35aa5040e | 724 | md | Markdown | docs/fog-versions.md | zeeid/Project-Fog | b142eab749598b9bd4d8980b3ed67178c62e2d64 | [
"Unlicense"
] | null | null | null | docs/fog-versions.md | zeeid/Project-Fog | b142eab749598b9bd4d8980b3ed67178c62e2d64 | [
"Unlicense"
] | null | null | null | docs/fog-versions.md | zeeid/Project-Fog | b142eab749598b9bd4d8980b3ed67178c62e2d64 | [
"Unlicense"
] | null | null | null | # ☁️ Project Fog version 3.01 ☁️
#
## _Use this only if the universal installer fails._
##
>For Debian 9 and Debian 10 versions:
```bash
wget https://github.com/korn-sudo/Project-Fog/raw/main/files/installer/debv301a && chmod +x ./debv301a && ./debv301a
```
##
>For Debian 11 and higher:
```bash
wget https://github.com/korn-sudo/Project-Fog/raw/main/files/installer/debv301b && chmod +x ./debv301b && ./debv301b
```
##
>For Ubuntu 18:
```bash
wget https://github.com/korn-sudo/Project-Fog/raw/main/files/installer/ubv301a && chmod +x ./ubv301a && ./ubv301a
```
##
>For Ubuntu 20 and higher:
```bash
wget https://github.com/korn-sudo/Project-Fog/raw/main/files/installer/ubv301b && chmod +x ./ubv301b && ./ubv301b
```
| 22.625 | 116 | 0.685083 | kor_Hang | 0.20279 |
93e320c8a5475896f4f3d7a24e51a69ccd04ba85 | 51 | md | Markdown | README.md | LoveInShenZhen/HelloVertx | dd9cecf186089d673802aed6a4efa366bae8ed98 | [
"Apache-2.0"
] | null | null | null | README.md | LoveInShenZhen/HelloVertx | dd9cecf186089d673802aed6a4efa366bae8ed98 | [
"Apache-2.0"
] | null | null | null | README.md | LoveInShenZhen/HelloVertx | dd9cecf186089d673802aed6a4efa366bae8ed98 | [
"Apache-2.0"
] | null | null | null | # HelloVertx
Hello-world version of Vert.x + Kotlin
| 17 | 37 | 0.784314 | eng_Latn | 0.347009 |
93e3813171a802dea580c5a47fd5f00d6a989514 | 468 | md | Markdown | README.md | seoinkwon/JS-study | 84604f16fa495d67acbf52596f09a0acfc32119b | [
"FSFAP"
] | null | null | null | README.md | seoinkwon/JS-study | 84604f16fa495d67acbf52596f09a0acfc32119b | [
"FSFAP"
] | null | null | null | README.md | seoinkwon/JS-study | 84604f16fa495d67acbf52596f09a0acfc32119b | [
"FSFAP"
] | null | null | null | # John's Kimchi-Chicken-Stew
<br>
> (insert chicken here) `Mmm~ tasty~~ Did you just get back from the store?`
<br>
#### John's Chicken Stew With-Kimchi (존의 김치묵은지 닭도리탕) is a repo created for studying JavaScript
<br>
#### We promise to diligently finish the course from the end of January to the beginning of February (otherwise, no chicken stew)
<br>
#### The last one standing will post a "chicken for dinner tonight" proof shot on Github. Chicken, chicken
<br>
<br>
###### The JS study works through the Vanilla JS course on the Nomad Coders site. https://repl.it/@seoinkwon <<< repl link
<br>
----
###### © seoinkwon, All Rights Reserved.
| 16.714286 | 85 | 0.598291 | kor_Hang | 1.00001 |
93e663178de37e776752577ab865ab681606d6f9 | 4,977 | md | Markdown | doc/plugins/error-log-logger.md | nanamikon/apisix | 1dfde635e3e4f9090be6b1b2254b184e5a12f4b1 | [
"Apache-2.0"
] | 1 | 2021-01-13T08:44:42.000Z | 2021-01-13T08:44:42.000Z | doc/plugins/error-log-logger.md | nanamikon/apisix | 1dfde635e3e4f9090be6b1b2254b184e5a12f4b1 | [
"Apache-2.0"
] | 3 | 2022-02-28T07:30:30.000Z | 2022-03-31T07:31:34.000Z | doc/plugins/error-log-logger.md | nanamikon/apisix | 1dfde635e3e4f9090be6b1b2254b184e5a12f4b1 | [
"Apache-2.0"
] | 1 | 2021-07-05T04:36:10.000Z | 2021-07-05T04:36:10.000Z | <!--
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
-->
- [中文](../zh-cn/plugins/error-log-logger.md)
# Summary
- [**Name**](#name)
- [**Attributes**](#attributes)
- [**How To Enable And Disable**](#how-to-enable-and-disable)
- [**How To Update**](#how-to-update)
## Name
`error-log-logger` is a plugin which pushes the log data of APISIX's error.log to TCP servers.
This plugin provides the ability to send log entries that match the configured log level to monitoring tools and other TCP servers.
The log data is pushed to your external TCP servers in batches. If the logs are not sent immediately, don't worry: they will be sent automatically once the timer in the Batch Processor expires.
For more info on the Batch Processor in Apache APISIX, please refer to
[Batch-Processor](../batch-processor.md)
## Attributes
| Name | Type | Requirement | Default | Valid | Description |
| ---------------- | ------- | ----------- | ------- | ------- | ---------------------------------------------------------------------------------------- |
| host | string | required | | | IP address or the Hostname of the TCP server. |
| port | integer | required | | [0,...] | Target upstream port. |
| timeout | integer | optional | 3 | [1,...] | Timeout for the upstream to connect and send, unit: second. |
| keepalive | integer | optional | 30 | [1,...] | Time for keeping the cosocket alive, unit: second. |
| level | string | optional | WARN | | The filter's log level, default warn, choose the level in ["STDERR", "EMERG", "ALERT", "CRIT", "ERR", "ERROR", "WARN", "NOTICE", "INFO", "DEBUG"], the value ERR equals ERROR. |
| tls | boolean | optional | false | | Control whether to perform SSL verification |
| tls_server_name | string | optional | | | The server name for the new TLS extension SNI |
| batch_max_size | integer | optional | 1000 | [1,...] | Max size of each batch |
| inactive_timeout | integer | optional | 3 | [1,...] | Maximum age in seconds when the buffer will be flushed if inactive |
| buffer_duration | integer | optional | 60 | [1,...] | Maximum age in seconds of the oldest entry in a batch before the batch must be processed |
| max_retry_count | integer | optional | 0 | [0,...] | Maximum number of retries before removing from the processing pipe line |
| retry_delay | integer | optional | 1 | [0,...] | Number of seconds the process execution should be delayed if the execution fails |
## How To Enable And Disable
The error-log-logger is a global plugin of APISIX.
### Enable plugin
Enable the plugin `error-log-logger` in `conf/config.yaml`; once it is listed there, the plugin will take effect.
It does not need to be bound to any route or service.
Here is an example of `conf/config.yaml`:
```yaml
plugins: # plugin list
... ...
- request-id
- hmac-auth
- api-breaker
- error-log-logger # enable plugin `error-log-logger
```
### Disable plugin
Remove or comment out the plugin `error-log-logger` from `conf/config.yaml`.
```yaml
plugins: # plugin list
... ...
- request-id
- hmac-auth
- api-breaker
#- error-log-logger # enable plugin `error-log-logger
```
## How to set the TCP server address
Step: update the attributes of the plugin
```shell
curl http://127.0.0.1:9080/apisix/admin/plugin_metadata/error-log-logger -H 'X-API-KEY: edd1c9f034335f136f87ad84b625c8f1' -X PUT -d '
{
"host": "127.0.0.1",
"port": 1999,
"inactive_timeout": 1
}'
```
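The other attributes from the table above can be set through the same endpoint. For example, a hypothetical TLS-protected collector that should only receive entries of level `ERROR` and above might be configured like this (host, port and batch size are placeholders):

```shell
curl http://127.0.0.1:9080/apisix/admin/plugin_metadata/error-log-logger -H 'X-API-KEY: edd1c9f034335f136f87ad84b625c8f1' -X PUT -d '
{
    "host": "logs.example.com",
    "port": 5044,
    "tls": true,
    "tls_server_name": "logs.example.com",
    "level": "ERROR",
    "batch_max_size": 500
}'
```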
| 48.794118 | 249 | 0.568616 | eng_Latn | 0.950668 |
93e757bae106fe246c0400693f44926036e40a41 | 7,376 | md | Markdown | articles/application-insights/app-insights-java-live.md | OpenLocalizationTestOrg/azure-docs-pr15_de-AT | ca82887d8067662697adba993b87860bdbefea29 | [
"CC-BY-3.0",
"CC-BY-4.0",
"MIT"
] | 1 | 2020-11-29T22:55:06.000Z | 2020-11-29T22:55:06.000Z | articles/application-insights/app-insights-java-live.md | Allyn69/azure-docs-pr15_de-CH | 211ef2a7547f43e3b90b3c4e2cb49e88d7fe139f | [
"CC-BY-3.0",
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/application-insights/app-insights-java-live.md | Allyn69/azure-docs-pr15_de-CH | 211ef2a7547f43e3b90b3c4e2cb49e88d7fe139f | [
"CC-BY-3.0",
"CC-BY-4.0",
"MIT"
] | 2 | 2019-07-03T20:05:49.000Z | 2020-11-29T22:55:15.000Z | <properties
pageTitle="Anwendung Einblicke für Java webapps bereits live"
description="Starten Sie die Überwachung einer Anwendung, die bereits auf dem Server ausgeführt wird"
services="application-insights"
documentationCenter="java"
authors="alancameronwills"
manager="douge"/>
<tags
ms.service="application-insights"
ms.workload="tbd"
ms.tgt_pltfrm="ibiza"
ms.devlang="na"
ms.topic="article"
ms.date="08/24/2016"
ms.author="awills"/>
# <a name="application-insights-for-java-web-apps-that-are-already-live"></a>Anwendung Einblicke für Java webapps bereits live
*Anwendung Informationen ist in der Vorschau.*
Haben Sie eine Anwendung, die bereits auf dem J2EE-Server ausgeführt wird, können Sie beginnen, ohne Code ändern oder das Projekt kompilieren [Anwendung](app-insights-overview.md) zum Überwachen. Mit dieser Option erhalten Sie Informationen über HTTP-Anfragen an die Server nicht behandelte Ausnahmen und Leistungsindikatoren.
Sie benötigen ein [Microsoft Azure-](https://azure.com)Abonnement.
> [AZURE.NOTE] Die Schritte auf dieser Seite hinzugefügt Ihrer Anwendung zur Laufzeit SDK. Diese Laufzeitinstrumentation ist nützlich, wenn Sie nicht zum Aktualisieren oder Neuerstellen von Quellcode. Sie können wir empfehlen Ihnen jedoch [das SDK zum Quellcode hinzufügen](app-insights-java-get-started.md) stattdessen. Das gibt Ihnen mehr Optionen wie Code schreiben Benutzeraktivität nachverfolgen.
## <a name="1-get-an-application-insights-instrumentation-key"></a>1. einen Anwendung Einblicke instrumentationsschlüssel abrufen
1. [Microsoft Azure-Portal](https://portal.azure.com) anmelden
2. Erstellen einer neuen Application Insights-Ressource

3. Legen Sie den Anwendungstyp Java Web Application.

4. Suchen Sie den instrumentationsschlüssel der neuen Ressource. Sie müssen diesen Schlüssel kurz in das Codeprojekt einfügen.

## <a name="2-download-the-sdk"></a>2. Laden Sie das SDK
1. [Application Insights SDK für Java](https://aka.ms/aijavasdk)herunterladen
2. Entpacken Sie auf dem Server SDK in das Verzeichnis, aus dem die Projektbinärdateien geladen werden. Wenn Sie Tomcat verwenden, wäre dieses Verzeichnis in der Regel unter`webapps\<your_app_name>\WEB-INF\lib`
## <a name="3-add-an-application-insights-xml-file"></a>3. Hinzufügen von Application Insights XML-Datei
Erstellen Sie ApplicationInsights.xml im Ordner SDK hinzugefügt. Setzen sie den folgenden XML-Code.
Ersetzen Sie den instrumentationsschlüssel, den von Azure-Portal haben.
<?xml version="1.0" encoding="utf-8"?>
<ApplicationInsights xmlns="http://schemas.microsoft.com/ApplicationInsights/2013/Settings" schemaVersion="2014-05-30">
<!-- The key from the portal: -->
<InstrumentationKey>** Your instrumentation key **</InstrumentationKey>
<!-- HTTP request component (not required for bare API) -->
<TelemetryModules>
<Add type="com.microsoft.applicationinsights.web.extensibility.modules.WebRequestTrackingTelemetryModule"/>
<Add type="com.microsoft.applicationinsights.web.extensibility.modules.WebSessionTrackingTelemetryModule"/>
<Add type="com.microsoft.applicationinsights.web.extensibility.modules.WebUserTrackingTelemetryModule"/>
</TelemetryModules>
<!-- Events correlation (not required for bare API) -->
<!-- These initializers add context data to each event -->
<TelemetryInitializers>
<Add type="com.microsoft.applicationinsights.web.extensibility.initializers.WebOperationIdTelemetryInitializer"/>
<Add type="com.microsoft.applicationinsights.web.extensibility.initializers.WebOperationNameTelemetryInitializer"/>
<Add type="com.microsoft.applicationinsights.web.extensibility.initializers.WebSessionTelemetryInitializer"/>
<Add type="com.microsoft.applicationinsights.web.extensibility.initializers.WebUserTelemetryInitializer"/>
<Add type="com.microsoft.applicationinsights.web.extensibility.initializers.WebUserAgentTelemetryInitializer"/>
</TelemetryInitializers>
</ApplicationInsights>
* Instrumentationsschlüssel zusammen mit jedem Element der Telemetrie gesendet und Anwendung Einblicke in die Ressource angezeigt wird.
* Die Komponente HTTP-Anforderung ist optional. Telemetriedaten zu Anfragen und Reaktionszeiten wird automatisch an das Portal gesendet.
* Korrelation von Ereignissen ist eine Erweiterung der HTTP-Anforderung Komponente. Jede Anforderung vom Server empfangenen Bezeichner zugewiesen, und dieser Bezeichner für jedes Element der Telemetrie als Eigenschaft 'Operation.Id' als Eigenschaft hinzugefügt. Sie können Sie durch Setzen von Filtern in [Diagnose Suche](app-insights-diagnostic-search.md)jeder Anforderung zugeordnete Telemetrie korrelieren.
## <a name="4-add-an-http-filter"></a>4. Fügen Sie einen HTTP-filter
Öffnen Sie die Datei web.xml im Projekt und Zusammenführen Sie den folgenden Codeausschnitt unter Web app Knoten, in dem der Anwendungsfilter konfiguriert werden.
Um möglichst genaue Ergebnisse zu erhalten, sollten Filter vor alle anderen Filter zugeordnet.
<filter>
<filter-name>ApplicationInsightsWebFilter</filter-name>
<filter-class>
com.microsoft.applicationinsights.web.internal.WebRequestTrackingFilter
</filter-class>
</filter>
<filter-mapping>
<filter-name>ApplicationInsightsWebFilter</filter-name>
<url-pattern>/*</url-pattern>
</filter-mapping>
## <a name="5-check-firewall-exceptions"></a>5. Überprüfen Sie 5. Firewallausnahmen
Sie müssen [Ausnahmen für ausgehende Daten](app-insights-ip-addresses.md)festgelegt.
## <a name="6-restart-your-web-app"></a>6. Starten Sie Ihrer Anwendung
## <a name="7-view-your-telemetry-in-application-insights"></a>7. Ihre Telemetrie in Application Insights anzeigen
Zurück zu der Ressource Anwendung Einblicke in [Microsoft Azure-Portal](https://portal.azure.com).
Telemetriedaten zu HTTP-Anfragen auf die Übersicht wird angezeigt. (Wenn keine, warten Sie einige Sekunden und klicken Sie auf aktualisieren.)

Klicken Sie auf Diagramme detailliertere Kriterien anzeigen.

Und beim Anzeigen der Eigenschaften einer Anforderung sehen Sie die Telemetrie-Ereignisse Ausnahmen wie Anfragen zugeordnet.

[Erfahren Sie mehr über Metriken.](app-insights-metrics-explorer.md)
## <a name="next-steps"></a>Nächste Schritte
* Monitor Seitenansichten und Benutzer Metriken [Telemetrie zu Ihren Webseiten hinzufügen](app-insights-web-track-usage.md) .
* Um sicherzustellen, dass Ihre Anwendung bleibt, live und Reaktionsfähigkeit [Webtests einrichten](app-insights-monitor-web-app-availability.md) .
* [Protokoll-Traces erfassen](app-insights-java-trace-logs.md)
* [Suche Ereignisse und Protokolle](app-insights-diagnostic-search.md) zu diagnostizieren.
| 50.868966 | 409 | 0.775624 | deu_Latn | 0.921653 |
93e7efe5cbf0b2da313b3b8622e64156e272d369 | 8,598 | md | Markdown | publish-and-consume-content.md | videoDAC/simple-streaming-server | d385ebd766ce268b2e43746c5b706cfbdcfedc6d | [
"MIT"
] | 15 | 2020-05-05T03:11:48.000Z | 2021-01-27T16:33:05.000Z | publish-and-consume-content.md | videoDAC/livepeer-broadcaster | 386f0214c9bf9dd611460f480ddd2c69d8e0357f | [
"MIT"
] | 1 | 2021-08-04T01:23:48.000Z | 2021-08-06T03:09:01.000Z | publish-and-consume-content.md | videoDAC/livepeer-broadcaster | 386f0214c9bf9dd611460f480ddd2c69d8e0357f | [
"MIT"
] | 4 | 2021-05-29T22:50:54.000Z | 2022-03-18T10:11:56.000Z | Contents:
- [Publish content via CLI (`ffmpeg`)](#command-line-interface)
- [Consume content via CLI (`ffplay`)](#consume-content-using-ffplay)
- [Publish content via GUI (_OBS Studio_)](#publish-content-using-obs-studio)
- [Consume content via GUI (_VLC Player_)](#consume-content-using-vlc-media-player)
## Publish and Consume Content
This section explains how to publish and consume content to and from Livepeer Broadcaster.
This can be done via a [command line interface](#command-line-interface) using `FFmpeg`, or from a [graphical user interface](#graphical-user-interface) using **OBS Studio** and **VLC Media Player**.
[Return to main page](./README.md#next-steps)
### Command Line Interface
This section explains how to publish and consume content to and from Livepeer Broadcaster using a command line interface (CLI).
#### Install `FFmpeg`
You can test publishing content into a Livepeer Broadcaster using `ffmpeg`.
Install `FFmpeg` on Linux (Ubuntu) using `sudo apt install ffmpeg`
Install `FFmpeg` on a Mac using instructions on [FFmpeg's website](https://www.ffmpeg.org/download.html#build-mac).
#### Publish a test source
`FFmpeg` can be used to generate and publish a test source of content to Livepeer Broadcaster:
0. Make sure Livepeer Broadcaster is running on localhost `127.0.0.1`.
1. Run the following command:
```
ffmpeg -re -f lavfi -i \
testsrc=size=500x500:rate=30,format=yuv420p \
-f lavfi -i sine -c:v libx264 -b:v 1000k \
-x264-params keyint=60 -c:a aac -f flv \
rtmp://127.0.0.1:1935/test_source
```
- `test_source` is the "stream key" for this publication.
- `size=500x500` defines the dimensions of the test video source in pixels
- `rate=30` defines the frame rate of the test video in frames per second
- `1000k` defines the bitrate for the stream
- `keyint=60` defines the keyframe interval in frames
2. See that Livepeer Broadcaster is receiving a stream called `test_source`.

3. Look in `~/.lpData/offchain` folder to see the segments of video which make up the livestream.

[Return to main page](./README.md#next-steps)
#### Publish a recorded video
`FFmpeg` can be used to publish recorded content to Livepeer Broadcaster:
0. Make sure Livepeer Broadcaster is running on localhost `127.0.0.1`.
1. Run the following command:
```
ffmpeg \
-re \
-i video.mov \
-codec copy \
-f flv rtmp://127.0.0.1:1935/recorded_content
```
- `recorded_content` is the "stream key" for this publication.
2. See that Livepeer Broadcaster is receiving a stream called `recorded_content`.

[Return to main page](./README.md#next-steps)
#### Consume content using ffplay
`ffplay` is part of `FFmpeg`, and can be used to request and play back content from Livepeer Broadcaster.
0. Make sure content is being published into Livepeer Broadcaster.
1. Run `ffplay http://127.0.0.1:8935/stream/test_source.m3u8`
- `test_source` is the "stream key" used when publishing content to Livepeer Broadcaster.
2. See the content from the `test_source` stream being played back:

[Return to main page](./README.md#next-steps)
#### Inspect content metadata
`curl` is a command line tool and library for transferring data with URLs, and can be used to inspect metadata of content published by Livepeer Broadcaster.
0. Make sure content is being published into Livepeer Broadcaster.
1. Run `curl http://127.0.0.1:8935/stream/test_source.m3u8`
- `test_source` is the "stream key" used when publishing content to Livepeer Broadcaster.
2. View metadata about the stream(s) of content available for consumption, with `.m3u8` extension(s):

3. Run `curl http://127.0.0.1:8935/stream/test_source/source.m3u8`
4. View metadata about the segment(s) of content available for consumption, with `.ts` extension(s):

5. Run `curl http://127.0.0.1:7935/status`
6. View metadata about the status of the Livepeer Broadcaster, including details of stream(s) being served

[Return to main page](./README.md#next-steps)
### Graphical User Interface
This section explains how to [publish content to](#publish-content-using-obs-studio) and [consume content from](#consume-content-using-vlc-media-player) Livepeer Broadcaster using graphical user interfaces (GUIs).
#### Publish content using OBS Studio
**OBS Studio** can be used to configure and publish streaming content to Livepeer Broadcaster:
1. Download and install [OBS Studio](https://obsproject.com/)
2. Launch OBS Studio, and decline to use the auto-configuration wizard.

3. Go to Settings > Output
4. Set "Output Mode" to "Advanced".
5. Set the Streaming Keyframe interval to `2` seconds.

6. Go to Settings > Stream
7. Set "Service" to `Custom`
8. Set "Server" to `rtmp://127.0.0.1` and "Stream Key" to `obs-studio`

9. Click OK to close "Settings".
10. Under "Sources", click the `+` and select "Text" source.
11. Add some text

12. Make sure Livepeer Broadcaster is running.
13. Click "Start Streaming" (and also "Start Recording" if you also want to record the stream).
14. See that Livepeer Broadcaster is receiving a stream called `obs-studio`.

[Learn about what other sources of content can be configured](#configuring-content-in-obs-studio).
[Return to main page](./README.md#next-steps)
#### Consume content using VLC Media Player
**VLC Media Player** can be used to request and play back content from Livepeer Broadcaster.
0. Make sure content is being published into Livepeer Broadcaster.
1. Download and install [VLC Media Player](https://www.videolan.org/vlc/index.html)
2. Launch VLC Media Player
3. Select Media > Open Network Stream... (Ctrl-N)
4. Enter `http://127.0.0.1:8935/stream/obs-studio.m3u8` as the network URL

5. Click "Play", and see the content from the `obs-studio` stream:

[Return to main page](./README.md#next-steps)
#### Configuring Content in OBS Studio
**OBS Studio** can be used to add video and audio content sources to be published to Livepeer Broadcaster.

- **Configuring Video**:
- The big black box in the middle of the screen is the "canvas" for visual content.
- One or more "Scenes" can be configured, which can be switched between when publishing
- Zero or more "Sources" can be added to each "Scene", to compose the visual content that is published.
- Examples of "Sources" are: _static text_, _images_, _recorded videos_, _live videos_ (e.g. from a camera), _screenshares_, _window shares_.
- **Configuring Audio**:
- Audio being published can be monitored in the "Mixer", both visually and audibly
- Audio sources can be configured in Settings > Audio, and will appear in the "Mixer"
- Examples of sources are _sound cards_, _microphones_ and _audio played by the computer_.
Here is an example of a variety of different content sources configured in **OBS Studio**:

[Return to main page](./README.md#next-steps)
| 39.440367 | 213 | 0.754013 | eng_Latn | 0.854725 |
93e96777708a8629273fd3c8aa635de4a1b8a1e4 | 213 | md | Markdown | doc-food/ideas/notebooks-rendering.md | aerosol/wb | d55f97efb1b48076e93210071b61d4bb3e1376f3 | [
"Apache-2.0"
] | 18 | 2021-01-01T01:54:52.000Z | 2022-01-26T08:47:13.000Z | doc-food/ideas/notebooks-rendering.md | aerosol/wb | d55f97efb1b48076e93210071b61d4bb3e1376f3 | [
"Apache-2.0"
] | 2 | 2021-01-01T17:20:12.000Z | 2021-01-01T23:05:14.000Z | doc-food/ideas/notebooks-rendering.md | aerosol/wb | d55f97efb1b48076e93210071b61d4bb3e1376f3 | [
"Apache-2.0"
] | 1 | 2021-01-11T02:29:32.000Z | 2021-01-11T02:29:32.000Z | # Notebooks rendering
To implement the ability to render Jupyter Notebooks and/or Live Markdown
used by [Livebook](https://dashbit.co/blog/announcing-livebook?new=1).
See [[contribute|contribution guidelines]].
| 30.428571 | 73 | 0.793427 | eng_Latn | 0.488897 |
93e9a9344f7ed971ee35d6730c8150ec520bdcc1 | 1,083 | md | Markdown | README.md | gbentaieb/pip-polyfill | 96dcbb0a99678a9eacf4a1a10a02056f037b4011 | [
"Apache-2.0"
] | 22 | 2018-06-20T21:12:12.000Z | 2021-06-21T18:33:01.000Z | README.md | gbentaieb/pip-polyfill | 96dcbb0a99678a9eacf4a1a10a02056f037b4011 | [
"Apache-2.0"
] | 2 | 2021-05-09T20:00:29.000Z | 2022-02-12T14:59:33.000Z | README.md | gbentaieb/pip-polyfill | 96dcbb0a99678a9eacf4a1a10a02056f037b4011 | [
"Apache-2.0"
] | 3 | 2019-05-23T15:25:49.000Z | 2020-11-19T16:27:39.000Z | # pip-polyfill
[](https://gbentaieb.github.io/pip-polyfill/test/)
## Why a picture-in-picture polyfill?
The W3C has described a standard API that should be implemented by every browser willing to add picture-in-picture (PiP) features on the video element. However, **Safari has, to date, implemented a different API**, making it difficult for web developers to develop and maintain PiP features on their websites.
We therefore want to provide a polyfill that adds the W3C PiP API to Safari.
## How do I use it?
The polyfill is a single file of roughly 100 lines, which you can find [here](https://github.com/gbentaieb/pip-polyfill/blob/master/pip.js). Feel free to use it in your own code!
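Once the file is loaded (for example via a `<script>` tag before your own code), you can call the standard W3C API in Safari just as you would in Chrome. A minimal usage sketch, assuming a `<video>` element is present on the page:

```javascript
// Standard W3C Picture-in-Picture API calls, which the polyfill makes available in Safari.
const video = document.querySelector('video');

async function togglePictureInPicture() {
  if (document.pictureInPictureElement) {
    await document.exitPictureInPicture();
  } else if (document.pictureInPictureEnabled) {
    await video.requestPictureInPicture();
  }
}
```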
## Tests
The polyfill is fully tested.
If you want to see the results of the tests, go [here](https://gbentaieb.github.io/pip-polyfill/test/)
## Demo
If you want to play around with the w3c api in safari, open [this page](https://gbentaieb.github.io/pip-polyfill/demo/) in your browser, and use the api in the console ! | 63.705882 | 308 | 0.759926 | eng_Latn | 0.991423 |
93e9c861601e7dc7acc80fbbbd21c5f989d4a9f3 | 746 | md | Markdown | README.md | ebubekirbastama/ebubekirbastama | 0f4c86bf26a930f29e68fa78fa3631eb387d2986 | [
"Apache-2.0"
] | null | null | null | README.md | ebubekirbastama/ebubekirbastama | 0f4c86bf26a930f29e68fa78fa3631eb387d2986 | [
"Apache-2.0"
] | null | null | null | README.md | ebubekirbastama/ebubekirbastama | 0f4c86bf26a930f29e68fa78fa3631eb387d2986 | [
"Apache-2.0"
] | null | null | null | MessageBox.Show("EBS Yararlı Coding evrenine Hoş Geldiniz :)","EBS Time");
<hr>
<marquee direction=right>EBS Time</marquee>
<p>What We Have Built</p>
<ul class="container float">
<li class="item float-item">Cyber Security Tools</li>
<li class="item float-item">Cyber Attack Detection Tools</li>
<li class="item float-item">Web bots for many different areas</li>
<li class="item float-item">Excel Hunter, an Excel virus detection program</li>
<li class="item float-item">Bekra Siem Tools...</li>
<li class="item float-item"><a href="https://github.com/ebubekirbastama/TexttxtDosyasi-Bolme-Programi">.Txt Dosyası Bölme Programı</a> </li>
</ul>

| 49.733333 | 143 | 0.734584 | tur_Latn | 0.255338 |
93ea1560621007ec721b19887ef04952488d4104 | 2,016 | md | Markdown | docs/src/wallet-guide/hardware-wallets.md | solvia-labs/solvia | 573c25de36cc3680e99e7e33aa290fae7c18ece8 | [
"Apache-2.0"
] | null | null | null | docs/src/wallet-guide/hardware-wallets.md | solvia-labs/solvia | 573c25de36cc3680e99e7e33aa290fae7c18ece8 | [
"Apache-2.0"
] | 7 | 2022-03-18T11:51:52.000Z | 2022-03-21T08:58:22.000Z | docs/src/wallet-guide/hardware-wallets.md | solvia-labs/solvia | 573c25de36cc3680e99e7e33aa290fae7c18ece8 | [
"Apache-2.0"
] | null | null | null | ---
title: Using Hardware Wallets on the Solvia CLI
---
Signing a transaction requires a private key, but storing a private
key on your personal computer or phone leaves it subject to theft.
Adding a password to your key adds security, but many people prefer
to take it a step further and move their private keys to a separate
physical device called a _hardware wallet_. A hardware wallet is a
small handheld device that stores private keys and provides some
interface for signing transactions.
The Solvia CLI has first class support for hardware wallets. Anywhere
you use a keypair filepath (denoted as `<KEYPAIR>` in usage docs), you
can pass a _keypair URL_ that uniquely identifies a keypair in a
hardware wallet.
## Supported Hardware Wallets
The Solvia CLI supports the following hardware wallets:
- [Ledger Nano S and Ledger Nano X](hardware-wallets/ledger.md)
## Specify a Keypair URL
Solvia defines a keypair URL format to uniquely locate any Solvia keypair on a
hardware wallet connected to your computer.
The keypair URL has the following form, where square brackets denote optional
fields:
```text
usb://<MANUFACTURER>[/<WALLET_ID>][?key=<DERIVATION_PATH>]
```
`WALLET_ID` is a globally unique key used to disambiguate multiple devices.
`DERIVATION_PATH` is used to navigate to Solvia keys within your hardware wallet.
The path has the form `<ACCOUNT>[/<CHANGE>]`, where each `ACCOUNT` and `CHANGE`
are positive integers.
For example, a fully qualified URL for a Ledger device might be:
```text
usb://ledger/BsNsvfXqQTtJnagwFWdBS7FBXgnsK8VZ5CmuznN85swK?key=0/0
```
All derivation paths implicitly include the prefix `44'/501'`, which indicates
the path follows the [BIP44 specifications](https://github.com/bitcoin/bips/blob/master/bip-0044.mediawiki)
and that any derived keys are Solvia keys (Coin type 501). The single quote
indicates a "hardened" derivation. Because Solvia uses Ed25519 keypairs, all
derivations are hardened and therefore adding the quote is optional and
unnecessary.
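For example, assuming your installation provides the standard CLI binaries, any command that accepts a `<KEYPAIR>` can be pointed at the device directly (the binary names below are assumptions based on the usual tooling layout):

```bash
# Print the public key derived at path 0/0 on a connected Ledger
solvia-keygen pubkey usb://ledger?key=0/0

# Use the same hardware-wallet key for a balance query
solvia balance --keypair usb://ledger?key=0/0
```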
| 37.333333 | 107 | 0.789683 | eng_Latn | 0.997559 |
93eaafe8a0ff73f31cc4011d91c8d43c0e09c413 | 552 | md | Markdown | README.md | katiekatieb/Responsive-Portfolio | e26e608dd4eb1b8b79c91ab59d251e0278a629af | [
"MIT"
] | null | null | null | README.md | katiekatieb/Responsive-Portfolio | e26e608dd4eb1b8b79c91ab59d251e0278a629af | [
"MIT"
] | null | null | null | README.md | katiekatieb/Responsive-Portfolio | e26e608dd4eb1b8b79c91ab59d251e0278a629af | [
"MIT"
] | null | null | null | # Responsive-Portfolio
## Contributors
@katiekatieb
## Technology
* HTML5, CSS
* Live demo: https://katiekatieb.github.io/Responsive-Portfolio/index.html
## About
* Responsive-Portfolio is a basic portfolio site.
## License
* This project is licensed under The MIT License (MIT).
## How-to use this code
* Fork the repo and add your content.
## Contributing Guidelines
All contributions and suggestions are welcome!
For direct contributions, please fork the repository and file a pull request.
## Contact
* e-mail: katiebrasfield@gmail.com
| 19.714286 | 78 | 0.755435 | eng_Latn | 0.899496 |
93eb95a38a1421f048223601c6666b636b62c522 | 4,775 | md | Markdown | content/blog/rawgraphs2-is-coming/rawgraphs2-is-coming.md | adrienbrault/rawgraphs.github.io | 791f8b4cad14d467ffed7e3c81f1ef7b2eaf1414 | [
"MIT"
] | 14 | 2019-07-29T08:47:17.000Z | 2022-02-27T14:55:08.000Z | content/blog/rawgraphs2-is-coming/rawgraphs2-is-coming.md | adrienbrault/rawgraphs.github.io | 791f8b4cad14d467ffed7e3c81f1ef7b2eaf1414 | [
"MIT"
] | 34 | 2019-07-10T22:10:20.000Z | 2021-01-04T08:32:04.000Z | content/blog/rawgraphs2-is-coming/rawgraphs2-is-coming.md | adrienbrault/rawgraphs.github.io | 791f8b4cad14d467ffed7e3c81f1ef7b2eaf1414 | [
"MIT"
] | 6 | 2019-12-18T17:50:22.000Z | 2022-02-21T20:03:15.000Z | ---
title: RAWGraphs 2.0 (alpha) is coming!
date: 2020-08-25T09:00:00.000Z
author: RAW Graphs Team
layout: post
subtitle:
- ""
secondary_title:
- ""
discover_more_description:
- ""
background_image:
- "0"
page_background_image:
- ""
featured_video: ""
discover_more_left:
- "null"
discover_more_right:
- "null"
image: ./cover-raw2.gif
categories:
- RAWGraphs 2.0
tags:
- RAWGraphs 2.0
- update
path: /crowdfunding-campaign/rawgraphs2-is-coming/
---
Yes, it’s true, it took a little bit more time than expected...but the first release of RAWGraphs 2.0 is coming!
In September we will drop the biggest update of RAWGraphs since 2017. As you know, it won’t be a simple update of the software with some new features, but a completely refactored version, from the core, the RAWGraphs library, to the interface.
*****
### What to expect in this first release?
## RAWGraphs Library
In the last few months, we have spent a lot of time defining and implementing a brand new RAWGraphs library. The new library is written in ES6 and makes use of updated releases of third-party libraries (like d3.js). Our goal is to have a more robust and flexible library to better handle data input and ease the implementation of new features and new charts.
Since the RAWGraphs library will be independent of the RAW application, it will be possible to use it in any web application built with HTML and JavaScript and to bring the simplicity, data-mapping concepts and visual models of RAWGraphs to any other project.
The library will be released at a later time in a separate and dedicated repository and documentation about how to use it will follow.
*****
## Loading your data

In this first release we improved existing features and introduced some new ones to handle the data parsing. Here is what we will include in this first release:
- Possibility to load tabular data (through copy and paste or from a file) and JSON files
- Possibility to define the column, decimal and thousand separators in the parsing
- Possibility to specify the date locale to better handle dates that don’t follow English conventions
- Possibility to stack data on a column
- Possibility to specify and force the data type (dates, numbers, strings) of each column if needed
- New data samples to test the app.
Other data inputs (URL, SPARQL queries and from a project) will be implemented in the next releases.
*****
## Charts

<br />
For this very first release we will include four charts, two of which are not in RAWGraphs 1.0:
- Line chart (finally, right?)
- Matrix plot
- Bubble chart (previously called scatter plot)
- Sunburst
New and “old” charts will be included in the coming releases, don’t worry!
*****
## Mapping the data

The brand new library unlocked new possibilities in the mapping process:
- Possibility to define different ways to aggregate data (sum, average, median, min)
- Possibility to create series (similar to small multiples - check the image below)
We are also working hard to include the possibility to filter data in the future releases.
<br />

<br />
*****
## Charts options

<br />
In the past we have received a lot of feedback to improve the control over the graphical options of the charts. In this first release we will include:
- The possibility to add a legend that can be exported with the chart
- More control over margins, artboard and other graphic details
*****
## Exporting
For this first release, we won’t include new ways to export the charts, besides .svg and .png, but soon we will introduce the possibility to export the entire project. It means that you will be able to export, in one single file, the data and the settings so that you can share it with other users or open it again without losing the work done.
*****
## RAWGraphs front-end and UX/UI new design
With so many new features we had to redesign and develop a new front-end from scratch, keeping the ease of use of the previous versions. To do so, we went through a very iterative and agile process of design that will continue in the coming months while we introduce new features.
In terms of technological stack, for the front-end we decided to use React.js and Bootstrap.
*****
## Who can access RAWGraphs 2.0 alpha?
This version, and the following updates that will be released in the coming months, will be available only to backers who contributed at least 50€ to our campaign. Once all the new features we promised are implemented, RAWGraphs 2.0 will be available to everyone, for free of course.
If you don’t want to wait and get access to the new version, you can support us on Indiegogo.
| 43.018018 | 362 | 0.756021 | eng_Latn | 0.999202 |
93ec2c3a9ebd206349f5f39b3792ec585f095073 | 3,462 | md | Markdown | config-engine-lite/README.md | kalmanoharan/vrnetlab | 004d1aa26559c06f243a4b8b7566567e4b640bcf | [
"MIT"
] | 799 | 2016-07-08T10:19:49.000Z | 2021-09-19T03:49:33.000Z | config-engine-lite/README.md | kalmanoharan/vrnetlab | 004d1aa26559c06f243a4b8b7566567e4b640bcf | [
"MIT"
] | 235 | 2016-07-15T19:44:49.000Z | 2021-09-22T06:43:27.000Z | config-engine-lite/README.md | kalmanoharan/vrnetlab | 004d1aa26559c06f243a4b8b7566567e4b640bcf | [
"MIT"
] | 219 | 2016-07-19T19:13:37.000Z | 2021-09-14T18:44:39.000Z | vrnetlab Config Engine lite
===========================
Config Engine lite is a small provisioning system shipped with vrnetlab,
primarily written for three use cases:
* configure routers in a vrnetlab topology such that the functionality of
vrnetlab itself can be tested, for example, to make sure that
interfaces are correctly mapped
* accelerate labbing. If you want to do some specific iBGP testing you might
not be all too interested in setting IP addresses on the 7 routers required
for your test or configuring an entire IGP - use config engine to quickly
provision the basics and do the rest by hand!
* serve as inspiration for how you can write a provisioning system running against vrnetlab routers
It's called 'lite' since it doesn't aspire to become a full blown provisioning
system. While it might grow and gain new functionality it will always be
targeted for the requirements of the above, in particular the testing of
vrnetlab itself.
Usage
-----
After building the docker image, you run it like this. There are two modes of operation, topology mode and single-router-mode.
### Topology mode
Use config-engine-lite and jinja2 templates to configure your topomachine topology.
```
docker run -v $(pwd)/templates:/templates -v $(pwd)/topology:/topology --link router1 --link router2 vr-configengine --topo /topology/lltopo.json --xr /templates/xr.j2 --junos /templates/junos.j2 --run
```
* -v $(pwd)/templates:/templates - Mount a directory containing your templates inside the container
* -v $(pwd)/topology:/topology - Mount a directory containing your topology files inside the container
* --link router1 --link router2 - Link all routers specified in your topology, enabling config-engine-lite to configure them
* --topo /topology/lltopo.json - The low-level topology built by topology-machine; this references the /topology mountpoint
* --ios /templates/ios.j2 - Configuration template for IOS (CSR 1000v); this references the /templates mountpoint
* --xr /templates/xr.j2 - Configuration template for IOS XR; this references the /templates mountpoint
* --junos /templates/junos.j2 - Configuration template for JunOS; this references the /templates mountpoint
* --run - Actually deploy the configuration. If this is not specified, the configuration changes will not be committed and config diff will be printed.
### Single-router mode
Apply a configuration template to a single router, useful for bootstrapping a router for use with vr-bgp for instance.
```
docker run -v $(pwd)/templates:/templates --link router1 vr-configengine --type xrv --router router1 --config /templates/router1.j2 --attrs "key1=value1,key2=value2"
```
* -v $(pwd)/templates:/templates - Mount a directory containing your templates inside the container
* --link router1 - Link the router you want to configure
* --config /templates/router1.j2 - Your router configuration; references the /templates mountpoint
* --type vmx - Type of router to configure (valid values are vmx, xrv and csr)
* --attr "key=value" - A key/value pair made available in the template; can be specified multiple times (see the parsing sketch below)
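To make the attribute/template relationship concrete, here is a minimal Python sketch of how such key/value pairs could be parsed into variables and rendered with a Jinja2 template. This is only an illustration under the assumption of the comma-separated form shown in the example command; it is not the actual config-engine-lite implementation, and the `hostname`/`ntp_server` template below is made up. It requires the `jinja2` package.

```python
from jinja2 import Template

def parse_attrs(attrs):
    """Split 'key1=value1,key2=value2' into a dict of template variables."""
    variables = {}
    for pair in attrs.split(","):
        if not pair.strip():
            continue  # ignore empty segments such as trailing commas
        key, _, value = pair.partition("=")
        variables[key.strip()] = value.strip()
    return variables

# Hypothetical template and attribute string, just to show the flow.
config_template = Template("hostname {{ hostname }}\nntp server {{ ntp_server }}\n")
attrs = parse_attrs("hostname=router1,ntp_server=10.0.0.1")
print(config_template.render(**attrs))
```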
### Common parameters
These parameters are available in both modes
* --wait-for-boot - Block until we can connect to the router via SSH. If neither --diff nor --run is used, this option will simply block until all your routers are started
* --diff - Print configuration diff and discard the configuration
* --run - Commit the configuration to the router
| 61.821429 | 201 | 0.761698 | eng_Latn | 0.993933 |
93ecfa658f7064793ee076e1a304d0783f4bb3d3 | 29,066 | markdown | Markdown | _posts/2020-06-11-Logistic-Regression-with-a-Neural-Network.markdown | Lynfs/lynfs.github.io | ef6497366f254ebc175ad63e6bb196bf90879b1e | [
"CC0-1.0"
] | 1 | 2021-03-17T00:03:06.000Z | 2021-03-17T00:03:06.000Z | _posts/2020-06-11-Logistic-Regression-with-a-Neural-Network.markdown | Lynfs/lynfs.github.io | ef6497366f254ebc175ad63e6bb196bf90879b1e | [
"CC0-1.0"
] | null | null | null | _posts/2020-06-11-Logistic-Regression-with-a-Neural-Network.markdown | Lynfs/lynfs.github.io | ef6497366f254ebc175ad63e6bb196bf90879b1e | [
"CC0-1.0"
] | 1 | 2021-01-03T15:03:21.000Z | 2021-01-03T15:03:21.000Z |
# Logistic Regression with a Neural Network mindset
**Note**: Some math parts are in lateX syntax, you can run it online to get a nice view of formulas (for some reason, markdown isn't running so well)
**You will learn to:**
- Build the general architecture of a learning algorithm, including:
- Initializing parameters
- Calculating the cost function and its gradient
- Using an optimization algorithm (gradient descent)
- Gather all three functions above into a main model function, in the right order.
## Updates
This notebook has been updated over the past few months.
The prior version was named "v5", and the current versionis now named '6a'
#### If you were working on a previous version:
* You can find your prior work by looking in the file directory for the older files (named by version name).
* To view the file directory, click on the "Coursera" icon in the top left corner of this notebook.
* Please copy your work from the older versions to the new version, in order to submit your work for grading.
#### List of Updates
* Forward propagation formula, indexing now starts at 1 instead of 0.
* Optimization function comment now says "print cost every 100 training iterations" instead of "examples".
* Fixed grammar in the comments.
* Y_prediction_test variable name is used consistently.
* Plot's axis label now says "iterations (hundred)" instead of "iterations".
* When testing the model, the test image is normalized by dividing by 255.
## 1 - Packages ##
First, let's run the cell below to import all the packages that you will need during this assignment.
- [numpy](www.numpy.org) is the fundamental package for scientific computing with Python.
- [h5py](http://www.h5py.org) is a common package to interact with a dataset that is stored on an H5 file.
- [matplotlib](http://matplotlib.org) is a famous library to plot graphs in Python.
- [PIL](http://www.pythonware.com/products/pil/) and [scipy](https://www.scipy.org/) are used here to test your model with your own picture at the end.
```python
import numpy as np
import matplotlib.pyplot as plt
import h5py
import scipy
from PIL import Image
from scipy import ndimage
from lr_utils import load_dataset
%matplotlib inline
```
## 2 - Overview of the Problem set ##
**Problem Statement**: You are given a dataset ("data.h5") containing:
- a training set of m_train images labeled as cat (y=1) or non-cat (y=0)
- a test set of m_test images labeled as cat or non-cat
- each image is of shape (num_px, num_px, 3) where 3 is for the 3 channels (RGB). Thus, each image is square (height = num_px) and (width = num_px).
You will build a simple image-recognition algorithm that can correctly classify pictures as cat or non-cat.
Let's get more familiar with the dataset. Load the data by running the following code.
```python
# Loading the data (cat/non-cat)
train_set_x_orig, train_set_y, test_set_x_orig, test_set_y, classes = load_dataset()
```
We added "_orig" at the end of image datasets (train and test) because we are going to preprocess them. After preprocessing, we will end up with train_set_x and test_set_x (the labels train_set_y and test_set_y don't need any preprocessing).
Each line of your train_set_x_orig and test_set_x_orig is an array representing an image. You can visualize an example by running the following code. Feel free also to change the `index` value and re-run to see other images.
```python
# Example of a picture
index = 2
plt.imshow(train_set_x_orig[index])
print ("y = " + str(train_set_y[:, index]) + ", it's a '" + classes[np.squeeze(train_set_y[:, index])].decode("utf-8") +
"' picture.")
```
y = [1], it's a 'cat' picture.

Many software bugs in deep learning come from having matrix/vector dimensions that don't fit. If you can keep your matrix/vector dimensions straight you will go a long way toward eliminating many bugs.
**Exercise:** Find the values for:
- m_train (number of training examples)
- m_test (number of test examples)
- num_px (= height = width of a training image)
Remember that `train_set_x_orig` is a numpy-array of shape (m_train, num_px, num_px, 3). For instance, you can access `m_train` by writing `train_set_x_orig.shape[0]`.
```python
### START CODE HERE ### (≈ 3 lines of code)
m_train = train_set_x_orig.shape[0]
m_test = test_set_x_orig.shape[0]
num_px = train_set_x_orig.shape[1]
### END CODE HERE ###
print ("Number of training examples: m_train = " + str(m_train))
print ("Number of testing examples: m_test = " + str(m_test))
print ("Height/Width of each image: num_px = " + str(num_px))
print ("Each image is of size: (" + str(num_px) + ", " + str(num_px) + ", 3)")
print ("train_set_x shape: " + str(train_set_x_orig.shape))
print ("train_set_y shape: " + str(train_set_y.shape))
print ("test_set_x shape: " + str(test_set_x_orig.shape))
print ("test_set_y shape: " + str(test_set_y.shape))
```
Number of training examples: m_train = 209
Number of testing examples: m_test = 50
Height/Width of each image: num_px = 64
Each image is of size: (64, 64, 3)
train_set_x shape: (209, 64, 64, 3)
train_set_y shape: (1, 209)
test_set_x shape: (50, 64, 64, 3)
test_set_y shape: (1, 50)
**Expected Output for m_train, m_test and num_px**:
| m_train | 209 |
|---|---|
| m_test | 50 |
| num_px | 64 |
For convenience, you should now reshape images of shape (num_px, num_px, 3) in a numpy-array of shape (num_px $*$ num_px $*$ 3, 1). After this, our training (and test) dataset is a numpy-array where each column represents a flattened image. There should be m_train (respectively m_test) columns.
**Exercise:** Reshape the training and test data sets so that images of size (num_px, num_px, 3) are flattened into single vectors of shape (num\_px $*$ num\_px $*$ 3, 1).
A trick when you want to flatten a matrix X of shape (a,b,c,d) to a matrix X_flatten of shape (b$*$c$*$d, a) is to use:
```python
X_flatten = X.reshape(X.shape[0], -1).T # X.T is the transpose of X
```
```python
# Reshape the training and test examples
### START CODE HERE ### (≈ 2 lines of code)
train_set_x_flatten = train_set_x_orig.reshape(train_set_x_orig.shape[0], -1).T
test_set_x_flatten = test_set_x_orig.reshape(test_set_x_orig.shape[0], -1).T
### END CODE HERE ###
print ("train_set_x_flatten shape: " + str(train_set_x_flatten.shape))
print ("train_set_y shape: " + str(train_set_y.shape))
print ("test_set_x_flatten shape: " + str(test_set_x_flatten.shape))
print ("test_set_y shape: " + str(test_set_y.shape))
print ("sanity check after reshaping: " + str(train_set_x_flatten[0:5,0]))
```
train_set_x_flatten shape: (12288, 209)
train_set_y shape: (1, 209)
test_set_x_flatten shape: (12288, 50)
test_set_y shape: (1, 50)
sanity check after reshaping: [17 31 56 22 33]
**Expected Output**:
<table style="width:35%">
<tr>
<td>train_set_x_flatten shape</td>
<td> (12288, 209)</td>
</tr>
<tr>
<td>train_set_y shape</td>
<td>(1, 209)</td>
</tr>
<tr>
<td>test_set_x_flatten shape</td>
<td>(12288, 50)</td>
</tr>
<tr>
<td>test_set_y shape</td>
<td>(1, 50)</td>
</tr>
<tr>
<td>sanity check after reshaping</td>
<td>[17 31 56 22 33]</td>
</tr>
</table>
To represent color images, the red, green and blue channels (RGB) must be specified for each pixel, and so the pixel value is actually a vector of three numbers ranging from 0 to 255.
One common preprocessing step in machine learning is to center and standardize your dataset, meaning that you substract the mean of the whole numpy array from each example, and then divide each example by the standard deviation of the whole numpy array. But for picture datasets, it is simpler and more convenient and works almost as well to just divide every row of the dataset by 255 (the maximum value of a pixel channel).
During the training of your model, you're going to multiply weights and add biases to some initial inputs in order to observe neuron activations. Then you backpropogate with the gradients to train the model. But, it is extremely important for each feature to have a similar range such that our gradients don't explode.
Let's standardize our dataset.
```python
train_set_x = train_set_x_flatten/255.
test_set_x = test_set_x_flatten/255.
```
What you need to remember:
Common steps for pre-processing a new dataset are:
- Figure out the dimensions and shapes of the problem (m_train, m_test, num_px, ...)
- Reshape the datasets such that each example is now a vector of size (num_px \* num_px \* 3, 1)
- "Standardize" the data
## 3 - General Architecture of the learning algorithm ##
It's time to design a simple algorithm to distinguish cat images from non-cat images.
You will build a Logistic Regression, using a Neural Network mindset. The following Figure explains why **Logistic Regression is actually a very simple Neural Network!**

**Mathematical expression of the algorithm**:
For one example $x^{(i)}$:

The cost is then computed by summing over all training examples:

**Key steps**:
In this exercise, you will carry out the following steps:
- Initialize the parameters of the model
- Learn the parameters for the model by minimizing the cost
- Use the learned parameters to make predictions (on the test set)
- Analyse the results and conclude
## 4 - Building the parts of our algorithm ##
The main steps for building a Neural Network are:
1. Define the model structure (such as number of input features)
2. Initialize the model's parameters
3. Loop:
- Calculate current loss (forward propagation)
- Calculate current gradient (backward propagation)
- Update parameters (gradient descent)
You often build 1-3 separately and integrate them into one function we call `model()`.
### 4.1 - Helper functions
**Exercise**: Using your code from "Python Basics", implement `sigmoid()`. As you've seen in the figure above, you need to compute $sigmoid( w^T x + b) = \frac{1}{1 + e^{-(w^T x + b)}}$ to make predictions. Use np.exp().
```python
#: sigmoid
def sigmoid(z):
"""
Compute the sigmoid of z
Arguments:
z -- A scalar or numpy array of any size.
Return:
s -- sigmoid(z)
"""
### START CODE HERE ### (≈ 1 line of code)
s = 1 / (1 + np.exp(-z))
### END CODE HERE ###
return s
```
```python
print ("sigmoid([0, 2]) = " + str(sigmoid(np.array([0,2]))))
```
sigmoid([0, 2]) = [ 0.5 0.88079708]
**Expected Output**:
<table>
<tr>
<td>sigmoid([0, 2])</td>
<td> [ 0.5 0.88079708]</td>
</tr>
</table>
### 4.2 - Initializing parameters
**Exercise:** Implement parameter initialization in the cell below. You have to initialize w as a vector of zeros. If you don't know what numpy function to use, look up np.zeros() in the Numpy library's documentation.
```python
#: initialize_with_zeros
def initialize_with_zeros(dim):
"""
This function creates a vector of zeros of shape (dim, 1) for w and initializes b to 0.
Argument:
dim -- size of the w vector we want (or number of parameters in this case)
Returns:
w -- initialized vector of shape (dim, 1)
b -- initialized scalar (corresponds to the bias)
"""
### START CODE HERE ### (≈ 1 line of code)
w = np.zeros((dim, 1))
b = 0
### END CODE HERE ###
assert(w.shape == (dim, 1))
assert(isinstance(b, float) or isinstance(b, int))
return w, b
```
```python
dim = 2
w, b = initialize_with_zeros(dim)
print ("w = " + str(w))
print ("b = " + str(b))
```
w = [[ 0.]
[ 0.]]
b = 0
**Expected Output**:
<table style="width:15%">
<tr>
<td> ** w ** </td>
<td> [[ 0.]
[ 0.]] </td>
</tr>
<tr>
<td> ** b ** </td>
<td> 0 </td>
</tr>
</table>
For image inputs, w will be of shape (num_px $\times$ num_px $\times$ 3, 1).
### 4.3 - Forward and Backward propagation
Now that your parameters are initialized, you can do the "forward" and "backward" propagation steps for learning the parameters.
**Exercise:** Implement a function `propagate()` that computes the cost function and its gradient.
**Hints**:
Forward Propagation:
- You get X
- You compute $A = \sigma(w^T X + b) = (a^{(1)}, a^{(2)}, ..., a^{(m-1)}, a^{(m)})$
- You calculate the cost function: $J = -\frac{1}{m}\sum_{i=1}^{m}y^{(i)}\log(a^{(i)})+(1-y^{(i)})\log(1-a^{(i)})$
Here are the two formulas you will be using:
$$ \frac{\partial J}{\partial w} = \frac{1}{m}X(A-Y)^T\tag{7}$$
$$ \frac{\partial J}{\partial b} = \frac{1}{m} \sum_{i=1}^m (a^{(i)}-y^{(i)})\tag{8}$$
```python
#: propagate
def propagate(w, b, X, Y):
"""
Implement the cost function and its gradient for the propagation explained above
Arguments:
w -- weights, a numpy array of size (num_px * num_px * 3, 1)
b -- bias, a scalar
X -- data of size (num_px * num_px * 3, number of examples)
Y -- true "label" vector (containing 0 if non-cat, 1 if cat) of size (1, number of examples)
Return:
cost -- negative log-likelihood cost for logistic regression
dw -- gradient of the loss with respect to w, thus same shape as w
db -- gradient of the loss with respect to b, thus same shape as b
Tips:
- Write your code step by step for the propagation. np.log(), np.dot()
"""
m = X.shape[1]
# FORWARD PROPAGATION (FROM X TO COST)
### START CODE HERE ### (≈ 2 lines of code)
A = sigmoid(np.dot(w.T, X)+b) # compute activation
cost = -1/m*(np.sum(Y*np.log(A)+(1-Y)*np.log(1-A))) # compute cost
### END CODE HERE ###
# BACKWARD PROPAGATION (TO FIND GRAD)
### START CODE HERE ### (≈ 2 lines of code)
dw = (1/m)*np.dot(X,(A-Y).T)
db = (1/m)*np.sum(A-Y)
### END CODE HERE ###
assert(dw.shape == w.shape)
assert(db.dtype == float)
cost = np.squeeze(cost)
assert(cost.shape == ())
grads = {"dw": dw,
"db": db}
return grads, cost
```
```
w, b, X, Y = np.array([[1.],[2.]]), 2., np.array([[1.,2.,-1.],[3.,4.,-3.2]]), np.array([[1,0,1]])
grads, cost = propagate(w, b, X, Y)
print ("dw = " + str(grads["dw"]))
print ("db = " + str(grads["db"]))
print ("cost = " + str(cost))
```
dw = [[ 0.99845601]
[ 2.39507239]]
db = 0.00145557813678
cost = 5.80154531939
**Expected Output**:
<table style="width:50%">
<tr>
<td> dw </td>
<td> [[ 0.99845601]
[ 2.39507239]]</td>
</tr>
<tr>
<td> db </td>
<td> 0.00145557813678 </td>
</tr>
<tr>
<td> cost </td>
<td> 5.801545319394553 </td>
</tr>
</table>
### 4.4 - Optimization
- You have initialized your parameters.
- You are also able to compute a cost function and its gradient.
- Now, you want to update the parameters using gradient descent.
**Exercise:** Write down the optimization function. The goal is to learn $w$ and $b$ by minimizing the cost function $J$. For a parameter $\theta$, the update rule is $ \theta = \theta - \alpha \text{ } d\theta$, where $\alpha$ is the learning rate.
```
def optimize(w, b, X, Y, num_iterations, learning_rate, print_cost = False):
"""
This function optimizes w and b by running a gradient descent algorithm
Arguments:
w -- weights, a numpy array of size (num_px * num_px * 3, 1)
b -- bias, a scalar
X -- data of shape (num_px * num_px * 3, number of examples)
Y -- true "label" vector (containing 0 if non-cat, 1 if cat), of shape (1, number of examples)
num_iterations -- number of iterations of the optimization loop
learning_rate -- learning rate of the gradient descent update rule
print_cost -- True to print the loss every 100 steps
Returns:
params -- dictionary containing the weights w and bias b
grads -- dictionary containing the gradients of the weights and bias with respect to the cost function
costs -- list of all the costs computed during the optimization, this will be used to plot the learning curve.
Tips:
You basically need to write down two steps and iterate through them:
1) Calculate the cost and the gradient for the current parameters. Use propagate().
2) Update the parameters using gradient descent rule for w and b.
"""
costs = []
for i in range(num_iterations):
# Cost and gradient calculation (≈ 1-4 lines of code)
### START CODE HERE ###
grads, cost = propagate(w,b,X,Y)
### END CODE HERE ###
# Retrieve derivatives from grads
dw = grads["dw"]
db = grads["db"]
# update rule (≈ 2 lines of code)
### START CODE HERE ###
w = w - learning_rate*dw
b = b - learning_rate*db
### END CODE HERE ###
# Record the costs
if i % 100 == 0:
costs.append(cost)
# Print the cost every 100 training iterations
if print_cost and i % 100 == 0:
print ("Cost after iteration %i: %f" %(i, cost))
params = {"w": w,
"b": b}
grads = {"dw": dw,
"db": db}
return params, grads, costs
```
```
params, grads, costs = optimize(w, b, X, Y, num_iterations= 100, learning_rate = 0.009, print_cost = False)
print ("w = " + str(params["w"]))
print ("b = " + str(params["b"]))
print ("dw = " + str(grads["dw"]))
print ("db = " + str(grads["db"]))
```
w = [[-0.08608643]
[ 0.10971233]]
b = -0.14427426648
dw = [[ 0.12311093]
[ 0.13629247]]
db = -0.149239158846
**Expected Output**:
| w | [[ 0.19033591] [ 0.12259159]] |
|---|---|
| b | 1.92535983008 |
|dw | 0.67752042 |
| db | 0.219194504541 |
**Exercise:** The previous function will output the learned w and b. We are able to use w and b to predict the labels for a dataset X. Implement the `predict()` function. There are two steps to computing predictions:
1. Calculate $\hat{Y} = A = \sigma(w^T X + b)$
2. Convert the entries of a into 0 (if activation <= 0.5) or 1 (if activation > 0.5), stores the predictions in a vector `Y_prediction`. If you wish, you can use an `if`/`else` statement in a `for` loop (though there is also a way to vectorize this).
```
#: predict
def predict(w, b, X):
'''
Predict whether the label is 0 or 1 using learned logistic regression parameters (w, b)
Arguments:
w -- weights, a numpy array of size (num_px * num_px * 3, 1)
b -- bias, a scalar
X -- data of size (num_px * num_px * 3, number of examples)
Returns:
Y_prediction -- a numpy array (vector) containing all predictions (0/1) for the examples in X
'''
m = X.shape[1]
Y_prediction = np.zeros((1,m))
w = w.reshape(X.shape[0], 1)
# Compute vector "A" predicting the probabilities of a cat being present in the picture
### START CODE HERE ### (≈ 1 line of code)
A = sigmoid(np.dot(w.T, X)+b)
### END CODE HERE ###
for i in range(A.shape[1]):
# Convert probabilities A[0,i] to actual predictions p[0,i]
### START CODE HERE ### (≈ 4 lines of code)
if A[0,i] <= 0.5:
Y_prediction[0,i] = 0
else:
Y_prediction[0,i] = 1
### END CODE HERE ###
assert(Y_prediction.shape == (1, m))
return Y_prediction
```
```
w = np.array([[0.1124579],[0.23106775]])
b = -0.3
X = np.array([[1.,-1.1,-3.2],[1.2,2.,0.1]])
print ("predictions = " + str(predict(w, b, X)))
```
predictions = [[ 1. 1. 0.]]
**Expected Output**:
<table style="width:30%">
<tr>
<td>
predictions
</td>
<td>
[[ 1. 1. 0.]]
</td>
</tr>
</table>
**What to remember**:
You've implemented several functions that:
- Initialize (w,b)
- Optimize the loss iteratively to learn parameters (w,b):
- computing the cost and its gradient
- updating the parameters using gradient descent
- Use the learned (w,b) to predict the labels for a given set of examples
## 5 - Merge all functions into a model ##
You will now see how the overall model is structured by putting together all the building blocks (functions implemented in the previous parts) together, in the right order.
**Exercise:** Implement the model function. Use the following notation:
- Y_prediction_test for your predictions on the test set
- Y_prediction_train for your predictions on the train set
- w, costs, grads for the outputs of optimize()
```
#: model
def model(X_train, Y_train, X_test, Y_test, num_iterations = 2000, learning_rate = 0.5, print_cost = False):
"""
Builds the logistic regression model by calling the function you've implemented previously
Arguments:
X_train -- training set represented by a numpy array of shape (num_px * num_px * 3, m_train)
Y_train -- training labels represented by a numpy array (vector) of shape (1, m_train)
X_test -- test set represented by a numpy array of shape (num_px * num_px * 3, m_test)
Y_test -- test labels represented by a numpy array (vector) of shape (1, m_test)
num_iterations -- hyperparameter representing the number of iterations to optimize the parameters
learning_rate -- hyperparameter representing the learning rate used in the update rule of optimize()
print_cost -- Set to true to print the cost every 100 iterations
Returns:
d -- dictionary containing information about the model.
"""
    ### START CODE HERE ###
    # initialize parameters with zeros (≈ 1 line of code)
    w, b = initialize_with_zeros(X_train.shape[0])
    # Gradient descent (≈ 1 line of code)
    parameters, grads, costs = optimize(w, b, X_train, Y_train, num_iterations, learning_rate, print_cost)
    # Retrieve parameters w and b from dictionary "parameters"
    w = parameters["w"]
    b = parameters["b"]
    # Predict test/train set examples (≈ 2 lines of code)
    Y_prediction_test = predict(w, b, X_test)
    Y_prediction_train = predict(w, b, X_train)
    ### END CODE HERE ###
# Print train/test Errors
print("train accuracy: {} %".format(100 - np.mean(np.abs(Y_prediction_train - Y_train)) * 100))
print("test accuracy: {} %".format(100 - np.mean(np.abs(Y_prediction_test - Y_test)) * 100))
d = {"costs": costs,
"Y_prediction_test": Y_prediction_test,
"Y_prediction_train" : Y_prediction_train,
"w" : w,
"b" : b,
"learning_rate" : learning_rate,
"num_iterations": num_iterations}
return d
```
Run the following cell to train your model.
```
d = model(train_set_x, train_set_y, test_set_x, test_set_y, num_iterations = 2000, learning_rate = 0.005, print_cost = True)
```
train accuracy: 99.04306220095694 %
test accuracy: 70.0 %
**Expected Output**:
<table style="width:40%">
<tr>
<td> **Cost after iteration 0 ** </td>
<td> 0.693147 </td>
</tr>
<tr>
<td> <center> $\vdots$ </center> </td>
<td> <center> $\vdots$ </center> </td>
</tr>
<tr>
<td> **Train Accuracy** </td>
<td> 99.04306220095694 % </td>
</tr>
<tr>
<td>**Test Accuracy** </td>
<td> 70.0 % </td>
</tr>
</table>
**Comment**: Training accuracy is close to 100%. This is a good sanity check: your model is working and has high enough capacity to fit the training data. Test accuracy is 70%. It is actually not bad for this simple model, given the small dataset we used and that logistic regression is a linear classifier. But no worries, you'll build an even better classifier next week!
Also, you see that the model is clearly overfitting the training data. Later in this specialization you will learn how to reduce overfitting, for example by using regularization. Using the code below (and changing the `index` variable) you can look at predictions on pictures of the test set.
```
# Example of a picture that was wrongly classified.
index = 1
plt.imshow(test_set_x[:,index].reshape((num_px, num_px, 3)))
print ("y = " + str(test_set_y[0,index]) + ", you predicted that it is a \"" + classes[d["Y_prediction_test"][0,index]].decode("utf-8") + "\" picture.")
```
Let's also plot the cost function and the gradients.
```
# Plot learning curve (with costs)
costs = np.squeeze(d['costs'])
plt.plot(costs)
plt.ylabel('cost')
plt.xlabel('iterations (per hundreds)')
plt.title("Learning rate =" + str(d["learning_rate"]))
plt.show()
```

**Interpretation**:
You can see the cost decreasing. It shows that the parameters are being learned. However, you see that you could train the model even more on the training set. Try to increase the number of iterations in the cell above and rerun the cells. You might see that the training set accuracy goes up, but the test set accuracy goes down. This is called overfitting.
## 6 - Further analysis (optional/ungraded exercise) ##
Congratulations on building your first image classification model. Let's analyze it further, and examine possible choices for the learning rate $\alpha$.
#### Choice of learning rate ####
**Reminder**:
In order for Gradient Descent to work you must choose the learning rate wisely. The learning rate $\alpha$ determines how rapidly we update the parameters. If the learning rate is too large we may "overshoot" the optimal value. Similarly, if it is too small we will need too many iterations to converge to the best values. That's why it is crucial to use a well-tuned learning rate.
Let's compare the learning curve of our model with several choices of learning rates. Run the cell below. This should take about 1 minute. Feel free also to try different values than the three we have initialized the `learning_rates` variable to contain, and see what happens.
```
learning_rates = [0.01, 0.001, 0.0001]
models = {}
for i in learning_rates:
print ("learning rate is: " + str(i))
models[str(i)] = model(train_set_x, train_set_y, test_set_x, test_set_y, num_iterations = 1500, learning_rate = i, print_cost = False)
print ('\n' + "-------------------------------------------------------" + '\n')
for i in learning_rates:
plt.plot(np.squeeze(models[str(i)]["costs"]), label= str(models[str(i)]["learning_rate"]))
plt.ylabel('cost')
plt.xlabel('iterations (hundreds)')
legend = plt.legend(loc='upper center', shadow=True)
frame = legend.get_frame()
frame.set_facecolor('0.90')
plt.show()
```
learning rate is: 0.01
train accuracy: 99.52153110047847 %
test accuracy: 68.0 %
-------------------------------------------------------
learning rate is: 0.001
train accuracy: 88.99521531100478 %
test accuracy: 64.0 %
-------------------------------------------------------
learning rate is: 0.0001
train accuracy: 68.42105263157895 %
test accuracy: 36.0 %
-------------------------------------------------------

**Interpretation**:
- Different learning rates give different costs and thus different predictions results.
- If the learning rate is too large (0.01), the cost may oscillate up and down. It may even diverge (though in this example, using 0.01 still eventually ends up at a good value for the cost).
- A lower cost doesn't mean a better model. You have to check if there is possibly overfitting. It happens when the training accuracy is a lot higher than the test accuracy.
- In deep learning, we usually recommend that you:
- Choose the learning rate that better minimizes the cost function.
- If your model overfits, use other techniques to reduce overfitting.
**What to remember from this assignment:**
1. Preprocessing the dataset is important.
2. You implemented each function separately: initialize(), propagate(), optimize(). Then you built a model().
3. Tuning the learning rate (which is an example of a "hyperparameter") can make a big difference to the algorithm.
- Play with the learning rate and the number of iterations
- Try different initialization methods and compare the results
- Test other preprocessings (center the data, or divide each row by its standard deviation)
Bibliography:
- http://www.wildml.com/2015/09/implementing-a-neural-network-from-scratch/
- https://stats.stackexchange.com/questions/211436/why-do-we-normalize-images-by-subtracting-the-datasets-image-mean-and-not-the-c | 42.063676 | 427 | 0.653857 | eng_Latn | 0.980296 |
93edcadc7dc8e4ed34c438fcdcecfd0cc17144c4 | 392 | md | Markdown | README.md | wei355300/sync-gitlab-projects | 02d217cf912421264978aa818fcd7d9426b8c72d | [
"Apache-2.0"
] | null | null | null | README.md | wei355300/sync-gitlab-projects | 02d217cf912421264978aa818fcd7d9426b8c72d | [
"Apache-2.0"
] | null | null | null | README.md | wei355300/sync-gitlab-projects | 02d217cf912421264978aa818fcd7d9426b8c72d | [
"Apache-2.0"
] | null | null | null | [TOC]
# 用法
## 切换git分支[git_branch_switch.py]
```python
# 将本地分支切换为 production 分支
python3 git_branch_switch.py -u ~/workspace/petkit-chain/com-petkit-food production
# 将目录及子目录(-i 参数)都切换为 production 分支
python3 git_branch_switch.py -u -i ~/workspace/petkit-chain/com-petkit-food production
```
## 拉取 gitlab 上所有代码
```python
python3 GitlabSourceCodeCounter.py
```
# 安装依赖
## Gitlab
`todo`
| 14.518519 | 86 | 0.737245 | eng_Latn | 0.177016 |
93eec20d66dfa8693d2ced4b3e3b46a9da9310e8 | 7,797 | md | Markdown | articles/vpn-gateway/vpn-gateway-classic-resource-manager-migration.md | flexray/azure-docs.pl-pl | bfb8e5d5776d43b4623ce1c01dc44c8efc769c78 | [
"CC-BY-4.0",
"MIT"
] | 12 | 2017-08-28T07:45:55.000Z | 2022-03-07T21:35:48.000Z | articles/vpn-gateway/vpn-gateway-classic-resource-manager-migration.md | flexray/azure-docs.pl-pl | bfb8e5d5776d43b4623ce1c01dc44c8efc769c78 | [
"CC-BY-4.0",
"MIT"
] | 441 | 2017-11-08T13:15:56.000Z | 2021-06-02T10:39:53.000Z | articles/vpn-gateway/vpn-gateway-classic-resource-manager-migration.md | flexray/azure-docs.pl-pl | bfb8e5d5776d43b4623ce1c01dc44c8efc769c78 | [
"CC-BY-4.0",
"MIT"
] | 27 | 2017-11-13T13:38:31.000Z | 2022-02-17T11:57:33.000Z | ---
title: VPN Gateway classic to Resource Manager migration | Microsoft Docs
description: This page provides an overview of VPN Gateway classic to Resource Manager migration.
documentationcenter: na
services: vpn-gateway
author: amsriva
manager: rossort
editor: amsriva
ms.assetid: caa8eb19-825a-4031-8b49-18fbf3ebc04e
ms.service: vpn-gateway
ms.devlang: na
ms.topic: how-to
ms.tgt_pltfrm: na
ms.workload: infrastructure-services
ms.date: 02/06/2020
ms.author: amsriva
ms.openlocfilehash: c9d7fb8be1894ffa5f8c35e16e1ed3aa0949b3ff
ms.sourcegitcommit: f28ebb95ae9aaaff3f87d8388a09b41e0b3445b5
ms.translationtype: MT
ms.contentlocale: pl-PL
ms.lasthandoff: 03/29/2021
ms.locfileid: "96488208"
---
# <a name="vpn-gateway-classic-to-resource-manager-migration"></a>VPN Gateway classic to Resource Manager migration
VPN gateways can now be migrated from the classic to the Resource Manager deployment model. You can learn more about Azure Resource Manager [features and benefits](../azure-resource-manager/management/overview.md) in the linked article. This article details how to migrate from classic deployments to the newer Resource Manager based model.
VPN gateways are migrated as part of the virtual network migration from classic to Resource Manager. This migration is done one virtual network at a time. There are no additional tooling requirements or migration prerequisites. The migration steps are identical to the existing virtual network migration and are documented on the [IaaS resources migration page](../virtual-machines/migration-classic-resource-manager-ps.md). There is no data path downtime during migration, so existing workloads continue to function without loss of on-premises connectivity while the migration runs. The public IP address associated with the VPN gateway does not change during the migration process. This means you do not need to reconfigure your on-premises router after the migration is complete.
The Resource Manager model is different from the classic model and is composed of virtual network gateways, local network gateways, and connection resources. These represent the VPN gateway itself, the local site representing the on-premises address space, and the connectivity between the two. Once migration is complete, your gateways are no longer available in the classic model, and all management operations on virtual network gateways, local network gateways, and connection objects must be performed using the Resource Manager model.
## <a name="supported-scenarios"></a>Supported scenarios
The most common VPN connectivity scenarios are covered by the classic to Resource Manager migration. The supported scenarios include:
* Point-to-site connectivity
* Site-to-site connectivity with a VPN gateway connected to an on-premises location
* VNet-to-VNet connectivity between two virtual networks using VPN gateways
* Multiple virtual networks connected to the same on-premises location
* Multi-site connectivity
* Forced tunneling enabled virtual networks
Scenarios that are not supported include:
* A virtual network with both an ExpressRoute gateway and a VPN gateway is not currently supported.
* Transit scenarios where VM extensions are connected to on-premises servers. The limitations of transit VPN connectivity are detailed below.
> [!NOTE]
> CIDR validation in the Resource Manager model is stricter than in the classic model. Before migrating, ensure that the classic address ranges conform to a valid CIDR format. CIDR can be validated using any of the common CIDR validators. VNets or local sites with invalid CIDR ranges result in a failed state when migrated.
>
>
## <a name="vnet-to-vnet-connectivity-migration"></a>VNet-to-VNet connectivity migration
VNet-to-VNet connectivity in the classic model was achieved by creating a local site representation of the connected virtual network. Customers had to create two local sites that represented the two virtual networks that needed to be connected together. These were then connected to the corresponding virtual networks using IPsec tunnels to establish connectivity between the two virtual networks. This model has manageability challenges, since any address range change in one virtual network must also be maintained in the corresponding local site representation. In the Resource Manager model, this workaround is no longer needed. The connection between the two virtual networks can be achieved directly using the 'Vnet2Vnet' connection type in the connection resource.

During virtual network migration, we detect that the entity connected to the current virtual network's VPN gateway is another virtual network, and we ensure that once both virtual networks are migrated, you no longer see two local sites representing the other virtual network. The classic model of two VPN gateways, two local sites, and two connections between them is transformed into the Resource Manager model with two VPN gateways and two connections of type Vnet2Vnet.
## <a name="transit-vpn-connectivity"></a>Transit VPN connectivity
You can configure VPN gateways in a topology such that on-premises connectivity for a virtual network is achieved by connecting to another virtual network that is directly connected to on-premises. This is transit VPN connectivity, where instances in the first virtual network reach on-premises resources by transiting through the VPN gateway in the connected virtual network that is directly connected to on-premises. To achieve this configuration in the classic deployment model, you need to create a local site that has aggregated prefixes representing both the connected virtual network and the on-premises address space. This local site representation is then connected to the virtual network to achieve transit connectivity. This classic model also has similar manageability challenges, since any change in the on-premises address range must also be maintained on the local site representing the aggregate of the virtual network and on-premises. The introduction of BGP support in Resource Manager supported gateways simplifies manageability, since the connected gateways can learn routes from on-premises without manual modification of prefixes.

Because we transform VNet-to-VNet connectivity without requiring local sites, the transit scenario loses on-premises connectivity for the virtual network that is indirectly connected to on-premises. The loss of connectivity can be mitigated in the following two ways, after the migration is complete:
* Enable BGP on the VPN gateways that are connected to each other and to on-premises. Enabling BGP restores connectivity without any other configuration change, since routes are learned and advertised between the virtual network gateways. Note that the BGP option is only available on Standard and higher SKUs.
* Establish an explicit connection from the affected virtual network to the local network gateway that represents the on-premises location. This also requires configuration changes on the on-premises router to create and configure the IPsec tunnel.
## <a name="next-steps"></a>Next steps
After you learn about migration support for VPN Gateway, go to [IaaS resources migration from classic to Resource Manager](../virtual-machines/migration-classic-resource-manager-ps.md) to get started.
93f01d0c8a6ad4402b60d744b81de11c597a73b9 | 43,384 | md | Markdown | docs/index-en.md | okjay/Alink | c54b567a4a64fc1376a64c31db58fb1e160b1ae0 | [
"Apache-2.0"
] | 3,301 | 2018-10-01T16:30:44.000Z | 2022-03-30T08:07:16.000Z | docs/index-en.md | okjay/Alink | c54b567a4a64fc1376a64c31db58fb1e160b1ae0 | [
"Apache-2.0"
] | 206 | 2019-11-27T14:04:42.000Z | 2022-03-28T08:02:05.000Z | docs/index-en.md | okjay/Alink | c54b567a4a64fc1376a64c31db58fb1e160b1ae0 | [
"Apache-2.0"
] | 765 | 2018-10-09T02:02:19.000Z | 2022-03-31T12:06:21.000Z | # Alink Operator List
## Operators
### Data Sources
* [AkSourceBatchOp](en/operator/source/AkSourceBatchOp.md)
* [AkSourceStreamOp](en/operator/source/AkSourceStreamOp.md)
* [CatalogSourceBatchOp](en/operator/source/CatalogSourceBatchOp.md)
* [CatalogSourceStreamOp](en/operator/source/CatalogSourceStreamOp.md)
* [CsvSourceBatchOp](en/operator/source/CsvSourceBatchOp.md)
* [CsvSourceStreamOp](en/operator/source/CsvSourceStreamOp.md)
* [KafkaSourceStreamOp](en/operator/source/KafkaSourceStreamOp.md)
* [LibSvmSourceBatchOp](en/operator/source/LibSvmSourceBatchOp.md)
* [LibSvmSourceStreamOp](en/operator/source/LibSvmSourceStreamOp.md)
* [MemSourceBatchOp](en/operator/source/MemSourceBatchOp.md)
* [MemSourceStreamOp](en/operator/source/MemSourceStreamOp.md)
* [NumSeqSourceBatchOp](en/operator/source/NumSeqSourceBatchOp.md)
* [NumSeqSourceStreamOp](en/operator/source/NumSeqSourceStreamOp.md)
* [RandomTableSourceBatchOp](en/operator/source/RandomTableSourceBatchOp.md)
* [RandomTableSourceStreamOp](en/operator/source/RandomTableSourceStreamOp.md)
* [RandomVectorSourceBatchOp](en/operator/source/RandomVectorSourceBatchOp.md)
* [RandomVectorSourceStreamOp](en/operator/source/RandomVectorSourceStreamOp.md)
* [TableSourceBatchOp](en/operator/source/TableSourceBatchOp.md)
* [TableSourceStreamOp](en/operator/source/TableSourceStreamOp.md)
* [TextSourceBatchOp](en/operator/source/TextSourceBatchOp.md)
* [TextSourceStreamOp](en/operator/source/TextSourceStreamOp.md)
* [TsvSourceBatchOp](en/operator/source/TsvSourceBatchOp.md)
* [TsvSourceStreamOp](en/operator/source/TsvSourceStreamOp.md)
### Data Sinks
* [AkSinkBatchOp](en/operator/sink/AkSinkBatchOp.md)
* [AkSinkStreamOp](en/operator/sink/AkSinkStreamOp.md)
* [CatalogSinkBatchOp](en/operator/sink/CatalogSinkBatchOp.md)
* [CatalogSinkStreamOp](en/operator/sink/CatalogSinkStreamOp.md)
* [CsvSinkBatchOp](en/operator/sink/CsvSinkBatchOp.md)
* [CsvSinkStreamOp](en/operator/sink/CsvSinkStreamOp.md)
* [KafkaSinkStreamOp](en/operator/sink/KafkaSinkStreamOp.md)
* [LibSvmSinkBatchOp](en/operator/sink/LibSvmSinkBatchOp.md)
* [LibSvmSinkStreamOp](en/operator/sink/LibSvmSinkStreamOp.md)
* [TextSinkBatchOp](en/operator/sink/TextSinkBatchOp.md)
* [TextSinkStreamOp](en/operator/sink/TextSinkStreamOp.md)
* [TsvSinkBatchOp](en/operator/sink/TsvSinkBatchOp.md)
* [TsvSinkStreamOp](en/operator/sink/TsvSinkStreamOp.md)
### Data Processing
* [AppendIdBatchOp](en/operator/dataproc/AppendIdBatchOp.md)
* [FirstNBatchOp](en/operator/dataproc/FirstNBatchOp.md)
* [ImputerPredictBatchOp](en/operator/dataproc/ImputerPredictBatchOp.md)
* [ImputerPredictStreamOp](en/operator/dataproc/ImputerPredictStreamOp.md)
* [ImputerTrainBatchOp](en/operator/dataproc/ImputerTrainBatchOp.md)
* [IndexToStringPredictBatchOp](en/operator/dataproc/IndexToStringPredictBatchOp.md)
* [IndexToStringPredictStreamOp](en/operator/dataproc/IndexToStringPredictStreamOp.md)
* [JsonValueBatchOp](en/operator/dataproc/JsonValueBatchOp.md)
* [JsonValueStreamOp](en/operator/dataproc/JsonValueStreamOp.md)
* [KeyToValueBatchOp](en/operator/dataproc/KeyToValueBatchOp.md)
* [KeyToValueStreamOp](en/operator/dataproc/KeyToValueStreamOp.md)
* [KeyToValuesBatchOp](en/operator/dataproc/KeyToValuesBatchOp.md)
* [KeyToValuesDynamicStreamOp](en/operator/dataproc/KeyToValuesDynamicStreamOp.md)
* [KeyToValuesStreamOp](en/operator/dataproc/KeyToValuesStreamOp.md)
* [MaxAbsScalerPredictBatchOp](en/operator/dataproc/MaxAbsScalerPredictBatchOp.md)
* [MaxAbsScalerPredictStreamOp](en/operator/dataproc/MaxAbsScalerPredictStreamOp.md)
* [MaxAbsScalerTrainBatchOp](en/operator/dataproc/MaxAbsScalerTrainBatchOp.md)
* [MinMaxScalerPredictBatchOp](en/operator/dataproc/MinMaxScalerPredictBatchOp.md)
* [MinMaxScalerPredictStreamOp](en/operator/dataproc/MinMaxScalerPredictStreamOp.md)
* [MinMaxScalerTrainBatchOp](en/operator/dataproc/MinMaxScalerTrainBatchOp.md)
* [MultiStringIndexerPredictBatchOp](en/operator/dataproc/MultiStringIndexerPredictBatchOp.md)
* [MultiStringIndexerPredictStreamOp](en/operator/dataproc/MultiStringIndexerPredictStreamOp.md)
* [MultiStringIndexerTrainBatchOp](en/operator/dataproc/MultiStringIndexerTrainBatchOp.md)
* [SampleBatchOp](en/operator/dataproc/SampleBatchOp.md)
* [SampleStreamOp](en/operator/dataproc/SampleStreamOp.md)
* [SampleWithSizeBatchOp](en/operator/dataproc/SampleWithSizeBatchOp.md)
* [ShuffleBatchOp](en/operator/dataproc/ShuffleBatchOp.md)
* [SplitBatchOp](en/operator/dataproc/SplitBatchOp.md)
* [SplitStreamOp](en/operator/dataproc/SplitStreamOp.md)
* [StandardScalerPredictBatchOp](en/operator/dataproc/StandardScalerPredictBatchOp.md)
* [StandardScalerPredictStreamOp](en/operator/dataproc/StandardScalerPredictStreamOp.md)
* [StandardScalerTrainBatchOp](en/operator/dataproc/StandardScalerTrainBatchOp.md)
* [StratifiedSampleBatchOp](en/operator/dataproc/StratifiedSampleBatchOp.md)
* [StratifiedSampleStreamOp](en/operator/dataproc/StratifiedSampleStreamOp.md)
* [StratifiedSampleWithSizeBatchOp](en/operator/dataproc/StratifiedSampleWithSizeBatchOp.md)
* [StringIndexerPredictBatchOp](en/operator/dataproc/StringIndexerPredictBatchOp.md)
* [StringIndexerPredictStreamOp](en/operator/dataproc/StringIndexerPredictStreamOp.md)
* [StringIndexerTrainBatchOp](en/operator/dataproc/StringIndexerTrainBatchOp.md)
* [WeightSampleBatchOp](en/operator/dataproc/WeightSampleBatchOp.md)
#### Data Format
* [ColumnsToCsvBatchOp](en/operator/dataproc/format/ColumnsToCsvBatchOp.md)
* [ColumnsToCsvStreamOp](en/operator/dataproc/format/ColumnsToCsvStreamOp.md)
* [ColumnsToJsonBatchOp](en/operator/dataproc/format/ColumnsToJsonBatchOp.md)
* [ColumnsToJsonStreamOp](en/operator/dataproc/format/ColumnsToJsonStreamOp.md)
* [ColumnsToKvBatchOp](en/operator/dataproc/format/ColumnsToKvBatchOp.md)
* [ColumnsToKvStreamOp](en/operator/dataproc/format/ColumnsToKvStreamOp.md)
* [ColumnsToTripleBatchOp](en/operator/dataproc/format/ColumnsToTripleBatchOp.md)
* [ColumnsToTripleStreamOp](en/operator/dataproc/format/ColumnsToTripleStreamOp.md)
* [ColumnsToVectorBatchOp](en/operator/dataproc/format/ColumnsToVectorBatchOp.md)
* [ColumnsToVectorStreamOp](en/operator/dataproc/format/ColumnsToVectorStreamOp.md)
* [CsvToColumnsBatchOp](en/operator/dataproc/format/CsvToColumnsBatchOp.md)
* [CsvToColumnsStreamOp](en/operator/dataproc/format/CsvToColumnsStreamOp.md)
* [CsvToJsonBatchOp](en/operator/dataproc/format/CsvToJsonBatchOp.md)
* [CsvToJsonStreamOp](en/operator/dataproc/format/CsvToJsonStreamOp.md)
* [CsvToKvBatchOp](en/operator/dataproc/format/CsvToKvBatchOp.md)
* [CsvToKvStreamOp](en/operator/dataproc/format/CsvToKvStreamOp.md)
* [CsvToTripleBatchOp](en/operator/dataproc/format/CsvToTripleBatchOp.md)
* [CsvToTripleStreamOp](en/operator/dataproc/format/CsvToTripleStreamOp.md)
* [CsvToVectorBatchOp](en/operator/dataproc/format/CsvToVectorBatchOp.md)
* [CsvToVectorStreamOp](en/operator/dataproc/format/CsvToVectorStreamOp.md)
* [JsonToColumnsBatchOp](en/operator/dataproc/format/JsonToColumnsBatchOp.md)
* [JsonToColumnsStreamOp](en/operator/dataproc/format/JsonToColumnsStreamOp.md)
* [JsonToCsvBatchOp](en/operator/dataproc/format/JsonToCsvBatchOp.md)
* [JsonToCsvStreamOp](en/operator/dataproc/format/JsonToCsvStreamOp.md)
* [JsonToKvBatchOp](en/operator/dataproc/format/JsonToKvBatchOp.md)
* [JsonToKvStreamOp](en/operator/dataproc/format/JsonToKvStreamOp.md)
* [JsonToTripleBatchOp](en/operator/dataproc/format/JsonToTripleBatchOp.md)
* [JsonToTripleStreamOp](en/operator/dataproc/format/JsonToTripleStreamOp.md)
* [JsonToVectorBatchOp](en/operator/dataproc/format/JsonToVectorBatchOp.md)
* [JsonToVectorStreamOp](en/operator/dataproc/format/JsonToVectorStreamOp.md)
* [KvToColumnsBatchOp](en/operator/dataproc/format/KvToColumnsBatchOp.md)
* [KvToColumnsStreamOp](en/operator/dataproc/format/KvToColumnsStreamOp.md)
* [KvToCsvBatchOp](en/operator/dataproc/format/KvToCsvBatchOp.md)
* [KvToCsvStreamOp](en/operator/dataproc/format/KvToCsvStreamOp.md)
* [KvToJsonBatchOp](en/operator/dataproc/format/KvToJsonBatchOp.md)
* [KvToJsonStreamOp](en/operator/dataproc/format/KvToJsonStreamOp.md)
* [KvToTripleBatchOp](en/operator/dataproc/format/KvToTripleBatchOp.md)
* [KvToTripleStreamOp](en/operator/dataproc/format/KvToTripleStreamOp.md)
* [KvToVectorBatchOp](en/operator/dataproc/format/KvToVectorBatchOp.md)
* [KvToVectorStreamOp](en/operator/dataproc/format/KvToVectorStreamOp.md)
* [TripleToColumnsBatchOp](en/operator/dataproc/format/TripleToColumnsBatchOp.md)
* [TripleToCsvBatchOp](en/operator/dataproc/format/TripleToCsvBatchOp.md)
* [TripleToJsonBatchOp](en/operator/dataproc/format/TripleToJsonBatchOp.md)
* [TripleToKvBatchOp](en/operator/dataproc/format/TripleToKvBatchOp.md)
* [TripleToVectorBatchOp](en/operator/dataproc/format/TripleToVectorBatchOp.md)
* [VectorToColumnsBatchOp](en/operator/dataproc/format/VectorToColumnsBatchOp.md)
* [VectorToColumnsStreamOp](en/operator/dataproc/format/VectorToColumnsStreamOp.md)
* [VectorToCsvBatchOp](en/operator/dataproc/format/VectorToCsvBatchOp.md)
* [VectorToCsvStreamOp](en/operator/dataproc/format/VectorToCsvStreamOp.md)
* [VectorToJsonBatchOp](en/operator/dataproc/format/VectorToJsonBatchOp.md)
* [VectorToJsonStreamOp](en/operator/dataproc/format/VectorToJsonStreamOp.md)
* [VectorToKvBatchOp](en/operator/dataproc/format/VectorToKvBatchOp.md)
* [VectorToKvStreamOp](en/operator/dataproc/format/VectorToKvStreamOp.md)
* [VectorToTripleBatchOp](en/operator/dataproc/format/VectorToTripleBatchOp.md)
* [VectorToTripleStreamOp](en/operator/dataproc/format/VectorToTripleStreamOp.md)
#### Vector
* [VectorAssemblerBatchOp](en/operator/dataproc/vector/VectorAssemblerBatchOp.md)
* [VectorAssemblerStreamOp](en/operator/dataproc/vector/VectorAssemblerStreamOp.md)
* [VectorElementwiseProductBatchOp](en/operator/dataproc/vector/VectorElementwiseProductBatchOp.md)
* [VectorElementwiseProductStreamOp](en/operator/dataproc/vector/VectorElementwiseProductStreamOp.md)
* [VectorImputerPredictBatchOp](en/operator/dataproc/vector/VectorImputerPredictBatchOp.md)
* [VectorImputerPredictStreamOp](en/operator/dataproc/vector/VectorImputerPredictStreamOp.md)
* [VectorImputerTrainBatchOp](en/operator/dataproc/vector/VectorImputerTrainBatchOp.md)
* [VectorInteractionBatchOp](en/operator/dataproc/vector/VectorInteractionBatchOp.md)
* [VectorInteractionStreamOp](en/operator/dataproc/vector/VectorInteractionStreamOp.md)
* [VectorMaxAbsScalerPredictBatchOp](en/operator/dataproc/vector/VectorMaxAbsScalerPredictBatchOp.md)
* [VectorMaxAbsScalerPredictStreamOp](en/operator/dataproc/vector/VectorMaxAbsScalerPredictStreamOp.md)
* [VectorMaxAbsScalerTrainBatchOp](en/operator/dataproc/vector/VectorMaxAbsScalerTrainBatchOp.md)
* [VectorMinMaxScalerPredictBatchOp](en/operator/dataproc/vector/VectorMinMaxScalerPredictBatchOp.md)
* [VectorMinMaxScalerPredictStreamOp](en/operator/dataproc/vector/VectorMinMaxScalerPredictStreamOp.md)
* [VectorMinMaxScalerTrainBatchOp](en/operator/dataproc/vector/VectorMinMaxScalerTrainBatchOp.md)
* [VectorNormalizeBatchOp](en/operator/dataproc/vector/VectorNormalizeBatchOp.md)
* [VectorNormalizeStreamOp](en/operator/dataproc/vector/VectorNormalizeStreamOp.md)
* [VectorPolynomialExpandBatchOp](en/operator/dataproc/vector/VectorPolynomialExpandBatchOp.md)
* [VectorPolynomialExpandStreamOp](en/operator/dataproc/vector/VectorPolynomialExpandStreamOp.md)
* [VectorSizeHintBatchOp](en/operator/dataproc/vector/VectorSizeHintBatchOp.md)
* [VectorSizeHintStreamOp](en/operator/dataproc/vector/VectorSizeHintStreamOp.md)
* [VectorSliceBatchOp](en/operator/dataproc/vector/VectorSliceBatchOp.md)
* [VectorSliceStreamOp](en/operator/dataproc/vector/VectorSliceStreamOp.md)
* [VectorStandardScalerPredictBatchOp](en/operator/dataproc/vector/VectorStandardScalerPredictBatchOp.md)
* [VectorStandardScalerPredictStreamOp](en/operator/dataproc/vector/VectorStandardScalerPredictStreamOp.md)
* [VectorStandardScalerTrainBatchOp](en/operator/dataproc/vector/VectorStandardScalerTrainBatchOp.md)
### SQL
* [AsBatchOp](en/operator/sql/AsBatchOp.md)
* [AsStreamOp](en/operator/sql/AsStreamOp.md)
* [DistinctBatchOp](en/operator/sql/DistinctBatchOp.md)
* [FilterBatchOp](en/operator/sql/FilterBatchOp.md)
* [FilterStreamOp](en/operator/sql/FilterStreamOp.md)
* [FullOuterJoinBatchOp](en/operator/sql/FullOuterJoinBatchOp.md)
* [GroupByBatchOp](en/operator/sql/GroupByBatchOp.md)
* [IntersectAllBatchOp](en/operator/sql/IntersectAllBatchOp.md)
* [IntersectBatchOp](en/operator/sql/IntersectBatchOp.md)
* [JoinBatchOp](en/operator/sql/JoinBatchOp.md)
* [LeftOuterJoinBatchOp](en/operator/sql/LeftOuterJoinBatchOp.md)
* [MinusAllBatchOp](en/operator/sql/MinusAllBatchOp.md)
* [MinusBatchOp](en/operator/sql/MinusBatchOp.md)
* [OrderByBatchOp](en/operator/sql/OrderByBatchOp.md)
* [RightOuterJoinBatchOp](en/operator/sql/RightOuterJoinBatchOp.md)
* [SelectBatchOp](en/operator/sql/SelectBatchOp.md)
* [SelectStreamOp](en/operator/sql/SelectStreamOp.md)
* [UnionAllBatchOp](en/operator/sql/UnionAllBatchOp.md)
* [UnionAllStreamOp](en/operator/sql/UnionAllStreamOp.md)
* [UnionBatchOp](en/operator/sql/UnionBatchOp.md)
* [WhereBatchOp](en/operator/sql/WhereBatchOp.md)
* [WhereStreamOp](en/operator/sql/WhereStreamOp.md)
* [WindowGroupByStreamOp](en/operator/sql/WindowGroupByStreamOp.md)
### Feature Engineering
* [BinarizerBatchOp](en/operator/feature/BinarizerBatchOp.md)
* [BinarizerStreamOp](en/operator/feature/BinarizerStreamOp.md)
* [BucketizerBatchOp](en/operator/feature/BucketizerBatchOp.md)
* [BucketizerStreamOp](en/operator/feature/BucketizerStreamOp.md)
* [ChiSqSelectorBatchOp](en/operator/feature/ChiSqSelectorBatchOp.md)
* [DCTBatchOp](en/operator/feature/DCTBatchOp.md)
* [DCTStreamOp](en/operator/feature/DCTStreamOp.md)
* [EqualWidthDiscretizerPredictBatchOp](en/operator/feature/EqualWidthDiscretizerPredictBatchOp.md)
* [EqualWidthDiscretizerPredictStreamOp](en/operator/feature/EqualWidthDiscretizerPredictStreamOp.md)
* [EqualWidthDiscretizerTrainBatchOp](en/operator/feature/EqualWidthDiscretizerTrainBatchOp.md)
* [FeatureHasherBatchOp](en/operator/feature/FeatureHasherBatchOp.md)
* [FeatureHasherStreamOp](en/operator/feature/FeatureHasherStreamOp.md)
* [OneHotPredictBatchOp](en/operator/feature/OneHotPredictBatchOp.md)
* [OneHotPredictStreamOp](en/operator/feature/OneHotPredictStreamOp.md)
* [OneHotTrainBatchOp](en/operator/feature/OneHotTrainBatchOp.md)
* [PcaPredictBatchOp](en/operator/feature/PcaPredictBatchOp.md)
* [PcaPredictStreamOp](en/operator/feature/PcaPredictStreamOp.md)
* [PcaTrainBatchOp](en/operator/feature/PcaTrainBatchOp.md)
* [QuantileDiscretizerPredictBatchOp](en/operator/feature/QuantileDiscretizerPredictBatchOp.md)
* [QuantileDiscretizerPredictStreamOp](en/operator/feature/QuantileDiscretizerPredictStreamOp.md)
* [QuantileDiscretizerTrainBatchOp](en/operator/feature/QuantileDiscretizerTrainBatchOp.md)
* [VectorChiSqSelectorBatchOp](en/operator/feature/VectorChiSqSelectorBatchOp.md)
### Text Processing
* [DocCountVectorizerPredictBatchOp](en/operator/nlp/DocCountVectorizerPredictBatchOp.md)
* [DocCountVectorizerPredictStreamOp](en/operator/nlp/DocCountVectorizerPredictStreamOp.md)
* [DocCountVectorizerTrainBatchOp](en/operator/nlp/DocCountVectorizerTrainBatchOp.md)
* [DocHashCountVectorizerPredictBatchOp](en/operator/nlp/DocHashCountVectorizerPredictBatchOp.md)
* [DocHashCountVectorizerPredictStreamOp](en/operator/nlp/DocHashCountVectorizerPredictStreamOp.md)
* [DocHashCountVectorizerTrainBatchOp](en/operator/nlp/DocHashCountVectorizerTrainBatchOp.md)
* [DocWordCountBatchOp](en/operator/nlp/DocWordCountBatchOp.md)
* [DocWordCountStreamOp](en/operator/nlp/DocWordCountStreamOp.md)
* [KeywordsExtractionBatchOp](en/operator/nlp/KeywordsExtractionBatchOp.md)
* [KeywordsExtractionStreamOp](en/operator/nlp/KeywordsExtractionStreamOp.md)
* [NGramBatchOp](en/operator/nlp/NGramBatchOp.md)
* [NGramStreamOp](en/operator/nlp/NGramStreamOp.md)
* [RegexTokenizerBatchOp](en/operator/nlp/RegexTokenizerBatchOp.md)
* [RegexTokenizerStreamOp](en/operator/nlp/RegexTokenizerStreamOp.md)
* [SegmentBatchOp](en/operator/nlp/SegmentBatchOp.md)
* [SegmentStreamOp](en/operator/nlp/SegmentStreamOp.md)
* [StopWordsRemoverBatchOp](en/operator/nlp/StopWordsRemoverBatchOp.md)
* [StopWordsRemoverStreamOp](en/operator/nlp/StopWordsRemoverStreamOp.md)
* [TokenizerBatchOp](en/operator/nlp/TokenizerBatchOp.md)
* [TokenizerStreamOp](en/operator/nlp/TokenizerStreamOp.md)
* [Word2VecPredictBatchOp](en/operator/nlp/Word2VecPredictBatchOp.md)
* [Word2VecPredictStreamOp](en/operator/nlp/Word2VecPredictStreamOp.md)
* [Word2VecTrainBatchOp](en/operator/nlp/Word2VecTrainBatchOp.md)
* [WordCountBatchOp](en/operator/nlp/WordCountBatchOp.md)
### Statistics
* [ChiSquareTestBatchOp](en/operator/statistics/ChiSquareTestBatchOp.md)
* [CorrelationBatchOp](en/operator/statistics/CorrelationBatchOp.md)
* [SummarizerBatchOp](en/operator/statistics/SummarizerBatchOp.md)
* [VectorChiSquareTestBatchOp](en/operator/statistics/VectorChiSquareTestBatchOp.md)
* [VectorCorrelationBatchOp](en/operator/statistics/VectorCorrelationBatchOp.md)
* [VectorSummarizerBatchOp](en/operator/statistics/VectorSummarizerBatchOp.md)
### Classification
* [C45PredictBatchOp](en/operator/classification/C45PredictBatchOp.md)
* [C45PredictStreamOp](en/operator/classification/C45PredictStreamOp.md)
* [C45TrainBatchOp](en/operator/classification/C45TrainBatchOp.md)
* [CartPredictBatchOp](en/operator/classification/CartPredictBatchOp.md)
* [CartPredictStreamOp](en/operator/classification/CartPredictStreamOp.md)
* [CartTrainBatchOp](en/operator/classification/CartTrainBatchOp.md)
* [DecisionTreePredictBatchOp](en/operator/classification/DecisionTreePredictBatchOp.md)
* [DecisionTreePredictStreamOp](en/operator/classification/DecisionTreePredictStreamOp.md)
* [DecisionTreeTrainBatchOp](en/operator/classification/DecisionTreeTrainBatchOp.md)
* [FmClassifierPredictBatchOp](en/operator/classification/FmClassifierPredictBatchOp.md)
* [FmClassifierPredictStreamOp](en/operator/classification/FmClassifierPredictStreamOp.md)
* [FmClassifierTrainBatchOp](en/operator/classification/FmClassifierTrainBatchOp.md)
* [GbdtPredictBatchOp](en/operator/classification/GbdtPredictBatchOp.md)
* [GbdtPredictStreamOp](en/operator/classification/GbdtPredictStreamOp.md)
* [GbdtTrainBatchOp](en/operator/classification/GbdtTrainBatchOp.md)
* [Id3PredictBatchOp](en/operator/classification/Id3PredictBatchOp.md)
* [Id3PredictStreamOp](en/operator/classification/Id3PredictStreamOp.md)
* [Id3TrainBatchOp](en/operator/classification/Id3TrainBatchOp.md)
* [KnnPredictBatchOp](en/operator/classification/KnnPredictBatchOp.md)
* [KnnTrainBatchOp](en/operator/classification/KnnTrainBatchOp.md)
* [LinearSvmPredictBatchOp](en/operator/classification/LinearSvmPredictBatchOp.md)
* [LinearSvmPredictStreamOp](en/operator/classification/LinearSvmPredictStreamOp.md)
* [LinearSvmTrainBatchOp](en/operator/classification/LinearSvmTrainBatchOp.md)
* [LogisticRegressionPredictBatchOp](en/operator/classification/LogisticRegressionPredictBatchOp.md)
* [LogisticRegressionPredictStreamOp](en/operator/classification/LogisticRegressionPredictStreamOp.md)
* [LogisticRegressionTrainBatchOp](en/operator/classification/LogisticRegressionTrainBatchOp.md)
* [MultilayerPerceptronPredictBatchOp](en/operator/classification/MultilayerPerceptronPredictBatchOp.md)
* [MultilayerPerceptronPredictStreamOp](en/operator/classification/MultilayerPerceptronPredictStreamOp.md)
* [MultilayerPerceptronTrainBatchOp](en/operator/classification/MultilayerPerceptronTrainBatchOp.md)
* [NaiveBayesPredictBatchOp](en/operator/classification/NaiveBayesPredictBatchOp.md)
* [NaiveBayesPredictStreamOp](en/operator/classification/NaiveBayesPredictStreamOp.md)
* [NaiveBayesTextPredictBatchOp](en/operator/classification/NaiveBayesTextPredictBatchOp.md)
* [NaiveBayesTextPredictStreamOp](en/operator/classification/NaiveBayesTextPredictStreamOp.md)
* [NaiveBayesTextTrainBatchOp](en/operator/classification/NaiveBayesTextTrainBatchOp.md)
* [NaiveBayesTrainBatchOp](en/operator/classification/NaiveBayesTrainBatchOp.md)
* [RandomForestPredictBatchOp](en/operator/classification/RandomForestPredictBatchOp.md)
* [RandomForestPredictStreamOp](en/operator/classification/RandomForestPredictStreamOp.md)
* [RandomForestTrainBatchOp](en/operator/classification/RandomForestTrainBatchOp.md)
* [SoftmaxPredictBatchOp](en/operator/classification/SoftmaxPredictBatchOp.md)
* [SoftmaxPredictStreamOp](en/operator/classification/SoftmaxPredictStreamOp.md)
* [SoftmaxTrainBatchOp](en/operator/classification/SoftmaxTrainBatchOp.md)
### Regression
* [AftSurvivalRegPredictBatchOp](en/operator/regression/AftSurvivalRegPredictBatchOp.md)
* [AftSurvivalRegPredictStreamOp](en/operator/regression/AftSurvivalRegPredictStreamOp.md)
* [AftSurvivalRegTrainBatchOp](en/operator/regression/AftSurvivalRegTrainBatchOp.md)
* [CartRegPredictBatchOp](en/operator/regression/CartRegPredictBatchOp.md)
* [CartRegPredictStreamOp](en/operator/regression/CartRegPredictStreamOp.md)
* [CartRegTrainBatchOp](en/operator/regression/CartRegTrainBatchOp.md)
* [DecisionTreeRegPredictBatchOp](en/operator/regression/DecisionTreeRegPredictBatchOp.md)
* [DecisionTreeRegPredictStreamOp](en/operator/regression/DecisionTreeRegPredictStreamOp.md)
* [DecisionTreeRegTrainBatchOp](en/operator/regression/DecisionTreeRegTrainBatchOp.md)
* [FmRegressorPredictBatchOp](en/operator/regression/FmRegressorPredictBatchOp.md)
* [FmRegressorPredictStreamOp](en/operator/regression/FmRegressorPredictStreamOp.md)
* [FmRegressorTrainBatchOp](en/operator/regression/FmRegressorTrainBatchOp.md)
* [GBRankPredictStreamOp](en/operator/regression/GBRankPredictStreamOp.md)
* [GbdtRegPredictBatchOp](en/operator/regression/GbdtRegPredictBatchOp.md)
* [GbdtRegPredictStreamOp](en/operator/regression/GbdtRegPredictStreamOp.md)
* [GbdtRegTrainBatchOp](en/operator/regression/GbdtRegTrainBatchOp.md)
* [GlmEvaluationBatchOp](en/operator/regression/GlmEvaluationBatchOp.md)
* [GlmPredictBatchOp](en/operator/regression/GlmPredictBatchOp.md)
* [GlmPredictStreamOp](en/operator/regression/GlmPredictStreamOp.md)
* [GlmTrainBatchOp](en/operator/regression/GlmTrainBatchOp.md)
* [IsotonicRegPredictBatchOp](en/operator/regression/IsotonicRegPredictBatchOp.md)
* [IsotonicRegPredictStreamOp](en/operator/regression/IsotonicRegPredictStreamOp.md)
* [IsotonicRegTrainBatchOp](en/operator/regression/IsotonicRegTrainBatchOp.md)
* [LassoRegPredictBatchOp](en/operator/regression/LassoRegPredictBatchOp.md)
* [LassoRegPredictStreamOp](en/operator/regression/LassoRegPredictStreamOp.md)
* [LassoRegTrainBatchOp](en/operator/regression/LassoRegTrainBatchOp.md)
* [LinearRegPredictBatchOp](en/operator/regression/LinearRegPredictBatchOp.md)
* [LinearRegPredictStreamOp](en/operator/regression/LinearRegPredictStreamOp.md)
* [LinearRegTrainBatchOp](en/operator/regression/LinearRegTrainBatchOp.md)
* [RandomForestRegPredictBatchOp](en/operator/regression/RandomForestRegPredictBatchOp.md)
* [RandomForestRegPredictStreamOp](en/operator/regression/RandomForestRegPredictStreamOp.md)
* [RandomForestRegTrainBatchOp](en/operator/regression/RandomForestRegTrainBatchOp.md)
* [RidgeRegPredictBatchOp](en/operator/regression/RidgeRegPredictBatchOp.md)
* [RidgeRegPredictStreamOp](en/operator/regression/RidgeRegPredictStreamOp.md)
* [RidgeRegTrainBatchOp](en/operator/regression/RidgeRegTrainBatchOp.md)
### Clustering
* [BisectingKMeansPredictBatchOp](en/operator/clustering/BisectingKMeansPredictBatchOp.md)
* [BisectingKMeansPredictStreamOp](en/operator/clustering/BisectingKMeansPredictStreamOp.md)
* [BisectingKMeansTrainBatchOp](en/operator/clustering/BisectingKMeansTrainBatchOp.md)
* [GeoKMeansPredictBatchOp](en/operator/clustering/GeoKMeansPredictBatchOp.md)
* [GeoKMeansPredictStreamOp](en/operator/clustering/GeoKMeansPredictStreamOp.md)
* [GeoKMeansTrainBatchOp](en/operator/clustering/GeoKMeansTrainBatchOp.md)
* [GmmPredictBatchOp](en/operator/clustering/GmmPredictBatchOp.md)
* [GmmPredictStreamOp](en/operator/clustering/GmmPredictStreamOp.md)
* [GmmTrainBatchOp](en/operator/clustering/GmmTrainBatchOp.md)
* [KMeansPredictBatchOp](en/operator/clustering/KMeansPredictBatchOp.md)
* [KMeansPredictStreamOp](en/operator/clustering/KMeansPredictStreamOp.md)
* [KMeansTrainBatchOp](en/operator/clustering/KMeansTrainBatchOp.md)
* [LdaPredictBatchOp](en/operator/clustering/LdaPredictBatchOp.md)
* [LdaPredictStreamOp](en/operator/clustering/LdaPredictStreamOp.md)
* [LdaTrainBatchOp](en/operator/clustering/LdaTrainBatchOp.md)
* [StreamingKMeansStreamOp](en/operator/clustering/StreamingKMeansStreamOp.md)
### Association Rule
* [FpGrowthBatchOp](en/operator/associationrule/FpGrowthBatchOp.md)
* [PrefixSpanBatchOp](en/operator/associationrule/PrefixSpanBatchOp.md)
### Recommendation
* [AlsImplicitTrainBatchOp](en/operator/recommendation/AlsImplicitTrainBatchOp.md)
* [AlsItemsPerUserRecommBatchOp](en/operator/recommendation/AlsItemsPerUserRecommBatchOp.md)
* [AlsItemsPerUserRecommStreamOp](en/operator/recommendation/AlsItemsPerUserRecommStreamOp.md)
* [AlsRateRecommBatchOp](en/operator/recommendation/AlsRateRecommBatchOp.md)
* [AlsRateRecommStreamOp](en/operator/recommendation/AlsRateRecommStreamOp.md)
* [AlsSimilarItemsRecommBatchOp](en/operator/recommendation/AlsSimilarItemsRecommBatchOp.md)
* [AlsSimilarItemsRecommStreamOp](en/operator/recommendation/AlsSimilarItemsRecommStreamOp.md)
* [AlsSimilarUsersRecommBatchOp](en/operator/recommendation/AlsSimilarUsersRecommBatchOp.md)
* [AlsSimilarUsersRecommStreamOp](en/operator/recommendation/AlsSimilarUsersRecommStreamOp.md)
* [AlsTrainBatchOp](en/operator/recommendation/AlsTrainBatchOp.md)
* [AlsUsersPerItemRecommBatchOp](en/operator/recommendation/AlsUsersPerItemRecommBatchOp.md)
* [AlsUsersPerItemRecommStreamOp](en/operator/recommendation/AlsUsersPerItemRecommStreamOp.md)
* [FlattenKObjectBatchOp](en/operator/recommendation/FlattenKObjectBatchOp.md)
* [FlattenKObjectStreamOp](en/operator/recommendation/FlattenKObjectStreamOp.md)
* [FmItemsPerUserRecommBatchOp](en/operator/recommendation/FmItemsPerUserRecommBatchOp.md)
* [FmItemsPerUserRecommStreamOp](en/operator/recommendation/FmItemsPerUserRecommStreamOp.md)
* [FmRateRecommBatchOp](en/operator/recommendation/FmRateRecommBatchOp.md)
* [FmRecommBinaryImplicitTrainBatchOp](en/operator/recommendation/FmRecommBinaryImplicitTrainBatchOp.md)
* [FmRecommTrainBatchOp](en/operator/recommendation/FmRecommTrainBatchOp.md)
* [FmUsersPerItemRecommBatchOp](en/operator/recommendation/FmUsersPerItemRecommBatchOp.md)
* [FmUsersPerItemRecommStreamOp](en/operator/recommendation/FmUsersPerItemRecommStreamOp.md)
* [ItemCfItemsPerUserRecommBatchOp](en/operator/recommendation/ItemCfItemsPerUserRecommBatchOp.md)
* [ItemCfItemsPerUserRecommStreamOp](en/operator/recommendation/ItemCfItemsPerUserRecommStreamOp.md)
* [ItemCfRateRecommBatchOp](en/operator/recommendation/ItemCfRateRecommBatchOp.md)
* [ItemCfRateRecommStreamOp](en/operator/recommendation/ItemCfRateRecommStreamOp.md)
* [ItemCfSimilarItemsRecommBatchOp](en/operator/recommendation/ItemCfSimilarItemsRecommBatchOp.md)
* [ItemCfSimilarItemsRecommStreamOp](en/operator/recommendation/ItemCfSimilarItemsRecommStreamOp.md)
* [ItemCfTrainBatchOp](en/operator/recommendation/ItemCfTrainBatchOp.md)
* [ItemCfUsersPerItemRecommBatchOp](en/operator/recommendation/ItemCfUsersPerItemRecommBatchOp.md)
* [ItemCfUsersPerItemRecommStreamOp](en/operator/recommendation/ItemCfUsersPerItemRecommStreamOp.md)
* [LeaveKObjectOutBatchOp](en/operator/recommendation/LeaveKObjectOutBatchOp.md)
* [LeaveTopKObjectOutBatchOp](en/operator/recommendation/LeaveTopKObjectOutBatchOp.md)
* [NegativeItemSamplingBatchOp](en/operator/recommendation/NegativeItemSamplingBatchOp.md)
* [UserCfItemsPerUserRecommBatchOp](en/operator/recommendation/UserCfItemsPerUserRecommBatchOp.md)
* [UserCfItemsPerUserRecommStreamOp](en/operator/recommendation/UserCfItemsPerUserRecommStreamOp.md)
* [UserCfRateRecommBatchOp](en/operator/recommendation/UserCfRateRecommBatchOp.md)
* [UserCfRateRecommStreamOp](en/operator/recommendation/UserCfRateRecommStreamOp.md)
* [UserCfSimilarUsersRecommBatchOp](en/operator/recommendation/UserCfSimilarUsersRecommBatchOp.md)
* [UserCfSimilarUsersRecommStreamOp](en/operator/recommendation/UserCfSimilarUsersRecommStreamOp.md)
* [UserCfTrainBatchOp](en/operator/recommendation/UserCfTrainBatchOp.md)
* [UserCfUsersPerItemRecommBatchOp](en/operator/recommendation/UserCfUsersPerItemRecommBatchOp.md)
* [UserCfUsersPerItemRecommStreamOp](en/operator/recommendation/UserCfUsersPerItemRecommStreamOp.md)
### Evaluation
* [EvalBinaryClassBatchOp](en/operator/evaluation/EvalBinaryClassBatchOp.md)
* [EvalBinaryClassStreamOp](en/operator/evaluation/EvalBinaryClassStreamOp.md)
* [EvalClusterBatchOp](en/operator/evaluation/EvalClusterBatchOp.md)
* [EvalMultiClassBatchOp](en/operator/evaluation/EvalMultiClassBatchOp.md)
* [EvalMultiClassStreamOp](en/operator/evaluation/EvalMultiClassStreamOp.md)
* [EvalMultiLabelBatchOp](en/operator/evaluation/EvalMultiLabelBatchOp.md)
* [EvalRankingBatchOp](en/operator/evaluation/EvalRankingBatchOp.md)
* [EvalRegressionBatchOp](en/operator/evaluation/EvalRegressionBatchOp.md)
### Outlier Detection
* [SosBatchOp](en/operator/outlier/SosBatchOp.md)
### Online Learning
* [FtrlModelFilterStreamOp](en/operator/onlinelearning/FtrlModelFilterStreamOp.md)
* [FtrlPredictStreamOp](en/operator/onlinelearning/FtrlPredictStreamOp.md)
* [FtrlTrainStreamOp](en/operator/onlinelearning/FtrlTrainStreamOp.md)
### Similarity
* [StringApproxNearestNeighborPredictBatchOp](en/operator/similarity/StringApproxNearestNeighborPredictBatchOp.md)
* [StringApproxNearestNeighborTrainBatchOp](en/operator/similarity/StringApproxNearestNeighborTrainBatchOp.md)
* [StringNearestNeighborPredictBatchOp](en/operator/similarity/StringNearestNeighborPredictBatchOp.md)
* [StringNearestNeighborTrainBatchOp](en/operator/similarity/StringNearestNeighborTrainBatchOp.md)
* [StringSimilarityPairwiseBatchOp](en/operator/similarity/StringSimilarityPairwiseBatchOp.md)
* [StringSimilarityPairwiseStreamOp](en/operator/similarity/StringSimilarityPairwiseStreamOp.md)
* [TextApproxNearestNeighborPredictBatchOp](en/operator/similarity/TextApproxNearestNeighborPredictBatchOp.md)
* [TextApproxNearestNeighborTrainBatchOp](en/operator/similarity/TextApproxNearestNeighborTrainBatchOp.md)
* [TextNearestNeighborPredictBatchOp](en/operator/similarity/TextNearestNeighborPredictBatchOp.md)
* [TextNearestNeighborTrainBatchOp](en/operator/similarity/TextNearestNeighborTrainBatchOp.md)
* [TextSimilarityPairwiseBatchOp](en/operator/similarity/TextSimilarityPairwiseBatchOp.md)
* [TextSimilarityPairwiseStreamOp](en/operator/similarity/TextSimilarityPairwiseStreamOp.md)
* [VectorApproxNearestNeighborPredictBatchOp](en/operator/similarity/VectorApproxNearestNeighborPredictBatchOp.md)
* [VectorApproxNearestNeighborTrainBatchOp](en/operator/similarity/VectorApproxNearestNeighborTrainBatchOp.md)
* [VectorNearestNeighborPredictBatchOp](en/operator/similarity/VectorNearestNeighborPredictBatchOp.md)
* [VectorNearestNeighborTrainBatchOp](en/operator/similarity/VectorNearestNeighborTrainBatchOp.md)
### Utilities
* [PrintBatchOp](en/operator/utils/PrintBatchOp.md)
* [PrintStreamOp](en/operator/utils/PrintStreamOp.md)
* [UDFBatchOp](en/operator/utils/UDFBatchOp.md)
* [UDFStreamOp](en/operator/utils/UDFStreamOp.md)
* [UDTFBatchOp](en/operator/utils/UDTFBatchOp.md)
* [UDTFStreamOp](en/operator/utils/UDTFStreamOp.md)
## Pipeline
### Data Processing
* [Imputer](en/pipeline/dataproc/Imputer.md)
* [ImputerModel](en/pipeline/dataproc/ImputerModel.md)
* [IndexToString](en/pipeline/dataproc/IndexToString.md)
* [KeyToValue](en/pipeline/dataproc/KeyToValue.md)
* [KeyToValues](en/pipeline/dataproc/KeyToValues.md)
* [MaxAbsScaler](en/pipeline/dataproc/MaxAbsScaler.md)
* [MaxAbsScalerModel](en/pipeline/dataproc/MaxAbsScalerModel.md)
* [MinMaxScaler](en/pipeline/dataproc/MinMaxScaler.md)
* [MinMaxScalerModel](en/pipeline/dataproc/MinMaxScalerModel.md)
* [MultiStringIndexer](en/pipeline/dataproc/MultiStringIndexer.md)
* [MultiStringIndexerModel](en/pipeline/dataproc/MultiStringIndexerModel.md)
* [StandardScaler](en/pipeline/dataproc/StandardScaler.md)
* [StandardScalerModel](en/pipeline/dataproc/StandardScalerModel.md)
* [StringIndexer](en/pipeline/dataproc/StringIndexer.md)
* [StringIndexerModel](en/pipeline/dataproc/StringIndexerModel.md)
#### Data Format
* [ColumnsToCsv](en/pipeline/dataproc/format/ColumnsToCsv.md)
* [ColumnsToJson](en/pipeline/dataproc/format/ColumnsToJson.md)
* [ColumnsToKv](en/pipeline/dataproc/format/ColumnsToKv.md)
* [ColumnsToVector](en/pipeline/dataproc/format/ColumnsToVector.md)
* [CsvToColumns](en/pipeline/dataproc/format/CsvToColumns.md)
* [CsvToJson](en/pipeline/dataproc/format/CsvToJson.md)
* [CsvToKv](en/pipeline/dataproc/format/CsvToKv.md)
* [CsvToVector](en/pipeline/dataproc/format/CsvToVector.md)
* [JsonToColumns](en/pipeline/dataproc/format/JsonToColumns.md)
* [JsonToCsv](en/pipeline/dataproc/format/JsonToCsv.md)
* [JsonToKv](en/pipeline/dataproc/format/JsonToKv.md)
* [JsonToVector](en/pipeline/dataproc/format/JsonToVector.md)
* [KvToColumns](en/pipeline/dataproc/format/KvToColumns.md)
* [KvToCsv](en/pipeline/dataproc/format/KvToCsv.md)
* [KvToJson](en/pipeline/dataproc/format/KvToJson.md)
* [KvToVector](en/pipeline/dataproc/format/KvToVector.md)
* [VectorToColumns](en/pipeline/dataproc/format/VectorToColumns.md)
* [VectorToCsv](en/pipeline/dataproc/format/VectorToCsv.md)
* [VectorToJson](en/pipeline/dataproc/format/VectorToJson.md)
* [VectorToKv](en/pipeline/dataproc/format/VectorToKv.md)
#### Vector
* [VectorAssembler](en/pipeline/dataproc/vector/VectorAssembler.md)
* [VectorElementwiseProduct](en/pipeline/dataproc/vector/VectorElementwiseProduct.md)
* [VectorImputer](en/pipeline/dataproc/vector/VectorImputer.md)
* [VectorImputerModel](en/pipeline/dataproc/vector/VectorImputerModel.md)
* [VectorInteraction](en/pipeline/dataproc/vector/VectorInteraction.md)
* [VectorMaxAbsScaler](en/pipeline/dataproc/vector/VectorMaxAbsScaler.md)
* [VectorMaxAbsScalerModel](en/pipeline/dataproc/vector/VectorMaxAbsScalerModel.md)
* [VectorMinMaxScaler](en/pipeline/dataproc/vector/VectorMinMaxScaler.md)
* [VectorMinMaxScalerModel](en/pipeline/dataproc/vector/VectorMinMaxScalerModel.md)
* [VectorNormalizer](en/pipeline/dataproc/vector/VectorNormalizer.md)
* [VectorPolynomialExpand](en/pipeline/dataproc/vector/VectorPolynomialExpand.md)
* [VectorSizeHint](en/pipeline/dataproc/vector/VectorSizeHint.md)
* [VectorSlicer](en/pipeline/dataproc/vector/VectorSlicer.md)
* [VectorStandardScaler](en/pipeline/dataproc/vector/VectorStandardScaler.md)
* [VectorStandardScalerModel](en/pipeline/dataproc/vector/VectorStandardScalerModel.md)
### SQL
* [Select](en/pipeline/sql/Select.md)
### Feature Engineering
* [Binarizer](en/pipeline/feature/Binarizer.md)
* [Bucketizer](en/pipeline/feature/Bucketizer.md)
* [DCT](en/pipeline/feature/DCT.md)
* [EqualWidthDiscretizer](en/pipeline/feature/EqualWidthDiscretizer.md)
* [EqualWidthDiscretizerModel](en/pipeline/feature/EqualWidthDiscretizerModel.md)
* [FeatureHasher](en/pipeline/feature/FeatureHasher.md)
* [OneHotEncoder](en/pipeline/feature/OneHotEncoder.md)
* [OneHotEncoderModel](en/pipeline/feature/OneHotEncoderModel.md)
* [PCA](en/pipeline/feature/PCA.md)
* [PCAModel](en/pipeline/feature/PCAModel.md)
* [QuantileDiscretizer](en/pipeline/feature/QuantileDiscretizer.md)
* [QuantileDiscretizerModel](en/pipeline/feature/QuantileDiscretizerModel.md)
### Text Processing
* [DocCountVectorizer](en/pipeline/nlp/DocCountVectorizer.md)
* [DocCountVectorizerModel](en/pipeline/nlp/DocCountVectorizerModel.md)
* [DocHashCountVectorizer](en/pipeline/nlp/DocHashCountVectorizer.md)
* [DocHashCountVectorizerModel](en/pipeline/nlp/DocHashCountVectorizerModel.md)
* [NGram](en/pipeline/nlp/NGram.md)
* [RegexTokenizer](en/pipeline/nlp/RegexTokenizer.md)
* [Segment](en/pipeline/nlp/Segment.md)
* [StopWordsRemover](en/pipeline/nlp/StopWordsRemover.md)
* [Tokenizer](en/pipeline/nlp/Tokenizer.md)
* [Word2Vec](en/pipeline/nlp/Word2Vec.md)
* [Word2VecModel](en/pipeline/nlp/Word2VecModel.md)
### Classification
* [C45](en/pipeline/classification/C45.md)
* [C45Model](en/pipeline/classification/C45Model.md)
* [Cart](en/pipeline/classification/Cart.md)
* [CartModel](en/pipeline/classification/CartModel.md)
* [DecisionTreeClassificationModel](en/pipeline/classification/DecisionTreeClassificationModel.md)
* [DecisionTreeClassifier](en/pipeline/classification/DecisionTreeClassifier.md)
* [FmClassifier](en/pipeline/classification/FmClassifier.md)
* [FmModel](en/pipeline/classification/FmModel.md)
* [GbdtClassificationModel](en/pipeline/classification/GbdtClassificationModel.md)
* [GbdtClassifier](en/pipeline/classification/GbdtClassifier.md)
* [Id3](en/pipeline/classification/Id3.md)
* [Id3Model](en/pipeline/classification/Id3Model.md)
* [KnnClassificationModel](en/pipeline/classification/KnnClassificationModel.md)
* [KnnClassifier](en/pipeline/classification/KnnClassifier.md)
* [LinearSvm](en/pipeline/classification/LinearSvm.md)
* [LinearSvmModel](en/pipeline/classification/LinearSvmModel.md)
* [LogisticRegression](en/pipeline/classification/LogisticRegression.md)
* [LogisticRegressionModel](en/pipeline/classification/LogisticRegressionModel.md)
* [MultilayerPerceptronClassificationModel](en/pipeline/classification/MultilayerPerceptronClassificationModel.md)
* [MultilayerPerceptronClassifier](en/pipeline/classification/MultilayerPerceptronClassifier.md)
* [NaiveBayes](en/pipeline/classification/NaiveBayes.md)
* [NaiveBayesModel](en/pipeline/classification/NaiveBayesModel.md)
* [NaiveBayesTextClassifier](en/pipeline/classification/NaiveBayesTextClassifier.md)
* [NaiveBayesTextModel](en/pipeline/classification/NaiveBayesTextModel.md)
* [OneVsRest](en/pipeline/classification/OneVsRest.md)
* [OneVsRestModel](en/pipeline/classification/OneVsRestModel.md)
* [RandomForestClassificationModel](en/pipeline/classification/RandomForestClassificationModel.md)
* [RandomForestClassifier](en/pipeline/classification/RandomForestClassifier.md)
* [Softmax](en/pipeline/classification/Softmax.md)
* [SoftmaxModel](en/pipeline/classification/SoftmaxModel.md)
### Regression
* [AftSurvivalRegression](en/pipeline/regression/AftSurvivalRegression.md)
* [AftSurvivalRegressionModel](en/pipeline/regression/AftSurvivalRegressionModel.md)
* [CartReg](en/pipeline/regression/CartReg.md)
* [CartRegModel](en/pipeline/regression/CartRegModel.md)
* [DecisionTreeRegressionModel](en/pipeline/regression/DecisionTreeRegressionModel.md)
* [DecisionTreeRegressor](en/pipeline/regression/DecisionTreeRegressor.md)
* [FmRegressor](en/pipeline/regression/FmRegressor.md)
* [GbdtRegressionModel](en/pipeline/regression/GbdtRegressionModel.md)
* [GbdtRegressor](en/pipeline/regression/GbdtRegressor.md)
* [GeneralizedLinearRegression](en/pipeline/regression/GeneralizedLinearRegression.md)
* [GeneralizedLinearRegressionModel](en/pipeline/regression/GeneralizedLinearRegressionModel.md)
* [IsotonicRegression](en/pipeline/regression/IsotonicRegression.md)
* [IsotonicRegressionModel](en/pipeline/regression/IsotonicRegressionModel.md)
* [LassoRegression](en/pipeline/regression/LassoRegression.md)
* [LassoRegressionModel](en/pipeline/regression/LassoRegressionModel.md)
* [LinearRegression](en/pipeline/regression/LinearRegression.md)
* [LinearRegressionModel](en/pipeline/regression/LinearRegressionModel.md)
* [RandomForestRegressionModel](en/pipeline/regression/RandomForestRegressionModel.md)
* [RandomForestRegressor](en/pipeline/regression/RandomForestRegressor.md)
* [RidgeRegression](en/pipeline/regression/RidgeRegression.md)
* [RidgeRegressionModel](en/pipeline/regression/RidgeRegressionModel.md)
### Clustering
* [BisectingKMeans](en/pipeline/clustering/BisectingKMeans.md)
* [BisectingKMeansModel](en/pipeline/clustering/BisectingKMeansModel.md)
* [GaussianMixture](en/pipeline/clustering/GaussianMixture.md)
* [GaussianMixtureModel](en/pipeline/clustering/GaussianMixtureModel.md)
* [GeoKMeans](en/pipeline/clustering/GeoKMeans.md)
* [GeoKMeansModel](en/pipeline/clustering/GeoKMeansModel.md)
* [KMeans](en/pipeline/clustering/KMeans.md)
* [KMeansModel](en/pipeline/clustering/KMeansModel.md)
* [Lda](en/pipeline/clustering/Lda.md)
* [LdaModel](en/pipeline/clustering/LdaModel.md)
### Recommendation
* [AlsItemsPerUserRecommender](en/pipeline/recommendation/AlsItemsPerUserRecommender.md)
* [AlsRateRecommender](en/pipeline/recommendation/AlsRateRecommender.md)
* [AlsSimilarItemsRecommender](en/pipeline/recommendation/AlsSimilarItemsRecommender.md)
* [AlsSimilarUsersRecommender](en/pipeline/recommendation/AlsSimilarUsersRecommender.md)
* [AlsUsersPerItemRecommender](en/pipeline/recommendation/AlsUsersPerItemRecommender.md)
* [FlattenKObject](en/pipeline/recommendation/FlattenKObject.md)
* [FmItemsPerUserRecommender](en/pipeline/recommendation/FmItemsPerUserRecommender.md)
* [FmRateRecommender](en/pipeline/recommendation/FmRateRecommender.md)
* [FmUsersPerItemRecommender](en/pipeline/recommendation/FmUsersPerItemRecommender.md)
* [ItemCfItemsPerUserRecommender](en/pipeline/recommendation/ItemCfItemsPerUserRecommender.md)
* [ItemCfRateRecommender](en/pipeline/recommendation/ItemCfRateRecommender.md)
* [ItemCfSimilarItemsRecommender](en/pipeline/recommendation/ItemCfSimilarItemsRecommender.md)
* [ItemCfUsersPerItemRecommender](en/pipeline/recommendation/ItemCfUsersPerItemRecommender.md)
* [UserCfItemsPerUserRecommender](en/pipeline/recommendation/UserCfItemsPerUserRecommender.md)
* [UserCfRateRecommender](en/pipeline/recommendation/UserCfRateRecommender.md)
* [UserCfSimilarUsersRecommender](en/pipeline/recommendation/UserCfSimilarUsersRecommender.md)
* [UserCfUsersPerItemRecommender](en/pipeline/recommendation/UserCfUsersPerItemRecommender.md)
### Model Selection and Tuning
* [GridSearchCV](en/pipeline/tuning/GridSearchCV.md)
* [GridSearchCVModel](en/pipeline/tuning/GridSearchCVModel.md)
* [GridSearchTVSplit](en/pipeline/tuning/GridSearchTVSplit.md)
* [GridSearchTVSplitModel](en/pipeline/tuning/GridSearchTVSplitModel.md)
* [RandomSearchCV](en/pipeline/tuning/RandomSearchCV.md)
* [RandomSearchCVModel](en/pipeline/tuning/RandomSearchCVModel.md)
* [RandomSearchTVSplit](en/pipeline/tuning/RandomSearchTVSplit.md)
* [RandomSearchTVSplitModel](en/pipeline/tuning/RandomSearchTVSplitModel.md)
### Similarity
* [StringApproxNearestNeighbor](en/pipeline/similarity/StringApproxNearestNeighbor.md)
* [StringApproxNearestNeighborModel](en/pipeline/similarity/StringApproxNearestNeighborModel.md)
* [StringNearestNeighbor](en/pipeline/similarity/StringNearestNeighbor.md)
* [StringNearestNeighborModel](en/pipeline/similarity/StringNearestNeighborModel.md)
* [TextApproxNearestNeighbor](en/pipeline/similarity/TextApproxNearestNeighbor.md)
* [TextApproxNearestNeighborModel](en/pipeline/similarity/TextApproxNearestNeighborModel.md)
* [TextNearestNeighbor](en/pipeline/similarity/TextNearestNeighbor.md)
* [TextNearestNeighborModel](en/pipeline/similarity/TextNearestNeighborModel.md)
* [VectorApproxNearestNeighbor](en/pipeline/similarity/VectorApproxNearestNeighbor.md)
* [VectorApproxNearestNeighborModel](en/pipeline/similarity/VectorApproxNearestNeighborModel.md)
* [VectorNearestNeighbor](en/pipeline/similarity/VectorNearestNeighbor.md)
* [VectorNearestNeighborModel](en/pipeline/similarity/VectorNearestNeighborModel.md)
# Masternode Setup
To activate one or more Scrypta masternodes, you will need the [official desktop wallet](../wallet/fullnode.md) and the required collateral of **15,000 LYRA** for each masternode.
If you want to proceed with a manual installation, keep in mind that the process requires technical skills, a VPS with a dedicated IP address, and time for the installation.
Alternatively, you can use masternode hosting services such as [The Hub](../masternode-setup/hosting-service.md).
See also the [Masternode FAQ](faq.md) section.
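
The wallet-side activation itself is covered in the linked pages. As a rough illustration of what it typically involves on PIVX-derived chains such as Scrypta, a masternode is usually referenced from a `masternode.conf` entry in the wallet's data directory. The line below is only a hedged sketch with placeholder values; the alias, address, port, key, and transaction details are all hypothetical, and the exact format for your release is the one documented in the official setup pages:

```text
# alias   ip:port               masternode_private_key            collateral_txid       output_index
mn01      203.0.113.10:<port>   <output-of-"masternode genkey">   <collateral-txid>     0
```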
---
title: Defender for IoT installation
description: Learn how to install the sensor and the on-premises management console for Azure Defender for IoT.
ms.date: 10/09/2021
ms.topic: how-to
---
# <a name="defender-for-iot-installation"></a>Installation von Defender für IoT
Dieser Artikel beschreibt, wie Sie die folgenden Azure Defender für IoT-Komponenten installieren:
- **Sensor**: Defender für IoT-Sensoren erfassen den ICS-Netzwerkdatenverkehr mithilfe von passiver Überwachung (ohne Agents). Da sie passiv und nicht intrusiv arbeiten, haben die Sensoren keinerlei Auswirkungen auf OT- und IoT-Netzwerke und -Geräte. Der Sensor wird mit einem SPAN-Port oder Netzwerk-TAP verbunden und beginnt sofort mit der Überwachung Ihres Netzwerks. Erkennungen werden in der Sensorkonsole angezeigt. Dort können Sie sie in einer Netzwerkübersicht, einem Geräteinventar sowie einer umfangreichen Palette von Berichten anzeigen, untersuchen und analysieren. Zu den Beispielen zählen Risikobewertungsberichte, Data Mining-Abfragen und Angriffsvektoren. Weitere Informationen zu Sensorfunktionen finden Sie im [Defender für IoT Sensor-Benutzerhandbuch (direkter Download)](./getting-started.md).
- **Lokale Verwaltungskonsole**: Über die lokale Verwaltungskonsole können Sie Geräteverwaltung, Risikomanagement und Verwaltung von Sicherheitsrisiken durchführen. Außerdem können Sie damit die Bedrohungsüberwachung und Reaktion auf Vorfälle im gesamten Unternehmen durchführen. So erhalten Sie eine einheitliche Übersicht über alle Netzwerkgeräte, Ihnen werden wichtige IoT- und OT-Risikoindikatoren sowie Warnungen angezeigt, die in Einrichtungen erkannt wurden, in denen Sensoren bereitgestellt werden. Über die lokale Verwaltungskonsole können Sie Sensoren in Air-Gap-Netzwerken anzeigen und verwalten.
In diesem Artikel werden die folgenden Installationsinformationen behandelt:
- **Hardware:** Details zur physischen Dell- und HPE-Appliance.
- **Software:** Installation der Software für Sensor und lokale Verwaltungskonsole
- **Virtuelle Geräte:** Details zum virtuellen Computer und zur Softwareinstallation.
Stellen Sie nach der Installation eine Verbindung zwischen Ihrem Sensor und Ihrem Netzwerk her.
## <a name="about-defender-for-iot-appliances"></a>Informationen zu Defender für IoT-Appliances
In den folgenden Abschnitten finden Sie Informationen zu Defender für IoT-Sensorappliances und zur Appliance für die lokale Defender für IoT-Verwaltungskonsole.
### <a name="physical-appliances"></a>Physische Appliances
Der Defender für IoT-Appliancesensor stellt eine Verbindung mit einem SPAN-Port oder einem Netzwerk-TAP her und beginnt mithilfe von passiver Überwachung (ohne Agents) sofort mit der Erfassung von ICS-Netzwerkdatenverkehr. Dieser Prozess hat keinerlei Auswirkungen auf OT-Netzwerke und -Geräte, weil er sich nicht im Datenpfad befindet und OT-Geräte nicht aktiv überprüft.
Die folgenden Appliances zum Einbau in ein Rack stehen zur Verfügung:
| **Bereitstellungstyp** | **Unternehmen** | **Enterprise** | **SMB** |**SMB, robust** |
|--|--|--|--|--|
| **Modell** | HPE ProLiant DL360 | HPE ProLiant DL20 | HPE ProLiant DL20 | HPE EL300 |
| **Überwachungsports** | bis zu 15 RJ45 oder 8 OPT | bis zu 8 RJ45 oder 6 OPT | bis zu 4 RJ45 | bis zu 5 RJ45 |
| **Maximale Bandbreite\*** | 3 GB/s | 1 GB/s | 200 MBit/s | 100 MB/s |
| **Maximale Anzahl geschützter Geräte** | 10.000 | 10.000 | 1\.000 | 800 |
\* Die maximale Bandbreitenkapazität könnte je nach Protokollverteilung variieren.
### <a name="virtual-appliances"></a>Virtuelle Geräte
Die folgenden virtuellen Geräte stehen zur Verfügung:
| **Bereitstellungstyp** | **Unternehmen** | **Enterprise** | **SMB** |
|--|--|--|--|
| **Beschreibung** | Virtuelles Gerät für Unternehmensbereitstellungen | Virtuelles Gerät für Unternehmensbereitstellungen | Virtuelles Gerät für SMB-Bereitstellungen |
| **Maximale Bandbreite\*** | 2,5 GB/s | 800 MB/s | 160 MB/s |
| **Maximale Anzahl geschützter Geräte** | 10.000 | 10.000 | 800 |
| **Bereitstellungstyp** | Unternehmen | Enterprise | SMB |
\* Die maximale Bandbreitenkapazität könnte je nach Protokollverteilung variieren.
### <a name="hardware-specifications-for-the-on-premises-management-console"></a>Hardwarespezifikationen für die lokale Verwaltungskonsole
| Element | BESCHREIBUNG |
|----|--|
**Beschreibung** | In einer Architektur mit mehreren Ebenen bietet die lokale Verwaltungskonsole Transparenz und Kontrolle über geografisch verteilte Standorte. Sie ist in SOC-Sicherheitsstapel integriert, darunter SIEMs, Ticketsysteme, Firewalls der nächsten Generation, Plattformen für sicheren Remotezugriff und Defender für IoT ICS Malware-Sandbox. |
**Bereitstellungstyp** | Enterprise |
**Appliancetyp** | Dell R340, VM |
**Anzahl der verwalteten Sensoren** | Unbegrenzt |
## <a name="prepare-for-the-installation"></a>Vorbereiten der Installation
### <a name="access-the-iso-installation-image"></a>Zugreifen auf das ISO-Installationsimage
Auf das Installationsimage kann über das Defender für IoT-Portal zugegriffen werden.
So greifen Sie auf die Datei zu:
1. Melden Sie sich bei Ihrem Defender für IoT-Konto an.
1. Wechseln Sie zur Seite **Netzwerksensor** oder **Lokale Verwaltungskonsole**, und wählen Sie die Version aus, die heruntergeladen werden soll.
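
Before you burn the image to a DVD or write it to a disk on a key, it's good practice to confirm that the download completed intact. The following sketch assumes a Linux or macOS workstation and a published reference hash to compare against; the file name is a placeholder, not the actual release name:

```bash
# Print the SHA-256 hash of the downloaded image (file name is an example placeholder)
sha256sum sensor-release.iso

# Compare the printed value with the reference hash for the release.
# If they differ, download the image again before creating installation media.
```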
### <a name="install-from-dvd"></a>Installieren von einer DVD
Stellen Sie vor der Installation sicher, dass Sie Folgendes zur Verfügung haben:
- Ein tragbares DVD-Laufwerk mit dem USB-Connector.
- Ein ISO-Installationsimage.
So führen Sie die Installation durch:
1. Kopieren Sie das Image auf eine DVD, oder bereiten Sie einen Datenträger auf einem Schlüssel vor. Verbinden Sie ein tragbares DVD-Laufwerk mit Ihrem Computer, klicken Sie mit der rechten Maustaste auf das ISO-Image, und wählen Sie **Burn to disk** (Auf Datenträger kopieren) aus.
1. Verbinden Sie die DVD oder den Datenträger auf einem Schlüssel, und konfigurieren Sie die Appliance für den Start von einer DVD oder einem Datenträger auf einem Schlüssel.
### <a name="install-from-disk-on-a-key"></a>Installieren von einem Datenträger auf einem Schlüssel
Stellen Sie vor der Installation sicher, dass Sie Folgendes zur Verfügung haben:
- Rufus wurde installiert.
- Ein Datenträger auf einem Schlüssel mit USB-Version 3.0 und höher. Die Mindestgröße beträgt 4 GB.
- Eine ISO-Imagedatei des Installationsprogramms.
Der Datenträger auf einem Schlüssel wird in diesem Vorgang gelöscht.
So bereiten Sie einen Datenträger auf einem Schlüssel vor:
1. Führen Sie „Rufus“ aus, und wählen Sie **SENSOR-ISO** aus.
1. Verbinden Sie den Datenträger auf einem Schlüssel mit der Vorderseite.
1. Legen Sie fest, dass das BIOS des Servers über den USB-Stick gestartet werden soll.
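
Rufus is a Windows utility. If you prefer to prepare the key from a Linux workstation instead, a raw copy of the ISO produces an equivalent bootable key. This is only an illustrative sketch, not part of the official procedure; `/dev/sdX` is a placeholder that you must replace with the actual device name of your USB key (verify it carefully, because the target device is overwritten):

```bash
# Write the ISO image directly to the USB device; all existing data on the key is destroyed
sudo dd if=sensor-release.iso of=/dev/sdX bs=4M status=progress conv=fsync
```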
## <a name="dell-poweredger340xl-installation"></a>Installation von Dell PowerEdge R340XL
Vor der Installation der Software auf der Dell-Appliance müssen Sie deren BIOS-Konfiguration anpassen:
- [Dell PowerEdge R340 – Vorderseite](#dell-poweredge-r340-front-panel) und [Dell PowerEdge R340 – Rückseite](#dell-poweredge-r340-back-panel) enthält die Beschreibung von Vorder- und Rückseite sowie Informationen, die für die Installation erforderlich sind (z. B. Treiber und Ports).
- [Dell BIOS-Konfiguration](#dell-bios-configuration) bietet Informationen zum Herstellen einer Verbindung mit der Dell-Appliance-Verwaltungsschnittstelle und zum Konfigurieren des BIOS.
- In [Softwareinstallation (Dell R340)](#software-installation-dell-r340) wird das Verfahren beschrieben, das zum Installieren der Defender für IoT-Sensorsoftware erforderlich ist.
### <a name="dell-poweredge-r340xl-requirements"></a>Anforderungen für Dell PowerEdge R340XL
Zum Installieren der Dell PowerEdge R340XL-Appliance benötigen Sie Folgendes:
- Unternehmenslizenz für Dell Remote Access Controller (iDrac)
- XML für BIOS-Konfiguration
- Serverfirmware-Versionen:
- BIOS-Version 2.1.6
- iDrac-Version 3.23.23.23
### <a name="dell-poweredge-r340-front-panel"></a>Dell PowerEdge R340 – Vorderseite
:::image type="content" source="media/tutorial-install-components/view-of-dell-poweredge-r340-front-panel.jpg" alt-text="Dell PowerEdge R340 – Vorderseite":::
1. Linkes Kontrollfeld
1. Optisches Laufwerk (optional)
1. Rechtes Bedienfeld
1. Informationstag
1. Laufwerke
### <a name="dell-poweredge-r340-back-panel"></a>Dell PowerEdge R340 – Rückseite
:::image type="content" source="media/tutorial-install-components/view-of-dell-poweredge-r340-back-panel.jpg" alt-text="Dell PowerEdge R340 – Rückseite":::
1. Serieller Anschluss
1. NIC-Port (GB 1)
1. NIC-Port (GB 1)
1. PCIe – halbe Höhe
1. PCIe-Erweiterungskartenslot – volle Höhe
1. Stromversorgungseinheit 1
1. Stromversorgungseinheit 2
1. Systemidentifikation
1. Kabelanschluss für die Systemstatusanzeige (CMA) – Taste
1. USB 3.0-Anschluss (2)
1. Dedizierter iDRAC9-Netzwerkport
1. VGA-Anschluss
### <a name="dell-bios-configuration"></a>Dell-BIOS-Konfiguration
Die Dell-BIOS-Konfiguration ist erforderlich, um die Dell-Appliance für die Zusammenarbeit mit der Software anzupassen.
Die Dell-Appliance wird durch einen integrierten iDRAC mit Lifecycle Controller (LC) verwaltet. Der LC ist in jedem Dell PowerEdge-Server eingebettet und bietet Funktionen, mit deren Hilfe Sie Ihre Dell PowerEdge-Appliances bereitstellen, aktualisieren, überwachen und warten können.
Zum Einrichten der Kommunikation zwischen der Dell-Appliance und dem Verwaltungscomputer müssen Sie die iDRAC-IP-Adresse und die IP-Adresse des Verwaltungscomputers in demselben Subnetz definieren.
Wenn die Verbindung hergestellt wird, kann das BIOS konfiguriert werden.
**So konfigurieren Sie das Dell-BIOS**:
1. [Konfigurieren Sie die iDRAC-IP-Adresse.](#configure-idrac-ip-address)
1. [Konfigurieren des BIOS](#configuring-the-bios)
#### <a name="configure-idrac-ip-address"></a>Konfigurieren der iDRAC-IP-Adresse
1. Schalten Sie den Sensor ein.
1. Wenn das Betriebssystem bereits installiert wurde, drücken Sie die Taste F2, um die BIOS-Konfiguration einzugeben.
1. Wählen Sie **iDRAC-Einstellungen** aus.
1. Wählen Sie **Netzwerk** aus.
> [!NOTE]
> Während der Installation müssen Sie die standardmäßige iDRAC-IP-Adresse und das Kennwort konfigurieren, die in den folgenden Schritten angegeben werden. Nach der Installation müssen Sie diese Definitionen ändern.
1. Ändern Sie die statische IPv4-Adresse in **10.100.100.250**.
1. Ändern Sie die statische Subnetzmaske in **255.255.255.0**.
:::image type="content" source="media/tutorial-install-components/idrac-network-settings-screen-v2.png" alt-text="Screenshot der statischen Subnetzmaske":::
1. Wählen Sie **Zurück** > **Fertigstellen** aus.
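
If you'd rather script this step than work through the BIOS menus, the iDRAC `racadm` tool can usually apply the same static addressing. Treat the following as an illustrative sketch rather than part of the official procedure: it assumes you have racadm access to the appliance, and attribute names can differ slightly between iDRAC firmware versions.

```bash
# Disable DHCP on the iDRAC and apply the static address used in this guide
racadm set iDRAC.IPv4.DHCPEnable 0
racadm set iDRAC.IPv4.Address 10.100.100.250
racadm set iDRAC.IPv4.Netmask 255.255.255.0
```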
#### <a name="configuring-the-bios"></a>Konfigurieren des BIOS
Sie müssen das Appliance-BIOS in folgenden Fällen konfigurieren:
- Sie haben Ihre Appliance nicht von Arrow erworben.
- Sie haben zwar eine Appliance, aber keinen Zugriff auf die XML-Konfigurationsdatei.
Wechseln Sie nach dem Zugriff auf das BIOS zu **Geräteeinstellungen**.
**So konfigurieren Sie das BIOS**:
1. Greifen Sie direkt über eine Tastatur und einen Bildschirm auf das Appliance-BIOS zu, oder verwenden Sie iDRAC.
- Wenn die Appliance keine Defender für IoT-Appliance ist, öffnen Sie einen Browser, und navigieren Sie zu der zuvor konfigurierten IP-Adresse. Melden Sie sich mit den Dell-Standardadministratorrechten an. Verwenden Sie **root** als Benutzername und **calvin** als Kennwort.
- Wenn die Appliance eine Defender für IoT-Appliance ist, melden Sie sich mit **XXX** als Benutzername und **XXX** als Kennwort an.
1. Wechseln Sie nach dem Zugriff auf das BIOS zu **Geräteeinstellungen**.
1. Wählen Sie die RAID-gesteuerte Konfiguration aus, indem Sie **Integrierter RAID-Controller 1: Dell PERC\<PERC H330 Adapter\> Konfigurationshilfsprogramm** auswählen.
1. Wählen Sie **Konfigurationsverwaltung** aus.
1. Wählen Sie **Virtuellen Datenträger erstellen** aus.
1. Wählen Sie im Feld **RAID-Stufe auswählen** die Option **RAID5** aus. Geben Sie im Feld **Name des virtuellen Datenträgers** den Namen **root** ein, und wählen Sie **Physische Datenträger** aus.
1. Wählen Sie **Alle überprüfen** und dann **Änderungen übernehmen** aus.
1. Klicken Sie auf **OK**.
1. Scrollen Sie nach unten, und wählen Sie **Virtuellen Datenträger erstellen** aus.
1. Aktivieren Sie das Kontrollkästchen **Bestätigen**, und wählen Sie **Ja** aus.
1. Klicken Sie auf **OK**.
1. Kehren Sie zum Hauptbildschirm zurück, und wählen Sie **System-BIOS** aus.
1. Wählen Sie **Starteinstellungen** aus.
1. Wählen Sie für **Startmodus** die Option **BIOS** aus.
1. Wählen Sie **Zurück** und dann **Fertigstellen** aus, um die BIOS-Einstellungen zu beenden.
### <a name="software-installation-dell-r340"></a>Softwareinstallation (Dell R340)
Der Installationsvorgang dauert ungefähr 20 Minuten. Nach der Installation wird das System mehrmals neu gestartet.
So führen Sie die Installation durch:
1. Überprüfen Sie auf eine der folgenden Arten, ob die Versionsmedien in die Appliance eingebunden werden:
- Verbinden Sie die externe CD oder den Datenträger auf einem Schlüssel mit dem Release.
- Binden Sie das ISO-Image mithilfe von iDRAC ein. Wählen Sie nach der Anmeldung bei iDRAC die virtuelle Konsole und dann **Virtuelle Medien** aus.
1. Wählen Sie im Abschnitt **Map CD/DVD** (CD/DVD zuordnen) **Datei auswählen** aus.
1. Wählen Sie im daraufhin angezeigten Dialogfeld die ISO-Imagedatei der Version für diese Version aus.
1. Wählen Sie die Schaltfläche **Map Device** (Gerät zuordnen) aus.
:::image type="content" source="media/tutorial-install-components/mapped-device-on-virtual-media-screen-v2.png" alt-text="Screenshot eines zugeordneten Geräts":::
1. Die Medien werden eingebunden. Klicken Sie auf **Schließen**.
1. Starten Sie die Appliance. Wenn Sie iDRAC verwenden, können Sie die Server neu starten, indem Sie die Schaltfläche **Consul Control** (Consul-Steuerung) auswählen. Wählen Sie dann in den **Tastaturmakros** die Schaltfläche **Anwenden** aus, wodurch die Tastenfolge STRG+ALT+ENTF gestartet wird.
1. Wählen Sie **Englisch** (?Deutsch?) aus.
1. Wählen Sie **SENSOR-RELEASE-\<version\> Enterprise** aus.
:::image type="content" source="media/tutorial-install-components/sensor-version-select-screen-v2.png" alt-text="Wählen Sie Ihre Sensorversion und Ihren Unternehmenstyp aus.":::
1. Definieren Sie das Applianceprofil und die Netzwerkeigenschaften:
:::image type="content" source="media/tutorial-install-components/appliance-profile-screen-v2.png" alt-text="Screenshot mit dem Applianceprofil und den Netzwerkeigenschaften":::
| Parameter | Konfiguration |
|--|--|
| **Hardwareprofil** | **enterprise** |
| **Verwaltungsschnittstelle** | **eno1** |
| **Netzwerkparameter (vom Kunden bereitgestellt)** | - |
|**IP-Adresse des Verwaltungsnetzwerks:** | - |
| **Subnetzmaske:** | - |
| **Appliance-Hostname:** | - |
| **DNS:** | - |
| **IP-Adresse des Standardgateways:** | - |
| **Eingabeschnittstellen:** | Das System generiert automatisch die Liste der Eingabeschnittstellen. Kopieren Sie zum Spiegeln der Eingabeschnittstellen alle in der Liste mit einem Komma als Trennzeichen dargestellten Elemente. Sie müssen die Bridge-Schnittstelle nicht konfigurieren. Diese Option wird nur bei besonderen Anwendungsfällen verwendet. |
1. Nach ungefähr 10 Minuten werden die beiden Anmeldeinformationssätze angezeigt. Einer ist für einen **CyberX**-Benutzer und einer für einen **Support**-Benutzer bestimmt.
1. Speichern Sie die Appliance-ID und die Kennwörter. Sie benötigen diese Anmeldeinformationen für den Zugriff auf die Plattform, wenn Sie sie zum ersten Mal verwenden.
1. Wählen Sie **Eingeben** aus, um den Vorgang fortzusetzen.
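
For orientation only, here's what a completed set of answers to the wizard prompts might look like. Every value below is a made-up example rather than a recommendation or a value from this article; use the profile, interface names, and addressing that match your own appliance and network:

```text
Hardware profile:               enterprise
Management interface:           eno1
Management network IP address:  10.1.0.50          (example)
Subnet mask:                    255.255.255.0      (example)
Appliance hostname:             sensor-plant-01    (example)
DNS:                            10.1.0.10          (example)
Default gateway IP address:     10.1.0.1           (example)
Input interfaces:               eno2,eno3,eno4     (copy the generated list, comma-separated)
Bridge interface:               (leave unconfigured unless you have a special use case)
```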
## <a name="hpe-proliant-dl20-installation"></a>HPE ProLiant DL20-Installation
In diesem Abschnitt wird der Vorgang zur Installation von HPE ProLiant DL20 beschrieben, der die folgenden Schritte umfasst:
- Aktivieren Sie den Remotezugriff, und aktualisieren Sie das Standardadministratorkennwort.
- Konfigurieren Sie die BIOS- und RAID-Einstellungen.
- Installieren Sie die Software.
### <a name="about-the-installation"></a>Informationen zur Installation
- Enterprise- und SMB-Appliances können installiert werden. Der Installationsvorgang ist bei beiden Appliancetypen identisch – mit Ausnahme der Arraykonfiguration.
- Ein Standardadministrator wird bereitgestellt. Wir empfehlen, dass Sie das Kennwort während des Netzwerkkonfigurationsvorgangs ändern.
- Während des Netzwerkkonfigurationsvorgangs werden Sie den iLO-Port am Netzwerkport 1 konfigurieren.
- Der Installationsvorgang dauert ungefähr 20 Minuten. Nach der Installation wird das System mehrmals neu gestartet.
### <a name="hpe-proliant-dl20-front-panel"></a>HPE ProLiant DL20 – Vorderseite
:::image type="content" source="media/tutorial-install-components/hpe-proliant-dl20-front-panel-v2.png" alt-text="HPE ProLiant DL20 – Vorderseite":::
### <a name="hpe-proliant-dl20-back-panel"></a>HPE ProLiant DL20 – Rückseite
:::image type="content" source="media/tutorial-install-components/hpe-proliant-dl20-back-panel-v2.png" alt-text="Die Rückseite des HPE ProLiant DL20":::
### <a name="enable-remote-access-and-update-the-password"></a>Aktivieren des Remotezugriffs und Aktualisieren des Kennworts
Mit dem folgenden Verfahren können Sie Netzwerkoptionen einrichten und das Standardkennwort aktualisieren.
So aktivieren und aktualisieren Sie das Kennwort:
1. Schließen Sie einen Bildschirm und eine Tastatur an die HP-Appliance an, schalten Sie die Appliance ein, und drücken Sie **F9**.
:::image type="content" source="media/tutorial-install-components/hpe-proliant-screen-v2.png" alt-text="Screenshot des HPE ProLiant-Fensters":::
1. Wechseln Sie zu **Systemdienstprogramme** > **Systemkonfiguration** > **iLO 5-Konfigurationshilfsprogramm** > **Netzwerkoptionen**.
:::image type="content" source="media/tutorial-install-components/system-configuration-window-v2.png" alt-text="Screenshot des Fensters „Systemkonfiguration“":::
1. Wählen Sie im Feld **Netzwerkschnittstellenadapter** die Option **Freigegebene Netzwerkport-LOM** aus.
1. Deaktivieren Sie DHCP.
1. Geben Sie die IP-Adresse, die Subnetzmaske und die IP-Adresse des Gateways ein.
1. Wählen Sie **F10: Speichern** aus.
1. Drücken Sie **ESC**, um zum **iLO 5-Konfigurationsdienstprogramm** zurückzukehren, und wählen Sie **Benutzerverwaltung** aus.
1. Wählen Sie **Benutzer bearbeiten/entfernen** aus. Der Administrator ist der einzige definierte Standardbenutzer.
1. Ändern Sie das Standardkennwort, und wählen Sie **F10: Speichern** aus.
### <a name="configure-the-hpe-bios"></a>Konfigurieren des HPE-BIOS
Im folgenden Verfahren wird beschrieben, wie Sie das HPE-BIOS für die Enterprise- und SMB-Appliances konfigurieren.
**So konfigurieren Sie das HPE-BIOS**:
1. Wählen Sie **Systemdienstprogramme** > **Systemkonfiguration** > **BIOS/Plattformkonfiguration (RBSU)** aus.
1. Wählen Sie im Formular **BIOS/Plattformkonfiguration (RBSU)** die Option **Startoptionen** aus.
1. Ändern Sie **Startmodus** in **Legacy-BIOS-Modus**, und wählen Sie **F10: Speichern** aus.
1. Drücken Sie zweimal **ESC**, um das Formular **Systemkonfiguration** zu schließen.
#### <a name="for-the-enterprise-appliance"></a>Für die Enterprise-Appliance
1. Wählen Sie **Eingebettetes RAID 1: HPE Smart Array P408i-a SR Gen 10** > **Arraykonfiguration** > **Array erstellen** aus.
1. Wählen Sie im Formular **Create Array** (Array erstellen) alle Optionen aus. Für die **Enterprise**-Appliance gibt es drei Optionen.
#### <a name="for-the-smb-appliance"></a>Für die SMB-Appliance
1. Wählen Sie **Eingebettetes RAID 1: HPE Smart Array P208i-a SR Gen 10** > **Arraykonfiguration** > **Array erstellen** aus.
1. Wählen Sie **Proceed to Next Form** (Mit nächstem Formular fortfahren) aus.
1. Legen Sie im Formular **Set RAID Level** (RAID-Ebene festlegen) die Ebene auf **RAID 5** für Unternehmensbereitstellungen und **RAID 1** für SMB-Bereitstellungen fest.
1. Wählen Sie **Proceed to Next Form** (Mit nächstem Formular fortfahren) aus.
1. Geben Sie im Formular **Logical Drive Label** (Bezeichnung des logischen Laufwerks) **Logisches Laufwerk 1** ein.
1. Wählen Sie **Änderungen senden** aus.
1. Wählen Sie im Formular **Senden** die Option **Zurück zum Hauptmenü** aus.
1. Wählen Sie **F10: Speichern** aus, und drücken Sie zweimal **ESC**.
1. Wählen Sie im Fenster **Systemdienstprogramme** die Option **Menü für einmaliges Anmelden** aus.
1. Wählen Sie im Formular **Menü für einmaliges Anmelden** die Option **Legacy-BIOS – Menü für einmaliges Anmelden** aus.
1. Daraufhin werden die Fenster **Starten in Legacy** und **Start außer Kraft setzen** angezeigt. Wählen Sie eine Option für „Start außer Kraft setzen“ aus, beispielsweise eine CD-ROM, ein USB-Stick, ein Festplattenlaufwerk oder eine UEFI-Shell.
:::image type="content" source="media/tutorial-install-components/boot-override-window-one-v2.png" alt-text="Screenshot des ersten Fensters für „Start außer Kraft setzen“":::
:::image type="content" source="media/tutorial-install-components/boot-override-window-two-v2.png" alt-text="Screenshot des zweiten Fensters für „Start außer Kraft setzen“":::
### <a name="software-installation-hpe-proliant-dl20-appliance"></a>Softwareinstallation (HPE ProLiant DL20-Appliance)
Der Installationsvorgang dauert ungefähr 20 Minuten. Nach der Installation wird das System mehrmals neu gestartet.
So installieren Sie die Software:
1. Schließen Sie den Bildschirm und die Tastatur an die Appliance an, und stellen Sie eine Verbindung mit der CLI her.
1. Verbinden Sie eine externe CD oder einen Datenträger auf dem Schlüssel mit dem ISO-Image, das Sie im Defender für IoT-Portal über die Seite **Updates** heruntergeladen haben.
1. Starten Sie die Appliance.
1. Wählen Sie **Englisch** (?Deutsch?) aus.
:::image type="content" source="media/tutorial-install-components/select-english-screen.png" alt-text="Auswahl von „Englisch“ (?Deutsch?) im CLI-Fenster":::
1. Wählen Sie **SENSOR-RELEASE-\<version> Enterprise** aus.
:::image type="content" source="media/tutorial-install-components/sensor-version-select-screen-v2.png" alt-text="Screenshot des Bildschirms zum Auswählen einer Version":::
1. Definieren Sie im Installations-Assistenten das Hardwareprofil und die Netzwerkeigenschaften:
:::image type="content" source="media/tutorial-install-components/installation-wizard-screen-v2.png" alt-text="Screenshot des Installations-Assistenten":::
| Parameter | Konfiguration |
| ----------| ------------- |
| **Hardwareprofil** | Wählen Sie **Enterprise** oder **Office** für SMB-Bereitstellungen aus. |
| **Verwaltungsschnittstelle** | **eno2** |
| **Standardnetzwerkparameter (normalerweise werden die Parameter vom Kunden bereitgestellt)** | **IP-Adresse des Verwaltungsnetzwerks:** <br/> <br/>**Appliance-Hostname:** <br/>**DNS:** <br/>**die IP-Adresse des Standardgateways:**|
| **Eingabeschnittstellen:** | Das System generiert automatisch die Liste der Eingabeschnittstellen.<br/><br/>Kopieren Sie zum Spiegeln der Eingabeschnittstellen alle in der Liste mit einem Komma als Trennzeichen dargestellten Elemente: **eno5, eno3, eno1, eno6, eno4**.<br/><br/>**Für HPE DL20: Listen Sie „eno1“ und „enp1s0f4u4“ (iLo-Schnittstellen) nicht auf**.<br/><br/>**BRIDGE**: Die Bridge-Schnittstelle muss nicht konfiguriert werden. Diese Option wird nur bei besonderen Anwendungsfällen verwendet. Drücken Sie die **EINGABETASTE**, um fortzufahren. |
1. Nach ungefähr 10 Minuten werden die beiden Anmeldeinformationssätze angezeigt. Einer ist für einen **CyberX**-Benutzer und einer für einen **Support**-Benutzer bestimmt.
1. Speichern Sie die Appliance-ID und die Kennwörter. Sie benötigen die Anmeldeinformationen für den ersten Zugriff auf die Plattform.
1. Wählen Sie **Eingeben** aus, um den Vorgang fortzusetzen.
## <a name="hpe-proliant-dl360-installation"></a>HPE ProLiant DL360-Installation
- Ein Standardadministrator wird bereitgestellt. Wir empfehlen, dass Sie das Kennwort während der Netzwerkkonfiguration ändern.
- Während der Netzwerkkonfiguration werden Sie den iLO-Port konfigurieren.
- Der Installationsvorgang dauert ungefähr 20 Minuten. Nach der Installation wird das System mehrmals neu gestartet.
### <a name="hpe-proliant-dl360-front-panel"></a>HPE ProLiant DL360 – Vorderseite
:::image type="content" source="media/tutorial-install-components/hpe-proliant-dl360-front-panel.png" alt-text="HPE ProLiant DL360 – Vorderseite":::
### <a name="hpe-proliant-dl360-back-panel"></a>HPE ProLiant DL360 – Rückseite
:::image type="content" source="media/tutorial-install-components/hpe-proliant-dl360-back-panel.png" alt-text="HPE ProLiant DL360 – Rückseite":::
### <a name="enable-remote-access-and-update-the-password"></a>Aktivieren des Remotezugriffs und Aktualisieren des Kennworts
Weitere Informationen finden Sie in den vorhergehenden Abschnitten zur HPE ProLiant DL20-Installation:
- „Aktivieren des Remotezugriffs und Aktualisieren des Kennworts“
- „Konfigurieren des HPE-BIOS“
Die Unternehmenskonfiguration ist identisch.
> [!Note]
> Im Arrayformular müssen Sie alle Optionen auswählen.
### <a name="ilo-remote-installation-from-a-virtual-drive"></a>iLO-Remoteinstallation (von einem virtuellen Laufwerk)
In diesem Verfahren wird die iLO-Installation von einem virtuellen Laufwerk beschrieben.
So führen Sie die Installation durch:
1. Melden Sie sich bei der iLO-Konsole an, und klicken Sie mit der rechten Maustaste auf den Bildschirm des Servers.
1. Wählen Sie **HTML5-Konsole** aus.
1. Wählen Sie in der Konsole das CD-Symbol und dann die Option „CD/DVD“ aus.
1. Wählen Sie **Lokale ISO-Datei** aus.
1. Wählen Sie im Dialogfeld die relevante ISO-Datei aus.
1. Wechseln Sie zum linken Symbol, wählen Sie **Power** und dann **Zurücksetzen** aus.
1. Die Appliance wird neu gestartet und führt den Sensorinstallationsvorgang aus.
### <a name="software-installation-hpe-dl360"></a>Softwareinstallation (HPE DL360)
Der Installationsvorgang dauert ungefähr 20 Minuten. Nach der Installation wird das System mehrmals neu gestartet.
So führen Sie die Installation durch:
1. Schließen Sie den Bildschirm und die Tastatur an die Appliance an, und stellen Sie eine Verbindung mit der CLI her.
1. Verbinden Sie eine externe CD oder einen USB-Stick mit dem ISO-Image, das Sie im Defender für IoT-Portal über die Seite **Updates** heruntergeladen haben.
1. Starten Sie die Appliance.
1. Wählen Sie **Englisch** aus.
1. Wählen Sie **SENSOR-RELEASE-\<version> Enterprise** aus.
:::image type="content" source="media/tutorial-install-components/sensor-version-select-screen-v2.png" alt-text="Screenshot, in dem die Auswahl der Version gezeigt wird.":::
1. Definieren Sie im Installations-Assistenten das Applianceprofil und die Netzwerkeigenschaften.
:::image type="content" source="media/tutorial-install-components/installation-wizard-screen-v2.png" alt-text="Screenshot des Installations-Assistenten":::
| Parameter | Konfiguration |
| ----------| ------------- |
| **Hardwareprofil** | Wählen Sie **Unternehmen** aus. |
| **Verwaltungsschnittstelle** | **eno2** |
| **Standardnetzwerkparameter (vom Kunden bereitgestellt)** | **IP-Adresse des Verwaltungsnetzwerks:** <br> **Subnetzmaske:** <br/>**Appliance-Hostname:** <br/>**DNS:** <br/>**die IP-Adresse des Standardgateways:**|
| **Eingabeschnittstellen:** | Das System generiert automatisch eine Liste der Eingabeschnittstellen.<br/><br/>Kopieren Sie zum Spiegeln der Eingabeschnittstellen alle in der Liste mit einem Komma als Trennzeichen dargestellten Elemente.<br/><br/> Sie müssen die Bridge-Schnittstelle nicht konfigurieren. Diese Option wird nur bei besonderen Anwendungsfällen verwendet. |
1. Nach ungefähr 10 Minuten werden die beiden Anmeldeinformationssätze angezeigt. Einer ist für einen **CyberX**-Benutzer und einer für einen **Support**-Benutzer bestimmt.
1. Speichern Sie die Appliance-ID und die Kennwörter. Sie benötigen diese Anmeldeinformationen für den ersten Zugriff auf die Plattform.
1. Wählen Sie **Eingeben** aus, um den Vorgang fortzusetzen.
## <a name="hp-edgeline-300-installation"></a>Installation von HP EdgeLine 300
- Ein Standardadministrator wird bereitgestellt. Wir empfehlen, dass Sie das Kennwort während der Netzwerkkonfiguration ändern.
- Der Installationsvorgang dauert ungefähr 20 Minuten. Nach der Installation wird das System mehrmals neu gestartet.
### <a name="hp-edgeline-300-back-panel"></a>Rückseite von HP EdgeLine 300
:::image type="content" source="media/tutorial-install-components/edgeline-el300-panel.png" alt-text="Ansicht der Rückseite von EL300":::
### <a name="enable-remote-access"></a>Aktivieren des Remotezugriffs
1. Geben Sie die ISM-IP-Adresse in Ihren Webbrowser ein.
1. Melden Sie sich mit dem Standardbenutzernamen und dem Standardkennwort an, die Sie auf Ihrer Appliance finden.
1. Navigieren Sie zu **Wired and Wireless Network** (Verkabeltes und drahtloses Netzwerk) > **IPV4**.
:::image type="content" source="media/tutorial-install-components/wired-and-wireless.png" alt-text="Navigieren Sie zu hervorgehobenen Abschnitten.":::
1. Deaktivieren Sie den **DHCP-Umschalter**.
1. Konfigurieren Sie die IPv4-Adressen wie folgt:
- **IPV4-Adresse**: `192.168.1.125`
- **IPV4-Subnetzmaske**: `255.255.255.0`
- **IPV4-Gateway**: `192.168.1.1`
1. Wählen Sie **Übernehmen**.
1. Melden Sie sich ab, und starten Sie die Appliance neu.
### <a name="configure-the-bios"></a>Konfigurieren des BIOS
Im folgenden Verfahren wird beschrieben, wie Sie das BIOS für die HP EL300-Appliance konfigurieren können.
**So konfigurieren Sie das BIOS**:
1. Schalten Sie die Appliance ein, und drücken Sie **F9**, um das BIOS zu aktivieren.
1. Wählen Sie **Erweitert** aus, und scrollen Sie nach unten zu **CSM Support**.
:::image type="content" source="media/tutorial-install-components/csm-support.png" alt-text="Aktivieren Sie „CSM Support“, um das zusätzliche Menü zu öffnen.":::
1. Drücken Sie die **EINGABETASTE**, um „CSM Support“ zu aktivieren.
1. Navigieren Sie zu **Speicher**, und drücken Sie **+/-**, um die Einstellung in „Legacy“ zu ändern.
1. Navigieren Sie zu **Video**, und drücken Sie **+/-**, um die Einstellung in „Legacy“ zu ändern.
:::image type="content" source="media/tutorial-install-components/storage-and-video.png" alt-text="Navigieren Sie zu „Speicher“ und „Video“, und ändern Sie beide in „Legacy“.":::
1. Navigieren Sie zu **Boot** (Start) > **Boot mode select** (Auswahl des Startmodus).
1. Drücken Sie **+/-**, um die Einstellung in „Legacy“ zu ändern.
:::image type="content" source="media/tutorial-install-components/boot-mode.png" alt-text="„Boot mode select“ (Auswahl des Startmodus) in „Legacy“ ändern.":::
1. Navigieren Sie zu **Speichern und beenden**.
1. Wählen Sie **Änderungen speichern und beenden** aus.
:::image type="content" source="media/tutorial-install-components/save-and-exit.png" alt-text="Speichern Sie Ihre Änderungen, und beenden Sie das System.":::
1. Wählen Sie **Ja** aus. Dann wird die Appliance neu gestartet.
1. Drücken Sie **F11**, um das **Startmenü** zu öffnen.
1. Wählen Sie das Gerät mit dem Sensorbild aus. Geben Sie **DVD** oder **USB** ein.
1. Wählen Sie Ihre Sprache aus.
1. Wählen Sie **sensor-10.0.3.12-62a2a3f724 Office: 4 CPUS, 8GB RAM, 100GB STORAGE** aus.
:::image type="content" source="media/tutorial-install-components/sensor-select-screen.png" alt-text="Wählen Sie die Sensorversion wie gezeigt aus.":::
1. Definieren Sie im Installations-Assistenten das Applianceprofil und die Netzwerkeigenschaften:
:::image type="content" source="media/tutorial-install-components/appliance-parameters.png" alt-text="Definieren Sie das Profil und die Netzwerkkonfigurationen der Appliance mit den folgenden Parametern.":::
| Parameter | Konfiguration |
|--|--|
| **Konfigurieren eines Hardwareprofils** | **office** |
| **Konfigurieren der Verwaltungsnetzwerkschnittstelle** | **enp3s0** <br /> oder <br />**möglicher Wert** |
| **Konfigurieren der IP-Adresse des Verwaltungsnetzwerks:** | **Die vom Kunden angegebene IP-Adresse** |
| **Konfigurieren der Subnetzmaske:** | **Die vom Kunden angegebene IP-Adresse** |
| **Konfigurieren von DNS:** | **Die vom Kunden angegebene IP-Adresse** |
| **Konfigurieren der IP-Adresse des Standardgateways:** | **Die vom Kunden angegebene IP-Adresse** |
| **Konfigurieren von einer oder mehreren Eingabeschnittstelle(n)** | **enp4s0** <br /> oder <br />**möglicher Wert** |
| **Konfigurieren von einer oder mehreren Bridge-Schnittstelle(n)** | – |
1. Akzeptieren Sie die Einstellungen, und setzen Sie den Vorgang durch Eingabe von `Y` fort.
## <a name="sensor-installation-for-the-virtual-appliance"></a>Sensorinstallation für das virtuelle Gerät
Sie können den virtuellen Computer für den Defender für IoT-Sensor in den folgenden Architekturen bereitstellen:
| Aufbau | Spezifikationen | Verwendung | Kommentare |
|---|---|---|---|
| **Enterprise** | CPU: 8<br/>Memory: 32G RAM<br/>HDD: 1.800 GB | Produktionsumgebung | Standard und am häufigsten |
| **Kleinunternehmen** | CPU: 4 <br/>Memory: 8G RAM<br/>HDD: 500 GB | Test- oder kleine Produktionsumgebungen | - |
| **Office** | CPU: 4<br/>Memory: 8G RAM<br/>HDD: 100 GB | Kleine Testumgebungen | - |
### <a name="prerequisites"></a>Voraussetzungen
Die lokale Verwaltungskonsole unterstützt sowohl VMware- als auch Hyper-V-Bereitstellungsoptionen. Bevor Sie mit der Installation beginnen, müssen die folgenden Elemente vorhanden sein:
- VMware (ESXi 5.5 oder höher) oder Hyper-V-Hypervisor (Windows 10 Pro oder Enterprise) installiert und betriebsbereit
- Verfügbare Hardwareressourcen für den virtuellen Computer
- ISO-Installationsdatei für den Azure Defender für IoT-Sensor
Sorgen Sie dafür, dass der Hypervisor ausgeführt wird.
### <a name="create-the-virtual-machine-esxi"></a>Erstellen des virtuellen Computers (ESXi)
1. Melden Sie sich beim ESXi an, wählen Sie den relevanten **Datenspeicher** und dann **Datenspeicherbrowser** aus.
1. **Laden Sie das Image hoch**, und wählen Sie **Schließen** aus.
1. Wechseln Sie zu **Virtuelle Computer**, und wählen Sie **VM erstellen/registrieren** aus.
1. Wählen Sie **Neuen virtuellen Computer erstellen** und dann **Weiter** aus.
1. Fügen Sie einen Sensornamen hinzu, und wählen Sie:
- Kompatibilität: **<Neueste ESXi-Version>** aus.
- Gastbetriebssystemfamilie: **Linux**
- Gastbetriebssystemversion: **Ubuntu Linux (64 Bit)**
1. Wählen Sie **Weiter** aus.
1. Wählen Sie den relevanten Datenspeicher und dann **Weiter** aus.
1. Ändern Sie die virtuellen Hardwareparameter entsprechend der erforderlichen Architektur.
1. Wählen Sie für **CD/DVD-Laufwerk 1** die Option **Datenspeicher-ISO-Datei** und dann die zuvor hochgeladene ISO-Datei aus.
1. Klicken Sie auf **Weiter** > **Fertig stellen**.
### <a name="create-the-virtual-machine-hyper-v"></a>Erstellen des virtuellen Computers (Hyper-V)
In diesem Verfahren wird beschrieben, wie Sie mithilfe von Hyper-V einen virtuellen Computer erstellen.
So erstellen Sie einen virtuellen Computer (eine PowerShell-Skizze folgt im Anschluss an die Schritte):
1. Erstellen Sie im Hyper-V-Manager einen virtuellen Datenträger.
1. Wählen Sie **format = VHDX** aus.
1. Wählen Sie **type = Dynamic Expanding** (Dynamisch erweiterbar) aus.
1. Geben Sie den Namen und Speicherort für die virtuelle Festplatte ein.
1. Geben Sie die erforderliche Größe (entsprechend der Architektur) ein.
1. Überprüfen Sie die Zusammenfassung, und wählen Sie **Fertig stellen** aus.
1. Erstellen Sie im Menü **Aktionen** einen neuen virtuellen Computer.
1. Geben Sie einen Namen für den virtuellen Computer ein.
1. Wählen Sie **Generation angeben** > **Generation 1** aus.
1. Geben Sie die Speicherbelegung (entsprechend der Architektur) an, und aktivieren Sie das Kontrollkästchen für den dynamischen Arbeitsspeicher.
1. Konfigurieren Sie den Netzwerkadapter entsprechend Ihrer Server-Netzwerktopologie.
1. Verbinden Sie die zuvor erstellte VHDX-Datei mit dem virtuellen Computer.
1. Überprüfen Sie die Zusammenfassung, und wählen Sie **Fertig stellen** aus.
1. Klicken Sie mit der rechten Maustaste auf den neuen virtuellen Computer, und wählen Sie **Einstellungen** aus.
1. Wählen Sie **Hardware hinzufügen** aus, und fügen Sie einen neuen Netzwerkadapter hinzu.
1. Wählen Sie den virtuellen Switch aus, der eine Verbindung mit dem Sensorverwaltungsnetzwerk herstellen wird.
1. Ordnen Sie CPU-Ressourcen (entsprechend der Architektur) zu.
1. Verbinden Sie das ISO-Image der Verwaltungskonsole mit einem virtuellen DVD-Laufwerk.
1. Starten Sie den virtuellen Computer.
1. Wählen Sie im Menü **Aktionen** die Option **Verbinden** aus, um die Softwareinstallation fortzusetzen.
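Die vorangehenden Schritte lassen sich näherungsweise auch per PowerShell abbilden. Das folgende Beispiel ist nur eine Skizze unter Annahmen: Die Namen (`defender-sensor`, `vSwitch_Monitoring`, `vSwitch_Mgmt`), Pfade sowie die Werte für Arbeitsspeicher, CPU und Datenträgergröße sind frei gewählte Beispielwerte und müssen an Ihre Umgebung und die gewählte Architektur angepasst werden.
```bash
# Virtuellen Datenträger anlegen (Beispielwerte, an die eigene Umgebung anpassen)
New-VHD -Path "C:\VMs\defender-sensor.vhdx" -SizeBytes 500GB -Dynamic
# VM der Generation 1 anlegen und mit dem Datenträger sowie dem ersten Switch verbinden
New-VM -Name "defender-sensor" -Generation 1 -MemoryStartupBytes 8GB -VHDPath "C:\VMs\defender-sensor.vhdx" -SwitchName "vSwitch_Monitoring"
# Dynamischen Arbeitsspeicher aktivieren und CPU-Ressourcen zuordnen
Set-VMMemory -VMName "defender-sensor" -DynamicMemoryEnabled $true
Set-VMProcessor -VMName "defender-sensor" -Count 4
# Zweiten Netzwerkadapter für das Sensorverwaltungsnetzwerk hinzufügen
Add-VMNetworkAdapter -VMName "defender-sensor" -SwitchName "vSwitch_Mgmt"
# ISO-Image einbinden (falls kein DVD-Laufwerk vorhanden ist, zuvor Add-VMDvdDrive ausführen) und VM starten
Set-VMDvdDrive -VMName "defender-sensor" -Path "C:\ISO\sensor.iso"
Start-VM -Name "defender-sensor"
```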
### <a name="software-installation-esxi-and-hyper-v"></a>Softwareinstallation (ESXi und Hyper-V)
In diesem Abschnitt wird die ESXi- und Hyper-V-Softwareinstallation beschrieben.
So führen Sie die Installation durch:
1. Öffnen Sie die Konsole für virtuelle Computer.
1. Der virtuelle Computer wird aus dem ISO-Image gestartet, und der Bildschirm für die Sprachauswahl wird angezeigt. Wählen Sie **Englisch** aus.
1. Wählen Sie die erforderliche Architektur aus.
1. Definieren Sie das Applianceprofil und die Netzwerkeigenschaften:
| Parameter | Konfiguration |
| ----------| ------------- |
| **Hardwareprofil** | <erforderliche Architektur> |
| **Verwaltungsschnittstelle** | **ens192** |
| **Netzwerkparameter (vom Kunden bereitgestellt)** | **IP-Adresse des Verwaltungsnetzwerks:** <br/>**Subnetzmaske:** <br/>**Appliance-Hostname:** <br/>**DNS:** <br/>**Standardgateway:** <br/>**Eingabeschnittstellen:**|
| **Bridge-Schnittstellen:** | Die Bridge-Schnittstelle muss nicht konfiguriert werden. Diese Option wird nur bei besonderen Anwendungsfällen verwendet. |
1. Geben Sie **Y** ein, um die Einstellungen zu akzeptieren.
1. Anmeldeinformationen werden automatisch generiert und angezeigt. Kopieren Sie den Benutzernamen und das Kennwort an einen sicheren Ort, weil sie für die Anmeldung und Verwaltung erforderlich sind.
- **Support**: Der Administrator für die Benutzerverwaltung.
- **CyberX**: Die Entsprechung von „root“ für den Zugriff auf die Appliance.
1. Die Appliance wird neu gestartet.
1. Greifen Sie über die zuvor konfigurierte IP-Adresse auf die Verwaltungskonsole zu: `https://ip_address`.
:::image type="content" source="media/tutorial-install-components/defender-for-iot-sign-in-screen.png" alt-text="Screenshot des Zugriffs auf die Verwaltungskonsole":::
## <a name="on-premises-management-console-installation"></a>Installation der lokalen Verwaltungskonsole
Vor Installation der Software auf der Appliance müssen Sie deren BIOS-Konfiguration anpassen:
### <a name="bios-configuration"></a>BIOS-Konfiguration
**So konfigurieren Sie das BIOS für Ihre Appliance**:
1. [Aktivieren Sie den Remotezugriff, und aktualisieren Sie das Kennwort](#enable-remote-access-and-update-the-password).
1. [Konfigurieren Sie das BIOS](#configure-the-hpe-bios).
### <a name="software-installation"></a>Softwareinstallation
Der Installationsvorgang dauert ungefähr 20 Minuten. Nach der Installation wird das System mehrmals neu gestartet.
Während des Installationsvorgangs können Sie eine sekundäre NIC hinzufügen. Wenn Sie die sekundäre NIC während der Installation nicht installieren möchten, können Sie zu einem späteren Zeitpunkt [eine sekundäre NIC hinzufügen](#add-a-secondary-nic).
So installieren Sie die Software:
1. Wählen Sie Ihre bevorzugte Sprache für den Installationsvorgang aus.
:::image type="content" source="media/tutorial-install-components/on-prem-language-select.png" alt-text="Wählen Sie Ihre bevorzugte Sprache für den Installationsvorgang aus.":::
1. Wählen Sie **MANAGEMENT-RELEASE-\<version\>\<deployment type\>** aus.
:::image type="content" source="media/tutorial-install-components/on-prem-install-screen.png" alt-text="Wählen Sie Ihre Version aus.":::
1. Definieren Sie im Installations-Assistenten die Netzwerkeigenschaften:
:::image type="content" source="media/tutorial-install-components/on-prem-first-steps-install.png" alt-text="Screenshot des Applianceprofils":::
| Parameter | Konfiguration |
|--|--|
| **Konfigurieren der Verwaltungsnetzwerkschnittstelle** | Für Dell: **eth0, eth1** <br /> Für HP: **enu1, enu2** <br> oder <br />**möglicher Wert** |
| **Konfigurieren der IP-Adresse des Verwaltungsnetzwerks:** | **Die vom Kunden angegebene IP-Adresse** |
| **Konfigurieren der Subnetzmaske:** | **Die vom Kunden angegebene IP-Adresse** |
| **Konfigurieren von DNS:** | **Die vom Kunden angegebene IP-Adresse** |
| **Konfigurieren der IP-Adresse des Standardgateways:** | **Die vom Kunden angegebene IP-Adresse** |
1. **(Optional)** Wenn Sie eine sekundäre Netzwerkschnittstellenkarte (Network Interface Card, NIC) installieren möchten, definieren Sie das folgende Applianceprofil und die folgenden Netzwerkeigenschaften:
:::image type="content" source="media/tutorial-install-components/on-prem-secondary-nic-install.png" alt-text="Screenshot mit Fragen zur Installation der sekundären NIC":::
| Parameter | Konfiguration |
|--|--|
| **Konfigurieren der Sensorüberwachungsschnittstelle (optional):** | **eth1** oder **möglicher Wert** |
| **Konfigurieren einer IP-Adresse für die Sensorüberwachungsschnittstelle:** | **Die vom Kunden angegebene IP-Adresse** |
| **Konfigurieren einer Subnetzmaske für die Sensorüberwachungsschnittstelle:** | **Die vom Kunden angegebene IP-Adresse** |
1. Akzeptieren Sie die Einstellungen, und setzen Sie den Vorgang durch Eingabe von `Y` fort.
1. Nach ungefähr 10 Minuten werden die beiden Anmeldeinformationssätze angezeigt. Einer ist für einen **CyberX**-Benutzer und einer für einen **Support**-Benutzer bestimmt.
:::image type="content" source="media/tutorial-install-components/credentials-screen.png" alt-text="Kopieren Sie diese Anmeldeinformationen, weil sie nicht erneut angezeigt werden.":::
Speichern Sie die Benutzernamen und Kennwörter. Sie benötigen diese Anmeldeinformationen für den Zugriff auf die Plattform, wenn Sie sie zum ersten Mal verwenden.
1. Wählen Sie **Eingeben** aus, um den Vorgang fortzusetzen.
Informationen dazu, wie Sie den physischen Port auf Ihrer Appliance finden können, finden Sie unter [Suchen Ihres Ports](#find-your-port).
### <a name="add-a-secondary-nic"></a>Hinzufügen einer sekundären NIC
Sie können die Sicherheit Ihrer lokalen Verwaltungskonsole erhöhen, indem Sie eine sekundäre NIC hinzufügen. Durch das Hinzufügen einer sekundären NIC haben Sie eine dedizierte NIC für Ihre Benutzer, und die andere NIC unterstützt die Konfiguration eines Gateways für geroutete Netzwerke. Die sekundäre NIC wird für alle angefügten Sensoren innerhalb eines IP-Adressbereichs dediziert.
:::image type="content" source="media/tutorial-install-components/secondary-nic.png" alt-text="Die Gesamtarchitektur der sekundären NIC":::
Beide NICs unterstützen die Benutzeroberfläche.
Wenn Sie wählen, dass Sie keine sekundäre NIC bereitstellen möchten, stehen alle Features über die primäre NIC zur Verfügung.
Wenn Sie Ihre lokale Verwaltungskonsole bereits konfiguriert haben und ihr eine sekundäre NIC hinzufügen möchten, führen Sie die folgenden Schritte aus:
1. Verwenden Sie den Befehl zum Neukonfigurieren des Netzwerks („network reconfigure“):
```bash
sudo cyberx-management-network-reconfigure
```
1. Geben Sie auf die folgenden Fragen die folgenden Antworten ein:
:::image type="content" source="media/tutorial-install-components/network-reconfig-command.png" alt-text="Geben Sie die folgenden Antworten zum Konfigurieren Ihrer Appliance ein.":::
| Parameter | Einzugebende Antwort |
|--|--|
| **IP-Adresse des Verwaltungsnetzwerks** | `N` |
| **Subnetzmaske** | `N` |
| **DNS** | `N` |
| **IP-Adresse des Standardgateways** | `N` |
| **Sensorüberwachungsschnittstelle (Optional. Anwendbar, wenn sich Sensoren in einem anderen Netzwerksegment befinden. Weitere Informationen finden Sie in den „Installationsanleitungen“.)**| `Y`, **wählen Sie einen möglichen Wert aus.** |
| **Eine IP-Adresse für die Sensorüberwachungsschnittstelle (auf die die Sensoren zugreifen können)** | `Y`, **die vom Kunden angegebene IP-Adresse**|
| **Eine Subnetzmaske für die Sensorüberwachungsschnittstelle (auf die die Sensoren zugreifen können)** | `Y`, **die vom Kunden angegebene IP-Adresse** |
| **Hostname** | **wird vom Kunden angegeben** |
1. Überprüfen Sie alle Auswahlmöglichkeiten, und geben Sie `Y` ein, um die Änderungen zu akzeptieren. Das System wird neu gestartet.
### <a name="find-your-port"></a>Suchen Ihres Ports
Wenn Sie Probleme bei der Suche nach dem physischen Port auf Ihrem Gerät haben, können Sie den folgenden Befehl verwenden:
```bash
sudo ethtool -p <port value> <time-in-seconds>
```
Dieser Befehl führt dazu, dass das Licht am Port während des angegebenen Zeitraums blinkt. Wenn Sie beispielsweise `sudo ethtool -p eno1 120` eingeben, blinkt Port „eno1“ 2 Minuten lang, sodass Sie den Port auf der Rückseite Ihrer Appliance finden können.
## <a name="virtual-appliance-on-premises-management-console-installation"></a>Virtuelles Gerät: Installation der lokalen Verwaltungskonsole
Der virtuelle Computer der lokalen Verwaltungskonsole unterstützt die folgenden Architekturen:
| Aufbau | Spezifikationen | Verwendung |
|--|--|--|
| Enterprise <br/>(Standard und am häufigsten) | CPU: 8 <br/>Memory: 32G RAM<br/> HDD: 1,8 TB | Große Produktionsumgebungen |
| Klein | CPU: 4 <br/> Memory: 8G RAM<br/> HDD: 500 GB | Große Produktionsumgebungen |
| Office | CPU: 4 <br/>Memory: 8G RAM <br/> HDD: 100 GB | Kleine Testumgebungen |
### <a name="prerequisites"></a>Voraussetzungen
Die lokale Verwaltungskonsole unterstützt sowohl VMware- als auch Hyper-V-Bereitstellungsoptionen. Überprüfen Sie vor Installationsbeginn die folgenden Punkte:
- VMware (ESXi 5.5 oder höher) oder Hyper-V-Hypervisor (Windows 10 Pro oder Enterprise) wurde installiert und ist betriebsbereit.
- Die Hardwareressourcen stehen für den virtuellen Computer zur Verfügung.
- Sie haben die ISO-Installationsdatei für die lokale Verwaltungskonsole.
- Der Hypervisor wird ausgeführt.
### <a name="create-the-virtual-machine-esxi"></a>Erstellen des virtuellen Computers (ESXi)
So erstellen Sie den virtuellen Computer (ESXi):
1. Melden Sie sich beim ESXi an, wählen Sie den relevanten **Datenspeicher** und dann **Datenspeicherbrowser** aus.
1. Laden Sie das Image hoch, und wählen Sie **Schließen** aus.
1. Wechseln Sie zu **Virtuelle Computer**.
1. Wählen Sie **VM erstellen/registrieren** aus.
1. Wählen Sie **Neuen virtuellen Computer erstellen** und dann **Weiter** aus.
1. Fügen Sie einen Sensornamen hinzu, und wählen Sie:
- Kompatibilität: \<latest ESXi version> aus.
- Gastbetriebssystemfamilie: Linux
- Gastbetriebssystemversion: Ubuntu Linux (64 Bit)
1. Wählen Sie **Weiter** aus.
1. Wählen Sie den relevanten Datenspeicher und dann **Weiter** aus.
1. Ändern Sie die virtuellen Hardwareparameter entsprechend der erforderlichen Architektur.
1. Wählen Sie für **CD/DVD-Laufwerk 1** die Option **Datenspeicher-ISO-Datei** und dann die zuvor hochgeladene ISO-Datei aus.
1. Klicken Sie auf **Weiter** > **Fertig stellen**.
### <a name="create-the-virtual-machine-hyper-v"></a>Erstellen des virtuellen Computers (Hyper-V)
So erstellen Sie einen virtuellen Computer mithilfe von Hyper-V:
1. Erstellen Sie im Hyper-V-Manager einen virtuellen Datenträger.
1. Wählen Sie das Format **VHDX** aus.
1. Wählen Sie **Weiter** aus.
1. Wählen Sie den Typ **Dynamic expanding** (Dynamisch erweiterbar) aus.
1. Wählen Sie **Weiter** aus.
1. Geben Sie den Namen und Speicherort für die virtuelle Festplatte ein.
1. Wählen Sie **Weiter** aus.
1. Geben Sie die erforderliche Größe (entsprechend der Architektur) ein.
1. Wählen Sie **Weiter** aus.
1. Überprüfen Sie die Zusammenfassung, und wählen Sie **Fertig stellen** aus.
1. Erstellen Sie im Menü **Aktionen** einen neuen virtuellen Computer.
1. Wählen Sie **Weiter** aus.
1. Geben Sie einen Namen für den virtuellen Computer ein.
1. Wählen Sie **Weiter** aus.
1. Wählen Sie **Generation** aus, und legen Sie den Wert auf **Generation 1** fest.
1. Wählen Sie **Weiter** aus.
1. Geben Sie die Speicherbelegung (entsprechend der Architektur) an, und aktivieren Sie das Kontrollkästchen für den dynamischen Arbeitsspeicher.
1. Wählen Sie **Weiter** aus.
1. Konfigurieren Sie den Netzwerkadapter entsprechend Ihrer Server-Netzwerktopologie.
1. Wählen Sie **Weiter** aus.
1. Verbinden Sie die zuvor erstellte VHDX-Datei mit dem virtuellen Computer.
1. Wählen Sie **Weiter** aus.
1. Überprüfen Sie die Zusammenfassung, und wählen Sie **Fertig stellen** aus.
1. Klicken Sie mit der rechten Maustaste auf den neuen virtuellen Computer, und wählen Sie **Einstellungen** aus.
1. Wählen Sie **Hardware hinzufügen** aus, und fügen Sie für **Netzwerkadapter** einen neuen Adapter hinzu.
1. Wählen Sie für **Virtueller Switch** den Switch aus, der eine Verbindung mit dem Sensorverwaltungsnetzwerk herstellen wird.
1. Ordnen Sie CPU-Ressourcen (entsprechend der Architektur) zu.
1. Verbinden Sie das ISO-Image der Verwaltungskonsole mit einem virtuellen DVD-Laufwerk.
1. Starten Sie den virtuellen Computer.
1. Wählen Sie im Menü **Aktionen** die Option **Verbinden** aus, um die Softwareinstallation fortzusetzen.
### <a name="software-installation-esxi-and-hyper-v"></a>Softwareinstallation (ESXi und Hyper-V)
Mit dem Start des virtuellen Computers wird der Installationsvorgang aus dem ISO-Image gestartet.
So installieren Sie die Software:
1. Wählen Sie **Englisch** aus.
1. Wählen Sie die erforderliche Architektur für Ihre Bereitstellung aus.
1. Definieren Sie die Netzwerkschnittstelle für das Sensorverwaltungsnetzwerk: Schnittstelle, IP, Subnetz, DNS-Server und Standardgateway.
1. Anmeldeinformationen werden automatisch generiert. Speichern Sie den Benutzernamen und die Kennwörter. Sie benötigen diese Anmeldeinformationen für den Zugriff auf die Plattform, wenn Sie sie zum ersten Mal verwenden.
Die Appliance wird dann neu gestartet.
1. Greifen Sie über die zuvor konfigurierte IP-Adresse auf die Verwaltungskonsole zu: `<https://ip_address>`.
:::image type="content" source="media/tutorial-install-components/defender-for-iot-management-console-sign-in-screen.png" alt-text="Screenshot des Anmeldebildschirms auf der Verwaltungskonsole":::
## <a name="legacy-appliances"></a>Legacyappliances
In diesem Abschnitt werden die Installationsverfahren nur für *Legacyappliances* beschrieben. Weitere Informationen finden Sie unter [Identifizieren erforderlicher Appliances](how-to-identify-required-appliances.md#identify-required-appliances), wenn Sie eine neue Appliance erwerben.
### <a name="nuvo-5006lp-installation"></a>Nuvo 5006LP-Installation
In diesem Abschnitt wird das Installationsverfahren für Nuvo 5006LP beschrieben. Vor der Installation der Software auf der Nuvo 5006LP-Appliance müssen Sie deren BIOS-Konfiguration anpassen.
#### <a name="nuvo-5006lp-front-panel"></a>Vorderseite des Nuvo 5006LP-Geräts
:::image type="content" source="media/tutorial-install-components/nuvo5006lp_frontpanel.png" alt-text="Ansicht der Vorderseite des Nuvo 5006LP-Geräts":::
1. Netzschalter, Betriebsanzeige
1. DVI-Videoanschlüsse
1. HDMI-Videoanschlüsse
1. VGA-Videoanschlüsse
1. Ein-/Ausschalten der Remotesteuerung und Status-LED-Ausgabe
1. Taste zum Zurücksetzen
1. Verwaltungsnetzwerkadapter
1. Ports zum Empfang gespiegelter Daten
#### <a name="nuvo-back-panel"></a>Nuvo-Rückseite
:::image type="content" source="media/tutorial-install-components/nuvo5006lp_backpanel.png" alt-text="Ansicht der Rückseite des Nuvo 5006LP-Geräts":::
1. SIM-Kartenslot
1. Mikrofon und Lautsprecher
1. COM-Ports
1. USB-Anschlüsse
1. Gleichstrom-Netzanschluss (DC IN)
#### <a name="configure-the-nuvo-5006lp-bios"></a>Konfigurieren des Nuvo 5006LP-BIOS
Im folgenden Verfahren wird beschrieben, wie Sie das BIOS eines Nuvo 5006LP-Geräts konfigurieren. Stellen Sie sicher, dass das Betriebssystem vorab auf der Appliance installiert wurde.
**So konfigurieren Sie das BIOS**:
1. Schalten Sie die Appliance ein.
1. Drücken Sie **F2**, um die BIOS-Konfiguration zu starten.
1. Navigieren Sie zu **Power** (Stromversorgung), und ändern Sie „Power On after Power Failure“ (Nach Stromausfall einschalten) in „S0-Power On“ (S0 – einschalten).
:::image type="content" source="media/tutorial-install-components/nuvo-power-on.png" alt-text="Ändern des Nuvo 5006LP-Geräts für das Einschalten nach einem Stromausfall":::
1. Navigieren Sie zu **Boot** (Start), und stellen Sie sicher, dass **PXE Boot to LAN** (PXE-LAN-Start) auf **Disabled** (Deaktiviert) festgelegt ist.
1. Drücken Sie zum Speichern **F10**, und wählen Sie dann **Exit** (Beenden) aus.
#### <a name="software-installation-nuvo-5006lp"></a>Softwareinstallation (Nuvo 5006LP)
Der Installationsvorgang dauert ca. 20 Minuten. Nach der Installation wird das System mehrmals neu gestartet.
**So installieren Sie die Software**:
1. Verbinden Sie die externe CD oder den USB-Stick mit dem ISO-Image.
1. Starten Sie die Appliance.
1. Wählen Sie **Englisch** aus.
1. Wählen Sie **XSENSE-RELEASE-\<version> Office...** aus.
:::image type="content" source="media/tutorial-install-components/sensor-version-select-screen-v2.png" alt-text="Auswählen der zu installierenden Version des Sensors":::
1. Definieren Sie die Appliancearchitektur und die Netzwerkeigenschaften:
:::image type="content" source="media/tutorial-install-components/nuvo-profile-appliance.png" alt-text="Definieren der Nuvo-Architektur und der Netzwerkeigenschaften":::
| Parameter | Konfiguration |
| ----------| ------------- |
| **Hardwareprofil** | Wählen Sie **office** aus. |
| **Verwaltungsschnittstelle** | **eth0** |
| **IP-Adresse des Verwaltungsnetzwerks** | **Die vom Kunden angegebene IP-Adresse** |
| **Subnetzmaske des Verwaltungsnetzwerks** | **Die vom Kunden angegebene IP-Adresse** |
| **DNS** | **Die vom Kunden angegebene IP-Adresse** |
| **IP-Adresse des Standardgateways** | **0.0.0.0** |
| **Eingabeschnittstelle** | Die Liste der Eingabeschnittstellen wird vom System für Sie generiert. <br />Kopieren Sie zum Spiegeln der Eingabeschnittstellen alle in der Liste mit einem Komma als Trennzeichen dargestellten Elemente. |
| **Brückenschnittstelle** | - |
1. Akzeptieren Sie die Einstellungen, und setzen Sie den Vorgang durch Eingabe von `Y` fort.
Nach etwa 10 Minuten werden automatisch Anmeldeinformationen generiert. Speichern Sie den Benutzernamen und die Kennwörter. Sie benötigen diese Anmeldeinformationen für den Zugriff auf die Plattform, wenn Sie sie zum ersten Mal verwenden.
### <a name="fitlet2-mini-sensor-installation"></a>Installation des Fitlet2-Minisensors
In diesem Abschnitt wird das Installationsverfahren für Fitlet2 beschrieben. Vor der Installation der Software auf der Fitlet-Appliance müssen Sie deren BIOS-Konfiguration anpassen.
#### <a name="fitlet2-front-panel"></a>Vorderseite des Fitlet2-Geräts
:::image type="content" source="media/tutorial-install-components/fitlet-front-panel.png" alt-text="Ansicht der Vorderseite des Fitlet2-Geräts":::
#### <a name="fitlet2-back-panel"></a>Rückseite des Fitlet2-Geräts
:::image type="content" source="media/tutorial-install-components/fitlet2-back-panel.png" alt-text="Ansicht der Rückseite des Fitlet2-Geräts":::
#### <a name="configure-the-fitlet2-bios"></a>Konfigurieren des Fitlet2-BIOS
**So konfigurieren Sie das Fitlet2-BIOS**:
1. Schalten Sie die Appliance ein.
1. Navigieren Sie zu **Main** > **OS Selection** (Hauptbereich > Betriebssystemauswahl).
1. Drücken Sie **+/-**, um **Linux** auszuwählen.
:::image type="content" source="media/tutorial-install-components/fitlet-linux.png" alt-text="Festlegen von Linux als Betriebssystem auf dem Fitlet2-Gerät":::
1. Überprüfen Sie, ob das Systemdatum und die Systemzeit mit dem Datum und der Uhrzeit der Installation aktualisiert werden.
1. Navigieren Sie zu **Advanced** (Erweitert), und wählen Sie **ACPI Settings** (ACPI-Einstellungen) aus.
1. Wählen Sie **Enable Hibernation** (Ruhezustand aktivieren) aus, und drücken Sie **+/-**, um **Disabled** (Deaktiviert) auszuwählen.
:::image type="content" source="media/tutorial-install-components/disable-hibernation.png" alt-text="Deaktivieren des Ruhezustands auf dem Fitlet2-Gerät":::
1. Drücken Sie die **ESC**-Taste.
1. Navigieren Sie zu **Advanced** > **TPM Configuration** (Erweitert > TPM-Konfiguration).
1. Wählen Sie **fTPM** aus, und drücken Sie **+/-**, um **Disabled** (Deaktiviert) auszuwählen.
1. Drücken Sie die **ESC**-Taste.
1. Navigieren Sie zu **CPU Configuration** > **VT-d** (CPU-Konfiguration > VT-d).
1. Drücken Sie **+/-**, um **Enabled** (Aktiviert) auszuwählen.
1. Navigieren Sie zu **CSM Configuration** > **CSM Support** (CSM-Konfiguration > CSM-Unterstützung).
1. Drücken Sie **+/-**, um **Enabled** (Aktiviert) auszuwählen.
1. Navigieren Sie zu **Advanced** > **Boot option filter [Legacy only]** (Erweitert > Startoptionsfilter [nur Legacy]), und ändern Sie die Einstellung in den folgenden Feldern in **Legacy**:
- Netzwerk
- Storage
- Video
- Andere PCI
:::image type="content" source="media/tutorial-install-components/legacy-only.png" alt-text="Festlegen aller Felder auf „Legacy“":::
1. Drücken Sie die **ESC**-Taste.
1. Navigieren Sie zu **Security** > **Secure Boot Customization** (Sicherheit > Anpassung für sicheren Start).
1. Drücken Sie **+/-**, um **Disabled** (Deaktiviert) auszuwählen.
1. Drücken Sie die **ESC**-Taste.
1. Navigieren Sie zu **Boot** > **Boot mode select** (Start > Startmodus auswählen), und wählen Sie **Legacy** aus.
1. Wählen Sie **Boot Option #1 – [USB CD/DVD]** (Startoption 1: [USB CD/DVD]) aus.
1. Wählen Sie **Save & Exit** (Speichern und beenden) aus.
#### <a name="software-installation-fitlet2"></a>Softwareinstallation (Fitlet2)
Der Installationsvorgang dauert ca. 20 Minuten. Nach der Installation wird das System mehrmals neu gestartet.
1. Verbinden Sie die externe CD oder den USB-Stick mit dem ISO-Image.
1. Starten Sie die Appliance.
1. Wählen Sie **Englisch** aus.
1. Wählen Sie **XSENSE-RELEASE-\<version> Office...** aus.
:::image type="content" source="media/tutorial-install-components/sensor-version-select-screen-v2.png" alt-text="Auswählen der zu installierenden Version des Sensors":::
> [!Note]
> Wählen Sie nicht „Ruggedized“ aus.
1. Definieren Sie die Appliancearchitektur und die Netzwerkeigenschaften:
:::image type="content" source="media/tutorial-install-components/nuvo-profile-appliance.png" alt-text="Definieren der Nuvo-Architektur und der Netzwerkeigenschaften":::
| Parameter | Konfiguration |
| ----------| ------------- |
| **Hardwareprofil** | Wählen Sie **office** aus. |
| **Verwaltungsschnittstelle** | **em1** |
| **IP-Adresse des Verwaltungsnetzwerks** | **Die vom Kunden angegebene IP-Adresse** |
| **Subnetzmaske des Verwaltungsnetzwerks** | **Die vom Kunden angegebene IP-Adresse** |
| **DNS** | **Die vom Kunden angegebene IP-Adresse** |
| **IP-Adresse des Standardgateways** | **0.0.0.0** |
| **Eingabeschnittstelle** | Die Liste der Eingabeschnittstellen wird vom System für Sie generiert. <br />Kopieren Sie zum Spiegeln der Eingabeschnittstellen alle in der Liste mit einem Komma als Trennzeichen dargestellten Elemente. |
| **Brückenschnittstelle** | - |
1. Akzeptieren Sie die Einstellungen, und setzen Sie den Vorgang durch Eingabe von `Y` fort.
Nach etwa 10 Minuten werden automatisch Anmeldeinformationen generiert. Speichern Sie den Benutzernamen und die Kennwörter. Sie benötigen diese Anmeldeinformationen für den Zugriff auf die Plattform, wenn Sie sie zum ersten Mal verwenden.
## <a name="post-installation-validation"></a>Überprüfung nach der Installation
Zum Überprüfen der Installation einer physischen Appliance müssen Sie viele Tests durchführen. Derselbe Überprüfungsvorgang gilt für alle Appliancetypen.
Führen Sie die Überprüfung über die GUI oder die CLI durch. Die Überprüfung steht für den Benutzer **Support** und den Benutzer **CyberX** zur Verfügung.
Die Überprüfung nach der Installation muss die folgenden Tests enthalten (eine CLI-Skizze folgt im Anschluss an die Liste):
- **Integritätstest**: Überprüfen Sie, ob das System ausgeführt wird.
- **Version**: Überprüfen Sie, ob die Version richtig ist.
- **ifconfig**: Überprüfen Sie, ob alle während des Installationsvorgangs konfigurierten Eingabeschnittstellen ausgeführt werden.
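Diese Prüfungen lassen sich über die CLI beispielsweise wie folgt zusammenfassen. Es handelt sich nur um eine Skizze: Die IP-Adresse 10.100.10.1 ist der weiter unten genannte Standardwert, und die Anmeldung erfolgt mit dem Benutzer **Support**.
```bash
# Per SSH als Benutzer "Support" an der Appliance anmelden (Beispiel-IP)
ssh support@10.100.10.1
# Integritätstest: Alle Dienste sollten als ausgeführt (grün) angezeigt werden
system sanity
# Version der Appliance prüfen
system version
# Eingabeschnittstellen prüfen (entspricht dem Linux-Befehl "ifconfig")
network list
```
Die einzelnen Befehle werden in den folgenden Abschnitten ausführlicher beschrieben.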
### <a name="check-system-health-by-using-the-gui"></a>Überprüfen der Systemintegrität über die GUI
:::image type="content" source="media/tutorial-install-components/system-health-check-screen.png" alt-text="Screenshot der Systemintegritätsprüfung":::
#### <a name="sanity"></a>Integrität
- **Appliance**: Führt die Integritätsprüfung der Appliance durch. Sie können dieselbe Überprüfung mit dem CLI-Befehl `system-sanity` durchführen.
- **Version**: Zeigt die Applianceversion an.
- **Netzwerkeigenschaften**: Zeigt die Netzwerkparameter des Sensors an.
#### <a name="redis"></a>Redis
- **Arbeitsspeicher**: Liefert das Gesamtbild der Speicherauslastung, z. B. wie viel Arbeitsspeicher belegt wurde und wie viel noch übrig ist.
- **Längster Schlüssel**: Zeigt die längsten Schlüssel an, die zu einer übermäßigen Speicherauslastung führen können.
#### <a name="system"></a>System
- **Kernprotokoll**: Stellt die letzten 500 Zeilen des Kernprotokolls bereit, sodass Sie die neuesten Protokollzeilen anzeigen können, ohne das gesamte Systemprotokoll exportieren zu müssen.
- **Task-Manager**: Übersetzt die in der Tabelle der Prozesse angezeigten Aufgaben in die folgenden Ebenen:
- Persistente Ebene (Redis)
- Cacheebene (SQL)
- **Netzwerkstatistik**: Zeigt Ihre Netzwerkstatistik an.
- **TOP**: Zeigt die Tabelle der Prozesse (Table Of Processes) an. Dies ist ein Linux-Befehl, der eine dynamische Echtzeitansicht des ausgeführten Systems bereitstellt.
- **Überprüfung des Sicherungsspeichers**: Gibt den Status des Sicherungsspeichers an und überprüft Folgendes:
- Den Speicherort des Sicherungsordners
- Die Größe des Sicherungsordners
- Die Einschränkungen des Sicherungsordners
- Den Zeitpunkt der letzten Sicherung
- Wie viel Speicherplatz für die zusätzlichen Sicherungsdateien vorhanden ist
- **ifconfig**: Zeigt die Parameter für die physischen Schnittstellen der Appliance an.
- **CyberX nload**: Zeigt den Netzwerkdatenverkehr und die Bandbreite mithilfe der 6-Sekunden-Tests an.
- **Fehler aus Core.log**: Zeigt Fehler aus der Kernprotokolldatei an.
**So greifen Sie auf das Tool zu**:
1. Melden Sie sich beim Sensor mit den Benutzeranmeldeinformationen für **Support** an.
1. Wählen Sie im Fenster **Systemeinstellungen** die Option **Systemstatistik** aus.
:::image type="icon" source="media/tutorial-install-components/system-statistics-icon.png" border="false":::
### <a name="check-system-health-by-using-the-cli"></a>Überprüfen der Systemintegrität über die CLI
Vergewissern Sie sich, dass das System betriebsbereit ist und ausgeführt wird, bevor Sie die Integrität des Systems testen.
**So testen Sie die Integrität des Systems**:
1. Stellen Sie über ein Linux-Terminal (z. B. PuTTY) als Benutzer **Support** eine Verbindung mit der CLI her.
1. Geben Sie `system sanity` ein.
1. Überprüfen Sie, ob alle Dienste grün sind (d. h., sie werden ausgeführt).
:::image type="content" source="media/tutorial-install-components/support-screen.png" alt-text="Der Screenshot zeigt, dass Dienste ausgeführt werden.":::
1. Überprüfen Sie, ob unten **System is UP! (prod)** (System ist AKTIV (Prod)) angezeigt wird.
Überprüfen Sie, ob die richtige Version verwendet wird:
**So überprüfen Sie die Version des Systems**:
1. Stellen Sie über ein Linux-Terminal (z. B. PuTTY) als Benutzer **Support** eine Verbindung mit der CLI her.
1. Geben Sie `system version` ein.
1. Überprüfen Sie, ob die richtige Version angezeigt wird.
Überprüfen Sie, ob alle während des Installationsvorgangs konfigurierten Eingabeschnittstellen ausgeführt werden:
**So überprüfen Sie den Netzwerkstatus des Systems**:
1. Stellen Sie über ein Linux-Terminal (z. B. PuTTY) als Benutzer **Support** eine Verbindung mit der CLI her.
1. Geben Sie `network list` ein (entspricht dem Linux-Befehl `ifconfig`).
1. Überprüfen Sie, ob die erforderlichen Eingabeschnittstellen angezeigt werden. Wenn beispielsweise zwei Quad-Kupfer-NICs installiert wurden, sollte die Liste 10 Schnittstellen enthalten.
:::image type="content" source="media/tutorial-install-components/interface-list-screen.png" alt-text="Screenshot der Liste von Schnittstellen":::
Überprüfen Sie, ob Sie auf die Web-GUI der Konsole zugreifen können:
**So überprüfen Sie, ob die Verwaltung auf die Benutzeroberfläche zugreifen kann** (eine Befehlszeilenskizze folgt nach den Schritten):
1. Schließen Sie einen Laptop mit einem Ethernet-Kabel an den Verwaltungsport (**Gb1**) an.
1. Definieren Sie für die Laptop-NIC-Adresse, dass sie sich in demselben Bereich wie die Appliance befinden soll.
:::image type="content" source="media/tutorial-install-components/access-to-ui.png" alt-text="Screenshot des Verwaltungszugriffs auf die Benutzeroberfläche":::
1. Pingen Sie die IP-Adresse der Appliance vom Laptop aus, um die Konnektivität zu überprüfen (Standard: 10.100.10.1).
1. Öffnen Sie den Chrome-Browser auf dem Laptop, und geben Sie die IP-Adresse der Appliance ein.
1. Wählen Sie im Fenster **Ihre Verbindung ist nicht privat** die Option **Erweitert** aus, und setzen Sie den Vorgang fort.
1. Der Test ist erfolgreich, wenn der Defender für IoT-Anmeldebildschirm angezeigt wird.
:::image type="content" source="media/tutorial-install-components/defender-for-iot-sign-in-screen.png" alt-text="Screenshot des Zugriffs auf die Verwaltungskonsole":::
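Die vorangehenden Schritte lassen sich vom Laptop aus auch skizzenhaft per Befehlszeile nachvollziehen. Die IP-Adresse 10.100.10.1 ist der oben genannte Standardwert; `curl -k` überspringt die Zertifikatsprüfung, analog zur Auswahl von **Erweitert** im Browser.
```bash
# Erreichbarkeit der Appliance prüfen (Standard-IP laut Anleitung)
ping 10.100.10.1
# Anmeldeseite abrufen; -k ignoriert das selbstsignierte Zertifikat
curl -k https://10.100.10.1
```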
## <a name="troubleshooting"></a>Problembehandlung
### <a name="you-cant-connect-by-using-a-web-interface"></a>Sie können keine Verbindung über eine Weboberfläche herstellen
1. Überprüfen Sie, ob sich der Computer, mit dem Sie die Verbindung herstellen möchten, in demselben Netzwerk wie die Appliance befindet.
1. Überprüfen Sie, ob das GUI-Netzwerk mit dem Verwaltungsport verbunden ist.
1. Pingen Sie die IP-Adresse der Appliance. Wenn Pingen nicht möglich ist:
1. Schließen Sie einen Monitor und eine Tastatur an die Appliance an.
1. Verwenden Sie den Benutzer **Support** und dessen Kennwort für die Anmeldung.
1. Verwenden Sie den Befehl `network list`, um die aktuelle IP-Adresse anzuzeigen.
:::image type="content" source="media/tutorial-install-components/network-list.png" alt-text="Screenshot der Netzwerkliste":::
1. Wenn die Netzwerkparameter falsch konfiguriert wurden, ändern Sie sie mit dem folgenden Verfahren:
1. Verwenden Sie den Befehl `network edit-settings`.
1. Wählen Sie zum Ändern der IP-Adresse des Verwaltungsnetzwerks **Y** aus.
1. Wählen Sie zum Ändern der Subnetzmaske **Y** aus.
1. Wählen Sie zum Ändern des DNS **Y** aus.
1. Wählen Sie zum Ändern der IP-Adresse des Standardgateways **Y** aus.
1. Wählen Sie für eine Änderung der Eingabeschnittstelle (nur Sensor) **N** aus.
1. Wählen Sie zum Übernehmen der Einstellungen **Y** aus.
1. Stellen Sie nach dem Neustart eine Verbindung mit den Anmeldeinformationen des Supports her, und überprüfen Sie mit dem Befehl `network list`, ob die Parameter geändert wurden.
1. Versuchen Sie von der GUI aus erneut, zu pingen und eine Verbindung herzustellen.
### <a name="the-appliance-isnt-responding"></a>Die Appliance reagiert nicht
1. Schließen Sie einen Monitor und eine Tastatur an die Appliance an, oder verwenden Sie PuTTY, um eine Remoteverbindung mit der CLI herzustellen.
1. Verwenden Sie die Anmeldeinformationen für den Benutzer **Support**, um sich anzumelden.
1. Verwenden Sie den Befehl `system sanity`, um zu überprüfen, ob alle Prozesse ausgeführt werden.
:::image type="content" source="media/tutorial-install-components/system-sanity-screen.png" alt-text="Screenshot des Befehls „system sanity“ für Systemintegrität":::
Wenden Sie sich bei allen anderen Problemen an den [Microsoft-Support](https://support.microsoft.com/en-us/supportforbusiness/productselection?sapId=82c88f35-1b8e-f274-ec11-c6efdd6dd099).
## <a name="configure-a-span-port"></a>Konfigurieren eines SPAN-Ports
Ein virtueller Switch verfügt nicht über Spiegelungsfunktionen. Sie können jedoch den Promiscuous Mode in einer virtuellen Switch-Umgebung verwenden. Der Promiscuous-Modus ist ein Betriebsmodus sowie eine Sicherheits-, Überwachungs- und Verwaltungstechnik, die auf der Ebene des virtuellen Switches oder der Portgruppe definiert wird. In der Standardeinstellung ist der Promiscuous-Modus deaktiviert. Wenn der Promiscuous-Modus aktiviert ist, verwenden die Netzwerkschnittstellen der virtuellen Maschine, die sich in derselben Portgruppe befinden, den Promiscuous-Modus, um den gesamten Netzwerkverkehr anzuzeigen, der über diesen virtuellen Switch läuft. Sie können eine Umgehung entweder mit ESXi oder Hyper-V implementieren.
:::image type="content" source="media/tutorial-install-components/purdue-model.png" alt-text="Screenshot der Stelle in Ihrer Architektur, an der der Sensor platziert werden soll.":::
### <a name="configure-a-span-port-with-esxi"></a>Konfigurieren eines SPAN-Ports mit ESXi
**So konfigurieren Sie einen SPAN-Port mit ESXi** (eine PowerCLI-Skizze folgt nach den Schritten):
1. Öffnen Sie „vSwitch“-Eigenschaften.
1. Wählen Sie **Hinzufügen**.
1. Wählen Sie **Virtueller Computer** > **Weiter** aus.
1. Fügen Sie die Netzwerkbezeichnung **SPAN-Netzwerk** ein, wählen Sie **VLAN-ID** > **Alle** und dann **Weiter** aus.
1. Wählen Sie **Fertig stellen** aus.
1. Wählen Sie **SPAN-Netzwerk** > **Bearbeiten** aus.
1. Wählen Sie **Sicherheit** aus, und überprüfen Sie, ob die Richtlinie für den **Promiscuous-Modus** auf **Akzeptieren** festgelegt wurde.
1. Wählen Sie **OK** und dann **Schließen** aus, um die vSwitch-Eigenschaften zu schließen.
1. Öffnen Sie die **XSense VM**-Eigenschaften.
1. Wählen Sie für **Netzwerkadapter 2** das **SPAN**-Netzwerk aus.
1. Klicken Sie auf **OK**.
1. Stellen Sie eine Verbindung mit dem Sensor her, und überprüfen Sie, ob die Spiegelung funktioniert.
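Wer die Konfiguration nicht über den vSphere-Client, sondern per Skript vornehmen möchte, kann die Schritte näherungsweise mit VMware PowerCLI abbilden. Das folgende Beispiel ist eine Skizze unter Annahmen: Host- und Objektnamen („esxi01“, „vSwitch0“, „SPAN Network“, „XSense VM“) sind Beispielwerte, und die genauen Cmdlet-Optionen können je nach PowerCLI-Version abweichen.
```bash
# Portgruppe "SPAN Network" mit VLAN-ID 4095 (entspricht "Alle") am vSwitch anlegen
$vs = Get-VirtualSwitch -VMHost "esxi01" -Name "vSwitch0"
New-VirtualPortGroup -VirtualSwitch $vs -Name "SPAN Network" -VLanId 4095
# Promiscuous-Modus für die Portgruppe auf "Akzeptieren" setzen
Get-VirtualPortGroup -VMHost "esxi01" -Name "SPAN Network" |
  Get-SecurityPolicy |
  Set-SecurityPolicy -AllowPromiscuous $true
# Zweiten Netzwerkadapter der Sensor-VM mit dem SPAN-Netzwerk verbinden
Get-VM "XSense VM" | Get-NetworkAdapter |
  Where-Object { $_.Name -eq "Network adapter 2" } |
  Set-NetworkAdapter -NetworkName "SPAN Network" -Confirm:$false
```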
### <a name="configure-a-span-port-with-hyper-v"></a>Konfigurieren eines SPAN-Ports mit Hyper-V
Bevor Sie beginnen, müssen Sie Folgendes sicherstellen:
- Es wird keine Instanz einer virtuellen Appliance ausgeführt.
- SPAN ist für den Datenport und nicht für den Verwaltungsport aktiviert.
- Die SPAN-Konfiguration des Datenports ist nicht mit einer IP-Adresse konfiguriert.
**So konfigurieren Sie einen SPAN-Port mit Hyper-V:**
1. Öffnen Sie den Manager für virtuelle Switches.
1. Wählen Sie in der Liste virtueller Switches **Neuer virtueller Netzwerkswitch** > **Extern** als Typ für den dedizierten übergreifenden Netzwerkadapter aus.
:::image type="content" source="media/tutorial-install-components/new-virtual-network.png" alt-text="Screenshot: Auswählen von „Neuer virtueller Netzwerkswitch“ und „Extern“ vor dem Erstellen des virtuellen Switches":::
1. Wählen Sie **Virtuellen Switch erstellen** aus.
1. Wählen Sie unter „Verbindungstyp“ die Option **Externes Netzwerk** aus.
1. Vergewissern Sie sich, dass **Gemeinsames Verwenden dieses Netzwerkadapters für das Verwaltungsbetriebssystem zulassen** aktiviert ist.
:::image type="content" source="media/tutorial-install-components/external-network.png" alt-text="Auswählen von „Externes Netzwerk“ und „Gemeinsames Verwenden dieses Netzwerkadapters für das Verwaltungsbetriebssystem zulassen“":::
1. Klicken Sie auf **OK**.
#### <a name="attach-a-span-virtual-interface-to-the-virtual-switch"></a>Anfügen einer virtuellen SPAN-Schnittstelle an den virtuellen Switch
Sie können über Windows PowerShell oder den Hyper-V-Manager eine virtuelle SPAN-Schnittstelle an den virtuellen Switch anfügen.
**So fügen Sie über PowerShell eine virtuelle SPAN-Schnittstelle an den virtuellen Switch an**
1. Wählen Sie den neu hinzugefügten virtuellen SPAN-Switch aus, und fügen Sie mit dem folgenden Befehl einen neuen Netzwerkadapter hinzu:
```bash
ADD-VMNetworkAdapter -VMName VK-C1000V-LongRunning-650 -Name Monitor -SwitchName vSwitch_Span
```
1. Aktivieren Sie mit dem folgenden Befehl die Portspiegelung für die ausgewählte Schnittstelle als SPAN-Ziel:
```bash
Get-VMNetworkAdapter -VMName VK-C1000V-LongRunning-650 | ? Name -eq Monitor | Set-VMNetworkAdapter -PortMirroring Destination
```
| Parameter | BESCHREIBUNG |
|--|--|
| VK-C1000V-LongRunning-650 | Name der virtuellen CPPM-Appliance |
|vSwitch_Span |Name des neu hinzugefügten virtuellen SPAN-Switches |
|Monitor |Name des neu hinzugefügten Adapters |
1. Klicken Sie auf **OK**.
Mit diesen Befehlen wird der Name der neu hinzugefügten Adapterhardware auf `Monitor` festgelegt. Bei Verwendung des Hyper-V-Managers wird der Name der neu hinzugefügten Adapterhardware auf `Network Adapter` festgelegt.
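Wenn der Adapter über den Hyper-V-Manager hinzugefügt wurde und anschließend denselben Namen wie im PowerShell-Beispiel tragen soll, lässt er sich umbenennen. Die folgende Zeile ist nur eine Skizze; der VM-Name ist der oben verwendete Beispielwert, und es wird angenommen, dass nur ein Adapter den Namen „Network Adapter“ trägt.
```bash
# Vorhandenen Adapter "Network Adapter" in "Monitor" umbenennen (Beispiel)
Rename-VMNetworkAdapter -VMName VK-C1000V-LongRunning-650 -Name "Network Adapter" -NewName Monitor
```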
**So fügen Sie über den Hyper-V-Manager eine virtuelle SPAN-Schnittstelle an den virtuellen Switch an**
1. Wählen Sie in der Hardwareliste die Option **Netzwerkadapter** aus.
1. Wählen Sie im Feld „Virtueller Switch“ die Option **vSwitch_Span** aus.
:::image type="content" source="media/tutorial-install-components/vswitch-span.png" alt-text="Screenshot: Auswählen der folgenden Optionen auf dem Bildschirm des virtuellen Switches":::
1. Wählen Sie in der Hardwareliste in der Dropdownliste „Netzwerkadapter“ die Option **Erweiterte Features** aus.
1. Wählen Sie im Abschnitt „Portspiegelung“ die Option **Ziel** als Spiegelungsmodus für die neue virtuelle Schnittstelle aus.
:::image type="content" source="media/tutorial-install-components/destination.png" alt-text="Screenshot: Erforderliche Auswahl zum Konfigurieren des Spiegelungsmodus":::
1. Klicken Sie auf **OK**.
#### <a name="enable-microsoft-ndis-capture-extensions-for-the-virtual-switch"></a>Aktivieren der Microsoft NDIS-Aufzeichnungserweiterungen für den virtuellen Switch
Die Microsoft NDIS-Aufzeichnungserweiterungen müssen für den neuen virtuellen Switch aktiviert werden.
**So aktivieren Sie Microsoft NDIS-Aufzeichnungserweiterungen für den neu hinzugefügten virtuellen Switch**:
1. Öffnen Sie auf dem Hyper-V-Host den Manager für virtuelle Switches.
1. Erweitern Sie in der Liste „Virtuelle Switches“ den Namen des virtuellen Switches (`vSwitch_Span`), und wählen Sie **Erweiterungen** aus.
1. Wählen Sie im Feld „Switcherweiterungen“ die Option **Microsoft NDIS-Aufzeichnung** aus.
:::image type="content" source="media/tutorial-install-components/microsoft-ndis.png" alt-text="Screenshot: Aktivieren von „Microsoft NDIS-Aufzeichnung“ durch Auswahl im Menü „Switcherweiterungen“":::
1. Klicken Sie auf **OK**.
#### <a name="set-the-mirroring-mode-on-the-external-port"></a>Festlegen des Spiegelungsmodus für den externen Port
Der Spiegelungsmodus muss für den externen Port des neuen virtuellen Switches festgelegt werden, der als Quelle fungieren soll.
Sie müssen den virtuellen Hyper-V-Switch (vSwitch_Span) so konfigurieren, dass der gesamte Datenverkehr, der an den externen Quellport gesendet wird, an den virtuellen Netzwerkadapter weitergeleitet wird, den Sie als Ziel konfiguriert haben.
Verwenden Sie die folgenden PowerShell-Befehle, um den Port des externen virtuellen Switches auf den Quellspiegelungsmodus festzulegen:
```bash
$ExtPortFeature=Get-VMSystemSwitchExtensionPortFeature -FeatureName "Ethernet Switch Port Security Settings"
$ExtPortFeature.SettingData.MonitorMode=2
Add-VMSwitchExtensionPortFeature -ExternalPort -SwitchName vSwitch_Span -VMSwitchExtensionFeature $ExtPortFeature
```
| Parameter | BESCHREIBUNG |
|--|--|
| vSwitch_Span | Name des neu hinzugefügten virtuellen SPAN-Switches |
| MonitorMode=2 | `Source` (Quelle) |
| MonitorMode=1 | `Destination` (Ziel) |
| MonitorMode=0 | `None` (Keine) |
Verwenden Sie den folgenden PowerShell-Befehl, um den Status des Überwachungsmodus zu überprüfen:
```bash
Get-VMSwitchExtensionPortFeature -FeatureName "Ethernet Switch Port Security Settings" -SwitchName vSwitch_Span -ExternalPort | select -ExpandProperty SettingData
```
| Parameter | BESCHREIBUNG |
|--|--|
| vSwitch_Span | Name des neu hinzugefügten virtuellen SPAN-Switches |
## <a name="access-sensors-from-the-on-premises-management-console"></a>Zugreifen auf Sensoren über die lokale Verwaltungskonsole
Sie können die Systemsicherheit verbessern, indem Sie den direkten Benutzerzugriff auf den Sensor verhindern. Verwenden Sie stattdessen Proxy-Tunnelung, damit Benutzer mit einer einzigen Firewallregel über die lokale Verwaltungskonsole auf den Sensor zugreifen können. Diese Vorgehensweise schränkt die Möglichkeit eines nicht autorisierten Zugriffs auf die Netzwerkumgebung außerhalb des Sensors ein. Die Benutzeroberfläche bleibt bei der Anmeldung beim Sensor unverändert.
:::image type="content" source="media/tutorial-install-components/sensor-system-graph.png" alt-text="Screenshot des Zugriffs auf den Sensor":::
**So aktivieren Sie Tunnelung**:
1. Melden Sie sich bei der Befehlszeilenschnittstelle der lokalen Verwaltungskonsole mit den Benutzeranmeldeinformationen für **CyberX** oder **Support** an.
1. Geben Sie `sudo cyberx-management-tunnel-enable` ein.
1. Drücken Sie die **EINGABETASTE**.
1. Geben Sie `--port 10000` ein.
### <a name="next-steps"></a>Nächste Schritte
[Einrichten Ihres Netzwerks](how-to-set-up-your-network.md)
| 51.098435 | 813 | 0.762314 | deu_Latn | 0.993344 |
93f11f77cca137ad08152f9111ae82f4b4e7f861 | 34,523 | md | Markdown | articles/azure-monitor/logs/private-link-security.md | Kraviecc/azure-docs.pl-pl | 4fffea2e214711aa49a9bbb8759d2b9cf1b74ae7 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/azure-monitor/logs/private-link-security.md | Kraviecc/azure-docs.pl-pl | 4fffea2e214711aa49a9bbb8759d2b9cf1b74ae7 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/azure-monitor/logs/private-link-security.md | Kraviecc/azure-docs.pl-pl | 4fffea2e214711aa49a9bbb8759d2b9cf1b74ae7 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Używanie usługi Azure Private Link do bezpiecznego łączenia sieci z usługą Azure Monitor
description: Używanie usługi Azure Private Link do bezpiecznego łączenia sieci z usługą Azure Monitor
author: noakup
ms.author: noakuper
ms.topic: conceptual
ms.date: 10/05/2020
ms.openlocfilehash: 65af5810152034fd7b6014041edd07835eebd194
ms.sourcegitcommit: 4b7a53cca4197db8166874831b9f93f716e38e30
ms.translationtype: MT
ms.contentlocale: pl-PL
ms.lasthandoff: 03/04/2021
ms.locfileid: "102101481"
---
# <a name="use-azure-private-link-to-securely-connect-networks-to-azure-monitor"></a>Używanie usługi Azure Private Link do bezpiecznego łączenia sieci z usługą Azure Monitor
[Link prywatny platformy Azure](../../private-link/private-link-overview.md) umożliwia bezpieczne łączenie usług Azure PaaS z siecią wirtualną za pomocą prywatnych punktów końcowych. W przypadku wielu usług wystarczy skonfigurować punkt końcowy dla każdego zasobu. Usługa Azure Monitor to jednak konstelacja różnych połączonych usług, które współpracują ze sobą w celu monitorowania obciążeń. W związku z tym utworzyliśmy zasób o nazwie Azure Monitor prywatny zakres linków (AMPLS). AMPLS umożliwia zdefiniowanie granic sieci monitorowania i nawiązanie połączenia z siecią wirtualną. W tym artykule omówiono, kiedy należy używać tego zasobu i jak skonfigurować prywatny zakres linków usługi Azure Monitor.
## <a name="advantages"></a>Zalety
Za pomocą linku prywatnego można:
- Połącz się prywatnie, aby Azure Monitor bez otwierania dostępu do sieci publicznej
- Upewnij się, że dane monitorowania są dostępne tylko za pośrednictwem autoryzowanych sieci prywatnych
- Zapobiegaj eksfiltracji danych z sieci prywatnych przez definiowanie określonych zasobów Azure Monitor, które łączą się za pośrednictwem prywatnego punktu końcowego
- Bezpieczne łączenie prywatnej sieci lokalnej w celu Azure Monitor przy użyciu linku ExpressRoute i prywatnego
- Zachowaj cały ruch w sieci szkieletowej Microsoft Azure
Aby uzyskać więcej informacji, zobacz [najważniejsze zalety linku prywatnego](../../private-link/private-link-overview.md#key-benefits).
## <a name="how-it-works"></a>Jak to działa
Azure Monitor prywatny zakres linków (AMPLS) łączy prywatne punkty końcowe (oraz sieci wirtualnych, w których są zawarte) do co najmniej jednego Azure Monitor zasobów-Log Analytics obszarów roboczych i składników Application Insights.

* Prywatny punkt końcowy w sieci wirtualnej umożliwia działowi IT dostęp do Azure Monitor punktów końcowych za pośrednictwem prywatnych adresów IP z puli sieci, zamiast korzystać z publicznych adresów IP tych punktów końcowych. Dzięki temu można nadal korzystać z zasobów Azure Monitor bez konieczności otwierania sieci wirtualnej na niewymagany ruch wychodzący.
* Ruch z prywatnego punktu końcowego do zasobów Azure Monitor przejdzie przez szkielet Microsoft Azure i nie jest kierowany do sieci publicznych.
* Można skonfigurować poszczególne obszary robocze lub składniki, aby zezwalać na pozyskiwanie i wykonywanie zapytań z sieci publicznych. Zapewnia to ochronę na poziomie zasobów, dzięki czemu można kontrolować ruch do określonych zasobów.
> [!NOTE]
> Pojedynczy zasób Azure Monitor może należeć do wielu AMPLSs, ale nie można połączyć jednej sieci wirtualnej z więcej niż jedną AMPLSą.
## <a name="planning-your-private-link-setup"></a>Planowanie konfiguracji linku prywatnego
Przed rozpoczęciem konfigurowania Azure Monitor konfiguracji łącza prywatnego należy wziąć pod uwagę topologię sieci, a w związku z tym topologię routingu DNS.
### <a name="the-issue-of-dns-overrides"></a>Problem zastąpień DNS
Niektóre usługi Azure Monitor korzystają z globalnych punktów końcowych, co oznacza, że służą one do obsługi żądań dotyczących dowolnego obszaru roboczego/składnika. Kilka przykładów to punkt końcowy pozyskiwania Application Insights i punkt końcowy zapytania obu Application Insights i Log Analytics.
Po skonfigurowaniu połączenia z linkiem prywatnym usługa DNS jest aktualizowana w celu mapowania punktów końcowych Azure Monitor na prywatne adresy IP z zakresu adresów IP sieci wirtualnej. Ta zmiana zastępuje wszystkie poprzednie mapowania tych punktów końcowych, które mogą mieć zrozumiałe konsekwencje, poniżej.
### <a name="azure-monitor-private-link-applies-to-all-azure-monitor-resources---its-all-or-nothing"></a>Azure Monitor link prywatny dotyczy wszystkich zasobów Azure Monitor — to wszystko lub nic
Ponieważ niektóre Azure Monitor punkty końcowe są globalne, nie można utworzyć połączenia prywatnego dla określonego składnika lub obszaru roboczego. Zamiast tego po skonfigurowaniu prywatnego linku do jednego składnika Application Insights rekordy DNS są aktualizowane dla **wszystkich** składników Application Insights. Każda próba pozyskania lub wypróbowania składnika spowoduje przejście przez połączenie prywatne i prawdopodobnie nie powiedzie się. Podobnie skonfigurowanie prywatnego linku do jednego obszaru roboczego spowoduje, że wszystkie zapytania Log Analytics przechodzą przez punkt końcowy zapytania łącza prywatnego (ale nie żądania pozyskania, które mają punkty końcowe specyficzne dla obszaru roboczego).

To prawda nie tylko dla określonej sieci wirtualnej, ale dla wszystkich sieci wirtualnych, które współużytkują ten sam serwer DNS (zobacz [problem zastąpień DNS](#the-issue-of-dns-overrides)). Na przykład żądanie pozyskania dzienników do dowolnego składnika Application Insights będzie zawsze wysyłane za pomocą prywatnej trasy linków. Składniki, które nie są połączone z AMPLS, nie będą mogły przeprowadzić walidacji prywatnego linku i nie przechodzą przez nie.
> [!NOTE]
> Aby dokończyć: po skonfigurowaniu połączenia prywatnego z pojedynczym zasobem zostanie ono zastosowane do wszystkich zasobów Azure Monitor w sieci — to wszystko lub nic nie rób. Dzięki temu należy dodać wszystkie zasoby Azure Monitor w sieci do AMPLS lub żadnego z nich.
### <a name="azure-monitor-private-link-applies-to-your-entire-network"></a>Azure Monitor łącze prywatne ma zastosowanie do całej sieci
Niektóre sieci składają się z wielu sieci wirtualnych. Jeśli sieci wirtualnych używają tego samego serwera DNS, zastąpią wszystkie inne mapowania DNS i możliwe będzie przerwanie komunikacji z Azure Monitor (zobacz [problem z zastąpień DNS](#the-issue-of-dns-overrides)). Ostatecznie tylko Ostatnia Sieć wirtualna będzie mogła komunikować się z Azure Monitor, ponieważ usługa DNS mapuje Azure Monitor punktów końcowych na prywatne adresy IP z tego zakresu sieci wirtualnych (które mogą nie być dostępne z innych sieci wirtualnych).

Na powyższym diagramie Sieć wirtualna 10.0.1. x najpierw łączy się z AMPLS1 i mapuje globalne punkty końcowe Azure Monitor na adresy IP z zakresu. Później Sieć wirtualna 10.0.2. x nawiązuje połączenie z AMPLS2 i zastępuje mapowanie DNS tych *samych globalnych punktów końcowych* z adresami IP z zakresu. Ponieważ te sieci wirtualnych nie są połączone za pomocą komunikacji równorzędnej, pierwsza Sieć wirtualna nie może teraz połączyć się z tymi punktami końcowymi.
> [!NOTE]
> Aby dokończyć: Konfiguracja AMPLS ma wpływ na wszystkie sieci, które współużytkują te same strefy DNS. Aby uniknąć przesłaniania mapowań punktów końcowych DNS, najlepiej jest skonfigurować pojedynczy prywatny punkt końcowy w sieci równorzędnej (takiej jak koncentrator sieci wirtualnej) lub oddzielić sieci na poziomie systemu DNS (na przykład przy użyciu usług przesyłania dalej DNS lub całkowicie oddzielnych serwerów DNS).
### <a name="hub-spoke-networks"></a>Sieci gwiazdy
Topologie gwiazdy mogą uniknąć problemu zastąpień DNS przez ustawienie prywatnego linku w sieci wirtualnej Hub (głównej), zamiast konfigurowania prywatnego linku dla każdej sieci wirtualnej. Ta konfiguracja ma sens szczególnie w przypadku udostępnienia Azure Monitor zasobów używanych przez sieci wirtualnych szprych.

> [!NOTE]
> Celowe może być utworzenie oddzielnych linków prywatnych dla satelity sieci wirtualnych, na przykład w celu zezwolenia każdej sieci wirtualnej na dostęp do ograniczonego zestawu zasobów monitorowania. W takich przypadkach można utworzyć dedykowany prywatny punkt końcowy i AMPLS dla każdej sieci wirtualnej, ale również sprawdzić, czy nie współużytkują te same strefy DNS w celu uniknięcia zastąpień DNS.
### <a name="consider-limits"></a>Uwzględnij limity
Jak wymieniono w [ograniczeniach i ograniczeniach](#restrictions-and-limitations), obiekt AMPLS ma liczbę limitów pokazanych w poniższej topologii:
* Każda sieć wirtualna nawiązuje połączenie tylko z **1** AMPLS obiektem.
* AMPLS B jest połączony z prywatnymi punktami końcowymi dwóch sieci wirtualnych (VNet2 i sieci vnet3) przy użyciu 2 z 10 możliwych połączeń prywatnych punktów końcowych.
* AMPLS A łączy się z dwoma obszarami roboczymi i jednym składnikiem usługi Application Insights, używając 3 z 50 możliwych połączeń zasobów Azure Monitor.
* Workspace2 nawiązuje połączenie z AMPLS a i AMPLS B przy użyciu 2 z 5 możliwych połączeń AMPLS.

## <a name="example-connection"></a>Przykładowe połączenie
Zacznij od utworzenia zasobu zakresu prywatnego linku Azure Monitor.
1. Przejdź do pozycji **Utwórz zasób** w Azure Portal i Wyszukaj **Azure monitor prywatny zakres linków**.

2. Wybierz pozycję **Utwórz**.
3. Wybierz subskrypcję i grupę zasobów.
4. Nadaj nazwę AMPLS. Najlepiej używać zrozumiałej i jasnej nazwy, takiej jak "AppServerProdTelem".
5. Wybierz pozycję **Przejrzyj i utwórz**.

6. Pozwól na przekazanie walidacji, a następnie wybierz pozycję **Utwórz**.
### <a name="connect-azure-monitor-resources"></a>Łączenie Azure Monitor zasobów
Połącz zasoby Azure Monitor (Log Analytics obszary robocze i składniki Application Insights) z AMPLS.
1. W zakresie prywatnego łącza Azure Monitor wybierz pozycję **Azure monitor zasoby** w menu po lewej stronie. Wybierz przycisk **Add** (Dodaj).
2. Dodaj obszar roboczy lub składnik. Wybranie przycisku **Dodaj** powoduje wyświetlenie okna dialogowego, w którym można wybrać Azure monitor zasoby. Możesz przeglądać subskrypcje i grupy zasobów lub wpisywać ich nazwy, aby filtrować do nich. Wybierz obszar roboczy lub składnik, a następnie wybierz pozycję **Zastosuj** , aby dodać je do zakresu.

> [!NOTE]
> Usuwanie zasobów Azure Monitor wymaga, aby najpierw odłączyć je od wszelkich obiektów AMPLS, z którymi są połączone. Nie jest możliwe usunięcie zasobów połączonych z AMPLS.
### <a name="connect-to-a-private-endpoint"></a>Nawiązywanie połączenia z prywatnym punktem końcowym
Teraz, gdy masz zasoby połączone z AMPLS, Utwórz prywatny punkt końcowy, aby połączyć naszą sieć. To zadanie można wykonać w ramach [prywatnego centrum linków Azure Portal](https://portal.azure.com/#blade/Microsoft_Azure_Network/PrivateLinkCenterBlade/privateendpoints)lub wewnątrz zakresu prywatnego linku Azure monitor, jak zostało to zrobione w tym przykładzie.
1. W obszarze zasób zakresu wybierz pozycję **połączenia prywatnych punktów końcowych** w menu zasobów po lewej stronie. Wybierz pozycję **prywatny punkt końcowy** , aby uruchomić proces tworzenia punktu końcowego. Możesz także zatwierdzać połączenia, które zostały uruchomione w centrum linku prywatnego, wybierając je i wybierając pozycję **Zatwierdź**.

2. Wybierz subskrypcję, grupę zasobów i nazwę punktu końcowego oraz region, w którym powinien się znajdować. Region musi być tym samym regionem, do którego jest podłączona Sieć wirtualna.
3. Wybierz pozycję **Dalej: zasób**.
4. Na ekranie zasób
a. Wybierz **subskrypcję** zawierającą zasób zakresu prywatnego Azure monitor.
b. W obszarze **Typ zasobu** wybierz pozycję **Microsoft. Insights/privateLinkScopes**.
c. Z listy rozwijanej **zasób** wybierz swój prywatny zakres linków, który został utworzony wcześniej.
d. Wybierz kolejno pozycje **Dalej: Configuration >**.

5. W okienku Konfiguracja,
a. Wybierz **sieć wirtualną** i **podsieć** , którą chcesz połączyć z zasobami Azure monitor.
b. Wybierz opcję **tak** dla **integracji z prywatną strefą DNS** i zezwól na automatyczne tworzenie nowej strefy prywatna strefa DNS. Rzeczywiste strefy DNS mogą się różnić od tego, co pokazano na poniższym zrzucie ekranu.
> [!NOTE]
> Jeśli wybierzesz opcję **nie** i wolisz ręcznie zarządzać rekordami DNS, najpierw Ukończ Konfigurowanie linku prywatnego — łącznie z tym prywatnym punktem końcowym i konfiguracją AMPLS. Następnie skonfiguruj usługę DNS zgodnie z instrukcjami zawartymi w artykule [Konfiguracja usługi DNS prywatnego punktu końcowego platformy Azure](../../private-link/private-endpoint-dns.md). Pamiętaj, aby nie tworzyć pustych rekordów podczas przygotowywania się do konfiguracji łącza prywatnego. Tworzone rekordy DNS mogą przesłaniać istniejące ustawienia i mieć wpływ na łączność z usługą Azure Monitor.
c. Wybierz pozycję **Przejrzyj i utwórz**.
d. Zezwalaj na weryfikację.
e. Wybierz przycisk **Utwórz**.

Utworzono nowy prywatny punkt końcowy, który jest połączony z tym AMPLSem.
## <a name="review-and-validate-your-private-link-setup"></a>Przejrzyj i sprawdź poprawność konfiguracji linku prywatnego
### <a name="reviewing-your-endpoints-dns-settings"></a>Przeglądanie ustawień DNS punktu końcowego
Utworzony prywatny punkt końcowy powinien mieć teraz cztery skonfigurowane strefy DNS:
[](./media/private-link-security/private-endpoint-dns-zones-expanded.png#lightbox)
* privatelink-monitor-Azure-com
* privatelink-OMS-usługi OpInsights-Azure-com
* privatelink-ODS-usługi OpInsights-Azure-com
* privatelink-agentsvc-Azure-Automation-NET
> [!NOTE]
> Każda z tych stref mapuje określone punkty końcowe Azure Monitor na prywatne adresy IP z puli adresów IP sieci wirtualnej. Adresy IP wyświetlane w poniższych obrazach są tylko przykładowe. W takiej konfiguracji należy zamiast tego pokazać prywatne adresy IP z własnej sieci.
#### <a name="privatelink-monitor-azure-com"></a>Privatelink-monitor-Azure-com
Ta strefa obejmuje globalne punkty końcowe używane przez Azure Monitor, co oznacza, że te punkty końcowe obsłużą żądania rozważające wszystkie zasoby, a nie konkretną. Ta strefa powinna mieć punkty końcowe mapowane dla:
* `in.ai` - punkt końcowy pozyskiwania Application Insights (zobaczysz wpis globalny i regionalny)
* `api` - punkt końcowy interfejsu API usług Application Insights i Log Analytics
* `live` - punkt końcowy metryk na żywo usługi Application Insights
* `profiler` - punkt końcowy profilera usługi Application Insights
* `snapshot` - punkt końcowy migawek usługi Application Insights
[](./media/private-link-security/dns-zone-privatelink-monitor-azure-com-expanded.png#lightbox)
#### <a name="privatelink-oms-opinsights-azure-com"></a>privatelink-OMS-usługi OpInsights-Azure-com
Ta strefa obejmuje mapowanie specyficzne dla obszaru roboczego do punktów końcowych pakietu OMS. Powinien zostać wyświetlony wpis dla każdego obszaru roboczego połączonego z AMPLSą połączoną z tym prywatnym punktem końcowym.
[](./media/private-link-security/dns-zone-privatelink-oms-opinsights-azure-com-expanded.png#lightbox)
#### <a name="privatelink-ods-opinsights-azure-com"></a>privatelink-ODS-usługi OpInsights-Azure-com
Ta strefa zawiera mapowanie specyficzne dla obszaru roboczego do punktów końcowych ODS — punkt końcowy pozyskiwania Log Analytics. Powinien zostać wyświetlony wpis dla każdego obszaru roboczego połączonego z AMPLSą połączoną z tym prywatnym punktem końcowym.
[](./media/private-link-security/dns-zone-privatelink-ods-opinsights-azure-com-expanded.png#lightbox)
#### <a name="privatelink-agentsvc-azure-automation-net"></a>privatelink-agentsvc-Azure-Automation-NET
Ta strefa obejmuje mapowanie specyficzne dla obszaru roboczego do punktów końcowych automatyzacji usługi agenta. Powinien zostać wyświetlony wpis dla każdego obszaru roboczego połączonego z AMPLSą połączoną z tym prywatnym punktem końcowym.
[](./media/private-link-security/dns-zone-privatelink-agentsvc-azure-automation-net-expanded.png#lightbox)
### <a name="validating-you-are-communicating-over-a-private-link"></a>Weryfikowanie komunikacji za pośrednictwem prywatnego linku
* Aby sprawdzić, czy żądania są teraz wysyłane przez prywatny punkt końcowy oraz do prywatnych punktów końcowych mapowanych przez adresy IP, możesz przejrzeć je za pomocą funkcji śledzenia sieci, a nawet przeglądarki. Na przykład podczas próby wysłania zapytania do obszaru roboczego lub aplikacji upewnij się, że żądanie jest wysyłane do prywatnego adresu IP zamapowanego na punkt końcowy interfejsu API, w tym przykładzie jest to *172.17.0.9*.
Uwaga: Niektóre przeglądarki mogą używać innych ustawień DNS (zobacz [Ustawienia DNS przeglądarki](#browser-dns-settings)). Upewnij się, że ustawienia DNS są stosowane.
* Aby upewnić się, że obszar roboczy lub składnik nie odbiera żądań z sieci publicznych (niepołączonych przez AMPLS), ustaw dla zasobu flagi publicznego pozyskiwania i publicznych zapytań na wartość *Nie*, tak jak wyjaśniono w temacie [Zarządzanie dostępem spoza zakresów linków prywatnych](#manage-access-from-outside-of-private-links-scopes).
* Z poziomu klienta w chronionej sieci, użyj `nslookup` do dowolnego punktu końcowego wymienionego w strefach DNS. Powinien on zostać rozwiązany przez serwer DNS z mapowanymi prywatnymi adresami IP zamiast publicznymi adresami IP używanymi domyślnie.
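Na przykład (identyfikator obszaru roboczego i adres IP są tutaj wyłącznie przykładowe):
```bash
# Sprawdzenie, czy punkt końcowy pozyskiwania obszaru roboczego jest rozpoznawany
# na prywatny adres IP z zakresu sieci wirtualnej (a nie na publiczny adres IP)
nslookup <identyfikator-obszaru-roboczego>.ods.opinsights.azure.com
# Oczekiwany wynik: prywatny adres IP, np. 172.17.0.x
```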
## <a name="configure-log-analytics"></a>Konfigurowanie usługi Log Analytics
Przejdź do witryny Azure Portal. W menu zasobów obszaru roboczego Log Analytics istnieje element o nazwie **izolacja sieci** po lewej stronie. W tym menu można kontrolować dwa różne stany.

### <a name="connected-azure-monitor-private-link-scopes"></a>Połączone Azure Monitor zakresy linków prywatnych
Wszystkie zakresy połączone z obszarem roboczym są wyświetlane na tym ekranie. Łączenie z zakresami (AMPLSs) zezwala na ruch sieciowy z sieci wirtualnej połączonej z każdym AMPLS, aby dotrzeć do tego obszaru roboczego. Utworzenie połączenia w tym miejscu ma taki sam skutek jak ustawienie go w zakresie, tak jak w przypadku [łączenia Azure Monitor zasobów](#connect-azure-monitor-resources). Aby dodać nowe połączenie, wybierz opcję **Dodaj** i wybierz prywatny zakres linków Azure monitor. Wybierz pozycję **Zastosuj** , aby nawiązać połączenie. Należy pamiętać, że obszar roboczy może połączyć się z 5 AMPLS obiektów, jak wspomniano w [ograniczeniach i ograniczeniach](#restrictions-and-limitations).
### <a name="manage-access-from-outside-of-private-links-scopes"></a>Zarządzanie dostępem spoza zakresów linków prywatnych
Ustawienia w dolnej części tej strony kontrolują dostęp z sieci publicznych, czyli sieci, które nie są połączone za pośrednictwem wymienionych powyżej zakresów. Ustawienie opcji **Zezwalaj na dostęp do sieci publicznej na potrzeby pozyskiwania** na wartość **Nie** blokuje pozyskiwanie dzienników z maszyn spoza połączonych zakresów. Ustawienie opcji **Zezwalaj na dostęp do sieci publicznej dla zapytań** na wartość **Nie** blokuje zapytania pochodzące z maszyn spoza zakresów. Obejmuje to zapytania uruchamiane za pośrednictwem skoroszytów, pulpitów nawigacyjnych, środowisk klienckich opartych na interfejsie API, szczegółowych informacji w Azure Portal i innych. Środowiska działające poza Azure Portal, które wykonują zapytania dotyczące danych usługi Log Analytics, muszą działać w połączonej prywatnej sieci wirtualnej.
### <a name="exceptions"></a>Wyjątki
Ograniczanie dostępu, jak wyjaśniono powyżej, nie ma zastosowania do Azure Resource Manager i dlatego ma następujące ograniczenia:
* Dostęp do danych — podczas blokowania/zezwalania na zapytania z sieci publicznych stosuje się do większości Log Analyticsych środowisk, ale niektóre z nich wykonują zapytania dotyczące danych za pośrednictwem Azure Resource Manager i w związku z tym nie będą mogły wysyłać zapytań o dane, chyba że prywatne ustawienia linku są stosowane również do Menedżer zasobów (funkcja jest dostępna wkrótce). Przykłady to Azure Monitor rozwiązania, skoroszyty i szczegółowe informacje oraz łącznik LogicApp.
* Zarządzanie obszarem roboczym — ustawienia obszaru roboczego i zmiany konfiguracji (w tym Włączanie lub wyłączanie tych ustawień dostępu) są zarządzane przez Azure Resource Manager. Ogranicz dostęp do zarządzania obszarami roboczymi przy użyciu odpowiednich ról, uprawnień, kontroli sieci i inspekcji. Aby uzyskać więcej informacji, zobacz [Azure monitor role, uprawnienia i zabezpieczenia](../roles-permissions-security.md).
> [!NOTE]
> Dzienniki i metryki przekazane do obszaru roboczego za pośrednictwem [ustawień diagnostycznych](../essentials/diagnostic-settings.md) korzystają z bezpiecznego prywatnego kanału firmy Microsoft i nie są kontrolowane przez te ustawienia.
### <a name="log-analytics-solution-packs-download"></a>Pobieranie pakietów rozwiązań Log Analytics
Aby zezwolić agentowi Log Analytics na pobieranie pakietów rozwiązań, Dodaj odpowiednie nazwy FQDN do listy dozwolonych zapór.
| Środowisko chmury | Zasób agenta | Porty | Kierunek |
|:--|:--|:--|:--|
|Azure — publiczna | scadvisorcontent.blob.core.windows.net | 443 | Wychodzący
|Azure Government | usbn1oicore.blob.core.usgovcloudapi.net | 443 | Wychodzący
|Azure w Chinach — 21Vianet | mceast2oicore.blob.core.chinacloudapi.cn| 443 | Wychodzący
## <a name="configure-application-insights"></a>Konfigurowanie usługi Application Insights
Przejdź do witryny Azure Portal. W Azure Monitor Application Insights zasobów składnika jest **izolacja sieci** elementu menu po lewej stronie. W tym menu można kontrolować dwa różne stany.

Najpierw można podłączyć ten zasób Application Insights do Azure Monitor prywatnych zakresów łączy, do których masz dostęp. Wybierz pozycję **Dodaj** i wybierz **prywatny zakres linków Azure monitor**. Wybierz pozycję Zastosuj, aby nawiązać połączenie. Wszystkie połączone zakresy są wyświetlane na tym ekranie. Nawiązanie tego połączenia zezwala na ruch sieciowy podłączonych sieci wirtualnych do tego składnika i ma ten sam efekt, co połączenie z zakresem, tak jak w przypadku [łączenia Azure Monitor zasobów](#connect-azure-monitor-resources).
Następnie można kontrolować sposób, w jaki można uzyskać dostęp do tego zasobu poza zakresem prywatnych linków (AMPLS) wymienionym wcześniej. Jeśli ustawisz opcję **Zezwalaj na dostęp do sieci publicznej na potrzeby** pozyskiwania **nie**, wówczas maszyny lub zestawy SDK poza połączonymi zakresami nie mogą przekazywać danych do tego składnika. Jeśli ustawisz opcję **Zezwalaj na dostęp do sieci publicznej dla zapytań** na wartość **nie**, wówczas maszyny spoza zakresów nie mogą uzyskać dostępu do danych w tym zasobie Application Insights. Te dane obejmują dostęp do dzienników APM, metryk i strumienia metryk na żywo, a także wbudowanych środowisk, takich jak skoroszyty, pulpity nawigacyjne, zapytania dotyczące środowiska klienta opartego na interfejsie API, szczegółowe informacje w Azure Portal i inne.
> [!NOTE]
> Środowiska użycia poza portalem również muszą być uruchamiane w połączeniu z prywatną siecią wirtualną, która obejmuje monitorowane obciążenia.
Do prywatnego linku należy dodać zasoby obsługujące monitorowane obciążenia. Na przykład zobacz [Używanie prywatnych punktów końcowych dla aplikacji internetowej platformy Azure](../../app-service/networking/private-endpoint.md).
Ograniczanie dostępu w ten sposób dotyczy tylko danych w zasobie Application Insights. Jednak zmiany w konfiguracji, w tym włączenie lub wyłączenie tych ustawień dostępu, są zarządzane przez Azure Resource Manager. Dlatego należy ograniczyć dostęp do Menedżer zasobów przy użyciu odpowiednich ról, uprawnień, kontroli sieci i inspekcji. Aby uzyskać więcej informacji, zobacz [Azure monitor role, uprawnienia i zabezpieczenia](../roles-permissions-security.md).
> [!NOTE]
> Aby w pełni zabezpieczyć Application Insights oparte na obszarze roboczym, musisz zablokować zarówno dostęp do zasobu Application Insights, jak i bazowego obszaru roboczego Log Analytics.
>
> Diagnostyka na poziomie kodu (Profiler/debuger) wymaga [podania własnego konta magazynu](../app/profiler-bring-your-own-storage.md) do obsługi linku prywatnego.
### <a name="handling-the-all-or-nothing-nature-of-private-links"></a>Obsługa wszystkich rodzajów prywatnych linków lub nie
Zgodnie z opisem w temacie [Planowanie konfiguracji linku prywatnego](#planning-your-private-link-setup), skonfigurowanie prywatnego linku nawet dla pojedynczego zasobu ma wpływ na wszystkie zasoby Azure monitor w tych sieciach oraz w innych sieciach, które współużytkują ten sam serwer DNS. Takie zachowanie może sprawiać, że proces dołączania jest wyzwaniem. Należy wziąć pod uwagę następujące opcje:
* Wszystko to najprostsze i najbezpieczniejsze podejście polega na dodaniu wszystkich składników Application Insights do AMPLS. W przypadku składników, do których nadal uzyskuje się dostęp z innych sieci, pozostaw flagi "Zezwalaj na publiczny dostęp do Internetu na potrzeby pozyskiwania/wykonywania zapytań" ustawioną na wartość tak (domyślnie).
* Izoluj sieci — Jeśli masz (lub możesz ją wyrównać) przy użyciu sieci wirtualnych szprychy, postępuj zgodnie ze wskazówkami w [topologii sieci Hub i szprych na platformie Azure](/azure/architecture/reference-architectures/hybrid-networking/hub-spoke). Następnie skonfiguruj osobne ustawienia linku prywatnego w odpowiedniej sieci wirtualnych szprych. Upewnij się, że strefy DNS są również rozdzielone, ponieważ współużytkowanie stref DNS z innymi sieciami szprych spowoduje [przesłonięcie DNS](#the-issue-of-dns-overrides).
* Używanie niestandardowych stref DNS dla określonych aplikacji — to rozwiązanie umożliwia dostęp do wybranych składników Application Insights za pośrednictwem prywatnego linku, zachowując jednocześnie cały ruch w ramach tras publicznych.
- Skonfiguruj [niestandardową prywatną strefę DNS](../../private-link/private-endpoint-dns.md)i nadaj jej unikatową nazwę, taką jak Internal.Monitor.Azure.com
- Utwórz AMPLS i prywatny punkt końcowy, a następnie wybierz opcję " **nie** Integruj" z prywatną usługą DNS
- Przejdź do pozycji prywatny punkt końcowy — > Konfiguracja DNS i przejrzyj sugerowane Mapowanie nazw FQDN.
- Wybierz opcję dodania konfiguracji i wybrania właśnie utworzonej strefy internal.monitor.azure.com
- Dodaj rekordy dla powyższego 
- Przejdź do składnika Application Insights i skopiuj jego [Parametry połączenia](../app/sdk-connection-string.md).
- Aplikacje lub skrypty, które chcą wywołać ten składnik za pośrednictwem prywatnego linku, powinny używać parametrów połączenia z EndpointSuffix = Internal. Monitor. Azure. com
* Mapuj punkty końcowe za pośrednictwem plików Hosts zamiast systemu DNS — Aby uzyskać dostęp do prywatnego linku tylko z określonej maszyny/maszyny wirtualnej w sieci:
- Skonfiguruj AMPLS i prywatny punkt końcowy, a następnie wybierz pozycję **nie** należy przeprowadzać autointegracji z prywatnym systemem DNS
- Skonfiguruj powyższe rekordy A na komputerze, na którym jest uruchamiana aplikacja w pliku hosts
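Przykładowe wpisy pliku hosts dla takiego podejścia (nazwy FQDN i adresy IP są wyłącznie poglądowe; użyj mapowań sugerowanych w konfiguracji DNS własnego prywatnego punktu końcowego):
```text
# /etc/hosts lub C:\Windows\System32\drivers\etc\hosts
172.17.0.9    api.monitor.azure.com
172.17.0.10   <identyfikator-obszaru-roboczego>.ods.opinsights.azure.com
```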
## <a name="use-apis-and-command-line"></a>Korzystanie z interfejsów API i wiersza polecenia
Można zautomatyzować proces opisany wcześniej przy użyciu Azure Resource Manager szablonów, REST i interfejsów wiersza polecenia.
Aby utworzyć prywatne zakresy linków i zarządzać nimi, użyj [interfejsu API REST](/rest/api/monitor/private%20link%20scopes%20(preview)) lub [interfejsu wiersza polecenia platformy Azure (az monitor private-link-scope)](/cli/azure/monitor/private-link-scope).
Aby zarządzać dostępem do sieci, użyj flag `[--ingestion-access {Disabled, Enabled}]` i `[--query-access {Disabled, Enabled}]` na [log Analytics obszarach roboczych](/cli/azure/monitor/log-analytics/workspace) lub [składników Application Insights](/cli/azure/ext/application-insights/monitor/app-insights/component).
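Poniższy szkic pokazuje, jak mogłoby to wyglądać w interfejsie wiersza polecenia platformy Azure (nazwy zasobów, grupy zasobów i identyfikatory są wyłącznie przykładowe):
```azurecli
# Utworzenie zakresu AMPLS (nazwy przykładowe)
az monitor private-link-scope create --name MyScope --resource-group MyResourceGroup

# Podłączenie obszaru roboczego Log Analytics do zakresu
az monitor private-link-scope scoped-resource create \
  --name MyWorkspaceConnection \
  --resource-group MyResourceGroup \
  --scope-name MyScope \
  --linked-resource "/subscriptions/<identyfikator-subskrypcji>/resourceGroups/MyResourceGroup/providers/Microsoft.OperationalInsights/workspaces/MyWorkspace"

# Zablokowanie publicznego pozyskiwania i publicznych zapytań dla obszaru roboczego
az monitor log-analytics workspace update \
  --resource-group MyResourceGroup \
  --workspace-name MyWorkspace \
  --ingestion-access Disabled \
  --query-access Disabled
```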
## <a name="collect-custom-logs-and-iis-log-over-private-link"></a>Zbieranie niestandardowych dzienników i dzienników usług IIS przez łącze prywatne
Konta magazynu są używane w procesie pozyskiwania dzienników niestandardowych. Domyślnie są używane konta magazynu zarządzane przez usługę. Jednak w przypadku linków prywatnych można korzystać z własnych kont magazynu i kojarzyć je z obszarami roboczymi Log Analytics. Zobacz więcej szczegółowych informacji na temat konfigurowania takich kont przy użyciu [wiersza polecenia](/cli/azure/monitor/log-analytics/workspace/linked-storage).
Aby uzyskać więcej informacji na temat przełączania własnych kont magazynu, zobacz [konta magazynu należące do klienta na potrzeby](private-storage.md) pozyskiwania dziennika
## <a name="restrictions-and-limitations"></a>Ograniczenia
### <a name="ampls"></a>AMPLS
Obiekt AMPLS ma liczbę limitów, które należy wziąć pod uwagę podczas planowania konfiguracji linku prywatnego:
* Sieć wirtualna może łączyć się tylko z 1 AMPLS obiektem. Oznacza to, że obiekt AMPLS musi zapewnić dostęp do wszystkich zasobów Azure Monitor, do których sieć wirtualna powinna mieć dostęp.
* Zasób Azure Monitor (składnik obszaru roboczego lub Application Insights) może łączyć się z 5 AMPLSs.
* Obiekt AMPLS może łączyć się z zasobami 50 Azure Monitor.
* Obiekt AMPLS może łączyć się z 10 prywatnymi punktami końcowymi.
Zobacz [limity](#consider-limits) dotyczące dokładniejszego przeglądu tych limitów.
### <a name="agents"></a>Agenci
Najnowsze wersje agentów systemów Windows i Linux muszą być używane do obsługi bezpiecznego pozyskiwania Log Analytics obszarów roboczych. Starsze wersje nie mogą przekazywać danych monitorowania za pośrednictwem sieci prywatnej.
**Agent usługi Log Analytics dla systemu Windows**
Użyj agenta Log Analytics w wersji 10.20.18038.0 lub nowszej.
**Agent usługi Log Analytics dla systemu Linux**
Użyj agenta w wersji 1.12.25 lub nowszej. Jeśli nie możesz, uruchom następujące polecenia na maszynie wirtualnej.
```bash
$ sudo /opt/microsoft/omsagent/bin/omsadmin.sh -X
$ sudo /opt/microsoft/omsagent/bin/omsadmin.sh -w <workspace id> -s <workspace key>
```
### <a name="azure-portal"></a>Azure Portal
Aby korzystać ze środowisk Azure Monitor w witrynie Azure Portal, takich jak Application Insights i Log Analytics, należy zezwolić na dostęp rozszerzeń Azure Portal i Azure Monitor w sieciach prywatnych. Dodaj [znaczniki usługi](../../firewall/service-tags.md) **AzureActiveDirectory**, **AzureResourceManager**, **AzureFrontDoor.FirstParty** i **AzureFrontDoor.Frontend** do sieciowej grupy zabezpieczeń.
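Przykładowa reguła sieciowej grupy zabezpieczeń dla jednego z tych tagów (szkic; nazwy, priorytet i kierunek dopasuj do własnej konfiguracji, a pozostałe tagi dodaj analogicznie):
```azurecli
az network nsg rule create \
  --resource-group MyResourceGroup \
  --nsg-name MyNsg \
  --name AllowAzureResourceManager \
  --priority 100 \
  --direction Outbound \
  --access Allow \
  --protocol Tcp \
  --destination-port-ranges 443 \
  --destination-address-prefixes AzureResourceManager
```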
### <a name="querying-data"></a>Wykonywanie zapytań na danych
[ `externaldata` Operator](/azure/data-explorer/kusto/query/externaldata-operator?pivots=azuremonitor) nie jest obsługiwany przez link prywatny, ponieważ odczytuje dane z kont magazynu, ale nie gwarantuje, że magazyn jest dostępny prywatnie.
### <a name="programmatic-access"></a>Dostęp programowy
Aby użyć interfejsu API REST, [interfejsu wiersza polecenia](/cli/azure/monitor) lub programu PowerShell z usługą Azure Monitor w sieciach prywatnych, należy dodać do zapory [tagi usług](../../virtual-network/service-tags-overview.md) **AzureActiveDirectory** i **AzureResourceManager**.
### <a name="application-insights-sdk-downloads-from-a-content-delivery-network"></a>Application Insights pobierania zestawu SDK z usługi Content Delivery Network
Powiąż kod JavaScript w skrypcie, aby przeglądarka nie próbowała pobrać kodu z sieci CDN. Przykład jest dostępny w witrynie [GitHub](https://github.com/microsoft/ApplicationInsights-JS#npm-setup-ignore-if-using-snippet-setup)
### <a name="browser-dns-settings"></a>Ustawienia usługi DNS przeglądarki
W przypadku łączenia się z zasobami Azure Monitor za pośrednictwem prywatnego linku ruch do tych zasobów musi przechodzić przez prywatny punkt końcowy skonfigurowany w sieci. Aby włączyć prywatny punkt końcowy, zaktualizuj ustawienia DNS zgodnie z opisem w temacie [Connect to Private Endpoint](#connect-to-a-private-endpoint). Niektóre przeglądarki używają własnych ustawień DNS zamiast ustawionych przez użytkownika. Przeglądarka może próbować nawiązać połączenie z Azure Monitor publicznymi punktami końcowymi i całkowicie ominąć prywatny link. Sprawdź, czy ustawienia przeglądarki nie przesłaniają ani nie buforują starych ustawień DNS.
## <a name="next-steps"></a>Następne kroki
- Informacje o [magazynie prywatnym](private-storage.md) | 100.650146 | 812 | 0.810156 | pol_Latn | 0.999932 |
93f13ece8ee7070cd4f1e6c5b37874af05a5f0f8 | 10,788 | md | Markdown | articles/cognitive-services/bing-visual-search/quickstarts/csharp.md | klmnden/azure-docs.tr-tr | 8e1ac7aa3bb717cd24e1bc2612e745aa9d7aa6b6 | [
"CC-BY-4.0",
"MIT"
] | 2 | 2019-08-10T02:23:39.000Z | 2019-08-10T02:23:40.000Z | articles/cognitive-services/bing-visual-search/quickstarts/csharp.md | klmnden/azure-docs.tr-tr | 8e1ac7aa3bb717cd24e1bc2612e745aa9d7aa6b6 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/cognitive-services/bing-visual-search/quickstarts/csharp.md | klmnden/azure-docs.tr-tr | 8e1ac7aa3bb717cd24e1bc2612e745aa9d7aa6b6 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: "Hızlı Başlangıç: Bing görsel arama REST API'si ve C# kullanarak görüntü öngörüleri elde edin"
titleSuffix: Azure Cognitive Services
description: Bing görsel arama API'sine bir görüntüyü karşıya yükleme ve ilgili Öngörüler alma hakkında bilgi edinin.
services: cognitive-services
author: swhite-msft
manager: nitinme
ms.service: cognitive-services
ms.subservice: bing-visual-search
ms.topic: quickstart
ms.date: 04/26/2019
ms.author: scottwhi
ms.openlocfilehash: b1518af9c37ffe0b8175e741b363d79941e3caaf
ms.sourcegitcommit: 67625c53d466c7b04993e995a0d5f87acf7da121
ms.translationtype: MT
ms.contentlocale: tr-TR
ms.lasthandoff: 05/20/2019
ms.locfileid: "65905709"
---
# <a name="quickstart-get-image-insights-using-the-bing-visual-search-rest-api-and-c"></a>Hızlı Başlangıç: Bing görsel arama REST API'si ve C# kullanarak görüntü öngörüleri elde edin
Bu hızlı başlangıçta, Bing görsel arama API'sine bir görüntünün nasıl karşıya yükleneceği ve API'nin döndürdüğü öngörülerin nasıl görüntüleneceği gösterilmektedir.
## <a name="prerequisites"></a>Önkoşullar
* Herhangi bir sürümünü [Visual Studio 2019](https://www.visualstudio.com/downloads/).
* [Json.NET framework](https://www.newtonsoft.com/json), bir NuGet paketi olarak kullanılabilir.
* Linux/MacOS kullanıyorsanız, bu uygulamayı [Mono](https://www.mono-project.com/) kullanarak çalıştırabilirsiniz.
[!INCLUDE [cognitive-services-bing-visual-search-signup-requirements](../../../../includes/cognitive-services-bing-visual-search-signup-requirements.md)]
## <a name="create-and-initialize-a-project"></a>Proje oluşturma ve başlatma
1. Visual Studio'da BingSearchApisQuickStart adlı yeni bir konsol çözümü oluşturun. Aşağıdaki ad alanlarını ana kod dosyasına ekleyin:
```csharp
using System;
using System.Text;
using System.Net;
using System.IO;
    using System.Collections.Generic;
    // Not: Aşağıdaki ad alanları, bu makalenin sonundaki HttpClient tabanlı sürüm için gereklidir.
    using System.Net.Http;
    using System.Threading;
    using System.Threading.Tasks;
```
2. Karşıya yüklemek istediğiniz görüntüye abonelik anahtarınızı, uç noktası ve yol değişkenleri ekleyin:
```csharp
const string accessKey = "<my_subscription_key>";
const string uriBase = "https://api.cognitive.microsoft.com/bing/v7.0/images/visualsearch";
static string imagePath = @"<path_to_image>";
```
3. Görüntü yolundan dosya adını almak için `GetImageFileName()` adlı bir yöntem oluşturun:
```csharp
static string GetImageFileName(string path)
{
return new FileInfo(path).Name;
}
```
4. Görüntünün ikili verileri almak için bir yöntem oluşturun:
```csharp
static byte[] GetImageBinary(string path)
{
return File.ReadAllBytes(path);
}
```
## <a name="build-the-form-data"></a>Form verileri oluşturun
Yerel bir görüntüyü karşıya yüklemek için öncelikle API'sine göndermek için form verileri oluşturun. Form verileri içermelidir `Content-Disposition` üst bilgi, kendi `name` parametresi ayarlanması gerekir "Görüntü" ve `filename` herhangi bir dize parametresi ayarlanabilir. Form içeriğini görüntünün ikili verileri içerir. Karşıya yüklediğiniz en yüksek görüntü boyutu 1 MB'dir.
```
--boundary_1234-abcd
Content-Disposition: form-data; name="image"; filename="myimagefile.jpg"
ÿØÿà JFIF ÖÆ68g-¤CWŸþ29ÌÄøÖ‘º«™æ±èuZiÀ)"óÓß°Î= ØJ9á+*G¦...
--boundary_1234-abcd--
```
1. POST form verilerini biçimlendirmek için sınır dizeleri ekleyin. Sınır dizeleri, veriler için başlangıç, bitiş ve yeni satır karakterlerini belirler:
```csharp
// Boundary strings for form data in body of POST.
const string CRLF = "\r\n";
static string BoundaryTemplate = "batch_{0}";
static string StartBoundaryTemplate = "--{0}";
static string EndBoundaryTemplate = "--{0}--";
```
2. Form parametreleri eklemek için aşağıdaki değişkenleri kullanın:
```csharp
const string CONTENT_TYPE_HEADER_PARAMS = "multipart/form-data; boundary={0}";
const string POST_BODY_DISPOSITION_HEADER = "Content-Disposition: form-data; name=\"image\"; filename=\"{0}\"" + CRLF +CRLF;
```
3. Başlangıç sınır dizesini ve görüntü dosyasının adını kullanarak form verilerinin başlangıcını oluşturmak için `BuildFormDataStart()` adlı bir işlev oluşturun:
```csharp
static string BuildFormDataStart(string boundary, string filename)
{
var startBoundary = string.Format(StartBoundaryTemplate, boundary);
var requestBody = startBoundary + CRLF;
requestBody += string.Format(POST_BODY_DISPOSITION_HEADER, filename);
return requestBody;
}
```
4. Sınır dizesini kullanarak form verilerinin sonunu oluşturmak için `BuildFormDataEnd()` adlı bir işlev oluşturun:
```csharp
static string BuildFormDataEnd(string boundary)
{
return CRLF + CRLF + string.Format(EndBoundaryTemplate, boundary) + CRLF;
}
```
## <a name="call-the-bing-visual-search-api"></a>Bing görsel arama API'si çağırma
1. Bing görsel arama uç noktasını çağırmak ve JSON yanıtını döndürmek için bir işlev oluşturun. İşlev; form verilerinin başlangıcını ve sonunu, görüntü verilerini içeren bir bayt dizisini ve bir `contentType` değerini alır.
2. URI'yi, contentType değerini ve üst bilgileri depolamak için bir `WebRequest` kullanın.
3. Form ve görüntü verilerini yazmak için `request.GetRequestStream()` yöntemini kullanın ve ardından yanıtı alın. İşleviniz aşağıdakine benzer olmalıdır:
```csharp
static string BingImageSearch(string startFormData, string endFormData, byte[] image, string contentTypeValue)
{
WebRequest request = HttpWebRequest.Create(uriBase);
request.ContentType = contentTypeValue;
request.Headers["Ocp-Apim-Subscription-Key"] = accessKey;
request.Method = "POST";
// Writes the boundary and Content-Disposition header, then writes
// the image binary, and finishes by writing the closing boundary.
using (Stream requestStream = request.GetRequestStream())
{
StreamWriter writer = new StreamWriter(requestStream);
writer.Write(startFormData);
writer.Flush();
requestStream.Write(image, 0, image.Length);
writer.Write(endFormData);
writer.Flush();
writer.Close();
}
HttpWebResponse response = (HttpWebResponse)request.GetResponseAsync().Result;
string json = new StreamReader(response.GetResponseStream()).ReadToEnd();
return json;
}
```
## <a name="create-the-main-method"></a>Main yöntemi oluşturma
1. `Main` yönteminde, görüntünüzün dosya adını ve ikili verilerini alın:
```csharp
var filename = GetImageFileName(imagePath);
var imageBinary = GetImageBinary(imagePath);
```
2. POST gövdesini sınır biçimlendirme olarak ayarlayın. Ardından çağırın `startFormData()` ve `endFormData` form verilerini oluşturmak için:
```csharp
// Set up POST body.
var boundary = string.Format(BoundaryTemplate, Guid.NewGuid());
var startFormData = BuildFormDataStart(boundary, filename);
var endFormData = BuildFormDataEnd(boundary);
```
3. `CONTENT_TYPE_HEADER_PARAMS` değerini ve form verisi sınırını biçimlendirerek `ContentType` değerini oluşturun:
```csharp
var contentTypeHdrValue = string.Format(CONTENT_TYPE_HEADER_PARAMS, boundary);
```
4. `BingImageSearch()` işlevini çağırarak API yanıtını alın ve yanıtı yazdırın:
```csharp
var json = BingImageSearch(startFormData, endFormData, imageBinary, contentTypeHdrValue);
Console.WriteLine(json);
Console.WriteLine("enter any key to continue");
    Console.ReadKey();
```
## <a name="using-httpclient"></a>HttpClient kullanma
`HttpClient` kullanıyorsanız, form verilerini oluşturmak için `MultipartFormDataContent` sınıfını kullanabilirsiniz. Önceki örnekteki ilgili yöntemleri değiştirmek için yalnızca aşağıdaki kod bölümlerini kullanın.
`Main` yöntemini aşağıdaki kodla değiştirin. (Not: `IsImagePathSet` ve `JsonPrettyPrint`, bu makalede tanımlanmayan yardımcı yöntemlerdir; kendi kodunuzda bunları tanımlamanız veya ilgili satırları uyarlamanız gerekir.)
```csharp
static void Main()
{
try
{
Console.OutputEncoding = System.Text.Encoding.UTF8;
if (accessKey.Length == 32)
{
if (IsImagePathSet(imagePath))
{
var filename = GetImageFileName(imagePath);
Console.WriteLine("Getting image insights for image: " + filename);
var imageBinary = GetImageBinary(imagePath);
var boundary = string.Format(BoundaryTemplate, Guid.NewGuid());
var json = BingImageSearch(imageBinary, boundary, uriBase, accessKey);
Console.WriteLine("\nJSON Response:\n");
Console.WriteLine(JsonPrettyPrint(json));
}
}
else
{
Console.WriteLine("Invalid Bing Visual Search API subscription key!");
Console.WriteLine("Please paste yours into the source code.");
}
Console.Write("\nPress Enter to exit ");
Console.ReadLine();
}
catch (Exception e)
{
Console.WriteLine(e.Message);
}
}
```
`BingImageSearch` yöntemini aşağıdaki kodla değiştirin:
```csharp
/// <summary>
/// Calls the Bing visual search endpoint and returns the JSON response.
/// </summary>
static string BingImageSearch(byte[] image, string boundary, string uri, string subscriptionKey)
{
var requestMessage = new HttpRequestMessage(HttpMethod.Post, uri);
requestMessage.Headers.Add("Ocp-Apim-Subscription-Key", accessKey);
var content = new MultipartFormDataContent(boundary);
content.Add(new ByteArrayContent(image), "image", "myimage");
requestMessage.Content = content;
var httpClient = new HttpClient();
Task<HttpResponseMessage> httpRequest = httpClient.SendAsync(requestMessage, HttpCompletionOption.ResponseContentRead, CancellationToken.None);
HttpResponseMessage httpResponse = httpRequest.Result;
HttpStatusCode statusCode = httpResponse.StatusCode;
HttpContent responseContent = httpResponse.Content;
string json = null;
if (responseContent != null)
{
Task<String> stringContentsTask = responseContent.ReadAsStringAsync();
json = stringContentsTask.Result;
}
return json;
}
```
## <a name="next-steps"></a>Sonraki adımlar
> [!div class="nextstepaction"]
> [Görsel arama tek sayfa web uygulaması oluşturma](../tutorial-bing-visual-search-single-page-app.md)
| 39.372263 | 378 | 0.67297 | tur_Latn | 0.933614 |
93f2d7ef67dbc8e41c252a3751fbed3163a76e5b | 29,188 | md | Markdown | articles/batch/virtual-file-mount.md | changeworld/azure-docs.de-de | 26492264ace1ad4cfdf80e5234dfed9a106e8012 | [
"CC-BY-4.0",
"MIT"
] | 1 | 2021-03-12T23:37:21.000Z | 2021-03-12T23:37:21.000Z | articles/batch/virtual-file-mount.md | changeworld/azure-docs.de-de | 26492264ace1ad4cfdf80e5234dfed9a106e8012 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/batch/virtual-file-mount.md | changeworld/azure-docs.de-de | 26492264ace1ad4cfdf80e5234dfed9a106e8012 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Einbinden eines virtuellen Dateisystems in einen Pool
description: Erfahren Sie, wie Sie ein virtuelles Dateisystem in einen Batch-Pool einbinden.
ms.topic: how-to
ms.custom: devx-track-csharp
ms.date: 11/11/2021
ms.openlocfilehash: ed83f5771a451f92c69ba5f80de7bfa7d8388778
ms.sourcegitcommit: 901ea2c2e12c5ed009f642ae8021e27d64d6741e
ms.translationtype: HT
ms.contentlocale: de-DE
ms.lasthandoff: 11/12/2021
ms.locfileid: "132369415"
---
# <a name="mount-a-virtual-file-system-on-a-batch-pool"></a>Einbinden eines virtuellen Dateisystems in einen Batch-Pool
Azure Batch unterstützt die Einbindung von Cloudspeicher oder eines externen Dateisystems auf Windows- oder Linux-Computeknoten in Ihren Batch-Pools. Wenn ein Computeknoten einem Pool beitritt, wird das virtuelle Dateisystem eingebunden und als lokales Laufwerk auf diesem Knoten behandelt.
Sie können Dateisysteme einbinden, z. B.:
- Azure Files
- Azure Blob Storage
- Network File System (NFS), einschließlich eines [Avere vFXT Caches](../avere-vfxt/avere-vfxt-overview.md)
- Common Internet File System (CIFS)
In diesem Artikel erfahren Sie, wie Sie ein virtuelles Dateisystem mithilfe der [Azure Batch-Verwaltungsbibliothek für .NET](/dotnet/api/overview/azure/batch) in einen Pool von Computeknoten einbinden.
> [!NOTE]
> Das Einbinden eines virtuellen Dateisystems wird nur in Batch-Pools unterstützt, die am oder nach dem 8. August 2019 erstellt wurden. Batch-Pools, die vor diesem Datum erstellt wurden, unterstützen dieses Feature nicht.
## <a name="benefits-of-mounting-on-a-pool"></a>Vorteile der Einbindung in einen Pool
Wenn Sie das Dateisystem in den Pool einbinden, statt Vorgänge eigene Daten aus einem großen Dataset abrufen zu lassen, können Aufgaben einfacher und effizienter auf die erforderlichen Daten zugreifen.
Stellen Sie sich ein Szenario vor, in dem mehrere Aufgaben auf einen gemeinsamen Satz von Daten zugreifen müssen. Ein Beispiel hierfür ist das Rendern eines Films. Jede Aufgabe rendert jeweils ein oder mehrere Frames aus den Szenendateien. Durch die Einbindung eines Laufwerks, das die Szenendateien enthält, können Computeknoten einfacher auf freigegebene Daten zugreifen.
Darüber hinaus kann das zugrunde liegende Dateisystem basierend auf der Leistung und Skalierung (Durchsatz und IOPS) unabhängig ausgewählt und skaliert werden, die für die Anzahl von Computeknoten erforderlich ist, die gleichzeitig auf die Daten zugreifen. Beispielsweise kann ein verteilter speicherinterner [Avere vFXT](../avere-vfxt/avere-vfxt-overview.md)-Cache verwendet werden, um große Renderings im Kinofilmmaßstab mit Tausenden von gleichzeitigen Renderknoten zu unterstützen, die auf lokal gespeicherte Quelldaten zugreifen. Für Daten, die sich bereits in einem cloudbasierten Blobspeicher befinden, kann alternativ [blobfuse](../storage/blobs/storage-how-to-mount-container-linux.md) verwendet werden, um diese Daten als lokales Dateisystem einzubinden. Blobfuse ist nur auf Linux-Knoten verfügbar. [Azure Files](../storage/files/storage-files-introduction.md) bietet jedoch einen ähnlichen Workflow und ist sowohl unter Windows als auch unter Linux verfügbar.
## <a name="mount-a-virtual-file-system-on-a-pool"></a>Einbinden eines virtuellen Dateisystems in einen Pool
Durch die Einbindung eines virtuellen Dateisystems in einen Pool wird das Dateisystem für jeden Computeknoten im Pool verfügbar. Die Konfiguration für das Dateisystem erfolgt, wenn ein Computerknoten einem Pool beitritt, neu gestartet wird oder ein Erneutes Image erstellt wird.
Um ein Dateisystem in einen Pool einzubinden, erstellen Sie ein `MountConfiguration`-Objekt. Wählen Sie das Objekt aus, das Ihrem virtuellen Dateisystem entspricht: `AzureBlobFileSystemConfiguration`, `AzureFileShareConfiguration`, `NfsMountConfiguration` oder `CifsMountConfiguration`.
Für alle MountConfiguration-Objekte sind die folgenden Basisparameter erforderlich. Einige Einbindungskonfigurationen weisen Parameter auf, die für das verwendete Dateisystem spezifisch sind. Diese werden in den Codebeispielen ausführlicher erläutert.
- **Kontoname oder Kontoquelle**: Zum Einbinden einer virtuellen Dateifreigabe benötigen Sie den Namen des Speicherkontos oder die entsprechende Quelle.
- **Relativer Einhängepfad oder Quelle**: Der Speicherort des auf dem Computeknoten eingebundenen Dateisystems, relativ zum Standardverzeichnis `fsmounts`, das auf dem Knoten über `AZ_BATCH_NODE_MOUNTS_DIR` zugänglich ist. Der genaue Speicherort hängt vom Betriebssystem des Knotens ab. Auf einem Ubuntu-Knoten lautet der physische Speicherort beispielsweise `/mnt/batch/tasks/fsmounts`, auf einem CentOS-Knoten dagegen `/mnt/resources/batch/tasks/fsmounts`.
- **Einbindungsoptionen oder blobfuse-Optionen**: Diese Optionen beschreiben bestimmte Parameter für die Einbindung eines Dateisystems.
Nachdem das `MountConfiguration`-Objekt erstellt wurde, weisen Sie es beim Erstellen des Pools der Eigenschaft `MountConfigurationList` zu. Die Konfiguration für das Dateisystem erfolgt, wenn ein Computeknoten dem Pool beitritt, neu gestartet wird oder ein neues Image erhält.
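Eine minimale Skizze mit der Batch-.NET-Clientbibliothek könnte etwa wie folgt aussehen (Annahmen: `batchClient`, `vmConfiguration` und `mountConfiguration` wurden bereits erstellt; je nach SDK-Ebene heißt die Eigenschaft `MountConfiguration` bzw. `MountConfigurationList`; ein vollständiges Konfigurationsbeispiel finden Sie weiter unten im Abschnitt mit den Beispielen):
```csharp
// Skizze: Einbindungskonfiguration beim Erstellen des Pools zuweisen
// (Annahme: batchClient, vmConfiguration und mountConfiguration existieren bereits)
CloudPool pool = batchClient.PoolOperations.CreatePool(
    poolId: "demo-mount-pool",
    virtualMachineSize: "STANDARD_D2_V2",
    virtualMachineConfiguration: vmConfiguration,
    targetDedicatedComputeNodes: 1);
pool.MountConfiguration = new List<MountConfiguration> { mountConfiguration };
pool.Commit();
```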
Beim Einbinden des Dateisystems wird eine Umgebungsvariable vom Typ `AZ_BATCH_NODE_MOUNTS_DIR` erstellt, die auf den Speicherort der eingebundenen Dateisysteme sowie auf Protokolldateien verweist, die für die Problembehandlung und das Debuggen hilfreich sind. Protokolldateien werden im Abschnitt [Diagnostizieren von Einbindungsfehlern](#diagnose-mount-errors) ausführlicher erläutert.
> [!IMPORTANT]
> Die maximale Anzahl eingebundener Dateisysteme in einem Pool beträgt 10. Ausführliche Informationen und andere Grenzwerte finden Sie unter [Batch-Dienst – Kontingente und Limits](batch-quota-limit.md#other-limits).
## <a name="mount-azure-file-share-with-powershell"></a>Einbinden der Azure-Dateifreigabe mithilfe von PowerShell
Sie können eine Azure-Dateifreigabe in einen Batch-Pool einbinden, indem Sie [Azure PowerShell](/powershell/) oder [Azure Cloud Shell](../cloud-shell/overview.md)verwenden.
# <a name="windows"></a>[Windows](#tab/windows)
1. Melden Sie sich bei Ihrem Azure-Abonnement an.
```powershell
Connect-AzAccount -Subscription "<subscription-ID>"
```
1. Abrufen des Kontexts für Ihr Batch-Konto.
```powershell
$context = Get-AzBatchAccount -AccountName <batch-account-name>
```
1. Erstellen Sie einen Batch-Pool mit den folgenden Einstellungen. Ersetzen Sie die Beispielwerte nach Bedarf durch Ihre eigenen Informationen.
```powershell
    $fileShareConfig = New-Object -TypeName "Microsoft.Azure.Commands.Batch.Models.PSAzureFileShareConfiguration" -ArgumentList @("<Storage-Account-name>", "https://<Storage-Account-name>.file.core.windows.net/batchfileshare1", "S", "<Storage-Account-key>")
$mountConfig = New-Object -TypeName "Microsoft.Azure.Commands.Batch.Models.PSMountConfiguration" -ArgumentList @($fileShareConfig)
$imageReference = New-Object -TypeName "Microsoft.Azure.Commands.Batch.Models.PSImageReference" -ArgumentList @("WindowsServer", "MicrosoftWindowsServer", "2016-Datacenter", "latest")
$configuration = New-Object -TypeName "Microsoft.Azure.Commands.Batch.Models.PSVirtualMachineConfiguration" -ArgumentList @($imageReference, "batch.node.windows amd64")
New-AzBatchPool -Id "<Pool-Name>" -VirtualMachineSize "STANDARD_D2_V2" -VirtualMachineConfiguration $configuration -TargetDedicatedComputeNodes 1 -MountConfiguration @($mountConfig) -BatchContext $Context
```
1. Greifen Sie über den direkten Pfad Ihres Laufwerks auf die Bereitstellungsdateien zu. Beispiel:
```powershell
cmd /c "more S:\folder1\out.txt & timeout /t 90 > NULL"
```
1. Überprüfen Sie, ob die Ausgabedatei korrekt ist.
1. Wenn Sie Remotedesktopprotokoll (RDP) oder SSH verwenden, fügen Sie Anmeldeinformationen hinzu, um direkt auf das Laufwerk `S` zuzugreifen. Unter Windows gewährt der Azure Batch-Agent nur Azure Batch-Aufgaben Zugriff auf das eingebundene Laufwerk. Wenn Sie über RDP eine Verbindung mit dem Knoten herstellen, hat Ihr Benutzerkonto daher keinen automatischen Zugriff darauf.
    Fügen Sie Ihre Anmeldeinformationen mit `cmdkey` hinzu. Ersetzen Sie die Beispielwerte nach Bedarf durch Ihre eigenen Informationen.
```powershell
cmdkey /add:"<storage-account-name>.file.core.windows.net" /user:"Azure\<storage-account-name>" /pass:"<storage-account-key>"
```
# <a name="linux"></a>[Linux](#tab/linux)
1. Melden Sie sich bei Ihrem Azure-Abonnement an.
```powershell
Connect-AzAccount -Subscription "<subscription-ID>"
```
1. Abrufen des Kontexts für Ihr Batch-Konto.
```powershell
$context = Get-AzBatchAccount -AccountName <batch-account-name>
```
1. Erstellen Sie einen Batch-Pool mit den folgenden Einstellungen. Ersetzen Sie die Beispielwerte nach Bedarf durch Ihre eigenen Informationen.
```powershell
    $fileShareConfig = New-Object -TypeName "Microsoft.Azure.Commands.Batch.Models.PSAzureFileShareConfiguration" -ArgumentList @("<Storage-Account-name>", "https://<Storage-Account-name>.file.core.windows.net/batchfileshare1", "S", "<Storage-Account-key>", "-o vers=3.0,dir_mode=0777,file_mode=0777,sec=ntlmssp")
$mountConfig = New-Object -TypeName "Microsoft.Azure.Commands.Batch.Models.PSMountConfiguration" -ArgumentList @($fileShareConfig)
$imageReference = New-Object -TypeName "Microsoft.Azure.Commands.Batch.Models.PSImageReference" -ArgumentList @("ubuntuserver", "canonical", "20.04-lts", "latest")
$configuration = New-Object -TypeName "Microsoft.Azure.Commands.Batch.Models.PSVirtualMachineConfiguration" -ArgumentList @($imageReference, "batch.node.ubuntu 20.04")
New-AzBatchPool -Id "<Pool-Name>" -VirtualMachineSize "STANDARD_D2_V2" -VirtualMachineConfiguration $configuration -TargetDedicatedComputeNodes 1 -MountConfiguration @($mountConfig) -BatchContext $Context
```
1. Greifen Sie mithilfe der Umgebungsvariablen `AZ_BATCH_NODE_MOUNTS_DIR` auf die eingebundenen Dateien zu. Beispiel:
```bash
/bin/bash -c 'more $AZ_BATCH_NODE_MOUNTS_DIR/S/folder1/out.txt; sleep 20s'
```
Optional können Sie auch über den direkten Pfad auf die Bereitstellungsdateien zugreifen.
1. Überprüfen Sie, ob die Ausgabedatei korrekt ist.
1. Wenn Sie RDP oder SSH verwenden, können Sie manuell direkt auf das `S` Laufwerk zugreifen. Verwenden Sie den Pfad `/mnt/batch/tasks/fsmounts/S`.
---
### <a name="troubleshoot-powershell-mounting"></a>Problembehandlung bei der PowerShell-Einbindung
Wenn Sie eine Azure-Dateifreigabe mit PowerShell oder Cloud Shell in einen Batch-Pool einbinden, wird möglicherweise der folgende Fehler angezeigt:
```text
Mount Configuration Error | An error was encountered while configuring specified mount(s)
Message: System error (out of memory, cannot fork, no more loop devices)
MountConfigurationPath: S
```
Wenn dieser Fehler auftritt, stellen Sie eine RDP- oder SSH-Verbindung mit dem Knoten her, um die zugehörigen Protokolldateien zu überprüfen. Der Batch-Agent implementiert die Einbindung auf Windows und Linux unterschiedlich. Unter Linux installiert Batch das Paket `cifs-utils`. Anschließend gibt Batch den Einhängebefehl aus. Auf Windows verwendet Batch `cmdkey` um Ihre Batch-Kontoanmeldeinformationen hinzuzufügen. Dann gibt Batch den Bereitstellungsbefehl über `net use` aus. Beispiel:
```powershell
net use S: \\<storage-account-name>.file.core.windows.net\<fileshare> /u:AZURE\<storage-account-name> <storage-account-key>
```
# <a name="windows"></a>[Windows](#tab/windows)
1. Verbinden über RDP zum Knoten.
1. Öffnen Sie die Protokolldatei, `fshare-S.log`. Der Dateipfad ist `D:\batch\tasks\fsmounts`.
1. Überprüfen Sie die Fehlermeldungen. Beispiel:
```text
CMDKEY: Credential added successfully.
System error 86 has occurred.
The specified network password is not correct.
```
1. Beheben Sie das Problem mithilfe von [Troubleshoot Azure Files Problems in Windows Server Message Block (SMB)](../storage/files/storage-troubleshoot-windows-file-connection-problems.md).
# <a name="linux"></a>[Linux](#tab/linux)
1. Verbinden über SSH zum Knoten.
1. Öffnen Sie die Protokolldatei, `fshare-S.log`. Der Dateipfad ist `/mnt/batch/tasks/fsmounts`.
1. Überprüfen Sie die Fehlermeldungen. Beispiel: `mount error(13): Permission denied`.
1. Beheben Sie das Problem mit [Problembehandlung von Problemen mit Azure-Dateien unter Linux (SMB)](../storage/files/storage-troubleshoot-linux-file-connection-problems.md).
---
Wenn Sie RDP oder SSH nicht verwenden können, um die Protokolldateien auf dem Knoten zu überprüfen, überprüfen Sie die Batch-Protokolle direkt. Verwenden Sie diese Methode sowohl für Windows- als auch für Linux-Protokolle.
1. Melden Sie sich am [Azure-Portal](https://portal.azure.com) an.
1. Geben Sie in die Suchleiste **Batch-Konten** ein, und wählen Sie den entsprechenden Eintrag aus.
1. Wählen Sie auf der Seite **Batch-Konten** das Konto mit Ihrem Batch-Pool aus.
1. Wählen Sie im Menü der Seite "Sammelkonto" unter **Funktionen** die Option **Pools**.
1. Wählen Sie den Namen des Pools aus.
1. Wählen Sie im Menü der Batchpool-Seite unter **Allgemein** die Option **Pool**.
1. Wählen Sie den Namen des Pools aus.
1. Wählen Sie auf der Seite **Übersicht** über den Pool **Hochladen der Batchprotokolle** aus.
1. Wählen Sie im **Bereich Hochladen der Batchprotokolle** Ihren Azure Storage Container aus. Wählen Sie dann **Speichercontainer** auswählen.
1. Wählen Sie die Protokolldateien aus, und laden Sie sie aus dem Speichercontainer herunter.
1. Öffnen Sie `agent-debug.log`.
1. Überprüfen Sie die Fehlermeldungen. Beispiel:
```text
..20210322T113107.448Z.00000000-0000-0000-0000-000000000000.ERROR.agent.mount.filesystems.basefilesystem.basefilesystem.py.run_cmd_persist_output_async.59.2912.MainThread.3580.Mount command failed with exit code: 2, output:
CMDKEY: Credential added successfully.
System error 86 has occurred.
The specified network password is not correct.
```
1. Beheben Sie das Problem mit [Beseitigung von Problemen mit Azure-Dateien unter Windows (SMB)](../storage/files/storage-troubleshoot-windows-file-connection-problems.md) oder [Beseitigung von Problemen mit Azure-Dateien unter Linux (SMB)](../storage/files/storage-troubleshoot-linux-file-connection-problems.md).
Wenn Sie die Ursache des Fehlers immer noch nicht finden können, können Sie die Dateifreigabe stattdessen [manuell mit PowerShell einbinden.](#manually-mount-file-share-with-powershell)
### <a name="manually-mount-file-share-with-powershell"></a>Manuelles Einbinden einer Dateifreigabe mit PowerShell
Wenn Sie Einbindungsfehler nicht mit PowerShell diagnostizieren oder beheben können, können Sie die Dateifreigabe manuell einbinden.
# <a name="windows"></a>[Windows](#tab/windows)
1. Erstellen Sie einen Pool ohne Einbindungskonfiguration. Beispiel:
```powershell
$imageReference = New-Object -TypeName "Microsoft.Azure.Commands.Batch.Models.PSImageReference" -ArgumentList @("WindowsServer", "MicrosoftWindowsServer", "2016-Datacenter", "latest")
$configuration = New-Object -TypeName "Microsoft.Azure.Commands.Batch.Models.PSVirtualMachineConfiguration" -ArgumentList @($imageReference, "batch.node.windows amd64")
New-AzBatchPool -Id "<Pool-Name>" -VirtualMachineSize "STANDARD_D2_V2" -VirtualMachineConfiguration $configuration -TargetDedicatedComputeNodes 1 -BatchContext $Context
```
1. Warten Sie, bis sich der Knoten im **Leerlaufzustand** befindet.
1. Melden Sie sich am [Azure-Portal](https://portal.azure.com) an.
1. Geben Sie in die Suchleiste **Speicherkonten** ein, und wählen Sie den entsprechenden Eintrag aus.
1. Wählen Sie den Namen des Speicherkontos mit Ihrer Dateifreigabe aus.
1. Wählen Sie im Navigationsmenü des Speicherkontos unter **Datenspeicher** die Option **Dateifreigaben** aus.
1. Wählen Sie auf der Seite **Dateifreigaben** die Option Dateifreigabe aus.
1. Wählen Sie auf der Seite **Übersicht** die Option **Verbinden** aus.
1. Wählen Sie im **Bereich Verbinden** die Registerkarte **Windows** aus.
1. Geben Sie unter **Laufwerkbuchstabe** das Laufwerk ein, das Sie verwenden möchten. Der Standardwert lautet `Z`.
1. Wählen Sie unter **Authentifizierungsmethode** aus, wie Sie eine Verbindung mit der Dateifreigabe herstellen möchten.
1. Kopieren Sie den PowerShell-Befehl zum Einbinden der Dateifreigabe.
1. Verbinden über RDP zum Knoten.
1. Führen Sie den Befehl aus, den Sie zum Einbinden der Dateifreigabe kopiert haben.
1. Beachten Sie alle Fehlermeldungen in der Ausgabe. Verwenden Sie diese Informationen, um netzwerkbezogene Probleme zu beheben.
# <a name="linux"></a>[Linux](#tab/linux)
1. Erstellen Sie einen Pool ohne Einbindungskonfiguration. Beispiel:
    ```powershell
$imageReference = New-Object -TypeName "Microsoft.Azure.Commands.Batch.Models.PSImageReference" -ArgumentList @("ubuntuserver", "canonical", "20.04-lts", "latest")
$configuration = New-Object -TypeName "Microsoft.Azure.Commands.Batch.Models.PSVirtualMachineConfiguration" -ArgumentList @($imageReference, "batch.node.ubuntu 20.04")
New-AzBatchPool -Id "<Pool-Name>" -VirtualMachineSize "STANDARD_D2_V2" -VirtualMachineConfiguration $configuration -TargetDedicatedComputeNodes 1 -BatchContext $Context
```
1. Warten Sie, bis sich der Knoten im **Leerlaufzustand** befindet.
1. Melden Sie sich am [Azure-Portal](https://portal.azure.com) an.
1. Geben Sie **Speicherkonten** in die Suchleiste ein, und wählen Sie den Eintrag aus.
1. Wählen Sie den Namen des Speicherkontos mit Ihrer Dateifreigabe aus.
1. Wählen Sie im Navigationsmenü des Speicherkontos unter **Datenspeicher** die Option **Dateifreigaben** aus.
1. Wählen Sie auf der Seite **Dateifreigaben** die Option Dateifreigabe aus.
1. Wählen Sie auf der Seite **Übersicht** die Option **Verbinden** aus.
1. Wählen Sie im Bereich **Verbinden** die Registerkarte **Linux** aus.
1. Geben Sie den **Einhängepunkt** ein, den Sie verwenden möchten.
1. Kopieren Sie den PowerShell-Befehl zum Einhängen der Dateifreigabe.
1. Stellen Sie über SSH eine Verbindung mit dem Knoten her.
1. Führen Sie den Befehl aus, den Sie zum Einbinden der Dateifreigabe kopiert haben.
1. Beachten Sie alle Fehlermeldungen in der Ausgabe. Verwenden Sie diese Informationen, um netzwerkbezogene Probleme zu beheben.
---
## <a name="examples"></a>Beispiele
In den folgenden Codebeispielen wird das Einhängen verschiedener Dateifreigaben in einem Pool von Computerknoten veranschaulicht.
### <a name="azure-files-share"></a>Azure Files-Freigabe
Azure Files ist das Standardangebot für Azure-Clouddateisysteme. Informationen zu den Parametern im Codebeispiel finden Sie unter [Verwendung einer Azure-Dateifreigabe - SMB](../storage/files/storage-how-to-use-files-windows.md) oder [Verwendung einer Azure-Dateifreigabe mit - NFS](../storage/files/storage-files-how-to-create-nfs-shares.md).
```csharp
new PoolAddParameter
{
Id = poolId,
MountConfiguration = new[]
{
new MountConfiguration
{
AzureFileShareConfiguration = new AzureFileShareConfiguration
{
AccountName = "{storage-account-name}",
AzureFileUrl = "https://{storage-account-name}.file.core.windows.net/{file-share-name}",
AccountKey = "{storage-account-key}",
RelativeMountPath = "S",
MountOptions = "-o vers=3.0,dir_mode=0777,file_mode=0777,sec=ntlmssp"
},
}
}
}
```
### <a name="azure-blob-container"></a>Azure-Blobcontainer
Eine andere Option ist die Verwendung von Azure-Blobspeicher über [blobfuse](../storage/blobs/storage-how-to-mount-container-linux.md). Zum Einhängen eines Blobdateisystems ist ein `AccountKey`, ein `SasKey` oder eine `Managed Identity` mit Zugriff auf Ihr Speicherkonto erforderlich.
Informationen zum Abrufen dieser Schlüssel finden Sie unter:
- [Verwalten von Zugriffsschlüsseln für Speicherkonten](../storage/common/storage-account-keys-manage.md)
- [Gewähren von eingeschränktem Zugriff auf Azure Storage-Ressourcen mithilfe von SAS (Shared Access Signature)](../storage/common/storage-sas-overview.md)
- [Konfigurieren Sie verwaltete Identitäten in Batch-Pools](managed-identity-pools.md).
Weitere Informationen und Tipps zur Verwendung von blobfuse erhalten Sie beim [blobfuse-Projekt](https://github.com/Azure/azure-storage-fuse).
Um Standardzugriff auf das über blobfuse eingebundene Verzeichnis zu erhalten, müssen Sie die Aufgabe als **Administrator** ausführen. Blobfuse bindet das Verzeichnis im Benutzerbereich ein, und bei der Poolerstellung wird es als Stamm eingebunden. Unter Linux sind alle **Administratoraufgaben** stammbasiert. Eine Beschreibung aller Optionen für das FUSE-Modul finden Sie auf der [FUSE-Referenzseite](https://manpages.ubuntu.com/manpages/xenial/man8/mount.fuse.8.html).
Weitere Informationen und Tipps zur Verwendung von blobfuse finden Sie in den [häufig gestellten Fragen zur Behandlung von Problemen](https://github.com/Azure/azure-storage-fuse/wiki/3.-Troubleshoot-FAQ) mit blobfuse. Informationen zu aktuellen blobfuse-Problemen und deren Lösungen finden Sie auch im Artikel zu [GitHub-Problemen im blobfuse-Repository](https://github.com/Azure/azure-storage-fuse/issues).
> [!NOTE]
> Das folgende Beispiel zeigt einen `AccountKey`, einen `SasKey` und eine `IdentityReference`, die sich aber gegenseitig ausschließen. Es kann nur eine dieser Optionen angegeben werden.
```csharp
new PoolAddParameter
{
Id = poolId,
MountConfiguration = new[]
{
new MountConfiguration
{
AzureBlobFileSystemConfiguration = new AzureBlobFileSystemConfiguration
{
AccountName = "StorageAccountName",
ContainerName = "containerName",
AccountKey = "StorageAccountKey",
SasKey = "SasKey",
IdentityReference = new ComputeNodeIdentityReference("/subscriptions/SUB/resourceGroups/RG/providers/Microsoft.ManagedIdentity/userAssignedIdentities/identity-name"),
RelativeMountPath = "RelativeMountPath",
BlobfuseOptions = "-o attr_timeout=240 -o entry_timeout=240 -o negative_timeout=120 "
},
}
}
}
```
> [!TIP]
>Wenn Sie eine verwaltete Identität verwenden, stellen Sie sicher, dass die Identität [dem Pool zugewiesen](managed-identity-pools.md) wurde, damit sie auf der VM verfügbar ist, die die Einbindung vornimmt. Die Identität muss über die Rolle `Storage Blob Data Contributor` verfügen, damit sie ordnungsgemäß funktioniert.
### <a name="network-file-system"></a>Network File System
Network File System (NFS) kann auf Poolknoten eingebunden werden, sodass Azure Batch auf herkömmliche Dateisysteme zugreifen kann. Bei dieser Einrichtung kann es sich um einen einzelnen NFS-Server handeln, der in der Cloud bereitgestellt wird, oder um einen NFS-Server vor Ort, auf den über ein virtuelles Netzwerk zugegriffen wird. NFS Einhängepunkte unterstützen [Avere vFXT](../avere-vfxt/avere-vfxt-overview.md). Avere vFXT ist eine verteilte In-Memory-Cachelösung für datenintensive HPC-Aufgaben (High Performance Computing) und andere NFS-kompatible Standardschnittstellen. Zum Beispiel, [NFS für Azure Blob](../storage/blobs/network-file-system-protocol-support.md) und [NFS für Azure Files](../storage/files/storage-files-how-to-mount-nfs-shares.md).
```csharp
new PoolAddParameter
{
Id = poolId,
MountConfiguration = new[]
{
new MountConfiguration
{
NfsMountConfiguration = new NFSMountConfiguration
{
Source = "source",
RelativeMountPath = "RelativeMountPath",
MountOptions = "options ver=3.0"
},
}
}
}
```
### <a name="common-internet-file-system"></a>Common Internet File System
Das Einbinden von [Common Internet File Systems (CIFS)](/windows/desktop/fileio/microsoft-smb-protocol-and-cifs-protocol-overview) auf Poolknoten ist eine weitere Möglichkeit, um Zugriff auf herkömmliche Dateisysteme zu ermöglichen. CIFS ist ein Dateifreigabeprotokoll, das einen offenen und plattformübergreifenden Mechanismus zum Anfordern von Netzwerkserverdateien und -diensten bietet. CIFS basiert auf der erweiterten Version des [SMB-Protokolls](/windows-server/storage/file-server/file-server-smb-overview), das für die gemeinsame Nutzung von Dateien im Internet und Intranet gedacht ist.
```csharp
new PoolAddParameter
{
Id = poolId,
MountConfiguration = new[]
{
new MountConfiguration
{
CifsMountConfiguration = new CIFSMountConfiguration
{
Username = "StorageAccountName",
RelativeMountPath = "cifsmountpoint",
Source = "source",
Password = "StorageAccountKey",
MountOptions = "-o vers=3.0,dir_mode=0777,file_mode=0777,serverino"
},
}
}
}
```
## <a name="diagnose-mount-errors"></a>Diagnostizieren von Einbindungsfehlern
Wenn bei einer Einbindungskonfiguration ein Fehler auftritt, führt dies zu einem Fehler des Computeknotens im Pool, und der Knotenstatus wird auf `unusable` festgelegt. Überprüfen Sie zum Diagnostizieren eines Einbindungskonfigurationsfehlers die [`ComputeNodeError`](/rest/api/batchservice/computenode/get#computenodeerror)-Eigenschaft auf Details zum Fehler.
Wenn Sie die Protokolldateien für das Debuggen erhalten möchten, verwenden Sie [OutputFiles](batch-task-output-files.md), um die Protokolldateien (`*.log`) hochzuladen. Die Protokolldateien (`*.log`) enthalten Informationen zur Dateisystemeinbindung am Speicherort `AZ_BATCH_NODE_MOUNTS_DIR`. Einbindungsprotokolldateien weisen für jede Einbindung das Format `<type>-<mountDirOrDrive>.log` auf. Die zugehörige Einbindungsprotokolldatei einer `cifs`-Einbindung im Einbindungsverzeichnis `test` heißt beispielsweise `cifs-test.log`.
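
Die folgende Skizze zeigt beispielhaft, wie sich die `ComputeNodeError`-Eigenschaft der Poolknoten mit dem Python-SDK (`azure-batch`) auslesen lässt. Kontoname, Schlüssel, URL und Pool-ID sind reine Platzhalter (Annahmen) und müssen durch eigene Werte ersetzt werden; die Skizze ist eine Veranschaulichung, keine verbindliche Implementierung.

```python
from azure.batch import BatchServiceClient
from azure.batch.batch_auth import SharedKeyCredentials

# Platzhalterwerte – durch eigene Kontodaten ersetzen
credentials = SharedKeyCredentials("<batch-konto>", "<batch-konto-schluessel>")
client = BatchServiceClient(
    credentials, batch_url="https://<batch-konto>.<region>.batch.azure.com"
)

# Alle Knoten des Pools durchlaufen und Einbindungsfehler ausgeben
for node in client.compute_node.list("<pool-id>"):
    for error in node.errors or []:
        print(node.id, node.state, error.code, error.message)
```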
## <a name="support-matrix"></a>Unterstützungsmatrix
Azure Batch unterstützt die folgenden virtuellen Dateisystemtypen für Knoten-Agents, die für die jeweiligen Herausgeber und Angebote erstellt werden.
| Betriebssystemtyp | Azure Files-Freigabe | Azure-Blobcontainer | NFS-Einbindung | CIFS-Einbindung |
|---|---|---|---|---|
| Linux | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
| Windows | :heavy_check_mark: | :x: | :x: | :x: |
## <a name="networking-requirements"></a>Netzwerkanforderungen
Beachten Sie die folgenden Anforderungen, wenn Sie virtuelle Dateibereitstellungen mit [Azure Batch-Pools in einem virtuellen Netzwerk](batch-virtual-network.md) verwenden, und achten Sie darauf, dass kein erforderlicher Datenverkehr blockiert wird.
- **Azure-Dateifreigaben**:
- Der TCP-Port 445 muss für Datenverkehr zum/vom Diensttag „Storage“ geöffnet sein. Weitere Informationen finden Sie unter [Verwenden einer Azure-Dateifreigabe mit Windows](../storage/files/storage-how-to-use-files-windows.md#prerequisites).
- **Azure-Blobcontainer**:
- Der TCP-Port 443 muss für Datenverkehr zum/vom Diensttag „Storage“ geöffnet sein.
- VMs müssen auf https://packages.microsoft.com zugreifen können, um blobfuse- und gpg-Pakete herunterzuladen. Abhängig von der Konfiguration benötigen Sie u. U. auch Zugriff auf andere URLs, um weitere Pakete herunterzuladen.
- **Network File System (NFS)** :
- Erfordert Zugriff auf Port 2049 (Standardeinstellung; für Ihre Konfiguration gelten möglicherweise andere Anforderungen).
- VMs müssen Zugriff auf den entsprechenden Paket-Manager haben, um das Paket „nfs-common“ (für Debian oder Ubuntu) oder „nfs-utils“ (für CentOS) herunterzuladen. Diese URL kann je nach Betriebssystemversion variieren. Abhängig von der Konfiguration benötigen Sie u. U. auch Zugriff auf andere URLs, um weitere Pakete herunterzuladen.
- Das Einbinden von Azure Blob oder Azure Files über NFS kann mehr Netzwerkanforderungen haben. Beispielsweise benötigen Sie möglicherweise Computerknoten, die das gleiche Subnetz eines virtuellen Netzwerks wie das Speicherkonto verwenden.
- **Common Internet File System (CIFS)** :
- Erfordert Zugriff auf den TCP-Port 445.
- VMs müssen Zugriff auf den entsprechenden Paket-Manager haben, um das Paket „cifs-utils“ herunterzuladen. Diese URL kann je nach Betriebssystemversion variieren.
## <a name="next-steps"></a>Nächste Schritte
- Erfahren Sie mehr über das Einbinden einer Azure Files-Freigabe mit [Windows](../storage/files/storage-how-to-use-files-windows.md) oder [Linux](../storage/files/storage-how-to-use-files-linux.md).
- Erfahren Sie mehr über das Verwenden und Einbinden virtueller [blobfuse](https://github.com/Azure/azure-storage-fuse)-Dateisysteme.
- Lesen Sie die [Übersicht über Network File System](/windows-server/storage/nfs/nfs-overview), um mehr über NFS und die zugehörigen Anwendungen zu erfahren.
- Lesen Sie die [Übersicht über das Microsoft-SMB-Protokoll und das CIFS-Protokoll](/windows/desktop/fileio/microsoft-smb-protocol-and-cifs-protocol-overview), um mehr über CIFS zu erfahren. | 60.935282 | 971 | 0.770762 | deu_Latn | 0.981904 |
93f3cc1caf8c48f47d428c9e8b42aaa0b1f8b98d | 690 | md | Markdown | docs/error-messages/compiler-errors-2/compiler-error-c2704.md | ZombieX/cpp-docs.ja-jp | 0bfbcf6cf988c77875fd053c2d95fd38d623f9fe | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/error-messages/compiler-errors-2/compiler-error-c2704.md | ZombieX/cpp-docs.ja-jp | 0bfbcf6cf988c77875fd053c2d95fd38d623f9fe | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/error-messages/compiler-errors-2/compiler-error-c2704.md | ZombieX/cpp-docs.ja-jp | 0bfbcf6cf988c77875fd053c2d95fd38d623f9fe | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: コンパイラ エラー C2704 |Microsoft Docs
ms.custom: ''
ms.date: 11/04/2016
ms.technology:
- cpp-diagnostics
ms.topic: error-reference
f1_keywords:
- C2704
dev_langs:
- C++
helpviewer_keywords:
- C2704
ms.assetid: 185797e2-55b5-4c11-8493-e70eb1d15a94
author: corob-msft
ms.author: corob
ms.workload:
- cplusplus
ms.openlocfilehash: cc8914e8ad349f9dcd75c5f0c08ee0d55570e89d
ms.sourcegitcommit: 913c3bf23937b64b90ac05181fdff3df947d9f1c
ms.translationtype: MT
ms.contentlocale: ja-JP
ms.lasthandoff: 09/18/2018
ms.locfileid: "46114931"
---
# <a name="compiler-error-c2704"></a>コンパイラ エラー C2704
'identifier' : varargs 中に __va_start intrinsic を許しました。
 `__va_start` 組み込みが、固定数の引数で宣言された関数内で使用されました。
93f477c069d46735418994176c50b4cc8d41359e | 1,302 | md | Markdown | 2020/09/10/2020-09-10 04:35.md | zhzhzhy/WeiBoHot_history | 32ce4800e63f26384abb17d43e308452c537c902 | [
"MIT"
] | 3 | 2020-07-14T14:54:15.000Z | 2020-08-21T06:48:24.000Z | 2020/09/10/2020-09-10 04:35.md | zhzhzhy/WeiBoHot_history | 32ce4800e63f26384abb17d43e308452c537c902 | [
"MIT"
] | null | null | null | 2020/09/10/2020-09-10 04:35.md | zhzhzhy/WeiBoHot_history | 32ce4800e63f26384abb17d43e308452c537c902 | [
"MIT"
] | null | null | null | 2020年09月10日04时数据
Status: 200
1.王菲状态
微博热度:153835
2.吴尊友回应十一假期出游是否安全
微博热度:142043
3.三星折叠屏Z Fold2
微博热度:141622
4.25岁时的宋丹丹
微博热度:136780
5.琉璃
微博热度:71959
6.新浪新闻宠粉节
微博热度:70100
7.被李雪琴笑死
微博热度:68807
8.考前急补身份证清华新生临行送锦旗
微博热度:66031
9.美股
微博热度:64514
10.阿富汗副总统遇袭受轻伤
微博热度:64070
11.差评阴影下的外卖员
微博热度:63961
12.肖战晒坚果
微博热度:63899
13.李思雨求婚陈一鸣
微博热度:51407
14.江苏卫视99晚会
微博热度:45253
15.韩国出现新冠流感双重感染病例
微博热度:42934
16.王菲马云合唱如果云知道
微博热度:42495
17.脱口秀大会
微博热度:42371
18.赵继伟 突然有点厌恶篮球
微博热度:41636
19.广州启用地上红绿灯
微博热度:40931
20.美团回应骑手问题
微博热度:40638
21.乃万秦昊合唱Melody
微博热度:40229
22.旗袍美探
微博热度:39829
23.恶之花
微博热度:39598
24.赵彬彬原生家庭
微博热度:36998
25.武汉不提倡不鼓励教师节收受鲜花
微博热度:36847
26.黄渤硬糖少女魔性合舞
微博热度:36836
27.段奥娟高音好稳
微博热度:36816
28.初遇夫妇双箭头
微博热度:36113
29.日本近500家公司破产
微博热度:35842
30.被绑架的17头牦牛
微博热度:35651
31.吴亦凡PUPPET舞台好炸
微博热度:35595
32.李晨里程被十余人盗刷
微博热度:29663
33.中国中药协会被降级
微博热度:27698
34.上海迪士尼等69家景区门票半价
微博热度:26775
35.特朗普称愿自掏腰包胜选
微博热度:26062
36.刘涛直播
微博热度:25924
37.信小呆
微博热度:25654
38.S10参赛队伍确定
微博热度:24546
39.狗子的标准英文
微博热度:24175
40.Jasper喊陈小春帮他抠脚
微博热度:24091
41.在劫难逃再反转
微博热度:21806
42.孙晓萌腹黑
微博热度:20906
43.教师节贺卡
微博热度:20461
44.墨西哥女子遛老虎逛街
微博热度:20162
45.张雨绮成团24小时vlog
微博热度:19756
46.豆豆淘汰
微博热度:19395
47.最荒唐的出校理由
微博热度:18976
48.心疼李哥
微博热度:16705
49.腾蛇护璇玑
微博热度:16199
50.90后少女博导获阿里百万奖金
微博热度:16021
| 6.382353 | 18 | 0.764209 | yue_Hant | 0.341071 |
93f47ebc5ad2fd5c44408e1b2e6338d3f0d3d1b4 | 70 | md | Markdown | README.md | GladiumLab/GladiumLab.github.io | 976aa46659d7f203e1a7bcffcd1870568f0d0ebc | [
"MIT"
] | null | null | null | README.md | GladiumLab/GladiumLab.github.io | 976aa46659d7f203e1a7bcffcd1870568f0d0ebc | [
"MIT"
] | null | null | null | README.md | GladiumLab/GladiumLab.github.io | 976aa46659d7f203e1a7bcffcd1870568f0d0ebc | [
"MIT"
] | null | null | null | # GladiumLab.github.io
The code behind much of TheBallot.com’s front end.
| 23.333333 | 46 | 0.785714 | eng_Latn | 0.949131 |
93f4964512326bed3787263af16009689d800f7a | 3,834 | md | Markdown | contents/byway/15787.md | melitele/byways | 973924f06d66a8020dd0ea58ce640dbd406f95bb | [
"CC-BY-3.0",
"MIT"
] | 4 | 2019-01-26T20:50:25.000Z | 2020-08-12T20:47:19.000Z | contents/byway/15787.md | melitele/byways | 973924f06d66a8020dd0ea58ce640dbd406f95bb | [
"CC-BY-3.0",
"MIT"
] | 3 | 2019-04-01T17:25:37.000Z | 2021-03-09T01:55:25.000Z | contents/byway/15787.md | melitele/byways | 973924f06d66a8020dd0ea58ce640dbd406f95bb | [
"CC-BY-3.0",
"MIT"
] | 7 | 2017-04-19T22:52:17.000Z | 2021-07-13T03:51:42.000Z | ---
template: byway.jade
id: "15787"
name: Coastal Byway
distance: "18.5"
duration: "This route can be driven comfortably in 3 hours, but you are sure to want to stay longer."
description: "This byway escorts you through brilliant panoramas of the Atlantic Ocean. Very popular with bicyclists and pedestrians, it also showcases dozens of fascinating historic sites and scenic areas."
path:
- "kotdGfufoLsFVaHAeML_AAwCOcGE[CiBFwAP_BZ_EfAaE|@iCZoCFaF_@wDa@mF{@eI}AcJkAqDm@}C}@sU_EyCm@wJcBwDs@e@Q"
- "mb`eGjuboLxAB`@DhKhApJjAnC\\b@@hHl@v@AvEQrMiAlANtHxCdAh@vA~@x@t@f@bABZLlETpEl@pEh@dD`AfFb@`A`@d@b@\\lAp@hAj@l@ZhDzBjEfDfAfAbBlBf@^d@XtE|ArDfAdAXtI|AdEl@pNjAbCNpADVD\\PdDpCjBnBb@ZdAbAv@x@|@jAx@xArA`B"
- "mb`eGjuboLnA`@f@H~BT`C\\bCRzB\\dHt@ZF`@LbB`@|BT`KUtMgAxA?XD^FjHtCnBpAd@`@Xl@JXB`@ZlKNvArArIfArFn@jAb@f@fBhApAj@hB~@r@d@vAdArFpEx@z@f@n@Rv@Dn@Bj@AvB@DdAdA~BpB~@f@~Dr@jCp@tGfArIfBdAPd@JxAX|Bf@xI|ArFfAfBh@zFbA"
- "mb`eGjuboLKCgAQyHcByBk@aCi@wNcEwJwCwBy@GG[Us@c@sDiBa@[k@i@QUk@{@sBeEcEmJ]i@OQWYm@c@sMcG}EoA_RaEsBk@{F{Bo@g@Ye@e@iAaFqNuAkB{GaGeCiC}@wAk@_BI}@?}@l@iDFiAU}E_@eAe@m@oAiAaAe@g@MuBu@o@q@uAoBkDwFm@o@s@a@g@SQIeB_@oBSiCe@_@QoAqA_BqCc@uA[q@[G]NiApAsCpFq@p@sAt@iARqCOaBQ{@QsAg@{AaBaA}A}@qAc@g@uEaEoA_BeBiDsAaFcB_FgB_Ec@c@c@Mw@HyD`CcCTyAUsCw@_A@s@Na@f@SbASLq@BaGmBkFwAoOuD{F_B_TgIuCkAOIkBw@eKuEaI_C_N{Ec@QsAs@w@c@}AcAeI}GyJsH_@s@_EeDyIaHiC_CQi@?YVuBn@sCjBkHxAsGn@eEBy@EUKISIwC^qBLiCI}C_@aDe@mAKyBa@eA]oCqAuDeC_B{A_CsC}CeDaAcBo@iBaB_GEi@Da@F_EEk@IQG]w@uC_@uB[kAQc@c@w@g@i@a@YmC_@sC_@o@YqAw@cA}@]c@k@Q[G{DIs@B_BH_DXwAJmADqAL{@GyA?aCIeAQeA_@wOaIqBgA_@MqBRkCJ}AQeA_@gBuAsByCc@oAMgA@gAXuAb@yADYLe@HqCf@uCLqAIw@a@uAqCuFk@eAy@y@sAy@cAc@cAm@aA}@i@o@yAyCaAgBSy@e@}B?OGe@Yg@c@k@eAs@e@EoGx@q@Ae@UmDoC}IuH}D_DiAw@uD{@mAGmB^gBp@eAh@m@R}@HmBHsAJu@P_Af@{AnAw@r@_BlBiB`Ce@bAk@tCq@rASVm@p@uDlByBdCgArBa@nAk@zB[x@]x@_@bCKt@E~ABp@Nr@Rt@rBrD@LlEhHtFrIt@pAl@xA`BnFxArHXjBBXGvA?j@@`@b@tBVdBJ|BWbDyAbGSdAF~@f@lCDt@ItAQtAOt@WZaAt@{@h@g@^{@tAKZSv@Et@Dx@h@|BHdABfAAx@O`CMt@[f@QFoAh@yIxByFtAsCf@gAMyE{@o@CaBL"
- "evweGf}xnLYqWH_BT{CBcCOkC_@uA]y@_BaCc@kAgAcGMg@o@oDc@oDg@eD{GcQaBuEu@}AeHsJ}AcCwA_Dc@cBGaABuA_@sCYuAgCmIu@wB_@o@{A_BgAgCsEaNcAmDqAqAcB{A{@_AwAy@{AWwBJcBl@uAJmFPQ?c@Ek@_@EQqFgHsBaC{BbAq@x@e@nAe@pCAdB@zAPfBb@xBJbBGhBQ~@Pb@fBfCh@l@FNnDzELZ@b@B\\]tBeAzCkFxKgA~BAl@R~H?^IhC]lBk@dRCnBzApJn@tGt@zGt@dH?RGz@]~AUx@c@xBCb@dAbQ@Vb@lEt@vMTbBCrBsAt@c@XMD[FkEl@y@XwAx@i@VeBf@e@\\m@^}C~AyAf@gAFoATSJa@RmBhBuDvC[_@WG]A]JPHTd@F^D^@XE~@l@|Cf@zCv@fGR`AD\\nDjK`AtDZrAzAg@~AKbB?n@l@tA`AfBv@d@JnCbAzAdA\\ZtDmExEiEbBaAjBqAJMfFoCr@_@TQf@U~A_AXUXQdDgA|BgA`EeC~AgARMvAiAlDcE`AqAp@s@d@[r@QfBM~G[b@GZU~@sAzCaFxA}B?GdFeGnBuAvFeEf@WbAe@xAc@n@M"
websites:
- url: "http://www.nh.gov/dot/programs/scbp/tours/coastal.htm"
name: "Coastal Byway (NH-DOT)"
- url: "http://www.visitnh.gov/"
name: "Visit New Hampshire!"
designations:
- New Hampshire State Scenic Byway
states:
- NH
ll:
- -70.81827272727273
- 42.87238181818182
bounds:
- - -70.81943636363637
- 42.87238181818182
- - -70.71477272727273
- 43.07819090909091
---
This byway is unique because it runs along New Hampshire's only coastline. It consequently escorts you through brilliant panoramas of the Atlantic Ocean and its sandy beaches. It also showcases dozens of fascinating historic sites and scenic areas. For example, Portsmouth's greatness as one of the nation's first grand port cities is illustrated in its exceptional late 18th/early 19th-century architecture.
The foremost landmark of the community is the historic
Wentworth-By-the-Sea Hotel, which housed both delegations during the signing of the Russo-Japanese Peace Treaty at Portsmouth Naval Shipyard in 1905.
Hampton Beach is the most popular seasonal resort community in
the region. During the summer months, visitors flock to this area to enjoy sandy beaches, cultural events, and lively
entertainment. | 93.512195 | 1,076 | 0.767084 | yue_Hant | 0.534277 |
93f4ab1b4113edf2953f95e4ac56f9687b34aff1 | 675 | md | Markdown | README.md | lutzer/node-red-contrib-play-soundfile | 2ac09b864c89b80294db98f82c4c7dfcccec4760 | [
"MIT"
] | 1 | 2021-07-16T07:22:53.000Z | 2021-07-16T07:22:53.000Z | README.md | lutzer/node-red-contrib-play-soundfile | 2ac09b864c89b80294db98f82c4c7dfcccec4760 | [
"MIT"
] | 1 | 2021-03-09T13:54:43.000Z | 2021-07-24T17:47:21.000Z | README.md | lutzer/node-red-contrib-play-soundfile | 2ac09b864c89b80294db98f82c4c7dfcccec4760 | [
"MIT"
] | null | null | null | # node-red-contrib-play-soundfile
Plays a sound file on the system. Allows you to set up a base directory that contains all the sound files. If no absolute directory path is supplied, the node looks for the file relative to the project directory.

## Installation
* Via Manage Palette -> Search for "node-red-contrib-play-soundfile"
* Via terminal: `cd ~/node-red; npm install node-red-contrib-play-soundfile`
## Requirements
The node relies on https://github.com/shime/play-sound. One of the following players needs to be installed on the system:
* mplayer
* afplay
* mpg123
* mpg321
* play
* omxplayer
* aplay
* cmdmp3
## Usage
* see node help
| 24.107143 | 206 | 0.74963 | eng_Latn | 0.98667 |
93f4d60a0a895724047ad0dc6d1a477e5d09757d | 9,014 | md | Markdown | README.md | roshnet/ppscore | deb7dfddc1017941d9a5a3b83276706b725ec480 | [
"MIT"
] | null | null | null | README.md | roshnet/ppscore | deb7dfddc1017941d9a5a3b83276706b725ec480 | [
"MIT"
] | null | null | null | README.md | roshnet/ppscore | deb7dfddc1017941d9a5a3b83276706b725ec480 | [
"MIT"
] | null | null | null | # ppscore - a Python implementation of the Predictive Power Score (PPS)
### From the makers of [bamboolib](https://bamboolib.com)
<!-- __If you don't know what the Predictive Power Score is, please read the following blog post: [RIP correlation. Introducing the Predictive Power Score](https://bamboolib.com)__ -->
The PPS is an asymmetric, data-type-agnostic score that can detect linear or non-linear relationships between two columns. The score ranges from 0 (no predictive power) to 1 (perfect predictive power). It can be used as an alternative to the correlation (matrix).
- [Installation](#installation)
- [Getting started](#getting-started)
- [API](#api)
- [Calculation of the PPS](#calculation-of-the-pps)
- [About](#about)
## Installation
> You need Python 3.6 or above.
From the terminal (or Anaconda prompt in Windows), enter:
```bash
pip install ppscore
```
## Getting started
First, let's create some data:
```python
import pandas as pd
import numpy as np
import ppscore as pps
df = pd.DataFrame()
df["x"] = np.random.uniform(-2, 2, 1_000_000)
df["error"] = np.random.uniform(-0.5, 0.5, 1_000_000)
df["y"] = df["x"] * df["x"] + df["error"]
```
Based on the dataframe we can calculate the PPS of x predicting y:
```python
pps.score(df, "x", "y")
```
Here is how we can calculate the PPS matrix between all columns:
```python
pps.matrix(df)
```
For the visualization of the PPS matrix you might want to use seaborn or your favorite viz library:
```python
import seaborn as sns
df_matrix = pps.matrix(df)
sns.heatmap(df_matrix, vmin=0, vmax=1, cmap="Blues", linewidths=0.5, annot=True)
```
## API
### ppscore.score(df, x, y, task=None, sample=5000)
Calculate the Predictive Power Score (PPS) for "x predicts y"
- The score always ranges from 0 to 1 and is data-type agnostic.
- A score of 0 means that the column x cannot predict the column y better than a naive baseline model.
- A score of 1 means that the column x can perfectly predict the column y given the model.
- A score between 0 and 1 states the ratio of how much potential predictive power the model achieved compared to the baseline model.
#### Parameters
- __df__ : pandas.DataFrame
- Dataframe that contains the columns x and y
- __x__ : str
- Name of the column x which acts as the feature
- __y__ : str
- Name of the column y which acts as the target
- __task__ : str, default ``None``
- Name of the prediction task, e.g. ``classification`` or ``regression``.
If the task is not specified, it is infered based on the y column
The task determines which model and evaluation score is used for the PPS
- __sample__ : int or ``None``
- Number of rows for sampling. The sampling decreases the calculation time of the PPS.
If ``None`` there will be no sampling.
#### Returns
- __Dict__:
- A dict that contains multiple fields about the resulting PPS.
The dict enables introspection into the calculations that have been performed under the hood
### ppscore.matrix(df, output="df", **kwargs)
Calculate the Predictive Power Score (PPS) matrix for all columns in the dataframe
#### Parameters
- __df__ : pandas.DataFrame
- The dataframe that contains the data
- __output__ : str - potential values: "df", "dict"
- Control the type of the output. Either return a df or a dict with all the PPS dicts arranged by the target column
- __kwargs__ :
- Other key-word arguments that shall be forwarded to the pps.score method
#### Returns
- __pandas.DataFrame__ or __Dict__:
- Either returns a df or a dict with all the PPS dicts arranged by the target column. This can be influenced by the output argument
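
For example, both functions can be called with the optional arguments documented above, using the dataframe from the Getting started section. The exact field names of the returned score dict can be inspected via `.keys()`:

```python
import ppscore as pps

# force the regression task and disable sampling for a single score
score_dict = pps.score(df, "x", "y", task="regression", sample=None)
print(score_dict.keys())

# return the full matrix as a dict of PPS dicts, arranged by target column
matrix_dict = pps.matrix(df, output="dict")
```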
## Calculation of the PPS
> If you are uncertain about some details, feel free to jump into the code to have a look at the exact implementation
There are multiple ways how you can calculate the PPS. The ppscore package provides a sample implementation that is based on the following calculations:
- The score is calculated using only 1 feature trying to predict the target column. This means there are no interaction effects between the scores of various features. Note that this is in contrast to feature importance
- The score is calculated on the test sets of a 4-fold cross-validation (the number of folds is adjustable via `ppscore.CV_ITERATIONS`)
- All rows which have a missing value in the feature or the target column are dropped
- In case that the dataset has more than 5,000 rows the score is only calculated on a random subset of 5,000 rows with a fixed random seed (`ppscore.RANDOM_SEED`). You can adjust the number of rows or skip this sampling via the API. However, in most scenarios the results will be very similar.
- There is no grid search for optimal model parameters
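
As an illustration of how these steps fit together, here is a minimal sketch of the sampling, row dropping, and cross-validated model evaluation described above. It is not the package's internal code, and the seed value is a placeholder:

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeRegressor

CV_ITERATIONS = 4     # number of cross-validation folds
SAMPLE_SIZE = 5_000   # rows used when the dataframe is larger than this
RANDOM_SEED = 0       # placeholder; the package defines its own fixed seed

def prepare_rows(df, x, y):
    # keep only the feature/target pair and drop rows with missing values
    df = df[[x, y]].dropna()
    # subsample large dataframes with a fixed random seed
    if len(df) > SAMPLE_SIZE:
        df = df.sample(SAMPLE_SIZE, random_state=RANDOM_SEED)
    return df

def cross_validated_mae(df, x, y):
    # evaluate a single-feature Decision Tree on the test folds of a 4-fold CV
    scores = cross_val_score(
        DecisionTreeRegressor(), df[[x]], df[y],
        cv=CV_ITERATIONS, scoring="neg_mean_absolute_error",
    )
    return -np.mean(scores)
```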
### Learning algorithm
As a learning algorithm, we currently use a Decision Tree because the Decision Tree has the following properties:
- can detect any non-linear bivariate relationship
- good predictive power in a wide variety of use cases
- low requirements for feature preprocessing
- robust model which can handle outliers and does not easily overfit
- can be used for classification and regression
- can be calculated quicker than many other algorithms
We differentiate the exact implementation based on the data type of the target column:
- If the target column is numeric, we use the sklearn.DecisionTreeRegressor
- If the target column is categoric, we use the sklearn.DecisionTreeClassifier
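
In other words, the estimator is picked purely from the inferred task, roughly like this:

```python
from sklearn.tree import DecisionTreeClassifier, DecisionTreeRegressor

def choose_model(task):
    # numeric target -> regression tree, categoric target -> classification tree
    if task == "regression":
        return DecisionTreeRegressor()
    return DecisionTreeClassifier()
```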
> Please note that we prefer generally good performance on a wide variety of use cases over better performance in some narrow use cases. If you have a proposal for a better/different learning algorithm, please open an issue
However, please note why we actively decided against the following algorithms:
- Correlation or Linear Regression: cannot detect non-linear bivariate relationships without extensive preprocessing
- GAMs: might have problems with very unsmooth functions
- SVM: potentially bad performance if the wrong kernel is selected
- Random Forest/Gradient Boosted Tree: slower than a single Decision Tree
- Neural Networks and Deep Learning: slower calculation than a Decision Tree and also needs more feature preprocessing
### Data preprocessing
Even though the Decision Tree is a very flexible learning algorithm, we need to perform the following preprocessing steps if a column has the pandas dtype `object`.
- If the target column is categoric, we use the sklearn.LabelEncoder
- If the feature column is categoric, we use the sklearn.OneHotEncoder
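
A sketch of that preprocessing for pandas Series with dtype `object` might look like this (the helper name is illustrative, not part of the package):

```python
from sklearn.preprocessing import LabelEncoder, OneHotEncoder

def preprocess(feature, target):
    # categoric targets are label-encoded into integer classes
    if target.dtype == object:
        target = LabelEncoder().fit_transform(target)
    # categoric features are one-hot encoded before fitting the tree
    if feature.dtype == object:
        feature = OneHotEncoder().fit_transform(
            feature.to_numpy().reshape(-1, 1)
        )
    return feature, target
```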
### Inference of the prediction task
The choice of the task (classification or regression) has an influence on the final PPS and thus it is important how the task is chosen. If you calculate a single score, you can pass in a specific task. If you do not specify the task, the task is inferred as follows.
A __classification__ is inferred if one of the following conditions meet:
- the target has the dtype `object` or `categorical`
- the target only has two unique values
- the target is numeric but has less than 15 unique values. This breakpoint can be overridden via the constant `ppscore.NUMERIC_AS_CATEGORIC_BREAKPOINT`
Otherwise, the task is inferred as __regression__ if the dtype is numeric (float or integer).
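
Expressed as a function, the inference rules above look roughly like this; `NUMERIC_AS_CATEGORIC_BREAKPOINT` is the adjustable constant mentioned above, with a default of 15:

```python
import pandas as pd

NUMERIC_AS_CATEGORIC_BREAKPOINT = 15

def infer_task(df, y):
    target = df[y]
    if target.dtype == object or str(target.dtype) == "category":
        return "classification"
    if target.nunique() == 2:
        return "classification"
    if pd.api.types.is_numeric_dtype(target) and \
            target.nunique() < NUMERIC_AS_CATEGORIC_BREAKPOINT:
        return "classification"
    return "regression"
```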
### Tasks and their score metrics
Based on the data type and cardinality of the target column, ppscore assumes either the task of a classification or regression. Each task uses a different evaluation score for calculating the final predictive power score (PPS).
#### Regression
In the case of a regression, ppscore uses the mean absolute error (MAE) as the underlying evaluation metric (MAE_model). The best possible score of the MAE is 0 and higher is worse. As a baseline score, we calculate the MAE of a naive model (MAE_naive) that always predicts the median of the target column. The PPS is the result of the following normalization (and never smaller than 0):
> PPS = 1 - (MAE_model / MAE_naive)
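
The baseline and the normalization can be written in a few lines; the `max(0.0, ...)` floor covers the case where the model is worse than the naive median predictor:

```python
def naive_mae(y_true):
    # baseline model: always predict the median of the target column
    median = y_true.median()
    return (y_true - median).abs().mean()

def normalized_mae_score(mae_model, mae_naive):
    # PPS = 1 - (MAE_model / MAE_naive), never smaller than 0
    return max(0.0, 1.0 - mae_model / mae_naive)
```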
#### Classification
If the task is a classification, we compute the weighted F1 score (wF1) as the underlying evaluation metric (F1_model). The F1 score can be interpreted as a weighted average of the precision and recall, where an F1 score reaches its best value at 1 and worst score at 0. The relative contribution of precision and recall to the F1 score are equal. The weighted F1 takes into account the precision and recall of all classes weighted by their support as described [here](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.f1_score.html). As a baseline score, we calculate the weighted F1 score of a naive model (F1_naive) that always predicts the most common class of the target column. The PPS is the result of the following normalization (and never smaller than 0):
> PPS = (F1_model - F1_naive) / (1 - F1_naive)
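
The classification baseline and normalization follow the same pattern:

```python
from sklearn.metrics import f1_score

def naive_weighted_f1(y_true):
    # baseline model: always predict the most common class
    most_common = y_true.mode()[0]
    baseline_predictions = [most_common] * len(y_true)
    return f1_score(y_true, baseline_predictions, average="weighted")

def normalized_f1_score(f1_model, f1_naive):
    # PPS = (F1_model - F1_naive) / (1 - F1_naive), never smaller than 0
    return max(0.0, (f1_model - f1_naive) / (1.0 - f1_naive))
```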
## About
ppscore is developed by [8080 Labs](https://8080labs.com) - we create tools for Python Data Scientists. If you like `ppscore`, please check out our other project [bamboolib](https://bamboolib.com)
| 46.225641 | 780 | 0.76481 | eng_Latn | 0.998476 |
93f59959907981faccb3efa773127f8085e611c5 | 873 | md | Markdown | content/docs/sound_freq.md | DivHub/pixtudio-website | 1e8470ea18dd42fc1f8c5011f98f94eaa06128c0 | [
"MIT"
] | 1 | 2021-07-14T05:37:54.000Z | 2021-07-14T05:37:54.000Z | content/docs/sound_freq.md | DivHub/pixtudio-website | 1e8470ea18dd42fc1f8c5011f98f94eaa06128c0 | [
"MIT"
] | null | null | null | content/docs/sound_freq.md | DivHub/pixtudio-website | 1e8470ea18dd42fc1f8c5011f98f94eaa06128c0 | [
"MIT"
] | null | null | null | <category:variables> <category:predefined> [category:global
variables](category:global_variables "wikilink") <category:mod_sound>
[**Up to Global Variables**](Global_variables "wikilink")
------------------------------------------------------------------------
Definition
----------
**INT** sound\_freq = 22050
**Sound\_freq** is a [global variable](global_variable "wikilink"),
holding the set sound frequency, which is set when sound is used for the
first time, meaning altering the value of this variable will have no
effect after sound has been initialized. The higher the frequency, the
higher the quality is. Accepted frequencies are:
- 44100: high quality (recommended)
- 22050: medium quality (default)
- 11025: low quality (not recommended)
See also
--------
- [sound\_mode](sound_mode "wikilink")
- [sound\_channels](sound_channels "wikilink")
| 30.103448 | 72 | 0.678121 | eng_Latn | 0.968179 |
93f5bf8bcf8cd8fd7c2483f1f808c5742663f7cc | 227 | md | Markdown | README.en.md | kuinaein/TohoBloodyMarathon | 62569bcd262bf33828b33e3893e4698c1b01c180 | [
"MIT"
] | null | null | null | README.en.md | kuinaein/TohoBloodyMarathon | 62569bcd262bf33828b33e3893e4698c1b01c180 | [
"MIT"
] | null | null | null | README.en.md | kuinaein/TohoBloodyMarathon | 62569bcd262bf33828b33e3893e4698c1b01c180 | [
"MIT"
] | null | null | null | # Copyright Notice
Files in [`/res`](./res) and [`/src/static/third-party`](./src/static/third-party) are <u>**NOT**</u> MIT-licensed.
They were provided by third-party authors and therefore secondary use is restricted.
| 37.833333 | 116 | 0.700441 | eng_Latn | 0.996063 |
93f5d2f4ca550afa904ea45a8bc7c3d236da892a | 106 | md | Markdown | app/templates/course_details/README.md | newacropolis-uk-website/frontend | b7ec023d35194d6d7652addb81043dab630248f3 | [
"MIT"
] | 2 | 2018-10-12T15:02:23.000Z | 2018-10-12T15:04:45.000Z | app/templates/course_details/README.md | newacropolis-uk-website/frontend | b7ec023d35194d6d7652addb81043dab630248f3 | [
"MIT"
] | 24 | 2018-11-08T17:16:27.000Z | 2021-11-13T01:12:11.000Z | app/templates/course_details/README.md | newacropolis-uk-website/frontend | b7ec023d35194d6d7652addb81043dab630248f3 | [
"MIT"
] | 2 | 2018-12-14T14:23:37.000Z | 2019-08-22T00:22:38.000Z | ## Content here uses Textile formatting
See https://textile-lang.com/ for details on how to format text.
| 26.5 | 64 | 0.764151 | eng_Latn | 0.936576 |
93f60278f0b183cece2bc87d4b0701bf2644e93e | 17,951 | markdown | Markdown | _posts/2007-06-11-apparatus-for-releasing-a-parachute-from-its-payload.markdown | api-evangelist/patents-2007 | da723589b6977a05c0119d5476325327da6c5a5c | [
"Apache-2.0"
] | 1 | 2017-11-15T11:20:53.000Z | 2017-11-15T11:20:53.000Z | _posts/2007-06-11-apparatus-for-releasing-a-parachute-from-its-payload.markdown | api-evangelist/patents-2007 | da723589b6977a05c0119d5476325327da6c5a5c | [
"Apache-2.0"
] | null | null | null | _posts/2007-06-11-apparatus-for-releasing-a-parachute-from-its-payload.markdown | api-evangelist/patents-2007 | da723589b6977a05c0119d5476325327da6c5a5c | [
"Apache-2.0"
] | 2 | 2019-10-31T13:03:32.000Z | 2020-08-13T12:57:02.000Z | ---
title: Apparatus for releasing a parachute from its payload
abstract: An apparatus for releasing a parachute from its payload upon ground impact by the payload. The apparatus has a pair of sections releasably secured to each other. Each section has an intermediate portion having a longitudinally extending axis and a spur receiving opening that extends through the intermediate portion and is transverse to the longitudinally extending axis, a first end portion attached to the intermediate portion and comprising a spur that extends in a generally lateral direction with respect to the longitudinally extending axis, and a second end portion attached to the intermediate portion such that the intermediate portion is between the first and second end portions. Each section is configured so that a lanyard can be connected to the section wherein in order to use the apparatus, a lanyard is attached to and between one section and a parachute, and another lanyard is attached to and between the other section and a payload.
url: http://patft.uspto.gov/netacgi/nph-Parser?Sect1=PTO2&Sect2=HITOFF&p=1&u=%2Fnetahtml%2FPTO%2Fsearch-adv.htm&r=1&f=G&l=50&d=PALL&S1=08025254&OS=08025254&RS=08025254
owner: United States of America as represented by the Secretary of the Army
number: 08025254
owner_city: Washington
owner_country: US
publication_date: 20070611
---
The invention described herein may be manufactured and used by or for the Government of the United States of America for Governmental purposes without payment of any royalties thereon or therefor.
The present invention generally relates to an apparatus for releasing a parachute from its payload upon ground impact by the payload.
Parachutes are frequently used to deliver payloads to specific locations on the ground when it is not possible for aircraft to land. Typically such locations are isolated and not accessible by other means of transportation. Parachutes have become one of the main forms of payload delivery to military or civilian personnel located in isolated areas.
If the parachute remains connected to the payload when the payload hits the ground winds or other air turbulence can cause the parachute to drag the payload over the ground. This can damage or destroy the payload.
Various prior art devices for releasing a parachute from its payload are described in U.S. Pat. Nos. 2,502,097, 2,562,459, 2,655,163, 2,732,245, 2,919,154, 4,619,424, and 5,687,931.
What is needed is a new and improved apparatus for instantly releasing a parachute from its payload upon ground impact by the payload.
It is a primary object of the present invention to provide a simple efficient durable strong and quick acting apparatus that instantly releases a parachute from a payload supported by the parachute when the payload impacts the ground thereby preventing wind or other air turbulence from blowing the parachute along the ground with the payload still attached thereto.
Other objects and advantages of the present invention will be apparent from the ensuing description and the accompanying drawings.
In accordance with aforesaid object the present invention is directed to an apparatus for releasing a parachute from its payload upon ground impact by the payload. In one embodiment this apparatus comprises a pair of sections releasably secured to each other. Each section comprises an intermediate portion having a longitudinally extending axis and a spur receiving opening that extends through the intermediate portion and is transverse to the longitudinally extending axis a first end portion attached to the intermediate portion and comprising a spur that extends in a generally lateral direction with respect to the longitudinally extending axis and a second end portion attached to the intermediate portion such that the intermediate portion is between the first and second end portions. Each section further comprises means for attaching a lanyard to the section wherein in order to use the apparatus a lanyard is attached between one of the sections and a parachute and another lanyard is attached between the other section and a payload. The apparatus further comprises components to forcefully release the sections from each other upon ground impact by a payload that is linked to one of the sections. The apparatus includes a device that releasably engages both sections to prevent the components from prematurely forcefully releasing the sections from each other in the absence of tension forces on the apparatus. The device is responsive to the opening of the parachute canopy such upon opening of the parachute canopy the device disengages from the sections so that the components will be free to forcefully release the sections from each other upon ground impact by the payload.
Referring to there is shown apparatus in accordance with one embodiment of the invention. Apparatus is configured for use with payloads that weigh between about one 1 pound and three thousand 3 000 pounds. Apparatus generally comprises sections and that are releasably secured together. Preferably sections and are substantially identical to each other in structure and geometry. Sections and can be fabricated from any one of a variety of materials that exhibit the required strength e.g. metal composites plastics etc. The particular material used to fabricate sections and is highly dependent upon the maximum force that will be applied to sections and .
Referring to A and B section comprises intermediate portion end portion and end portion . Intermediate portion is attached to and between end portions and . Intermediate portion comprises shank portion A and relatively wide portion B. Intermediate portion has slot and opening . Slot transverses longitudinally extending axis . Slot is in proximity to end portion and is sized to receive a lanyard line wire cord etc. that is either attached to a parachute or a payload. As used herein the term lanyard shall include line cable wire cord etc. Shank portion A includes a pair of openings and that are relatively smaller than opening . The purpose of openings and is discussed in the ensuing description. Curved leaf spring is connected to shank portion A and is positioned between openings and . End portion comprises spur that extends in a generally lateral direction with respect to longitudinally extending axis . Spur has a width that is less than the width of shank portion A. This difference in widths provides shoulders A and B. The purposes of shoulders A and B are discussed in the ensuing description. As shown in spur is angulated with respect to horizontal reference axis by an angle . Angle is between about 8 and 14 . However it is critical that the angle does not exceed 15 . The purpose of this configuration is discussed in the ensuing description.
Referring to section comprises intermediate portion end portion and end portion . Intermediate portion is attached to and between end portions and . Intermediate portion comprises shank portion A and relatively wide portion B. Intermediate portion has a slot and opening which perform the same functions as slot and opening respectively of section . Shank portion A has a pair of openings and that are relatively smaller than opening . The purpose of openings and are the same as the purpose of openings and respectively of shank portion A of section .
Referring to curved leaf spring is connected to shank portion A and located between openings and . The purpose of curved leaf spring is the same as that of curved leaf spring of section and is discussed in the ensuing description. End portion comprises spur . Spur has substantially the same shape as spur and also performs the same function as spur . Spur has a width that is less than the width of shank portion A. This difference in widths provides a pair of shoulders. One of these shoulders is shoulder . The other shoulder is not shown. The purpose of these shoulders is the same as the purpose of shoulders A and B. Spur is also angulated with respect to the longitudinally extending axis of section . In a preferred embodiment spur is angulated by the same angle to which spur see is angulated.
Referring to and sections and are configured to be releasably secured to each other. In order to releasably secure sections and together each section and is vertically oriented so that spur of section is aligned with opening in section and spur of section is aligned with opening of section . Next spur is inserted into opening of section and spur is inserted into opening of section . Sections and are pressed together so that curved leaf springs and become compressed. A safety tie is inserted into through openings of sections and openings and of section and configured to compress sections and together so as to counter the opposite force produced by curved leaf springs and . Thus safety tie keeps sections and releasably secured together when tension forces do not exist on apparatus thereby preventing sections and from being prematurely released from one another prior to the opening of the parachute canopy. Lanyard is connected to safety tie and to a parachute canopy not shown . A lanyard is attached to section via slot and then fastened to a payload not shown . Lanyard is fastened to section via slot and to a parachute not shown . Although sections and are shown to have slot and slot respectively for attaching lanyards other alternate configurations can be used. For example instead of slots the sections and can be configured to have eye hooks carabiners etc.
Referring to during the opening phase of the parachute a tension force is created in lanyard which breaks safety tie . After safety tie is broken sections and are held together by the combination of the shallow angle of spurs and and the tension exerted on sections and by the force of the parachute and the payload. Upon ground impact by the payload tension on apparatus instantly decreases to zero at which time curved leaf springs and force sections and apart causing spurs and to become dislodged from openings and respectively thereby instantly releasing sections and from each other. Section remains connected to the parachute via lanyard and section remains connected to the payload via lanyard . As a result the parachute is separated from the payload. Thus apparatus effects instant release of the parachute from the payload after ground impact so that the payload will not be dragged by the parachute.
If upon ground impact by the payload spur becomes dislodged from opening before spur becomes dislodged from opening shoulders A and B prevent the entire shank section A of section from sliding through opening . Similarly if upon ground impact by the payload spur becomes dislodged from opening before spur becomes dislodged from opening the shoulders e.g. shoulder of end portion prevent the entire shank section A of section from sliding through opening .
In a preferred embodiment sections and are substantially identical in shape and structure. Thus it does not matter which of these sections is attached to the parachute or the payload. Apparatus can be connected to one or more parachutes.
Referring to there is shown an apparatus for releasing a parachute from its payload in accordance with another embodiment of the present invention. Apparatus is configured to be used with payloads that are heavier than 3000 pounds. Apparatus comprises section and section . In a preferred embodiment sections and are substantially identical in structure and shape.
Referring to section generally comprises intermediate portion end portion and opposite end portion . End portions and are attached to intermediate portion such that intermediate portion is between end portions and . Section has slanted opening which extends through the thickness of section and which transverses longitudinally extending axis . Slanted opening is sized for receiving spur of section see . Section further includes bore and threaded bore which also extend through the thickness of section and transverse longitudinally extending axis . Bore is sized for receiving a safety lock pin assembly that is described in the ensuing description. Threaded bore is sized for receiving a plug screw assembly which is also described in the ensuing description. End portion comprises spur that is angulated with respect to longitudinally extending axis by angle . Angle is between about 8 and 14 . It is critical that angle does not exceed 15 . End portion is angulated with respect to longitudinally extending axis by angle . It is critical that angle does not exceed 12 . Section further includes opening that receives a lanyard line wire cable rope etc. that is attached to either a parachute or payload. As used herein the term lanyard shall include line wire cable rope etc. In a preferred embodiment opening is configured as a slot.
Referring to section generally comprises intermediate portion and end portions and that are attached to intermediate portion . Intermediate portion is between end portions and . Intermediate portion has slanted opening that extends through the thickness of section and transverses longitudinally extending axis . The purpose of slanted opening is sized to receive spur of section see . Section further includes threaded bore and bore that extend through the thickness of section . Threaded bore is sized for a receiving plug screw assembly which is described in the ensuing description. Bore is sized for receiving a safety lock pin assembly that is also described in the ensuing description. End portion comprises spur that is angulated in the same manner as spur of section . End portion is angulated in the same manner as end of section . Section further includes opening that receives lanyard that is attached to either a parachute or payload. In a preferred embodiment opening is configured as a slot see .
Although sections and are shown to have slot and slot respectively for attaching lanyards other alternate configurations can be used. For example instead of slots the sections and can be configured to have eye hooks carabiners etc.
Referring to in order to releasably secure sections and together spur of section is inserted into slanted opening of section and spur of section is inserted into slanted opening of section . Next safety lock pin and ejection spring are inserted into bore . Ejection spring is mounted on safety lock pin . Trigger pin is inserted into a cavity or bore that is near distal end of safety lock pin . Next plug screw assemblies are inserted into threaded bores and of sections and respectively. Specifically a plug screw assembly comprising plug screw separation spring and contact member are inserted into threaded bore . Spring is mounted on contact member . Plug screw is screwed into threaded bore until contact member abuts intermediate portion of section and spring is significantly compressed between plug screw and contact member . Similarly a plug screw assembly comprising plug screw separation spring and contact member are inserted into threaded bore . Spring is mounted on contact member . Plug screw spring and contact member function in the same manner as plug screw spring and contact member respectively. Plug screw is screwed into threaded bore until contact member abuts intermediate portion of section and spring is significantly compressed between plug screw and contact member .
As shown in a lanyard is fastened to a parachute canopy not shown and inserted through an opening in trigger pin . Cable stop is connected to the end of lanyard . Another lanyard not shown is fastened to section via opening and to a payload not shown . A further lanyard not shown is fastened to section via opening and to the parachute not shown . During the opening phase of the parachute canopy tension forces are applied to apparatus which force sections and tightly together. Lanyard is drawn up by the parachute canopy until cable stop contacts trigger pin . Further movement of lanyard withdraws trigger pin from safety lock pin . Once this occurs ejection spring ejects safety lock pin from bore thereby arming the apparatus such that sections and will be free to separate from one another once the payload impacts the ground. As the parachute and payload descend tension is applied to apparatus . This tension in combination with the placement of angled spurs and in slanted openings and respectively keep sections and secured together during descent. Upon ground impact of the payload the tension force between the parachute and payload instantly decreases to zero thereby allowing separation springs and to force sections and apart such that spurs and are ejected from openings and respectively. When this occurs sections and are completely separated from each other section remains connected to the parachute and section remains connected to the payload.
Thus apparatus effects instant release of the parachute from the payload after ground impact so that the payload will not be dragged or overturned. Thus the payload is prevented from being damaged or destroyed.
Since sections and are identical in geometry and construction it does not matter which section is attached to the parachute or payload. Apparatus can be connected to one or more parachutes.
The apparatus of the present invention is scalable in size mass and strength depending upon load requirements
The foregoing description of the preferred embodiments of the invention has been presented for purposes of illustration and description only. It is neither intended to be exhaustive nor to limit the invention to the precise form disclosed and obviously many modifications and variations are possible in light of the above teaching. Such modifications and variations that may be apparent to a person skilled in the art are intended to be included within the scope of this invention as defined by the accompanying claims.
| 271.984848 | 1,694 | 0.821347 | eng_Latn | 0.999983 |
93f6a1202a0ce3d89607e0789ecb69c62b511d0f | 355 | md | Markdown | split1/_posts/2013-11-16-Om suvarnaya namaha 11 times.md | gbuk21/HinduGodsEnglish | b883907ef10e8beaa6030070fc2b8181152def47 | [
"MIT"
] | null | null | null | split1/_posts/2013-11-16-Om suvarnaya namaha 11 times.md | gbuk21/HinduGodsEnglish | b883907ef10e8beaa6030070fc2b8181152def47 | [
"MIT"
] | null | null | null | split1/_posts/2013-11-16-Om suvarnaya namaha 11 times.md | gbuk21/HinduGodsEnglish | b883907ef10e8beaa6030070fc2b8181152def47 | [
"MIT"
] | null | null | null | ---
layout: post
last_modified_at: 2021-03-30
title: Om Suvarnaya namaha 11 times
youtubeId: VVqbaWPFc3M
---
Om Suvarnaya namaha
- Who is of the golden colour
{% include youtubePlayer.html id=page.youtubeId %}
[Next]({{ site.baseurl }}{% link split1/_posts/2013-11-15-Om sarva dehinaam indriyaya namaha 11 times.md%})
| 12.678571 | 108 | 0.667606 | eng_Latn | 0.437591 |
93f74d339ec030cac627212b85dda4edf40d8a37 | 1,823 | md | Markdown | azure-monitor-ref/tables/hdinsightsparkstagetaskaccumulables.md | noueh/azure-reference-other | 9bd1ec0471450ba92b1aa24fdbb6b1995c68ee45 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | azure-monitor-ref/tables/hdinsightsparkstagetaskaccumulables.md | noueh/azure-reference-other | 9bd1ec0471450ba92b1aa24fdbb6b1995c68ee45 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | azure-monitor-ref/tables/hdinsightsparkstagetaskaccumulables.md | noueh/azure-reference-other | 9bd1ec0471450ba92b1aa24fdbb6b1995c68ee45 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Azure Monitor Logs reference - HDInsightSparkStageTaskAccumulables
description: Reference for HDInsightSparkStageTaskAccumulables table in Azure Monitor Logs.
ms.topic: reference
ms.service: azure-monitor
ms.subservice: logs
ms.author: bwren
author: bwren
ms.date: 8/19/2021
---
# HDInsightSparkStageTaskAccumulables
Spark Stage Task Accumulables.
## Categories
- Azure Resources
## Solutions
- LogManagement
## Resource types
- HDInsight Clusters
## Columns
|Column|Type|Description|
|---|---|---|
|ApplicationId|string|The application ID of the application producing the record.|
|ClusterDnsName|string|The DNS name of the cluster where the metric was collected.|
|ClusterTenantId|string|The tenant ID of the cluster where the metric was collected.|
|Entity|string|The name of the entity being described.|
|EntityId|string|The ID of the entity.|
|Host|string|The FQDN of the host where the metric was collected.|
|IpAddress|string|The IP Address of the node where the metric was collected.|
|MetricId|string|The ID of the metric.|
|MetricName|string|The name of the metric.|
|MetricValue|long|The value of the metric.|
|ParentId|string|The ID of the parent entity.|
|Region|string|The region of the cluster where the metric was collected.|
|_ResourceId|string|A unique identifier for the resource that the record is associated with|
|Role|string|The type of node where the metric was collected.|
|SourceSystem|string||
|_SubscriptionId|string|A unique identifier for the subscription that the record is associated with|
|TenantId|string||
|TimeGenerated|datetime|The timestamp (UTC) of when the log was generated.|
|Type|string|The name of the table|
|UserSubscriptionId|string|The subscription ID of the cluster where the metric was collected.|
| 34.396226 | 101 | 0.766868 | eng_Latn | 0.943389 |
93f769c1870ce94a8510838f812b5e353c7b80fb | 19,263 | md | Markdown | articles/event-hubs/event-hubs-features.md | Microsoft/azure-docs.sv-se | a43cb26da920952026f5e9c8720f3356a84de75b | [
"CC-BY-4.0",
"MIT"
] | 7 | 2017-08-28T08:02:11.000Z | 2021-05-05T07:47:55.000Z | articles/event-hubs/event-hubs-features.md | MicrosoftDocs/azure-docs.sv-se | a43cb26da920952026f5e9c8720f3356a84de75b | [
"CC-BY-4.0",
"MIT"
] | 476 | 2017-10-15T08:20:18.000Z | 2021-04-16T05:20:11.000Z | articles/event-hubs/event-hubs-features.md | MicrosoftDocs/azure-docs.sv-se | a43cb26da920952026f5e9c8720f3356a84de75b | [
"CC-BY-4.0",
"MIT"
] | 39 | 2017-08-03T09:46:48.000Z | 2021-11-05T11:41:27.000Z | ---
title: Översikt över funktioner – Azure Event Hubs | Microsoft Docs
description: Den här artikeln innehåller information om funktioner och terminologi i Azure Event Hubs.
ms.topic: article
ms.date: 03/15/2021
ms.openlocfilehash: 8ec4b7cdd13c3407747261ef54cb6b1fc58fdb69
ms.sourcegitcommit: b4fbb7a6a0aa93656e8dd29979786069eca567dc
ms.translationtype: MT
ms.contentlocale: sv-SE
ms.lasthandoff: 04/13/2021
ms.locfileid: "107310095"
---
# <a name="features-and-terminology-in-azure-event-hubs"></a>Funktioner och terminologi i Azure Event Hubs
Azure Event Hubs är en skalbar tjänst för händelse bearbetning som matar in och bearbetar stora mängder händelser och data, med låg latens och hög tillförlitlighet. Se [Vad är Event Hubs?](./event-hubs-about.md) för en översikt på hög nivå.
Den här artikeln bygger på informationen i [översikts artikeln](./event-hubs-about.md)och ger teknisk och implementerings information om Event Hubs-komponenter och-funktioner.
> [!TIP]
> [Protokoll stödet för **Apache Kafka** -klienter](event-hubs-for-kafka-ecosystem-overview.md) (versioner >= 1,0) tillhandahåller nätverks slut punkter som gör det möjligt för program som skapats att använda Apache Kafka med vilken klient som helst som använder Event Hubs. De flesta befintliga Kafka-program kan enkelt konfigureras om så att de pekar på ett Event Hub-namnområde i stället för en Kafka-kluster Start Server.
>
>Azure Event Hubs är ett bra alternativ till att distribuera och driva dina egna Kafka-och Zookeeper-kluster och till Kafka-as-a-service-erbjudanden som inte är inbyggda i Azure.
>
> Förutom att hämta samma grund funktioner som Apache Kafka Broker får du också till gång till Azure Event Hub-funktioner som automatisk batchbearbetning och arkivering via [Event Hubs avbildning](event-hubs-capture-overview.md), automatisk skalning och balansering, haveri beredskap, kostnads neutral tillgänglighets zons support, flexibel och säker nätverks integrering och stöd för flera protokoll, inklusive det brand Väggs vänliga AMQP-över-WebSockets-protokollet.
## <a name="namespace"></a>Namnområde
Ett Event Hubs-namnområde ger DNS-integrerade nätverks slut punkter och en uppsättning åtkomst kontroll-och hanterings funktioner för nätverks integrering, till exempel [IP-filtrering](event-hubs-ip-filtering.md), [tjänst slut punkt för virtuellt nätverk](event-hubs-service-endpoints.md)och [privat länk](private-link-service.md) och är hanterings behållaren för en av flera Event Hub-instanser (eller ämnen, i Kafka parlance).
## <a name="event-publishers"></a>Händelseutfärdare
En entitet som skickar data till en Event Hub är en *händelse utgivare* (som används synonymt i *händelse producenten*). Händelse utgivare kan publicera händelser med hjälp av HTTPS eller AMQP 1,0 eller Kafka-protokollet. Händelse utgivare använder Azure Active Directory baserad auktorisering med OAuth2 JWT-token eller en Event Hub-speciell signatur för delad åtkomst-token (SAS) med publicerings åtkomst.
### <a name="publishing-an-event"></a>Publicera en händelse
Du kan publicera en händelse via AMQP 1,0, Kafka-protokollet eller HTTPS. Event Hubs tjänsten tillhandahåller klient biblioteken [REST API](/rest/api/eventhub/) och [.net](event-hubs-dotnet-standard-getstarted-send.md), [Java](event-hubs-java-get-started-send.md), [python](event-hubs-python-get-started-send.md), [Java Script](event-hubs-node-get-started-send.md)och [Go](event-hubs-go-get-started-send.md) för att publicera händelser till en Event Hub. För andra körningar och plattformar kan du använda alla AMQP 1.0-klienter, t.ex. [Apache Qpid](https://qpid.apache.org/).
Valet att använda AMQP eller HTTPS är specifikt för användningsscenariot. AMQP kräver en beständig dubbelriktad socket och dessutom säkerhet på transportnivå (TLS) eller SSL/TLS. AMQP har högre nätverks kostnader när sessionen initieras, men HTTPS kräver ytterligare TLS-kostnader för varje begäran. AMQP har betydligt högre prestanda för frekventa utgivare och kan uppnå mycket lägre fördröjning när den används med asynkron publicerings kod.
Du kan publicera händelser individuellt eller i batch. En enda publikation har en gräns på 1 MB, oavsett om det är en enskild händelse eller en batch. Publicerings händelser som är större än det här tröskelvärdet avvisas.
Event Hubs data flöde skalas med hjälp av partitioner och allokeringar av enhets enheter (se nedan). Det är en bra idé för utgivare att vara medveten om den speciella partitionering modell som valts för en Event Hub och bara ange en *partitionsnyckel* som används för att konsekvent tilldela relaterade händelser till samma partition.

Event Hubs garanterar att alla händelser som delar ett nyckel värde lagras tillsammans och levereras i den ordning de anländer. Om partitionsnycklar används med utfärdarprinciper måste utfärdarens identitet och partitionsnyckelns värde matcha varandra. Annars uppstår ett fel.
### <a name="event-retention"></a>Kvarhållning av händelser
Publicerade händelser tas bort från en Händelsehubben baserat på en konfigurerbar, tidsbaserad bevarande princip. Här följer några viktiga punkter:
- **Standardvärdet** och **kortast** möjliga kvarhållningsperiod är **1 dag (24 timmar)**.
- För Event Hubs **standard** är den maximala kvarhållningsperioden **7 dagar**.
- För Event Hubs **dedikerad** är den högsta kvarhållningsperioden **90 dagar**.
- Om du ändrar kvarhållningsperioden gäller den för alla meddelanden, inklusive meddelanden som redan finns i händelsehubben.
Event Hubs behåller händelser för en konfigurerad kvarhållningsperiod som gäller för alla partitioner. Händelser tas bort automatiskt när kvarhållningsperioden har nåtts. Om du anger en kvarhållningsperiod på en dag blir händelsen otillgänglig exakt 24 timmar efter att den har accepterats. Du kan inte uttryckligen ta bort händelser.
Om du behöver arkivera händelser utöver den tillåtna kvarhållningsperioden, kan du låta dem [lagras automatiskt i Azure Storage eller Azure Data Lake genom att aktivera funktionen för Event Hubs Capture](event-hubs-capture-overview.md)och om du behöver söka efter eller analysera sådana djup arkiv kan du [enkelt importera dem till Azure Synapse](store-captured-data-data-warehouse.md) eller andra liknande butiker och analys plattformar.
Orsaken till Event Hubs gräns för datakvarhållning baserat på tid är att förhindra att stora mängder historiska kund data får fångas i en djup lagring som bara indexeras av en tidsstämpel och endast tillåter sekventiell åtkomst. Arkitektur filosofin här är att historiska data behöver bättre indexering och direkt åtkomst än i real tids händelse gränssnittet som Event Hubs eller Kafka tillhandahåller. Händelse Ströms motorer är inte väl lämpade för att spela upp rollen som data sjöar eller långsiktiga Arkiv för händelse källor.
> [!NOTE]
> Event Hubs är en händelse Ströms motor i real tid och är inte avsedd att användas i stället för en databas och/eller som ett permanent Arkiv för händelse strömmar med oändligt kvarhållna händelser.
>
> Den djupare historiken för en händelse ström hämtar, desto mer behöver du hjälp index för att hitta en viss historisk sektor i en given data ström. Granskning av händelse nytto laster och indexering ingår inte i funktions omfånget för Event Hubs (eller Apache Kafka). Databaser och specialiserade analys lager och motorer som [Azure Data Lake Store](../data-lake-store/data-lake-store-overview.md), [Azure Data Lake Analytics](../data-lake-analytics/data-lake-analytics-overview.md) och [Azure Synapse](../synapse-analytics/overview-what-is.md) är därför mycket bättre lämpade för att lagra historiska händelser.
>
> [Event Hubs avbildningen](event-hubs-capture-overview.md) integreras direkt med Azure Blob Storage och Azure Data Lake Storage och, med denna integrering, aktiverar även [flödes händelser direkt till Azure-Synapse](store-captured-data-data-warehouse.md).
>
> Om du vill använda mönstret för [händelse källor](/azure/architecture/patterns/event-sourcing) för ditt program bör du justera din ögonblicks bilds strategi med lagrings gränserna för Event Hubs. Du behöver inte skapa om materialiserade vyer från obehandlade händelser som börjar vid början av tiden. Surely kommer att behöva ångra en sådan strategi när ditt program är i produktion för ett tag och är väl använt och din projektion Builder måste omsättningen genom flera ändrings händelser samtidigt som du försöker fånga upp till de senaste och pågående ändringarna.
### <a name="publisher-policy"></a>Utgivarprincip
Med händelsehubbar får du granulär kontroll över utgivare via *utgivarprinciper*. Utfärdarprinciper är körningsfunktioner som utformats för att ge stöd för ett stort antal oberoende utfärdare. Med utgivarprinciper använder varje utgivare sin egen unika identifierare vid publicering av händelser på en händelsehubb med hjälp av följande mekanism:
```http
//<my namespace>.servicebus.windows.net/<event hub name>/publishers/<my publisher name>
```
Du behöver inte skapa utgivarnamnen i förväg, men de måste matcha SAS-token som används när du publicerar en händelse för att garantera oberoende utgivaridentiteter. När du använder utgivarprinciper ställs **PartitionKey**-värdet in på utgivarens namn. Dessa värden måste matcha för att fungera korrekt.
## <a name="capture"></a>Capture
Med [Event Hubs Capture](event-hubs-capture-overview.md) kan du automatiskt samla in strömmande data i Event Hubs och spara dem på ditt val av antingen ett Blob Storage-konto eller ett Azure Data Lake tjänst konto. Du kan aktivera avbildning från Azure Portal och ange en minimi storlek och tids period för att utföra avbildningen. Med Event Hubs avbildningen anger du ditt eget Azure Blob Storage-konto och-behållare, eller Azure Data Lake tjänst konto som används för att lagra insamlade data. Insamlade data skrivs i formatet Apache Avro.
## <a name="partitions"></a>Partitioner
[!INCLUDE [event-hubs-partitions](../../includes/event-hubs-partitions.md)]
## <a name="sas-tokens"></a>SAS-token
Event Hubs använder *signaturer för delad åtkomst*, som är tillgängliga i namn området och händelse Hub-nivån. En SAS-token genereras från en SAS-nyckel och är en SHA-hash för en URL som kodats i ett specifikt format. Med hjälp av namnet på nyckeln (principen) och token kan Event Hubs återgenerera hash-värdet och därmed autentisera avsändaren. Normalt skapas SAS-token för händelse utgivare med endast **Skicka** -privilegier för en speciell händelsehubben. Den här URL-mekanismen med SAS-token är den grund för utfärdaridentifiering som presenterades i principen för utfärdare. Mer information om hur du arbetar med SAS finns i [autentisering med signatur för delad åtkomst med Service Bus](../service-bus-messaging/service-bus-sas.md).
## <a name="event-consumers"></a>Händelsekonsumenter
En entitet som läser händelse data från en Event Hub är en *händelse konsument*. Alla Event Hubs-konsumenter ansluter via AMQP 1.0-sessionen och händelser levereras via sessionen när de blir tillgängliga. Klienten behöver inte söka efter datatillgänglighet.
### <a name="consumer-groups"></a>Konsumentgrupper
Publicerings-/prenumerationsmekanismen för Event Hubs aktiveras via *konsumentgrupper*. En konsumentgrupp är en vy (tillstånd, position eller offset) av en hel händelsehubb. Konsumentgrupper gör det möjligt för flera användningsprogram att vart och ett ha en separat vy över händelseströmmen och att oberoende läsa strömmen i egen takt och med sina egna offset.
Inom en arkitektur för strömbearbetning utgör varje nedströms program en konsumentgrupp. Om du vill skriva händelsedata till långsiktig lagring utgör programmet för att skriva data till lagring en konsumentgrupp. Komplex händelsebearbetning kan sedan utföras av en annan, separat konsumentgrupp. Du får bara åtkomst till en partition via en konsumentgrupp. Det finns alltid en förinställd konsumentgrupp i en händelsehubb, och du kan skapa upp till 20 konsumentgrupper för en händelsehubb på standardnivå.
Det får finnas högst 5 samtidiga läsare på en partition per konsument grupp. **vi rekommenderar dock att det bara finns en aktiv mottagare på en partition per konsument grupp**. Varje läsare tar emot alla meddelanden inom en enda partition. Om du har flera läsare på samma partition behandlar du duplicerade meddelanden. Du måste hantera detta i din kod, vilket inte kan vara trivialt. Det är dock en giltig metod i vissa scenarier.
Vissa klienter som erbjuds av Azure SDK: er är intelligenta konsument agenter som automatiskt hanterar information om att se till att varje partition har en enda läsare och att alla partitioner för en Event Hub läses från. På så sätt kan din kod fokusera på bearbetning av de händelser som läses från händelsehubben så att det går att ignorera många av information om partitionerna. Mer information finns i [ansluta till en partition](#connect-to-a-partition).
I följande exempel visas URL-konventionen för konsument gruppen:
```http
//<my namespace>.servicebus.windows.net/<event hub name>/<Consumer Group #1>
//<my namespace>.servicebus.windows.net/<event hub name>/<Consumer Group #2>
```
Följande bild visar strömhanteringsarkitekturen i Event Hubs:

### <a name="stream-offsets"></a>Offsets för strömmar
En *förskjutning* är en händelses position i en partition. Föreställ dig en offset som en markör på klientsidan. Denna offset är en byte-numrering av händelsen. Med den så kan en händelsekonsument (läsare) ange vid vilken punkt i händelseströmmen som läsningen ska starta. Du kan ange offseten som en tidsstämpel eller ett offset-värde. Konsumenterna ansvarar för att lagra sina egna offset-värden utanför händelsehubbtjänsten. Inom en partition innehåller varje händelse en offset.

### <a name="checkpointing"></a>Kontrollpunkter
*Att skapa kontrollpunkter* är en process genom vilken läsare markerar eller sparar sin position inom en händelsesekvens i en partition. Att skapa kontrollpunkter är konsumentens ansvar och görs för varje partition i en konsumentgrupp. Det här ansvaret innebär att varje läsare i partitionen måste hålla reda på sin nuvarande position i händelseströmmen för varje konsumentgrupp. Läsaren kan sedan informera tjänsten när de anser att dataströmmen är klar.
Om en läsare kopplar från en partition och den sedan återansluts kan han börja läsa vid den kontrollpunkt som tidigare skickades in av den senaste läsaren i den aktuella partitionen inom just den konsumentgruppen. När läsaren ansluter skickar den förskjutningen till händelsehubben för att ange den plats där du vill börja läsa. På så sätt kan du använda kontrollpunkter både till att markera händelser som ”klara” i underordnade program och som skydd i händelse av en redundansväxling mellan läsare som körs på olika datorer. Du kan återgå till äldre data genom att ange en lägre offset i den här kontrollpunktsprocessen. Den här mekanismen möjliggör både återhämtning vid redundansväxlingar och återuppspelning av händelseströmmar.
> [!IMPORTANT]
> Förskjutningar tillhandahålls av Event Hubss tjänsten. Det är konsumentens ansvar att kontroll punkten ska bearbetas som händelser.
> [!NOTE]
> Om du använder Azure Blob Storage som kontroll punkts Arkiv i en miljö som har stöd för en annan version av Storage BLOB SDK än vad som normalt är tillgängligt på Azure, måste du använda kod för att ändra Storage Service API-versionen till den version som stöds av den aktuella miljön. Om du till exempel kör [Event Hubs på en Azure Stack hubb version 2002](/azure-stack/user/event-hubs-overview)är den högsta tillgängliga versionen för lagrings tjänsten version 2017-11-09. I så fall måste du använda kod för att rikta Storage Service API-versionen till 2017-11-09. Ett exempel på hur du riktar in en speciell Storage API-version finns i följande exempel på GitHub:
> - [.Net](https://github.com/Azure/azure-sdk-for-net/tree/master/sdk/eventhub/Azure.Messaging.EventHubs.Processor/samples/).
> - [Java](https://github.com/Azure/azure-sdk-for-java/blob/master/sdk/eventhubs/azure-messaging-eventhubs-checkpointstore-blob/src/samples/java/com/azure/messaging/eventhubs/checkpointstore/blob/)
> - [Java Script](https://github.com/Azure/azure-sdk-for-js/blob/master/sdk/eventhub/eventhubs-checkpointstore-blob/samples/javascript) eller [typescript](https://github.com/Azure/azure-sdk-for-js/blob/master/sdk/eventhub/eventhubs-checkpointstore-blob/samples/typescript)
> - [Python](https://github.com/Azure/azure-sdk-for-python/blob/master/sdk/eventhub/azure-eventhub-checkpointstoreblob-aio/samples/)
### <a name="common-consumer-tasks"></a>Vanliga konsumentuppgifter
Alla Event Hubs konsumenter ansluter via en AMQP 1,0-session, en tillstånds medveten dubbelriktad kommunikations kanal. Varje partition har en AMQP 1.0-session som gör det lättare att flytta händelser som åtskiljs av partitioner.
#### <a name="connect-to-a-partition"></a>Ansluta till en partition
När du ansluter till partitioner är det vanligt att använda en operationell mekanism för att koordinera läsar anslutningar till vissa partitioner. På så sätt är det möjligt att varje partition i en konsument grupp bara har en aktiv läsare. Kontroll punkter, leasing och hantering av läsare förenklas med hjälp av klienterna i Event Hubs SDK: er som fungerar som intelligenta konsument agenter. Dessa är:
- [EventProcessorClient](/dotnet/api/azure.messaging.eventhubs.eventprocessorclient) för .net
- [EventProcessorClient](/java/api/com.azure.messaging.eventhubs.eventprocessorclient) för Java
- [EventHubConsumerClient](/python/api/azure-eventhub/azure.eventhub.aio.eventhubconsumerclient) för python
- [EventHubConsumerClient](/javascript/api/@azure/event-hubs/eventhubconsumerclient) för Java Script/typescript
#### <a name="read-events"></a>Läsa händelser
När en AMQP 1.0-session och -länk har öppnats för en specifik partition, levereras händelser till AMQP 1.0-klienten av händelsehubbtjänsten. Den här leveransmekanismen gör det möjligt med ett högre genomflöde och kortare svarstid än i pull-baserade mekanismer som t.ex. HTTP GET. När händelser skickas till klienten innehåller varje instans av händelsedata viktiga metadata, till exempel offset- och sekvensnumret som används för att göra det lättare att skapa kontrollpunkter i händelsesekvensen.
Händelsedata:
* Offset
* Sekvensnummer
* Brödtext
* Användaregenskaper
* Systemegenskaper
Det är ditt ansvar att hantera positionen (offset).
## <a name="next-steps"></a>Nästa steg
Besök följande länkar för mer utförlig information om Event Hubs:
- Kom igång med händelsehubbar
- [.NET](event-hubs-dotnet-standard-getstarted-send.md)
- [Java](event-hubs-java-get-started-send.md)
- [Python](event-hubs-python-get-started-send.md)
- [JavaScript](event-hubs-node-get-started-send.md)
* [Programmerings guide för Event Hubs](event-hubs-programming-guide.md)
* [Tillgänglighet och konsekvens i Event Hubs](event-hubs-availability-and-consistency.md)
* [Vanliga frågor och svar om Event Hubs](event-hubs-faq.yml)
* [Event Hubs exempel](event-hubs-samples.md) | 106.425414 | 739 | 0.80761 | swe_Latn | 0.999903 |
93f79d67c36c26f08383c8694d2823fcc6a11fca | 153 | md | Markdown | README.md | shoaibkhanz/Scikit-Learn-a-complete-machine-learning-book | bf5ec510da3cdb29c91fff541674fba9e4a05893 | [
"MIT"
] | null | null | null | README.md | shoaibkhanz/Scikit-Learn-a-complete-machine-learning-book | bf5ec510da3cdb29c91fff541674fba9e4a05893 | [
"MIT"
] | null | null | null | README.md | shoaibkhanz/Scikit-Learn-a-complete-machine-learning-book | bf5ec510da3cdb29c91fff541674fba9e4a05893 | [
"MIT"
] | null | null | null | # Scikit-Learn: a complete machine learning book
This is a book for a comprehensive course on scikit-learn, accompanied by videos on YouTube.
| 51 | 103 | 0.810458 | eng_Latn | 0.994152 |
93f7cfbeffa20aee888acc59bf12b80c6d6726f3 | 267 | md | Markdown | .github/PULL_REQUEST_TEMPLATE.md | Savvy/SourceBin | f4b622cdcdf43986e04cb0a1f4fdfed4aa2d6fe1 | [
"ISC"
] | 37 | 2020-02-22T16:04:07.000Z | 2022-02-20T14:59:49.000Z | .github/PULL_REQUEST_TEMPLATE.md | DeCarlos-Stigger/SourceBin | 0415e3941480427a8970e312e6bf259bcb8b1296 | [
"MIT"
] | 15 | 2020-03-11T11:37:46.000Z | 2021-11-05T21:50:01.000Z | .github/PULL_REQUEST_TEMPLATE.md | DeCarlos-Stigger/SourceBin | 0415e3941480427a8970e312e6bf259bcb8b1296 | [
"MIT"
] | 12 | 2020-10-21T23:26:29.000Z | 2022-01-25T19:22:17.000Z | **Please describe the changes made in this PR and why it should be merged:**
**Information:**
- [ ] This PR changes interaction with data in localStorage, cookies or cache
- [ ] This PR **only** includes non-code changes, like changes to documentation, README, etc.
| 44.5 | 93 | 0.734082 | eng_Latn | 0.996604 |
93f93c32e09d94aa050cd378448636425579be38 | 1,808 | md | Markdown | biztalk/core/bam-dts-packages.md | changeworld/biztalk-docs.zh-CN | 0ee8ca09b377aa26a13e0f200c75fca467cd519c | [
"CC-BY-4.0",
"MIT"
] | null | null | null | biztalk/core/bam-dts-packages.md | changeworld/biztalk-docs.zh-CN | 0ee8ca09b377aa26a13e0f200c75fca467cd519c | [
"CC-BY-4.0",
"MIT"
] | null | null | null | biztalk/core/bam-dts-packages.md | changeworld/biztalk-docs.zh-CN | 0ee8ca09b377aa26a13e0f200c75fca467cd519c | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: BAM DTS 包 |Microsoft Docs
ms.custom: ''
ms.date: 06/08/2017
ms.prod: biztalk-server
ms.reviewer: ''
ms.suite: ''
ms.tgt_pltfrm: ''
ms.topic: article
helpviewer_keywords:
- DTS packages, BAM
- BAM, DTS packages
ms.assetid: bba70d81-6ddf-4f1f-a1f7-d5a5bf453bae
caps.latest.revision: 8
author: MandiOhlinger
ms.author: mandia
manager: anneta
ms.openlocfilehash: 1e49741a0fce6fd69e4e2ba5d8bb8dbd1956a0e8
ms.sourcegitcommit: 266308ec5c6a9d8d80ff298ee6051b4843c5d626
ms.translationtype: MT
ms.contentlocale: zh-CN
ms.lasthandoff: 06/27/2018
ms.locfileid: "37015846"
---
# <a name="bam-dts-packages"></a>BAM DTS 包
管理员可以更新以下 BAM DTS 包的参数:
- **CubeUpdate** Data Transformation Services (DTS) 包始终位于星型架构数据库所在的同一服务器上。
- **DataMaintenance** DTS 包始终位于主导入数据库所在的同一服务器上。
DTS 包使用 BAMConfiguration.xml 文件中的以下参数。
|参数|Description|
|---------------|-----------------|
|ConnectionTimeOut|DTS 连接超时值(以秒为单位)是一个整数。 如果省略 ConnectionTimeOut 参数,则该配置文件使用默认值 60 秒。|
|加密|默认情况下,DTS 包在转换数据时不加密数据(即 Encryption 值为 0)。 将 Encryption 设置为 1 可以在转换时进行加密。|
|OwnerPassword|DTS 包所有者的密码。 DTS 包所有者可以打开和修改 DTS 包。 有关 DTS 包所有者的信息,请参阅 SQL Server 联机丛书。|
|UserPassword|DTS 用户的密码。 DTS 包的用户可以运行 DTS 包。 有关 DTS 包的用户的信息,请参阅 SQL Server 联机丛书。|
在 BAMConfiguration.xml 文件中,DTS 包使用以下命名约定:
- **CubeUpdate** DTS 包
**bam_AN_\<** ***多维数据集名称* \>** ,其中 CubeName 是多维数据集的名称。 BAM 工作簿从视图名称生成该多维数据集名称。 如果在 BAM 配置 XML 文档中修改该多维数据集名称,则将在 DTS 包名称中使用新的多维数据集名称。
- **DataMaintenance** DTS 包
**bam_DM_\<** ***ActivityName* \>**,其中 ActivityName 是活动的名称。
您可以运行 CubeUpdate DTS 包来聚合计划的聚合。 在下一部分中,您可以为实时数据聚合指定时段。
## <a name="see-also"></a>请参阅
[BAM 配置架构](../core/bam-configuration-schema.md)
[BAM 安全建议](../core/bam-security-recommendations.md)
[管理 BAM](../core/managing-bam.md) | 32.285714 | 138 | 0.704646 | yue_Hant | 0.954 |
93f9846210507c815339c9a6fbcd4420e3fb05a3 | 167 | md | Markdown | README.md | ayieko168/RP-Software-Serial | 991f0a52fb561468d226b6b5037eb6a4a590968b | [
"MIT"
] | null | null | null | README.md | ayieko168/RP-Software-Serial | 991f0a52fb561468d226b6b5037eb6a4a590968b | [
"MIT"
] | null | null | null | README.md | ayieko168/RP-Software-Serial | 991f0a52fb561468d226b6b5037eb6a4a590968b | [
"MIT"
] | null | null | null | # RP-Software-Serial
This project tries to bring the functionality of the Arduino SoftwareSerial project to the Raspberry Pi, using the GPIO pins as serial RX/TX ports; a rough sketch of the idea is shown below.
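Below is a hedged sketch of the transmit side of the bit-banging idea in Python. The pin number, baud rate and helper names are illustrative assumptions rather than this project's actual API, and `time.sleep`-based timing is too jittery on a non-realtime OS for anything beyond low baud rates.
```python
# Minimal software-serial TX sketch (8N1 framing), for illustration only.
import time
import RPi.GPIO as GPIO

TX_PIN = 23               # assumed BCM pin used as the software-serial TX line
BAUD_RATE = 9600
BIT_TIME = 1.0 / BAUD_RATE

GPIO.setmode(GPIO.BCM)
GPIO.setup(TX_PIN, GPIO.OUT, initial=GPIO.HIGH)   # idle line is held high

def send_byte(byte):
    """Send one byte: start bit (0), 8 data bits LSB-first, stop bit (1)."""
    bits = [0] + [(byte >> i) & 1 for i in range(8)] + [1]
    for bit in bits:
        GPIO.output(TX_PIN, GPIO.HIGH if bit else GPIO.LOW)
        time.sleep(BIT_TIME)

def send_string(text):
    for char in text.encode("ascii"):
        send_byte(char)

try:
    send_string("hello from the Pi\r\n")
finally:
    GPIO.cleanup()
```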
| 55.666667 | 145 | 0.814371 | eng_Latn | 0.992685 |
93f9f3039bca7f59db8b6929d9450a7621d05bca | 7,794 | md | Markdown | articles/sql-database/sql-database-command-line-tools.md | wesmc7777/azure-content | 46ff014f473bf6b195d47911186cc238f79ff826 | [
"CC-BY-3.0"
] | null | null | null | articles/sql-database/sql-database-command-line-tools.md | wesmc7777/azure-content | 46ff014f473bf6b195d47911186cc238f79ff826 | [
"CC-BY-3.0"
] | null | null | null | articles/sql-database/sql-database-command-line-tools.md | wesmc7777/azure-content | 46ff014f473bf6b195d47911186cc238f79ff826 | [
"CC-BY-3.0"
] | null | null | null | <properties
pageTitle="Manage Azure SQL Database with PowerShell"
description="Azure SQL Database Manage with PowerShell."
services="sql-database"
documentationCenter=""
authors="stevestein"
manager="jeffreyg"
editor="monicar"/>
<tags
ms.service="sql-database"
ms.workload="data-management"
ms.tgt_pltfrm="na"
ms.devlang="na"
ms.topic="article"
ms.date="10/08/2015"
ms.author="sstein; vinsonyu"/>
# Manage Azure SQL Database with PowerShell
> [AZURE.SELECTOR]
- [Azure Preview Portal](sql-database-manage-portal.md)
- [Transact-SQL (SSMS)](sql-database-manage-azure-ssms.md)
- [PowerShell](sql-database-command-line-tools.md)
This topic provides PowerShell commands to perform many Azure SQL Database tasks.
> [AZURE.IMPORTANT] Starting with the release of Azure PowerShell 1.0 Preview, the Switch-AzureMode cmdlet is no longer available, and cmdlets that were in the Azure ResourceManger module have been renamed. The examples in this article use the new PowerShell 1.0 Preview naming conventions. For detailed information, see [Deprecation of Switch-AzureMode in Azure PowerShell](https://github.com/Azure/azure-powershell/wiki/Deprecation-of-Switch-AzureMode-in-Azure-PowerShell).
To run PowerShell cmdlets, you need to have Azure PowerShell installed and running, and due to the removal of Switch-AzureMode, you should download and install the latest Azure PowerShell by running the [Microsoft Web Platform Installer](http://go.microsoft.com/fwlink/p/?linkid=320376&clcid=0x409). For detailed information, see [How to install and configure Azure PowerShell](../powershell-install-configure.md).
## Configure your credentials
To run PowerShell cmdlets against your Azure subscription you must first establish access to your Azure account. Run the following and you will be presented with a sign in screen to enter your credentials. Use the same email and password that you use to sign in to the Azure portal.
Add-AzureAccount
After successfully signing in you should see some information on screen that includes the Id you signed in with and the Azure subscriptions you have access to.
## Select your Azure subscription
To select the subscription you want to work with you need your subscription Id (**-SubscriptionId**) or subscription name (**-SubscriptionName**). You can copy it from the previous step, or if you have multiple subscriptions you can run the **Get-AzureSubscription** cmdlet and copy the desired subscription information from the resultset.
Run the following cmdlet with your subscription information to set your current subscription:
Select-AzureSubscription -SubscriptionId 4cac86b0-1e56-bbbb-aaaa-000000000000
The following commands will be run against the subscription you just selected above.
## Create a resource group
Create the resource group that will contain the server. You can edit the next command to use any valid location.
For a list of valid Azure SQL Database server locations run the following cmdlets:
$AzureSQLLocations = Get-AzureRMLocation | Where-Object Name -Like "*SQL/Servers"
$AzureSQLLocations.Locations
If you already have a resource group you can jump ahead to create a server, or you can edit and run the following command to create a new resource group:
New-AzureRMResourceGroup -Name "resourcegroupJapanWest" -Location "Japan West"
## Create a server
To create a new V12 server use the [New-AzureRMSqlServer](https://msdn.microsoft.com/library/azure/mt603715.aspx) cmdlet. Replace server12 with the name for your server. It must be unique to Azure SQL Servers so you will get an error here if the server name is already taken. Also worth noting is that this command may take several minutes to complete. The server details and PowerShell prompt will appear after the server is successfully created. You can edit the command to use any valid location.
New-AzureRMSqlServer -ResourceGroupName "resourcegroupJapanWest" -ServerName "server12" -Location "Japan West" -ServerVersion "12.0"
When you run this command a window opens asking for a **User name** and **Password**. These are not your Azure credentials; enter the user name and password that will serve as the administrator credentials for the new server.
## Create a server firewall rule
To create a firewall rule to access the server use the [New-AzureRMSqlServerFirewallRule](https://msdn.microsoft.com/library/azure/mt603860.aspx) command. Run the following command replacing the start and end IP addresses with valid values for your client.
If your server needs to allow access to other Azure services, add the **-AllowAllAzureIPs** switch that will add a special firewall rule and allow all azure traffic access to the server.
New-AzureRMSqlServerFirewallRule -ResourceGroupName "resourcegroupJapanWest" -ServerName "server12" -FirewallRuleName "clientFirewallRule1" -StartIpAddress "192.168.0.198" -EndIpAddress "192.168.0.199"
For more information, see [Azure SQL Database Firewall](https://msdn.microsoft.com/library/azure/ee621782.aspx).
## Create a SQL database
To create a database use the [New-AzureRMSqlDatabase](https://msdn.microsoft.com/library/azure/mt619339.aspx) command. You need a server to create a database. The following example creates a SQL database named TestDB12. The database is created as a Standard S1 database.
New-AzureRMSqlDatabase -ResourceGroupName "resourcegroupJapanWest" -ServerName "server12" -DatabaseName "TestDB12" -Edition Standard -RequestedServiceObjectiveName "S1"
## Change the performance level of a SQL database
You can scale your database up or down with the [Set-AzureRMSqlDatabase](https://msdn.microsoft.com/library/azure/mt619433.aspx) command. The following example scales up a SQL database named TestDB12 from its current performance level to a Standard S3 level.
Set-AzureRMSqlDatabase -ResourceGroupName "resourcegroupJapanWest" -ServerName "server12" -DatabaseName "TestDB12" -Edition Standard -RequestedServiceObjectiveName "S3"
## Delete a SQL database
You can delete a SQL database with the [Remove-AzureRMSqlDatabase](https://msdn.microsoft.com/library/azure/mt619368.aspx) command. The following example deletes a SQL database named TestDB12.
Remove-AzureRMSqlDatabase -ResourceGroupName "resourcegroupJapanWest" -ServerName "server12" -DatabaseName "TestDB12"
## Delete a server
You can also delete a server with the [Remove-AzureRMSqlServer](https://msdn.microsoft.com/library/azure/mt603488.aspx) command. The following example deletes a server named server12.
Remove-AzureRMSqlServer -ResourceGroupName "resourcegroupJapanWest" -ServerName "server12"
If you will be creating these Azure SQL resources again, or similar ones, you can:
- Save this as a PowerShell script file (*.ps1)
- Save this as an Azure automation runbook in the Automation section of the Azure Management Portal
## Next Steps
Combine commands and automate. For example, replace everything within the quotes, including the < and > characters, with your values to create a server, firewall rule and database:
New-AzureRMResourceGroup -Name "<resourceGroupName>" -Location "<Location>"
New-AzureRMSqlServer -ResourceGroupName "<resourceGroupName>" -ServerName "<serverName>" -Location "<Location>" -ServerVersion "12.0"
New-AzureRMSqlServerFirewallRule -ResourceGroupName "<resourceGroupName>" -ServerName "<serverName>" -FirewallRuleName "<firewallRuleName>" -StartIpAddress "<192.168.0.198>" -EndIpAddress "<192.168.0.199>"
New-AzureRMSqlDatabase -ResourceGroupName "<resourceGroupName>" -ServerName "<serverName>" -DatabaseName "<databaseName>" -Edition <Standard> -RequestedServiceObjectiveName "<S1>"
## Related Information
- [Azure SQL Database Cmdlets](https://msdn.microsoft.com/library/azure/mt574084.aspx) | 59.045455 | 500 | 0.794714 | eng_Latn | 0.922076 |
93f9f526234bb6a03ad79e09febfc42029bbeadc | 8,561 | md | Markdown | docs/framework/data/adonet/oracle-data-type-mappings.md | badbadc0ffee/docs.de-de | 50a4fab72bc27249ce47d4bf52dcea9e3e279613 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/framework/data/adonet/oracle-data-type-mappings.md | badbadc0ffee/docs.de-de | 50a4fab72bc27249ce47d4bf52dcea9e3e279613 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/framework/data/adonet/oracle-data-type-mappings.md | badbadc0ffee/docs.de-de | 50a4fab72bc27249ce47d4bf52dcea9e3e279613 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Oracle-Datentypzuordnungen
ms.date: 03/30/2017
ms.assetid: ec34ae21-bbbb-4adb-b672-83865e2a8451
ms.openlocfilehash: be478741069e9edd406d73c0b75d5960b9909896
ms.sourcegitcommit: d2e1dfa7ef2d4e9ffae3d431cf6a4ffd9c8d378f
ms.translationtype: MT
ms.contentlocale: de-DE
ms.lasthandoff: 09/07/2019
ms.locfileid: "70783421"
---
# <a name="oracle-data-type-mappings"></a>Oracle-Datentypzuordnungen
In der folgenden Tabelle werden Oracle-Datentypen und ihre Zuordnungen zum <xref:System.Data.OracleClient.OracleDataReader> aufgelistet.
|Oracle-Datentyp|Von OracleDataReader.GetValue zurückgegebener .NET Framework-Datentyp|Von OracleDataReader.GetOracleValue zurückgegebener OracleClient-Datentyp|Hinweise|
|----------------------|--------------------------------------------------------------------|------------------------------------------------------------------------|-------------|
|**BFILE**|**Byte[]**|<xref:System.Data.OracleClient.OracleBFile>||
|**BLOB**|**Byte[]**|<xref:System.Data.OracleClient.OracleLob>||
|**CHAR**|**String**|<xref:System.Data.OracleClient.OracleString>||
|**CLOB**|**String**|<xref:System.Data.OracleClient.OracleLob>||
|**DATE**|**DateTime**|<xref:System.Data.OracleClient.OracleDateTime>||
|**FLOAT**|**Decimal**|<xref:System.Data.OracleClient.OracleNumber>|Dieser Datentyp ist ein Alias für den **Number** -Datentyp. er ist so konzipiert, <xref:System.Data.OracleClient.OracleDataReader> dass ein **System. Decimal** oder <xref:System.Data.OracleClient.OracleNumber> anstelle eines Gleit Komma Werts zurückgibt. Die Verwendung des .NET Framework-Datentyps kann zu einem Überlauf führen.|
|**ZAH**|**Decimal**|<xref:System.Data.OracleClient.OracleNumber>|Dieser Datentyp ist ein Alias für den **Number (38)** -Datentyp und ist so konzipiert, dass <xref:System.Data.OracleClient.OracleDataReader> ein **System. Decimal** oder <xref:System.Data.OracleClient.OracleNumber> anstelle eines ganzzahligen Werts zurückgibt. Die Verwendung des .NET Framework-Datentyps kann zu einem Überlauf führen.|
|**INTERVALL JAHR UND MONAT**|**Int32**|<xref:System.Data.OracleClient.OracleMonthSpan>||
|**INTERVALL (TAG BIS SEKUNDE)**|**TimeSpan**|<xref:System.Data.OracleClient.OracleTimeSpan>||
|**LONG**|**String**|<xref:System.Data.OracleClient.OracleString>||
|**LANGE ROHDATEN**|**Byte[]**|<xref:System.Data.OracleClient.OracleBinary>||
|**NCHAR**|**String**|<xref:System.Data.OracleClient.OracleString>||
|**NCLOB**|**String**|<xref:System.Data.OracleClient.OracleLob>||
|**EINIGEN**|**Decimal**|<xref:System.Data.OracleClient.OracleNumber>|Die Verwendung des .NET Framework-Datentyps kann zu einem Überlauf führen.|
|**NVARCHAR2**|**String**|<xref:System.Data.OracleClient.OracleString>||
|**RAW**|**Byte[]**|<xref:System.Data.OracleClient.OracleBinary>||
|**REF CURSOR**|||Der Oracle **ref Cursor** -Datentyp wird vom <xref:System.Data.OracleClient.OracleDataReader> -Objekt nicht unterstützt.|
|**ROWID**|**String**|<xref:System.Data.OracleClient.OracleString>||
|**TIMESTAMP**|**DateTime**|<xref:System.Data.OracleClient.OracleDateTime>||
|**ZEITSTEMPEL MIT LOKALER ZEITZONE**|**DateTime**|<xref:System.Data.OracleClient.OracleDateTime>||
|**ZEITSTEMPEL MIT ZEITZONE**|**DateTime**|<xref:System.Data.OracleClient.OracleDateTime>||
|**GANZZAHL OHNE VORZEICHEN**|**Zahl**|<xref:System.Data.OracleClient.OracleNumber>|Dieser Datentyp ist ein Alias für den **Number (38)** -Datentyp und ist so konzipiert, dass <xref:System.Data.OracleClient.OracleDataReader> ein **System. Decimal** oder <xref:System.Data.OracleClient.OracleNumber> anstelle eines ganz Zahl Werts ohne Vorzeichen zurückgibt. Die Verwendung des .NET Framework-Datentyps kann zu einem Überlauf führen.|
|**VARCHAR2**|**String**|<xref:System.Data.OracleClient.OracleString>||
In der folgenden Tabelle werden die Oracle-Datentypen und die .NET Framework Datentypen (**System. Data. DbType** und <xref:System.Data.OracleClient.OracleType>) aufgelistet, die verwendet werden können, wenn Sie als Parameter gebunden werden.
|Oracle-Datentyp|DbType-Enumeration, die als Parameter gebunden werden soll|OracleType-Enumeration, die als Parameter gebunden werden soll|Hinweise|
|----------------------|-----------------------------------------------|---------------------------------------------------|-------------|
|**BFILE**||**BFILE**|Oracle ermöglicht nur das Binden einer **BFILE** als **BFILE** -Parameter. Der .NET-Datenanbieter für Oracle erstellt nicht automatisch eine für Sie, wenn Sie versuchen, einen anderen als einen**BFILE** -Wert (z. b. <xref:System.Data.OracleClient.OracleBinary> **Byte [] oder)** zu binden.|
|**BLOB**||**Blob**|Oracle ermöglicht nur das Binden eines **BLOBs** als **BLOB** -Parameter. Der .NET-Datenanbieter für Oracle erstellt nicht automatisch eine für Sie, wenn Sie versuchen, einen nicht-**BLOB** -Wert wie z. b. **Byte []** oder <xref:System.Data.OracleClient.OracleBinary>zu binden.|
|**CHAR**|**AnsiStringFixedLength**|**Char**||
|**CLOB**||**Clob**|Oracle ermöglicht nur das Binden eines **CLOB** als **CLOB** -Parameter. Der .NET-Datenanbieter für Oracle erstellt nicht automatisch einen für Sie, wenn Sie versuchen, einen nicht-**CLOB** -Wert wie z. b **. System. String** oder <xref:System.Data.OracleClient.OracleString>zu binden.|
|**DATE**|**DateTime**|**DateTime**||
|**FLOAT**|**Single, Double, Decimal**|**Float, Double, Number**|<xref:System.Data.OracleClient.OracleParameter.Size%2A>bestimmt den **System. Data. DbType** und <xref:System.Data.OracleClient.OracleType>.|
|**ZAH**|**SByte, Int16, Int32, Int64, Decimal**|**SByte, Int16, Int32, Zahl**|<xref:System.Data.OracleClient.OracleParameter.Size%2A>bestimmt den **System. Data. DbType** und <xref:System.Data.OracleClient.OracleType>.|
|**INTERVALL JAHR UND MONAT**|**Int32**|**Intervalyeartomonth**|<xref:System.Data.OracleClient.OracleType> ist nur verfügbar, wenn sowohl die Oracle 9i-Client- als auch die Oracle 9i-Serversoftware verwendet wird.|
|**INTERVALL (TAG BIS SEKUNDE)**|**Objekt**|**IntervalDayToSecond**|<xref:System.Data.OracleClient.OracleType> ist nur verfügbar, wenn sowohl die Oracle 9i-Client- als auch die Oracle 9i-Serversoftware verwendet wird.|
|**LONG**|**AnsiString**|**LongVarChar**||
|**LANGE ROHDATEN**|**Binary**|**LongRaw**||
|**NCHAR**|**StringFixedLength**|**NChar**||
|**NCLOB**||**NClob**|Oracle ermöglicht nur das Binden eines **NCLOB** als **NCLOB** -Parameter. Der .NET-Datenanbieter für Oracle erstellt nicht automatisch einen für Sie, wenn Sie versuchen, einen nicht-**NCLOB** -Wert wie z. b **. System. String** oder <xref:System.Data.OracleClient.OracleString>zu binden.|
|**EINIGEN**|**VarNumeric**|**Zahl**||
|**NVARCHAR2**|**String**|**NVarChar**||
|**RAW**|**Binary**|**Stoffes**||
|**REF CURSOR**||**Hand**|Weitere Informationen finden Sie unter [Oracle-REF Cursors](oracle-ref-cursors.md).|
|**ROWID**|**AnsiString**|**ROWID**||
|**TIMESTAMP**|**DateTime**|**Zeitstempel**|<xref:System.Data.OracleClient.OracleType> ist nur verfügbar, wenn sowohl die Oracle 9i-Client- als auch die Oracle 9i-Serversoftware verwendet wird.|
|**ZEITSTEMPEL MIT LOKALER ZEITZONE**|**DateTime**|**TimestampLocal**|<xref:System.Data.OracleClient.OracleType> ist nur verfügbar, wenn sowohl die Oracle 9i-Client- als auch die Oracle 9i-Serversoftware verwendet wird.|
|**ZEITSTEMPEL MIT ZEITZONE**|**DateTime**|**TimestampWithTz**|<xref:System.Data.OracleClient.OracleType> ist nur verfügbar, wenn sowohl die Oracle 9i-Client- als auch die Oracle 9i-Serversoftware verwendet wird.|
|**GANZZAHL OHNE VORZEICHEN**|**Byte, UInt16, UInt32, UInt64, Decimal**|**Byte, UInt16, UInt32, Zahl**|<xref:System.Data.OracleClient.OracleParameter.Size%2A>bestimmt den **System. Data. DbType** und <xref:System.Data.OracleClient.OracleType>.|
|**VARCHAR2**|**AnsiString**|**VarChar**||
Der input **Output**-, **Output**-und **ReturnValue** - **ParameterDirection** -Wert, <xref:System.Data.OracleClient.OracleParameter.Value%2A> der von der <xref:System.Data.OracleClient.OracleParameter> -Eigenschaft des-Objekts verwendet wird, sind .NET Framework-Datentypen, es sei denn, der Eingabe Wert ist ein Oracle-Datentyp (für Beispiel, <xref:System.Data.OracleClient.OracleNumber> oder <xref:System.Data.OracleClient.OracleString>). Dies gilt nicht für die Datentypen **ref Cursor**, **BFILE**oder **LOB** .
## <a name="see-also"></a>Siehe auch
- [Oracle und ADO.NET](oracle-and-adonet.md)
- [Übersicht über ADO.NET](ado-net-overview.md)
| 114.146667 | 519 | 0.715454 | yue_Hant | 0.472386 |
93fb0249f49e4b73f97aa59326c7ce7ca3c760ef | 554 | md | Markdown | README.md | arickuter/arcade-game | 2f15d1c8eb6732cb8d5d5febdb1f8be5decc999f | [
"MIT"
] | null | null | null | README.md | arickuter/arcade-game | 2f15d1c8eb6732cb8d5d5febdb1f8be5decc999f | [
"MIT"
] | null | null | null | README.md | arickuter/arcade-game | 2f15d1c8eb6732cb8d5d5febdb1f8be5decc999f | [
"MIT"
] | null | null | null | Frogger arcade game
===============================
## How to Run
### Download the project and open index.html in any browser to run the game.
## Movement
### To move the player use the up, down, left and right arrow keys
## Objective
### The objective of the game is to get to the water without touching the enemies
## Restarting
### You can restart the game by clicking the restart button
## Score
### Your score is increased by reaching the water. Your deaths are increased when you touch the enemies, so try and avoid them at all costs! Have fun! | 34.625 | 150 | 0.700361 | eng_Latn | 0.999816 |
93fcad17facd5401275900c7067a964307bfe064 | 8,717 | md | Markdown | _posts/aws/2019-05-29-AWS-Lambda-HowTo.md | dwilliams-armory/knowledge-base | a4e76ebe4dd798ce75b23ba761aa80ffeb4d637e | [
"MIT"
] | null | null | null | _posts/aws/2019-05-29-AWS-Lambda-HowTo.md | dwilliams-armory/knowledge-base | a4e76ebe4dd798ce75b23ba761aa80ffeb4d637e | [
"MIT"
] | null | null | null | _posts/aws/2019-05-29-AWS-Lambda-HowTo.md | dwilliams-armory/knowledge-base | a4e76ebe4dd798ce75b23ba761aa80ffeb4d637e | [
"MIT"
] | null | null | null | ---
date: 2019-05-29
title: AWS Lambda & Custom Webhook Stages
categories:
- AWS
description: How to enable AWS Lambda and use a custom webhook stage to update your Lambda code.
type: Document
---
## Background
Back in December of 2018, support for AWS Lambda was added and released in Spinnaker OSS v1.12. The challenge, however, was that there were no UI (Deck) components to support the usage of Lambda. Instead, a [README.md](https://github.com/spinnaker/clouddriver/blob/master/clouddriver-lambda/README.md) was published that specified the API to make changes to Lambda.
This document will show you how to create a custom webhook stage that utilizes the Lambda API built into Clouddriver. This document also assumes that you have configured an AWS account. If you need more instructions on configuring your AWS account please refer to either [Deploying to AWS from Spinnaker (using IAM instance roles)](https://docs.armory.io/spinnaker-install-admin-guides/add-aws-account-iam/) or [Deploying to AWS from Spinnaker (using IAM credentials)](https://docs.armory.io/spinnaker-install-admin-guides/add-aws-account/)
### Note:
This is a proof of concept implementation and is not recommended for production use!
## Enable AWS Lambda in Spinnaker
First, we need to enable Lambda by adding a `clouddriver-local.yml` file to your hal config profiles directory e.g. `.hal/default/profiles/clouddriver-local.yml`
(As of Halyard OSS version 1.20, there is no support for adding Lambda configurations via hal commands.)
```yaml
aws:
lambda:
enabled: true
accounts:
- name: aws
lambdaEnabled: true
providerVersion: V1
accountId: '555692138000'
regions:
- name: us-east-1
- name: us-west-2
assumeRole: role/awaylambda05242019-managedrole
```
You can check your configuration by running `hal deploy apply` and check the clouddriver logs (e.g. `kubectl logs -f -n spinnaker spin-clouddriver-xxxxx`) for any start-up errors. Refer to the [Debugging](#debugging) section for checking your deployment.
## Adding Custom Webhook Stage
Next, we need to add a custom webhook stage. Alternatively, you could use a simple webhook stage.
Following this guide on [Custom Webhook Stages](https://www.spinnaker.io/guides/operator/custom-webhook-stages/), you should add `.hal/default/profiles/orca-local.yml` with the following content:
```yaml
webhook:
preconfigured:
- label: Lambda - Get Functions
type: lambdaGetFunctions
enabled: true
description: Get Lambda Functions
method: GET
url: http://spin-clouddriver:7002/functions
customHeaders:
Accept:
- "application/json"
- label: Lambda - Update Function Code
type: lambdaUpdateFunctionCode
enabled: true
description: Update Lambda Function Code
method: POST
url: http://spin-clouddriver:7002/aws/ops/updateLambdaFunctionCode
customHeaders:
Accept:
- "application/json"
Content-Type:
- "application/json"
payload: |-
{
"credentials": "${#root['parameterValues']['account']}",
"region": "${#root['parameterValues']['region']}",
"functionName": "${#root['parameterValues']['functionName']}",
"s3Bucket": "${#root['parameterValues']['bucketname']}",
"s3Key": "${#root['parameterValues']['key']}",
"publish": "${#root['parameterValues']['publish']}"
}
parameters:
- label: Spinnaker Account Name
name: account
type: string
- label: Region
name: region
type: string
- label: Function Name
name: functionName
type: string
- label: S3 Bucket Name
name: bucketname
type: string
- label: S3 Key
name: key
type: string
- label: Publish
name: publish
type: string
- label: Lambda - Update Function Configuration
type: lambdaUpdateFunctionConfig
enabled: true
description: Update Lambda Function Configuration
method: POST
url: http://spin-clouddriver:7002/aws/ops/updateLambdaFunctionConfiguration
customHeaders:
Accept:
- "application/json"
Content-Type:
- "application/json"
payload: |-
{
"region": "${#root['parameterValues']['region']}",
"functionName": "${#root['parameterValues']['functionName']}",
"description": "${#root['parameterValues']['description']}",
"credentials": "${#root['parameterValues']['account']}",
"role": "${#root['parameterValues']['roleARN']}",
"timeout": "${#root['parameterValues']['timeout']}"
}
parameters:
- label: Region
name: region
type: string
- label: Function Name
name: functionName
type: string
- label: Description
name: description
type: string
- label: Spinnaker Account Name
name: account
type: string
- label: Role ARN
name: roleARN
type: string
- label: Timeout (secs)
name: timeout
type: string
```
You might notice that the `parameterValues` are being referenced with a `#root` helper function. This is to help ensure that Orca can evaluate the expressions using the parameter values from within the stage.
Run `hal deploy apply` and you should see a new orca pod. And if you want to verify you can exec into orca, and find the orca-local.yml file under `/opt/spinnaker/config/`
## Creating your Pipeline
After you've deployed your changes to orca, you should now be able to see the new stages when configuring your pipeline.
Simply select the stage, and provide the values: example below:
<a href="https://cl.ly/1d686d0f23ed" target="_blank"><img src="https://d2ddoduugvun08.cloudfront.net/items/2h1d3d2V3I0C162j0K1t/Image%202019-05-29%20at%201.13.08%20PM.png" style="display: block;height: auto;width: 100%;"/></a>
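The `Lambda - Update Function Code` stage only points Clouddriver at a deployment zip that must already be available in S3; producing and uploading that artifact is up to your build process. A hedged Python/boto3 sketch of the upload step is below. The bucket and key mirror the example values used in the curl section further down and are placeholders for your own artifact location.
```python
# Illustrative only: upload the Lambda deployment package to the S3 location
# that the "Update Function Code" stage parameters will reference.
import boto3

BUCKET = "armory-sales-away"                 # placeholder bucket from the example
KEY = "lambdacode/lambdacode-v0.1.zip"       # placeholder key from the example

s3 = boto3.client("s3", region_name="us-west-2")
s3.upload_file("lambdacode-v0.1.zip", BUCKET, KEY)   # local zip built elsewhere

# Optional sanity check that the object landed where the stage expects it.
head = s3.head_object(Bucket=BUCKET, Key=KEY)
print(head["ContentLength"], "bytes uploaded")
```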
## Referencing values from `Get Functions`
You can use SpEL to get values from the response to the `Lambda - Get Functions` stage. Here is an example SpEL expression to get a function name.
`${#stage("Lambda - Get Functions").context.webhook.body[0].functionName}`
Check the 'source' of the pipeline to see the contents of the webhook - or reference the Get Functions output below in the [curl](#curl) section.
## Debugging
For debugging your aws lambda setup, check the clouddriver logs: e.g. `kubectl logs -f -n spinnaker spin-clouddriver-xxxxx`
For debugging your custom webhook, check the logs for both orca and clouddriver.
### Debugging Tools
You can deploy a [debugging pod](https://github.com/armory/docker-debugging-tools) into the same cluster and namespace where Spinnaker lives. This pod contains useful tools that are missing in the Spinnaker micro-services to help you debug your deployment. Check out the [debugging-tools repo](https://github.com/armory/docker-debugging-tools).
### curl
With the debugging pod deployed, you can exec into the pod and run the following calls to test if Lambda is properly configured in Spinnaker. The key point to note here is that the base URL is pointing to clouddriver (e.g. http://spin-clouddriver:7002)
```bash
curl -X GET --header 'Accept: application/json' 'http://spin-clouddriver:7002/functions' | jq .
```
This command requests the Lambda functions in the AWS account and pipes the output to jq to produce a nicely formatted, human-readable result.
You should expect an output similar to the following:
```json
{
"account": "aws",
"codeSha256": "gxyL5FY9DLxavA+MMBAFrsnhL7SRL/CiSwciLGGWBCI=",
"codeSize": 262,
"description": "",
"eventSourceMappings": [],
"functionArn": "arn:aws:lambda:us-west-2:555692138000:function:helloArmoryfx",
"functionName": "helloArmoryfx",
"handler": "index.handler",
"lastModified": "2019-05-28T15:12:40.330+0000",
"layers": [],
"memorySize": 128,
"region": "us-west-2",
"revisionId": "4a799ab5-9cf2-491c-a07f-129247a27b4e",
"revisions": {
"4a799ab5-9cf2-491c-a07f-129247a27b4e": "$LATEST"
},
"role": "arn:aws:iam::555692138000:role/service-role/helloArmoryfx-role-y9j7d2ll",
"runtime": "nodejs10.x",
"timeout": 10,
"tracingConfig": {
"mode": "PassThrough"
},
"version": "$LATEST"
}
```
From here you can test other commands such as Lambda update code.
```bash
curl -X POST \
http://spin-clouddriver:7002/aws/ops/updateLambdaFunctionCode \
-H 'Accept: application/json' \
-H 'Content-Type: application/json' \
-d '{
"region": "us-west-2",
"functionName": "helloArmoryfx",
"credentials": "aws",
"s3Bucket": "armory-sales-away",
"s3Key": "lambdacode/lambdacode-v0.1.zip",
"publish": "true"
}'
``` | 40.924883 | 543 | 0.692669 | eng_Latn | 0.864368 |
93fd5ee5e47f5e44bc4523556a8d92c2041042f1 | 222 | md | Markdown | _honors/2012_fct_grant.md | renatopanda/renatopanda.github.io | 78f01ca66730823af58ac50f45354f0be0c6c195 | [
"MIT"
] | null | null | null | _honors/2012_fct_grant.md | renatopanda/renatopanda.github.io | 78f01ca66730823af58ac50f45354f0be0c6c195 | [
"MIT"
] | null | null | null | _honors/2012_fct_grant.md | renatopanda/renatopanda.github.io | 78f01ca66730823af58ac50f45354f0be0c6c195 | [
"MIT"
] | null | null | null | ---
layout: post
badge: grant awarded
title: FCT PhD Scholarship
description: Ph.D. Scholarship from the <i>Portuguese Fundação para a Ciência e Tecnologia</i> (FCT), SFRH/BD/91523/2012.
year: 2012
website:
---
| 22.2 | 122 | 0.707207 | eng_Latn | 0.74097 |
93fd95a10b666d373dfaacbc084f587fdbc9384d | 4,487 | md | Markdown | README.md | colomboe/CodeGenerator | f62ac122a20f106c9129fead46a88a546d308b8c | [
"Apache-2.0"
] | 38 | 2017-10-17T03:42:48.000Z | 2022-03-31T06:13:31.000Z | README.md | colomboe/CodeGenerator | f62ac122a20f106c9129fead46a88a546d308b8c | [
"Apache-2.0"
] | 14 | 2017-10-18T13:32:39.000Z | 2020-11-18T08:40:37.000Z | README.md | colomboe/CodeGenerator | f62ac122a20f106c9129fead46a88a546d308b8c | [
"Apache-2.0"
] | 25 | 2017-12-31T09:24:23.000Z | 2022-03-26T19:42:34.000Z | # CodeGenerator
An IntelliJ IDEA plugin for code generation, with support for template customization.
// TODO: add demo
As we know, IntelliJ already provides useful code generators such as constructors,
getters/setters, equals, hashCode, overrides and delegates, etc. And IntelliJ
allows us to apply customized Velocity templates for each generator. But we
cannot add our own generators.
Code Generator is here to help. Two types of generation are supported here:
- Member (field/method) based templates, such as serializers, equals, etc.
- Class based templates, such as transformers, converters, etc. Normally new classes are created.
# Installation
1. Search `CodeGenerator` in Idea plugins
2. Download the zip from [Releases](https://github.com/lotabout/CodeGenerator/releases)
To install a plugin from disk in IDEA:
1. Open the `Settings/Preferences` dialog box and select `Plugins` on the left pane.
2. On the right pane of the dialog, click the `Install plugin from disk` button.
3. Select the location of the zip file, click OK to continue.
4. Click `Apply` button of the Settings/Preferences dialog.
5. Restart IntelliJ IDEA to activate.
# Usage
1. Go to the `Settings/Preferences > Other Settings > CodeGenerator` to
create a new generator/template.
2. Right-click on your Java file and select `Generate > CodeGenerator > [name of
   your generator]` to run the generator.
Depending on the settings of your generator, dialogs might show up
asking you to select the members or classes required by your generator.
# Pipeline for Generators
Say we want to create a template for generating getters/setters: how will a user
use the template? An example (the default IntelliJ implementation) is:
1. A dialog shows up listing all the fields that don't have getters/setters
   implemented yet.
2. The user selects the members.
3. The code is generated using the getter/setter template.
Thus, as a template creator, we need to:

Here we call it a `pipeline` for generators. Currently two types of user
action are supported:
1. Member selection: generator user can select fields/methods.
2. Class selection: generator user can select a class.
Another example: you might want to create templates that generate
converters between two classes, so you want the user to select the target
class to convert to.
In CodeGenerator, you can create a pipeline with several steps; CodeGenerator
will execute the steps sequentially to collect the context variables, and
finally generate the code using the template.

## Member Selection

Templates vary on what members they allow for selection, for example:
- A getters/setters generator might want the user to select only the fields that
  have no getters/setters implemented.
- A delegate generator might want the user to select the methods that belong to
  the field or its super classes.
Thus CodeGenerator allows generator creators to provide the members to select:
- set `availableMembers` to provide the members to select.
- set `selectedMembers` to select the members initially; not setting it means
  all available members are selected.
After the selection, the template context will contain some more variables:
- `fields1`: the selected fields, where `1` is the step postfix;
- `methods1`: the selected methods, if any;
- `members1`: the selected fields/methods.
Here is an example of the context variables:

Note that in the beginning, the `class0` variable refers to the class entry where
the user starts code generation.
## Class Selection

Class selection is much simpler: the template creator can specify the
initial class to select.
# Thanks to
- [CodeMaker](https://raw.githubusercontent.com/x-hansong/CodeMaker): where
  the idea and part of the code come from.
- [generate-tostring](https://github.com/JetBrains/intellij-community/tree/master/plugins/generate-tostring):
  Official toString generator. Part of the code comes from it.
| 40.423423 | 128 | 0.787832 | eng_Latn | 0.98428 |
93fdf376da9738f3b401eeed2185d36e480e34c0 | 1,263 | md | Markdown | README.md | maipbui/BookZ | dfbed9a0e58971cc31b3da7e21cc8afc24c78ded | [
"MIT"
] | 1 | 2022-01-09T21:59:06.000Z | 2022-01-09T21:59:06.000Z | README.md | maipbui/BookZ | dfbed9a0e58971cc31b3da7e21cc8afc24c78ded | [
"MIT"
] | null | null | null | README.md | maipbui/BookZ | dfbed9a0e58971cc31b3da7e21cc8afc24c78ded | [
"MIT"
] | null | null | null | <h1><img src="https://github.com/maipbui/BookZ/blob/main/public/logo.png" width="30" height="30"/> BookZ</h1>
> BookZ is a website to find your favorite books in the Google Books library.
> BookZ works just like web search. Try a search on BookZ.
> When we find a book with content that contains a match for your search terms,
> we'll link to it in your search results.
## Built with
- [React](https://github.com/facebook/react)
- [Google Books API](https://developers.google.com/books/docs/v1/getting_started)
## Features
- Browse by genre
- Basic Search
- Advanced filters
- Search by URL query parameters
## Getting Started
1. Clone the repo
```shell
git clone https://github.com/maipbui/BookZ.git
```
2. Change the current directory to the repo folder
```shell
cd [BookZ]
```
3. Get `API_KEY` from [Google Books API](https://developers.google.com/books/docs/v1/getting_started) (a quick way to sanity-check the key is sketched right after these steps)
4. Modify `API_KEY` in `[BookZ]/src/components/SetURL.jsx`
```shell
const API_KEY = "YOUR_API_KEY";
```
5. Install npm packages
```shell
npm install
```
6. Run the app in the development mode.
```shell
npm start
```
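Independent of the React app, you can sanity-check the `API_KEY` from step 3 with a direct call to the Google Books volumes endpoint. A minimal Python sketch (any HTTP client works the same way) follows; the search term is just an example.
```python
# Quick check that the Google Books API key from step 3 works, outside the app.
import requests

API_KEY = "YOUR_API_KEY"   # same value you put in SetURL.jsx
url = "https://www.googleapis.com/books/v1/volumes"

resp = requests.get(url, params={"q": "harry potter", "key": API_KEY}, timeout=10)
resp.raise_for_status()

for item in resp.json().get("items", [])[:5]:
    print(item["volumeInfo"].get("title"))
```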
## Try it out.
See Deployment with Heroku: [BookZ](https://bookz-react.herokuapp.com/)
## License
MIT license. See `LICENSE` for more information.
| 21.05 | 109 | 0.719715 | eng_Latn | 0.561026 |
93ff626f724d7867ac444b016ae5881d1107fc94 | 1,418 | md | Markdown | 2020/12/05/2020-12-05 01:45.md | zhzhzhy/WeiBoHot_history | 32ce4800e63f26384abb17d43e308452c537c902 | [
"MIT"
] | 3 | 2020-07-14T14:54:15.000Z | 2020-08-21T06:48:24.000Z | 2020/12/05/2020-12-05 01:45.md | zhzhzhy/WeiBoHot_history | 32ce4800e63f26384abb17d43e308452c537c902 | [
"MIT"
] | null | null | null | 2020/12/05/2020-12-05 01:45.md | zhzhzhy/WeiBoHot_history | 32ce4800e63f26384abb17d43e308452c537c902 | [
"MIT"
] | null | null | null | 2020年12月05日01时数据
Status: 200
1.周扬青第一次上综艺
微博热度:1044631
2.超千万人正承受60分钟以上极端通勤
微博热度:711077
3.戴着皇冠的刘亦菲好像公主
微博热度:584053
4.Baby红毯没输过
微博热度:568023
5.累计收入不超6万元月份暂不预扣个税
微博热度:342089
6.GQ打光太死亡了
微博热度:306487
7.奔跑吧
微博热度:305184
8.GQ盛典红毯
微博热度:296999
9.国务院安委会挂牌督办重庆煤矿事故
微博热度:290004
10.顶楼
微博热度:288050
11.杨幂穿运动鞋走红毯真明智
微博热度:281726
12.孙俪被黑的最惨的一次
微博热度:274149
13.贾玲看到刘德华直播了吗
微博热度:272266
14.加方要求删除孟晚舟案对美加不利证词
微博热度:269953
15.困在花呗里的年轻人
微博热度:262348
16.章若楠颜值
微博热度:253295
17.郭敬明什么时候卖房子拍爵迹3
微博热度:247802
18.哈哈哈哈哈
微博热度:246456
19.这才是真正的标题党
微博热度:232962
20.关震雷李贝离婚
微博热度:229987
21.赖冠霖 别说藏语了
微博热度:229718
22.杨紫文成公主造型
微博热度:228707
23.GQ红毯没有迪丽热巴
微博热度:224091
24.重庆一煤矿发生事故23人被困
微博热度:216454
25.陈伟霆被蚊子叮到竟然毫不知情
微博热度:211078
26.倪妮被喊回来拍照的表情好绝望
微博热度:205807
27.陈飞宇的头快顶到GQ俩字了
微博热度:203969
28.亚冠
微博热度:192656
29.消费信贷给你的生活带来了什么影响
微博热度:191989
30.新西游记8
微博热度:191963
31.何穗诠释了GQ台阶的意义
微博热度:191949
32.哈妮克孜好美
微博热度:181037
33.带上月球的国旗选面料选了1年
微博热度:166661
34.离婚冷静期30天怎么计算
微博热度:157340
35.李易峰 贵公子走红毯
微博热度:156438
36.欧阳娜娜轻熟造型
微博热度:152957
37.被朱婧汐的美甲吓到
微博热度:151954
38.谭松韵礼裙上有口袋
微博热度:150756
39.满洲里公布一新冠确诊患者乘公交轨迹
微博热度:148403
40.谢娜小沈阳即兴模仿好搞笑
微博热度:147860
41.陈立农演技
微博热度:134831
42.朱婧汐问佟大为如何平衡事业和家庭
微博热度:133405
43.肖战
微博热度:112242
44.GQ年度人物盛典主题大片
微博热度:107432
45.西游记里你不知道的花絮
微博热度:81251
46.2020年度十大流行语
微博热度:74143
47.五星红旗再次闪耀月球
微博热度:72565
48.月亮
微博热度:59240
49.嫦娥的下一个高难度动作
微博热度:58681
50.见过最没面子的化妆品
微博热度:58663
| 6.95098 | 20 | 0.783498 | yue_Hant | 0.28893 |
93fff3016caf5013632491d779debe6c7607f7d8 | 149 | md | Markdown | _posts/2019-06-06-express.md | fyodor-rs/fyodor-rs.github.io | b362a5a6c7ec319b4689e4350c245075ae26db78 | [
"MIT"
] | null | null | null | _posts/2019-06-06-express.md | fyodor-rs/fyodor-rs.github.io | b362a5a6c7ec319b4689e4350c245075ae26db78 | [
"MIT"
] | null | null | null | _posts/2019-06-06-express.md | fyodor-rs/fyodor-rs.github.io | b362a5a6c7ec319b4689e4350c245075ae26db78 | [
"MIT"
] | null | null | null | ---
title: Express
date: 2019-06-06 19:00:00 +0800
categories: [learn, express]
tags: [express]
seo:
date_modified: 2019-06-06 19:00:00 +0800
---
| 14.9 | 42 | 0.677852 | fra_Latn | 0.143892 |
9e0037d872e985b1c319f54935945c3038f1cb45 | 544 | md | Markdown | Readme.md | covertspartan/docker-spark-jupyterlab-anaconda | 978fe3f2239c2467605d668ecd6d75fa4e38a374 | [
"MIT"
] | null | null | null | Readme.md | covertspartan/docker-spark-jupyterlab-anaconda | 978fe3f2239c2467605d668ecd6d75fa4e38a374 | [
"MIT"
] | null | null | null | Readme.md | covertspartan/docker-spark-jupyterlab-anaconda | 978fe3f2239c2467605d668ecd6d75fa4e38a374 | [
"MIT"
] | null | null | null | # docker-spark-jupyterlab-anaconda
Dockerhub image with spark and anaconda preinstalled, for launching spark jobs from notebooks.
The end user is expected to mount their spark-configs in `/usr/spark/conf`
You may build and run the image locally with the following command from within the root project directory:
```
docker build . --tag spark-jupyter-local-test
docker run -i -t -p 8888:8888 -P spark-jupyter-local-test:latest
```
Additional packages may be automatically installed when the notebook starts by defining the `PIP_DEPENDENCIES` | 45.333333 | 110 | 0.795956 | eng_Latn | 0.997251 |
9e003a9dc5dd61950e3a5c1e521da94c0de0abc7 | 5,777 | md | Markdown | articles/site-recovery/hyper-v-vmm-failover-failback.md | LeMuecke/azure-docs.de-de | a7b8103dcc7d5ec5b56b9b4bb348aecd2434afbd | [
"CC-BY-4.0",
"MIT"
] | 1 | 2020-04-03T08:58:02.000Z | 2020-04-03T08:58:02.000Z | articles/site-recovery/hyper-v-vmm-failover-failback.md | LeMuecke/azure-docs.de-de | a7b8103dcc7d5ec5b56b9b4bb348aecd2434afbd | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/site-recovery/hyper-v-vmm-failover-failback.md | LeMuecke/azure-docs.de-de | a7b8103dcc7d5ec5b56b9b4bb348aecd2434afbd | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Einrichten von Failover/Failback an einem sekundären Hyper-V-Standort mit Azure Site Recovery
description: Erfahren Sie, wie Sie während der Notfallwiederherstellung mit Azure Site Recovery ein Failover von Hyper-V-VMs zu Ihrem sekundären lokalen Standort und ein Failback zum primären Standort ausführen.
services: site-recovery
author: rayne-wiselman
manager: carmonm
ms.service: site-recovery
ms.topic: conceptual
ms.date: 11/14/2019
ms.author: raynew
ms.openlocfilehash: d31355bcb0ce42874c19988738ba06138c7a0b7c
ms.sourcegitcommit: 2ec4b3d0bad7dc0071400c2a2264399e4fe34897
ms.translationtype: HT
ms.contentlocale: de-DE
ms.lasthandoff: 03/27/2020
ms.locfileid: "74082601"
---
# <a name="fail-over-and-fail-back-hyper-v-vms-replicated-to-your-secondary-on-premises-site"></a>Failover und Failback von Hyper-V-VMs, die nach Ihrem sekundären lokalen Standort repliziert werden
Der [Azure Site Recovery](site-recovery-overview.md)-Dienst verwaltet und koordiniert Replikation, Failover und Failback von lokalen Computern sowie virtuellen Azure-Computern (VMs).
In diesem Artikel wird beschrieben, wie ein Failover einer Hyper-V-VM, die in einer System Center Virtual Machine Manager-Cloud (VMM) verwaltet wird, auf einen sekundären VMM-Standort ausgeführt wird. Nach dem Failover erfolgt ein Failback zum lokalen Standort, wenn er verfügbar ist. In diesem Artikel werden folgende Vorgehensweisen behandelt:
> [!div class="checklist"]
> * Ausführen eines Failovers einer Hyper-V-VM aus einer primären VMM-Cloud auf eine sekundäre VMM-Cloud.
> * Erneutes Schützen vom sekundären Standort zum primären und Ausführen des Failbacks
> * Optionales Starten der Replikation vom primären zurück nach dem sekundären Standort
## <a name="failover-and-failback"></a>Failover und Failback
Failover und Failback weisen drei Phasen auf:
1. **Failover zum sekundären Standort:** Ausführen eines Failovers der Computer vom primären zum sekundären Standort.
2. **Failover vom sekundären Standort:** Replizieren der VMs vom sekundären zum primären Standort und Ausführen eines geplanten Failovers zum Ausführen eines Failbacks.
3. Starten Sie nach dem geplanten Failover optional erneut die Replikation vom primären nach dem sekundären Standort.
## <a name="prerequisites"></a>Voraussetzungen
- Führen Sie unbedingt eine [Notfallwiederherstellungs-Übung](hyper-v-vmm-test-failover.md) durch, um zu überprüfen, ob alles wie erwartet funktioniert.
- Um das Failback abzuschließen, stellen Sie sicher, dass die primären und sekundären VMM-Server mit Site Recovery verbunden sind.
## <a name="run-a-failover-from-primary-to-secondary"></a>Ausführen eines Failovers vom primären auf den sekundären Standort
Sie können ein reguläres oder geplantes Failover für virtuelle Hyper-V-Computer ausführen.
- Verwenden Sie ein reguläres Failover für unerwartete Ausfälle. Beim Ausführen dieses Failovers erstellt Site Recovery einen virtuellen Computer am sekundären Standort und fährt ihn hoch. Datenverlust kann in Abhängigkeit von ausstehenden Daten auftreten, die nicht synchronisiert wurden.
- Ein geplantes Failover kann zu Wartungszwecken oder bei erwarteten Ausfällen verwendet werden. Mit dieser Option vermeiden Sie jeglichen Datenverlust. Wenn ein geplantes Failover ausgelöst wird, werden die Quell-VMs heruntergefahren. Nicht synchronisierte Daten werden synchronisiert, und das Failover wird ausgelöst.
-
In diesem Verfahren erfahren Sie, wie Sie ein reguläres Failover durchführen.
1. Klicken Sie unter **Einstellungen** > **Replizierte Elemente** auf VM > **Failover**.
1. Klicken Sie auf **Der Computer wird vor Beginn des Failovers heruntergefahren**, wenn Site Recovery versuchen soll, Quell-VMs herunterzufahren, bevor das Failover ausgelöst wird. Site Recovery wird vor dem Auslösen des Failovers auch versuchen, lokale Daten zu synchronisieren, die noch nicht an den sekundären Standort gesendet wurden. Beachten Sie, dass das Failover auch dann fortgesetzt wird, wenn das Herunterfahren nicht erfolgreich ist. Der Fortschritt des Failovers wird auf der Seite **Aufträge** angezeigt.
2. Sie sollten nun die VM in der sekundären VMM-Cloud sehen.
3. Nachdem Sie die VM überprüft haben, **committen** Sie das Failover. Dadurch werden alle verfügbaren Wiederherstellungspunkte gelöscht.
> [!WARNING]
> **Brechen Sie ein aktuell ausgeführtes Failover nicht ab**: Bevor das Failover gestartet wird, wird die VM-Replikation beendet. Wenn Sie ein Failover in Bearbeitung abbrechen, wird das Failover beendet, die Replikation der VM wird jedoch nicht erneut durchgeführt.
## <a name="reverse-replicate-and-failover"></a>„Umgekehrt replizieren“ und „Failover“
Starten Sie die Replikation vom sekundären nach dem primären Standort, und führen Sie ein Failback auf den primären Standort aus. Sobald die virtuellen Computer wieder am primären Standort ausgeführt werden, können Sie sie im sekundären Standort replizieren.
1. Klicken Sie auf den virtuellen Computer > **Umgekehrt replizieren**.
2. Sobald der Auftrag abgeschlossen ist, klicken Sie auf den virtuellen Computer > **Failover**, überprüfen Sie die Failoverrichtung (von der sekundären VMM-Cloud), und wählen Sie den Quell- und den Zielspeicherort aus.
4. Initiieren Sie das Failover. Der Fortschritt des Failovers wird auf der Registerkarte **Aufträge** angezeigt.
5. Überprüfen Sie, ob der virtuelle Computer in der primären VMM-Cloud verfügbar ist.
6. Wenn Sie das Replizieren des primären virtuellen Computers zurück nach dem sekundären Standort starten möchten, klicken Sie auf **Umgekehrt replizieren**.
## <a name="next-steps"></a>Nächste Schritte
[Verwenden Sie den Schritt](hyper-v-vmm-disaster-recovery.md) zum Replizieren von virtuellen Hyper-V-Computern an einem sekundären Standort.
| 75.025974 | 519 | 0.80959 | deu_Latn | 0.998029 |
9e007f4f5b47c7d19f623c16b0bd49f76954f33c | 4,911 | md | Markdown | desktop-src/VPC/ivmvirtualmachine-removeactivationvalue.md | dianmsft/win32 | f07b550595a83e44dd2fb6e217525edd10a0341b | [
"CC-BY-4.0",
"MIT"
] | 4 | 2021-07-26T16:18:49.000Z | 2022-02-19T02:00:21.000Z | desktop-src/VPC/ivmvirtualmachine-removeactivationvalue.md | dianmsft/win32 | f07b550595a83e44dd2fb6e217525edd10a0341b | [
"CC-BY-4.0",
"MIT"
] | 2 | 2020-04-09T17:00:51.000Z | 2020-04-09T18:30:01.000Z | desktop-src/VPC/ivmvirtualmachine-removeactivationvalue.md | dianmsft/win32 | f07b550595a83e44dd2fb6e217525edd10a0341b | [
"CC-BY-4.0",
"MIT"
] | 2 | 2020-07-19T02:58:48.000Z | 2021-03-06T21:09:47.000Z | ---
title: IVMVirtualMachine RemoveActivationValue method (VPCCOMInterfaces.h)
description: Removes the value of the specified activation setting for this virtual machine.
ms.assetid: 8e9b9d95-aec9-4b73-afc3-cd0d7300f40f
keywords:
- RemoveActivationValue method Virtual PC
- RemoveActivationValue method Virtual PC , IVMVirtualMachine interface
- IVMVirtualMachine interface Virtual PC , RemoveActivationValue method
topic_type:
- apiref
api_name:
- IVMVirtualMachine.RemoveActivationValue
api_location:
- VPCCOMInterfaces.h
api_type:
- COM
ms.topic: reference
ms.date: 05/31/2018
---
# IVMVirtualMachine::RemoveActivationValue method
\[Windows Virtual PC is no longer available for use as of Windows 8. Instead, use the [Hyper-V WMI provider (V2)](https://docs.microsoft.com/windows/desktop/HyperV_v2/windows-virtualization-portal).\]
Removes the value of the specified activation setting for this virtual machine.
## Syntax
```C++
HRESULT RemoveActivationValue(
[in] BSTR activationKey
);
```
## Parameters
<dl> <dt>
*activationKey* \[in\]
</dt> <dd>
The key used to identify the activation value as stored in the "\*.vmc" file.
</dd> </dl>
## Return value
This method can return one of these values.
| Return code/value | Description |
|------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-----------------------------------------------------------------------------------------|
| <dl> <dt>**S\_OK**</dt> <dt>0</dt> </dl> | The operation was successful.<br/> |
| <dl> <dt>**E\_INVALIDARG**</dt> <dt>0x80000003</dt> </dl> | The parameter is **NULL** or empty.<br/> |
| <dl> <dt>**VM\_E\_VM\_UNKNOWN**</dt> <dt>0xA0040207</dt> </dl> | The configuration is unknown.<br/> |
| <dl> <dt>**VM\_E\_PREF\_NOT\_FOUND**</dt> <dt>0xA0040300</dt> </dl> | The preference was not found, or this configuration has no valid activation.<br/> |
| <dl> <dt>**DISP\_E\_EXCEPTION**</dt> <dt>0x80020009</dt> </dl> | An unexpected error has occurred.<br/> |
## Remarks
This method provides low-level access to any activation value. It can be used to remove activation values for customer-defined keys. Be careful if you use this method to remove system activation values, since some values cannot be changed while the virtual machine is running. When a virtual machine is started, a copy is made of its configuration values, which becomes its set of activation values. Activation values are maintained until the virtual machine is shut down or restarted. Note that Windows Virtual PC may only use the configuration to store values for certain keys, that is, the activation value may never be used.
> [!Note]
> The virtual machine session must be running before any activation values can be changed.
Activation keys are stored internally in a hierarchical manner similar to the registry keys in Windows. To specify a specific subkey, a "key path" is constructed which specifies the various keys in a slash mark delimited format.
For example, to remove the value of the "default\_action" key located in the following key tree:
``` syntax
<settings>
<undo_drives>
<default_action type="integer">1</default_action>
```
The *activationKey* path string would be specified as follows:
``` syntax
"settings/undo_drives/default_action"
```
## Requirements
| | |
|-------------------------------------|-----------------------------------------------------------------------------------------------|
| Minimum supported client<br/> | Windows 7 \[desktop apps only\]<br/> |
| Minimum supported server<br/> | None supported<br/> |
| End of client support<br/> | Windows 7<br/> |
| Product<br/> | Windows Virtual PC<br/> |
| Header<br/> | <dl> <dt>VPCCOMInterfaces.h</dt> </dl> |
| IID<br/> | IID\_IVMVirtualMachine is defined as f7092aa1-33ed-4f78-a59f-c00adfc2edd7<br/> |
## See also
<dl> <dt>
[**IVMVirtualMachine**](ivmvirtualmachine.md)
</dt> </dl>
| 40.254098 | 628 | 0.538383 | eng_Latn | 0.918589 |
9e01c4aa9edcffd4242f7ec2666b7173c8de97a2 | 4,939 | md | Markdown | docs/addons/tuner/Ax.md | gbmarc1/spock | 27cbcbb80cd3e4eac1c79032011afaf1a3e83f08 | [
"Apache-2.0"
] | 58 | 2020-08-07T20:28:28.000Z | 2022-03-14T17:32:37.000Z | docs/addons/tuner/Ax.md | gbmarc1/spock | 27cbcbb80cd3e4eac1c79032011afaf1a3e83f08 | [
"Apache-2.0"
] | 178 | 2020-08-11T15:02:07.000Z | 2022-03-30T07:18:06.000Z | docs/addons/tuner/Ax.md | gbmarc1/spock | 27cbcbb80cd3e4eac1c79032011afaf1a3e83f08 | [
"Apache-2.0"
] | 7 | 2020-08-11T13:44:50.000Z | 2022-02-24T14:35:55.000Z | # Ax Support
`spock` integrates with the Ax optimization framework through the provided Service API. See
[docs](https://ax.dev/api/service.html#module-ax.service.ax_client) for `AxClient` info.
All examples can be found [here](https://github.com/fidelity/spock/blob/master/examples).
### Defining the Backend
So let's continue with our Ax specific version of `tune.py`:
It's important to note that you can still use the `@spock` decorator to define any non hyper-parameters! For
posterity let's add some fixed parameters (those that are not part of hyper-parameter tuning) that we will use
elsewhere in our code.
```python
from spock.config import spock
@spock
class BasicParams:
n_trials: int
max_iter: int
```
Now we need to tell `spock` that we intend on doing hyper-parameter tuning and which backend we would like to use. We
do this by calling the `tuner` method on the `ConfigArgBuilder` object passing in a configuration object for the
backend of choice (just like in basic functionality this is a chained command, thus the builder object will still be
returned). For Ax one uses `AxTunerConfig`. This config mirrors all options that would be passed into
the `AxClient` constructor and the `AxClient.create_experiment`function call so that `spock` can setup the
Service API. (Note: The `@spockTuner`decorated classes are passed to the `ConfigArgBuilder` in the exact same
way as basic `@spock`decorated classes.)
```python
from spock.addons.tune import AxTunerConfig
# Ax config -- this will internally spawn the AxClient service API style which will be returned
# by accessing the tuner_status property on the ConfigArgBuilder object -- note here that we need to define the
# objective name that the client will expect to be within the data dictionary when completing trials
ax_config = AxTunerConfig(objective_name="accuracy", minimize=False)
# Use the builder to setup
# Call tuner to indicate that we are going to do some HP tuning -- passing in an ax study object
attrs_obj = ConfigArgBuilder(
LogisticRegressionHP,
BasicParams,
desc="Example Logistic Regression Hyper-Parameter Tuning -- Ax Backend",
).tuner(tuner_config=ax_config)
```
### Generate Functionality Still Exists
To get the set of fixed parameters (those that are not hyper-parameters) one simply calls the `generate()` function
just like they would for normal `spock` usage to get the fixed parameter `spockspace`.
Continuing in `tune.py`:
```python
# Here we need some of the fixed parameters first so we can just call the generate fnc to grab all the fixed params
# prior to starting the sampling process
fixed_params = attrs_obj.generate()
```
### Sample as an Alternative to Generate
The `sample()` call is the crux of `spock` hyper-parameter tuning support. It draws a hyper-parameter sample from the
underlying backend sampler and combines it with fixed parameters and returns a single `Spockspace` with all
useable parameters (defined with dot notation). For Ax -- Under the hood `spock` uses the Service API (with
an `AxClient`) -- thus it handles the underlying call to get the next trial. The `spock` builder object has a
`@property` called `tuner_status` that returns any necessary backend objects in a dictionary that the user needs to
interface with. In the case of Ax, this contains both the `AxClient` and `trial_index` (as dictionary keys). We use
the return of`tuner_status` to handle trial completion via the `complete_trial` call based on the metric of interested
(here just the simple validation accuracy -- remember during `AxTunerConfig` instantiation we set the `objective_name`
to 'accuracy' -- we also set the SEM to 0.0 since we are not using it for this example)
See [here](https://ax.dev/api/service.html#ax.service.ax_client.AxClient.complete_trial) for Ax documentation on
completing trials.
Continuing in `tune.py`:
```python
# Iterate through a bunch of ax trials
for _ in range(fixed_params.BasicParams.n_trials):
# Call sample on the spock object
hp_attrs = attrs_obj.sample()
# Use the currently sampled parameters in a simple LogisticRegression from sklearn
clf = LogisticRegression(
C=hp_attrs.LogisticRegressionHP.c,
solver=hp_attrs.LogisticRegressionHP.solver,
max_iter=hp_attrs.BasicParams.max_iter
)
clf.fit(X_train, y_train)
val_acc = clf.score(X_valid, y_valid)
# Get the status of the tuner -- this dict will contain all the objects needed to update
tuner_status = attrs_obj.tuner_status
# Pull the AxClient object and trial index out of the return dictionary and call 'complete_trial' on the
# AxClient object with the correct raw_data that contains the objective name
tuner_status["client"].complete_trial(
trial_index=tuner_status["trial_index"],
raw_data={"accuracy": (val_acc, 0.0)},
)
``` | 47.951456 | 119 | 0.753391 | eng_Latn | 0.993094 |
9e022caccdd29d85c6474f13017f7dd2218bdc2b | 785 | md | Markdown | algorithm/9-sort2.md | code-squad/blue-common | 1e0c13af435bc4aeab89c6152a24641f6b5327fd | [
"MIT"
] | 6 | 2017-03-30T06:26:16.000Z | 2018-03-23T14:08:47.000Z | algorithm/9-sort2.md | code-squad/blue-common | 1e0c13af435bc4aeab89c6152a24641f6b5327fd | [
"MIT"
] | 1 | 2017-03-23T06:54:33.000Z | 2017-03-23T06:54:33.000Z | algorithm/9-sort2.md | code-squad/blue-common | 1e0c13af435bc4aeab89c6152a24641f6b5327fd | [
"MIT"
] | 3 | 2017-03-24T10:05:03.000Z | 2020-01-17T06:55:49.000Z | # 
# 머지 소트
CodeSquad Master
Hoyoung Jung
---
<!-- page_number: true -->
# 오늘의 문제
이미 정렬된 두 배열을 입력받아서 새로운 정렬된 배열을 만드는 함수를 작성하시오.
```
int[] merge(int[] arr1, int[] arr2) {
//TODO: implement
}
```
---
# 머지소트가 머지?
만든 사람: 존 폰 노이만, 1945년

(출처: 야공만)
---
# Divide and Conquer

---
# Merge Sort
> 정렬되지 않은 리스트를 절반으로 잘라 두 부분 리스트로 나눈다.
> 각 부분 리스트에 대해 머지 소트 호출
> 두 부분 리스트에 대해 머지 수행
---
# Merge Sort
```
```
https://visualgo.net/en/sorting
---
# 구현하기
https://github.com/code-squad/blue-common/blob/master/codes/src/codesquad/MySort.java | 18.255814 | 123 | 0.680255 | kor_Hang | 0.996466 |
9e0244849360365dae26c0217a7b28e44395121f | 54,485 | md | Markdown | data-management-library/database/multitenant/multitenant2/content.md | mlmeng/learning-library | 73a7f0a9fe5b45d40da1669fd2c00a6781479bed | [
"UPL-1.0"
] | 1 | 2020-02-14T16:49:22.000Z | 2020-02-14T16:49:22.000Z | data-management-library/database/multitenant/multitenant2/content.md | mlmeng/learning-library | 73a7f0a9fe5b45d40da1669fd2c00a6781479bed | [
"UPL-1.0"
] | null | null | null | data-management-library/database/multitenant/multitenant2/content.md | mlmeng/learning-library | 73a7f0a9fe5b45d40da1669fd2c00a6781479bed | [
"UPL-1.0"
] | null | null | null | # Hands-on with Multitenant (Advanced) #
## Lab Introduction
This is a series of 12 hands-on labs designed to familiarize you with the Application Container functionality of Oracle Multitenant. In these labs, we follow the journey of a notional company, Walt’s Malts, as their business expands from a single store to a global powerhouse – “from startup to starship”.
## Setup
### Lab Assumptions
- Each participant has been provided a username and password to the tenancy c4u03.
- Each participant has completed the Environment Setup lab.
- Each participant has created an OCI compute instance using the database template.
There are two container databases running:
- CDB1 running on port 1523
- CDB2 running on port 1524
### Lab Setup
All the scripts for this lab are located in the /home/oracle/labs/multitenant/scripts folder.
1. To access the scripts, secure shell into the OCI compute instance.
2. Change to the .ssh directory and ssh into your instance, then download and unzip the lab files as the oracle user. The public IP address can be found by going to Compute -> Instance.
````
cd .ssh
ssh -i optionskey opc@<your public ip address>
oci os object bulk-download -bn Multitenant --download-dir /home/opc
sudo mv champion.zip /home/oracle
sudo chown oracle:oinstall /home/oracle/champion.zip
sudo su - oracle
unzip champion.zip
cd /home/oracle/labs/multitenant
````
3. Reset the container databases back to their original ports if they were changed in a previous lab. If any errors about dropping databases appear they can be ignored.
````
./resetCDB.sh
````
## Step 1: Instant SaaS
This section shows how Multitenant with Application Containers provides an instant SaaS architecture for an application formerly architected for standalone deployment.
The tasks you will accomplish in this lab are:
- Setup Application Root - wmStore_Master
- Install v1 of Application wmStore in Application Root
- Create and sync Application Seed and provision Application PDBs for four franchises: Tulsa, California, Tahoe, NYC
- Populate Application Tenant PDBs with demo data.
1. Connect to **CDB1**.
````
sqlplus /nolog
connect sys/oracle@localhost:1523/cdb1 as sysdba
````
2. Create and open the master application root
````
conn system/oracle@localhost:1523/cdb1;
create pluggable database wmStore_Master as application container
admin user wm_admin identified by oracle;
alter pluggable database wmStore_Master open;
````
3. Install v1.0 of application wmStore in the application root
````
conn system/oracle@localhost:1523/wmStore_Master;
alter pluggable database application wmStore begin install '1.0';
create tablespace wmStore_TBS datafile size 100M autoextend on next 10M maxsize 200M;
create user wmStore_Admin identified by oracle container=all;
grant create session, dba to wmStore_Admin;
alter user wmStore_Admin default tablespace wmStore_TBS;
connect wmStore_Admin/oracle@localhost:1523/wmStore_Master;
create table wm_Campaigns
-- sharing = data
(Row_GUID raw(16) default Sys_GUID() primary key
,Name varchar2(30) not null unique
)
;
create table wm_Products
-- sharing = extended data
(Row_GUID raw(16) default Sys_GUID() primary key
,Name varchar2(30) not null unique
)
;
create table wm_Orders
-- sharing = metadata
(Row_GUID raw(16) default Sys_GUID() primary key
,Order_Number number(16,0) generated always as identity not null unique
,Order_Date date default current_date not null
,Campaign_ID raw(16)
)
;
alter table wm_Orders add constraint wm_Orders_F1
foreign key (Campaign_ID)
references wm_Campaigns(Row_GUID)
disable
;
create table wm_Order_Items
-- sharing = metadata
(Row_GUID raw(16) default Sys_GUID() primary key
,Order_ID raw(16) not null
,Item_Num number(16,0) not null
,Product_ID raw(16) not null
,Order_Qty number(16,0) not null
)
;
alter table wm_Order_Items add constraint wm_Order_Items_F1
foreign key (Order_ID)
references wm_Orders(Row_GUID)
disable
;
alter table wm_Order_Items add constraint wm_Order_Items_F2
foreign key (Product_ID)
references wm_Products(Row_GUID)
disable
;
create or replace view wm_Order_Details
-- sharing = metadata
(Order_Number
,Campaign_Name
,Item_Num
,Product_Name
,Order_Qty
) as
select o.Order_Number
, c.Name
, i.Item_Num
, p.Name
, i.Order_Qty
from wm_Orders o
join wm_Order_Items i
on i.Order_ID = o.Row_GUID
join wm_Products p
on i.Product_ID = p.Row_GUID
left outer join wm_Campaigns c
on o.Campaign_ID = c.Row_GUID
;
insert into wm_Campaigns (Row_GUID, Name) values ('01', 'Locals vs Yokels');
insert into wm_Campaigns (Row_GUID, Name) values ('02', 'Black Friday 2016');
insert into wm_Campaigns (Row_GUID, Name) values ('03', 'Christmas 2016');
insert into wm_Products (Row_GUID, Name) values ('01', 'Tornado Twisted');
insert into wm_Products (Row_GUID, Name) values ('02', 'Muskogee Magic');
insert into wm_Products (Row_GUID, Name) values ('03', 'Root 66 Beer Float');
insert into wm_Products (Row_GUID, Name) values ('04', 'Yokie Dokie Okie Eggnog');
commit;
alter pluggable database application wmStore end install '1.0';
````
4. Create the application seed
````
conn system/oracle@localhost:1523/wmStore_Master;
create pluggable database as seed
admin user wm_admin identified by oracle;
````
5. Open the application seed
````
connect sys/oracle@localhost:1523/wmStore_Master as SysDBA
alter pluggable database wmStore_Master$Seed open;
````
6. Sync the seed with the application wmStore
````
conn system/oracle@localhost:1523/wmStore_Master$Seed;
alter pluggable database application wmStore sync;
````
7. Provision the application databases for the 4 stores
````
conn system/oracle@localhost:1523/wmStore_Master;
create pluggable database Tulsa
admin user wm_admin identified by oracle;
create pluggable database California
admin user wm_admin identified by oracle;
create pluggable database Tahoe
admin user wm_admin identified by oracle;
create pluggable database NYC
admin user wm_admin identified by oracle;
alter pluggable database all open;
````
8. Create franchise-specific data
````
conn system/oracle@localhost:1523/wmStore_Master;
@Franchise_Data_Lab1
````
## Step 2: PDB Exploration
This section will take a brief tour of the newly created SaaS estate.
The tasks you will accomplish in this lab are:
- Look at the PDBs that have been created so far
- Experiment with different classes of User
- Perform queries against different franchises
1. Connect to **CDB1**.
````
connect system/oracle@localhost:1523/cdb1
````
2. Show PDBs created so far
````
set linesize 180
column c0 noprint new_value CDB_Name
column c1 heading "Con ID" format 99
column c2 heading "PDB Name" format a30
column c3 heading "Con UID" format 99999999999
column c4 heading "Restricted?" format a11
column c5 heading "Open Mode" format a10
column c6 heading "Root?" format a5
column c7 heading "App PDB?" format a8
column c8 heading "Seed?" format a5
column c9 heading "Root Clone?" format a11
column c10 heading "Proxy?" format a6
column c11 heading "App Container Name" format a30
set termout off
select Sys_Context('Userenv', 'CDB_Name') c0
from dual
;
ttitle "PDBs in CDB &CDB_Name"
set termout on
select P.Con_ID c1
, P.Name c2
, P.CON_UID c3
, P.Restricted c4
, P.Open_Mode c5
, P.Application_Root c6
, P.Application_PDB c7
, P.Application_Seed c8
, P.Application_Root_Clone c9
, P.Proxy_PDB c10
, AC.Name c11
from v$PDBs P
left outer join v$PDBs AC
on AC.Con_ID = P.Application_Root_Con_ID
order by P.Name
, nvl(AC.Name,P.Name)
, P.Application_Root desc
, P.Application_Seed desc
, P.Name
;
````
3. You should be able to set your container to Tulsa because wmStore_Admin is an application common user, but it should fail if you try to set it to CDB$Root since that container is outside the application container.
````
show user
alter session set container=wmStore_Master;
connect wmStore_Admin/oracle@localhost:1523/wmStore_Master;
alter session set container = Tulsa;
alter session set container = CDB$Root;
````
4. You can connect directly as the various local users. Keep in mind that these are local users; they just happen to have the same password. Notice that the local user for California cannot switch to the Tulsa container because it is local to the California container.
````
connect wm_admin/oracle@localhost:1523/Tulsa;
alter session set container = Tulsa;
connect wm_admin/oracle@localhost:1523/California;
alter session set container = Tulsa;
````
5. When prompted, give one of the PDBs that was created (Tulsa, California, NYC, or Tahoe). You can rerun this script with a different store if you want to view another store's data.
````
@Lab2_Queries.sql
````
## Step 3: Upgrade from v1 to v2
In this section we upgrade Application wmStore from v1 to v2. Despite each franchise having a separate tenant PDB, there is only one master application definition to be upgraded – in the Application Root. We run the upgrade script only once, against the Application Root. It is then simply a matter of synchronizing each franchise's tenant PDB for it to be upgraded to the new version. Note that this model allows for granular (per tenant/franchise) upgrade schedules.
The tasks you will accomplish in this lab are:
- Upgrade application wmStore to v2
- Synchronize three of four Application Tenant PDBs
1. Define the v2.0 upgrade of application wmStore in the application root wmStore_Master.
````
conn system/oracle@localhost:1523/wmStore_Master;
alter pluggable database application wmStore begin upgrade '1.0' to '2.0';
connect wmStore_Admin/oracle@localhost:1523/wmStore_Master
alter table wm_Products add
(Local_Product_YN char(1) default 'Y' not null
)
;
alter table wm_Products add constraint Local_Product_Bool
check (Local_Product_YN in ('Y','N'))
;
create or replace view wm_Order_Details
-- sharing = metadata
(Order_Number
,Campaign_Name
,Item_Num
,Product_Name
,Local_Product_YN
,Order_Qty
) as
select o.Order_Number
, c.Name
, i.Item_Num
, p.Name
, p.Local_Product_YN
, i.Order_Qty
from wm_Orders o
join wm_Order_Items i
on i.Order_ID = o.Row_GUID
join wm_Products p
on i.Product_ID = p.Row_GUID
left outer join wm_Campaigns c
on o.Campaign_ID = c.Row_GUID
;
update wm_Products
set Local_Product_YN = 'N'
where Name in
('Tornado Twisted'
,'Muskogee Magic'
,'Root 66 Beer Float'
,'Yokie Dokie Okie Eggnog'
)
;
commit;
alter pluggable database application wmStore end upgrade;
````
2. Apply the upgrade to Tulsa
````
connect system/oracle@localhost:1523/Tulsa
alter pluggable database application wmStore sync;
````
3. Apply the upgrade to California
````
connect system/oracle@localhost:1523/California
alter pluggable database application wmStore sync;
````
4. Apply the upgrade to Tahoe
````
connect system/oracle@localhost:1523/Tahoe
alter pluggable database application wmStore sync;
````
5. Take a look at a pluggable database the upgrade was applied to.
````
column Row_GUID noprint
column Name format a30 heading "Product Name"
column Local_Product_YN format a14 heading "Local Product?"
define Franchise = "Tulsa"
ttitle "Products in Franchise &Franchise"
set echo on
connect wmStore_Admin/oracle@localhost:1523/Tulsa
desc wm_Products
select *
from wm_Products
;
set echo off
````
6. Look at a pluggable database that the upgrade was not applied to and compare its table definitions and data with one that was upgraded.
````
define Franchise = "NYC"
ttitle "Products in Franchise &Franchise"
set echo on
connect wmStore_Admin/oracle@localhost:1523/NYC
desc wm_Products
select *
from wm_Products
;
set echo off
````
## Step 4: Containers Queries
In this section we introduce a very powerful cross-container aggregation capability – containers() queries. Containers() queries allow an application administrator to connect to the Application Root and aggregate data with a single query across multiple Application Tenants (franchises) – or across all of them. This is another example of how Multitenant, with Application Containers, allows you to manage many Application Tenants as one, when needed. Notice that the values in the Franchise column come from Con$Name. Remember that containers() queries are executed in the Root and in all containers plugged into it.
The tasks you will accomplish in this lab are:
- Run Queries across containers
1. Connect to ``CDB1``.
````
connect wmStore_Admin/oracle@localhost:1523/wmStore_Master
````
2. Products in Tulsa and NYC
````
column c1 format a30 heading "Franchise"
column c2 format 9999999 heading "Order #"
column c3 format a30 heading "Campaign"
column c4 format 999999 heading "Item #"
column c5 format a30 heading "Product"
column c6 format 9,999 heading "Qty"
column c7 format 9,999,999 heading "Num Orders"
break on c1 on c3 on c5
column c4 noprint
ttitle "Products (in Tulsa and NYC)"
set echo on
select con$name c1
, Name c5
from containers(wm_Products)
where con$name in ('TULSA','NYC')
order by 1
, 2
;
set echo off
````
3. Order Counts Per Campaign
````
ttitle "Order Counts Per Campaign (Across All Franchises)"
set echo on
select con$name c1
, Campaign_Name c3
, count(*) c7
from containers(wm_Order_Details)
group by con$name
, Campaign_Name
order by 1
, 3 desc
, 2
;
set echo off
````
4. Order Volume Per Product
````
ttitle "Order Volume Per Product (Across All Franchises)"
set echo on
select con$name c1
, Product_Name c5
, count(*) c7
from containers(wm_Order_Details)
group by con$name
, Product_Name
order by 1
, 3 desc
, 2
;
set echo off
````
## Step 5: Application Root Clones and Compatibility
In this section we explore application versions, compatibility, and Application Root Clones, using the pluggable databases created in the earlier sections.
The tasks you will accomplish in this lab are:
- Review the PDBs in the container database ``CDB1``
- Attempt to set the compatibility version of application wmStore to 2.0 and observe the error
- Sync the remaining application PDBs to v2.0, set the compatibility version, and review the PDBs again
1. Connect to ``CDB1``.
````
sqlplus /nolog
connect system/oracle@localhost:1523/cdb1
````
2. Review the pluggable databases in the container database
````
set linesize 180
column c0 noprint new_value CDB_Name
column c1 heading "Con ID" format 99
column c2 heading "PDB Name" format a30
column c3 heading "Con UID" format 99999999999
column c4 heading "Restricted?" format a11
column c5 heading "Open Mode" format a10
column c6 heading "Root?" format a5
column c7 heading "App PDB?" format a8
column c8 heading "Seed?" format a5
column c9 heading "Root Clone?" format a11
column c10 heading "Proxy?" format a6
column c11 heading "App Container Name" format a30
set termout off
select Sys_Context('Userenv', 'CDB_Name') c0
from dual
;
ttitle "PDBs in CDB &CDB_Name"
set termout on
select P.Con_ID c1
, P.Name c2
, P.CON_UID c3
, P.Restricted c4
, P.Open_Mode c5
, P.Application_Root c6
, P.Application_PDB c7
, P.Application_Seed c8
, P.Application_Root_Clone c9
, P.Proxy_PDB c10
, AC.Name c11
from v$PDBs P
left outer join v$PDBs AC
on AC.Con_ID = P.Application_Root_Con_ID
order by P.Name
, nvl(AC.Name,P.Name)
, P.Application_Root desc
, P.Application_Seed desc
, P.Name
;
````
3. Connect to the master application root and try to set the compatibility version to 2.0. Notice you will get an error because some of the application PDBs are not yet at that version.
````
conn system/oracle@localhost:1523/wmStore_Master;
alter pluggable database application wmStore set compatibility version '2.0';
````
4. Run the query below and notice that some of the application PDBs are not at the current version. From the output you can see that NYC and wmStore_Master$Seed are still at 1.0.
````
column CON_UID heading "Con UID" format 999999999999
column APP_NAME heading "Application Name" format a20 truncate
column APP_ID heading "App ID" format 99999
column APP_VERSION heading "Version" format a7
column APP_STATUS heading "Status" format a12
column APP_ID noprint
select * from DBA_App_PDB_Status;
````
5. Connect to NYC and bring that up to the current version.
````
conn system/oracle@localhost:1523/NYC;
alter pluggable database application wmStore sync;
````
6. Connect to wmStore_Master$Seed and bring that up to the current version.
````
conn system/oracle@localhost:1523/wmStore_Master$Seed
alter pluggable database application wmStore sync;
````
7. Connect back to wmStore_Master and set the compatibility to 2.0. This time it should work.
````
conn system/oracle@localhost:1523/wmStore_Master
alter pluggable database application wmStore set compatibility version '2.0';
````
8. Look back at the list of PDBs now that the upgrades are complete.
````
conn system/oracle@localhost:1523/cdb1
set linesize 180
column c0 noprint new_value CDB_Name
column c1 heading "Con ID" format 99
column c2 heading "PDB Name" format a30
column c3 heading "Con UID" format 99999999999
column c4 heading "Restricted?" format a11
column c5 heading "Open Mode" format a10
column c6 heading "Root?" format a5
column c7 heading "App PDB?" format a8
column c8 heading "Seed?" format a5
column c9 heading "Root Clone?" format a11
column c10 heading "Proxy?" format a6
column c11 heading "App Container Name" format a30
set termout off
select Sys_Context('Userenv', 'CDB_Name') c0
from dual
;
ttitle "PDBs in CDB &CDB_Name"
set termout on
select P.Con_ID c1
, P.Name c2
, P.CON_UID c3
, P.Restricted c4
, P.Open_Mode c5
, P.Application_Root c6
, P.Application_PDB c7
, P.Application_Seed c8
, P.Application_Root_Clone c9
, P.Proxy_PDB c10
, AC.Name c11
from v$PDBs P
left outer join v$PDBs AC
on AC.Con_ID = P.Application_Root_Con_ID
order by P.Name
, nvl(AC.Name,P.Name)
, P.Application_Root desc
, P.Application_Seed desc
, P.Name
;
````
## Step 6: Expansion Beyond Single CDB and Application Root Replicas
In this section we follow the global expansion of Walt's Malts. In order to comply with requirements for data sovereignty and latency, Walt's Malts has had to expand into a second CDB, CDB2. (In reality this would be on a separate server.) It is very important to note that we still have only a single master application definition, despite the application now being deployed across multiple CDBs.
The tasks you will accomplish in this lab are:
- Create and Open Application Root Replicas (ARRs): wmStore_International, wmStore_West
- Create Proxy PDBs for the ARRs
- Synchronize the ARRs via their proxies
- Create App Seeds for the ARRs
- Provision the App PDBs for five new franchises
- Add franchise-specific products for new franchises
1. Connect to **CDB2**.
````
sqlplus /nolog
connect system/oracle@localhost:1524/cdb2
````
2. Create a database link to CDB1 to pull the data across
````
create public database link CDB1_DBLink
connect to system identified by oracle
using 'localhost:1523/cdb1';
````
3. Create and open the Application Root Replicas (ARRs)
````
create pluggable database wmStore_International as application container
from wmStore_Master@CDB1_DBLink;
create pluggable database wmStore_West as application container
from wmStore_Master@CDB1_DBLink;
alter pluggable database all open;
````
4. Create the CDB$Root-level DB Link to CDB2
````
connect system/oracle@localhost:1523/cdb1
create public database link CDB2_DBLink
connect to System identified by oracle
using 'localhost:1524/cdb2';
````
5. Create the Application-Root-level DB Links to CDB2
````
conn system/oracle@localhost:1523/wmStore_Master
create public database link CDB2_DBLink
connect to system identified by oracle
using 'localhost:1524/cdb2';
````
6. Create and open Proxy PDBs for the Application Root Replicas
````
create pluggable database wmStore_International_Proxy
as proxy from wmStore_International@CDB2_DBLink;
create pluggable database wmStore_West_Proxy
as proxy from wmStore_West@CDB2_DBLink;
alter pluggable database all open;
````
7. Synchronize the ARRs via their proxies. Notice you need to connect as sys to do this.
````
conn sys/oracle@localhost:1523/wmStore_International_Proxy as sysdba
alter pluggable database application wmStore sync;
conn sys/oracle@localhost:1523/wmStore_West_Proxy as sysdba
alter pluggable database application wmStore sync;
````
8. Create and open the Application Seed PDB for wmStore_International and sync it with application wmStore
````
conn system/oracle@localhost:1524/wmStore_International
create pluggable database as seed
admin user wm_admin identified by oracle;
conn sys/oracle@localhost:1524/wmStore_International as SysDBA
alter pluggable database wmStore_International$Seed open;
connect system/oracle@localhost:1524/wmStore_International$Seed
alter pluggable database application wmStore sync;
````
9. Create and open the Application Seed PDB for wmStore_West and sync it with application wmStore
````
conn system/oracle@localhost:1524/wmStore_West
create pluggable database as seed
admin user wm_admin identified by oracle;
conn sys/oracle@localhost:1524/wmStore_West as SysDBA
alter pluggable database wmStore_West$Seed open;
connect system/oracle@localhost:1524/wmStore_West$Seed
alter pluggable database application wmStore sync;
````
10. Connect to the wmStore_International Application Root Replica (ARR) and create a database link from that ARR to the CDB of the Master Root
````
connect system/oracle@localhost:1524/wmStore_International
create public database link CDB1_DBLink
connect to system identified by oracle
using 'localhost:1523/cdb1';
````
11. Provision Application PDBs for the UK, Denmark and France franchises
````
create pluggable database UK
admin user wm_admin identified by oracle;
create pluggable database Denmark
admin user wm_admin identified by oracle;
create pluggable database France
admin user wm_admin identified by oracle;
````
12. Connect to the wmStore_West Application Root Replica (ARR) and create a database link from that ARR to the CDB of the Master Root
````
connect system/oracle@localhost:1524/wmStore_West
create public database link CDB1_DBLink
connect to system identified by oracle
using 'localhost:1523/cdb1';
````
12. Provision Application PDBs for the Santa Monica and Japan franchises
````
create pluggable database Santa_Monica
admin user wm_admin identified by oracle;
create pluggable database Japan
admin user wm_admin identified by oracle;
````
14. Switch to the CDB root and open all of the pluggable databases
````
alter session set container=CDB$Root;
alter pluggable database all open;
````
15. Create franchise-specific data
````
@Franchise_Data_Lab6
````
## Step 7: Durable Location Transparency
This section demonstrates "durable location transparency". In the previous section we saw how Proxy PDBs can provide location transparency. The Proxy PDBs for the Application Root Replicas (ARRs) provided local context (in the master Application Root) for the ARRs, which are physically located in a different CDB. This is a good example of location transparency. In this section, we see how these ARR Proxies can provide "durable location transparency". That is, location transparency that survives the physical reconfiguration of the Application Estate – specifically by relocating an Application PDB for a particular franchise from one CDB to another.
The tasks you will accomplish in this lab are:
- Run a report against wmStore_Master
- Relocate Tahoe to wmStore_West
- Run the report again
1. Connect and run a report against wmStore_Master
````
sqlplus /nolog
connect wmStore_Admin/oracle@localhost:1523/wmStore_Master
set verify off
define Campaign = "Locals vs Yokels"
column c1 format a30 heading "Franchise"
column c2 format a10 heading "CDB"
column c3 format 9,999,999 heading "Num Orders"
ttitle "Business-Wide Count of Orders for Campaign &Campaign"
select con$name c1
, cdb$name c2
, count(*) c3
from containers(wm_Orders) o
, wm_Campaigns c
where o.Campaign_id = c.row_guid
and c.Name = '&Campaign'
group by con$name
, cdb$name
order by 3 desc
, 1
;
````
2. Relocate Tahoe to wmStore_West
````
connect system/oracle@localhost:1524/wmStore_West
create pluggable database Tahoe from Tahoe@CDB1_DBLink
relocate availability max;
connect sys/oracle@localhost:1524/cdb2 as SysDBA
alter pluggable database Tahoe open;
````
3. Rerun the report and take note of the changes in data based on the relocation.
````
connect wmStore_Admin/oracle@localhost:1523/wmStore_Master
set verify off
define Campaign = "Locals vs Yokels"
column c1 format a30 heading "Franchise"
column c2 format a10 heading "CDB"
column c3 format 9,999,999 heading "Num Orders"
ttitle "Business-Wide Count of Orders for Campaign &Campaign"
select con$name c1
, cdb$name c2
, count(*) c3
from containers(wm_Orders) o
, wm_Campaigns c
where o.Campaign_id = c.row_guid
and c.Name = '&Campaign'
group by con$name
, cdb$name
order by 3 desc
, 1
;
````
## Step 8: Data Sharing
In this section we introduce the advanced concept of data sharing. We have already seen how Multitenant, with Application Containers, can provide an instant SaaS architecture for an application previously architected for standalone deployment. Technically this is done by installing a master application definition in an Application Root. Application PDBs for each tenant / franchise are plugged into this Application Root and the metadata for the database components of the application definition is served from the Application Root. However, so far all data, including data which may be considered part of the application definition ("seed data"), has been local. In other words, there is a replica of this seed data in every Application PDB. In this lab we'll see how, in addition to metadata, common data may also be shared from the Application Root. To do this we'll upgrade application wmStore to v3.0 and introduce various powerful data sharing capabilities.
The tasks you will accomplish in this lab are:
- Upgrade Application wmStore to v3.0
- Propagate the Upgrade to all franchises
- Query the wm_Products table in a franchise PDB to see the sources of data
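As background for the sharing clauses used in this upgrade, the sketch below contrasts the three sharing modes available for objects created in an Application Root. The table names are illustrative only (they are not part of the wmStore application), and note that the sharing clause is only honored inside an application install, upgrade, or patch block.

````
-- Illustrative only: the three sharing modes for tables created in an application root.
-- METADATA (the default): only the definition is shared; each application PDB stores its own rows.
create table demo_metadata (id number, val varchar2(30)) sharing = metadata;

-- DATA: definition and rows live in the application root; application PDBs read the common rows.
create table demo_data (id number, val varchar2(30)) sharing = data;

-- EXTENDED DATA: common rows come from the application root and each application PDB may add local rows.
create table demo_extended (id number, val varchar2(30)) sharing = extended data;
````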
1. Create the v3.0 upgrade in wmStore_Master
````
connect system/oracle@localhost:1523/wmStore_Master
alter pluggable database application wmStore begin upgrade '2.0' to '3.0';
connect wmStore_Admin/oracle@localhost:1523/wmStore_Master
create table wm_List_Of_Values
-- sharing = metadata -- the default
sharing = data
-- sharing = extended data
(Row_GUID raw(16) default Sys_GUID() primary key
,Type_Code varchar2(30) not null
,Value_Code varchar2(30) not null
)
;
alter table wm_List_Of_Values add constraint wm_List_Of_Values_U1
unique (Type_Code, Value_Code)
;
insert into wm_List_Of_Values (Type_Code, Value_Code) values ('Type', 'Currency');
insert into wm_List_Of_Values (Type_Code, Value_Code) values ('Currency', 'USD');
insert into wm_List_Of_Values (Type_Code, Value_Code) values ('Currency', 'GBP');
insert into wm_List_Of_Values (Type_Code, Value_Code) values ('Currency', 'DKK');
insert into wm_List_Of_Values (Type_Code, Value_Code) values ('Currency', 'EUR');
insert into wm_List_Of_Values (Type_Code, Value_Code) values ('Currency', 'JPY');
insert into wm_List_Of_Values (Type_Code, Value_Code) values ('Type', 'Country');
insert into wm_List_Of_Values (Type_Code, Value_Code) values ('Country', 'USA');
insert into wm_List_Of_Values (Type_Code, Value_Code) values ('Country', 'UK');
insert into wm_List_Of_Values (Type_Code, Value_Code) values ('Country', 'Denmark');
insert into wm_List_Of_Values (Type_Code, Value_Code) values ('Country', 'France');
insert into wm_List_Of_Values (Type_Code, Value_Code) values ('Country', 'Japan');
commit;
alter table wm_Orders disable constraint wm_Orders_F1;
alter table wm_Order_Items disable constraint wm_Order_Items_F2;
delete from wm_Campaigns where Row_GUID in ('01','02','03');
delete from wm_Products where Row_GUID in ('01','02','03','04');
insert into wm_Campaigns (Row_GUID, Name) values ('01', 'Locals vs Yokels');
insert into wm_Campaigns (Row_GUID, Name) values ('02', 'Black Friday 2016');
insert into wm_Campaigns (Row_GUID, Name) values ('03', 'Christmas 2016');
insert into wm_Products (Row_GUID, Name) values ('01', 'Tornado Twisted');
insert into wm_Products (Row_GUID, Name) values ('02', 'Muskogee Magic');
insert into wm_Products (Row_GUID, Name) values ('03', 'Root 66 Beer Float');
insert into wm_Products (Row_GUID, Name) values ('04', 'Yokie Dokie Okie Eggnog');
commit;
alter pluggable database application wmStore end upgrade;
````
2. Sync the application seeds and all franchise PDBs with v3.0
````
connect system/oracle@localhost:1523/WMSTORE_MASTER$SEED
alter pluggable database application wmStore sync;
connect system/oracle@localhost:1523/CALIFORNIA
alter pluggable database application wmStore sync;
connect system/oracle@localhost:1523/NYC
alter pluggable database application wmStore sync;
connect system/oracle@localhost:1523/TULSA
alter pluggable database application wmStore sync;
connect system/oracle@localhost:1524/WMSTORE_INTERNATIONAL$SEED
alter pluggable database application wmStore sync;
connect system/oracle@localhost:1524/DENMARK
alter pluggable database application wmStore sync;
connect system/oracle@localhost:1524/FRANCE
alter pluggable database application wmStore sync;
connect system/oracle@localhost:1524/UK
alter pluggable database application wmStore sync;
connect system/oracle@localhost:1524/WMSTORE_WEST$SEED
alter pluggable database application wmStore sync;
connect system/oracle@localhost:1524/JAPAN
alter pluggable database application wmStore sync;
connect system/oracle@localhost:1524/SANTA_MONICA
alter pluggable database application wmStore sync;
connect system/oracle@localhost:1524/TAHOE
alter pluggable database application wmStore sync;
````
3. Queries against container CDB1
````
connect system/oracle@localhost:1523/cdb1
column c01 format 999999 heading "Con_ID"
column c02 format a30 heading "Container"
ttitle "Containers"
select Con_ID c01
, Name c02
from v$containers
order by 1;
````
4. Queries against container wmStore_Master
````
connect wmStore_Admin/oracle@localhost:1523/wmStore_Master
column c03 format a30 heading "Table Name"
column c04 format a20 heading "Sharing Type"
ttitle "Sharing Modes for Campaigns, Products and Orders"
select Object_Name c03
, Sharing c04
from DBA_Objects
where Owner = User
and Object_Name in ('WM_CAMPAIGNS','WM_PRODUCTS','WM_ORDERS')
order by Object_Name
;
````
5. Queries against container Tulsa
````
connect wmStore_Admin/oracle@localhost:1523/Tulsa
column c1 format a20 heading "Origin Con_ID"
column c2 format a30 heading "Product"
ttitle "Products Visible in Franchise Tulsa"
select Row_GUID c1
, Name c2
from wm_Products
;
````
## Step 9: Application Patches
In this section we define an application patch. Patches are comparable to the application upgrades that we've seen in previous labs, but there are three important differences.
- The types of operation that are allowed in a patch are more limited. Essentially operations which are destructive are not allowed, including:
- Drop a table, column, index, trigger...
- create *or replace* view, package, procedure...
- Patches do not involve creation of Application Root Clones.
- Patches are not version-specific. This means that a single patch may be applied to multiple application versions.
The tasks you will accomplish in this lab are:
- Define patch 301 for application wmStore
- Propagate the patch to the Application Root Replicas. Then apply it to three franchises (but not to all)
1. Create patch 301
````
connect wmStore_Admin/oracle@localhost:1523/wmStore_Master
alter pluggable database application wmStore begin patch 301;
alter table wm_Orders add
(Financial_Quarter_Code varchar2(30) default 'Q4,FY2017' not null
)
;
create index wm_Orders_M1 on wm_Orders(Order_Date);
insert into wm_List_Of_Values (Type_Code, Value_Code) values ('Type', 'Financial Quarter');
insert into wm_List_Of_Values (Type_Code, Value_Code) values ('Financial Quarter', 'Q1,FY2016');
insert into wm_List_Of_Values (Type_Code, Value_Code) values ('Financial Quarter', 'Q2,FY2016');
insert into wm_List_Of_Values (Type_Code, Value_Code) values ('Financial Quarter', 'Q3,FY2016');
insert into wm_List_Of_Values (Type_Code, Value_Code) values ('Financial Quarter', 'Q4,FY2016');
insert into wm_List_Of_Values (Type_Code, Value_Code) values ('Financial Quarter', 'Q1,FY2017');
insert into wm_List_Of_Values (Type_Code, Value_Code) values ('Financial Quarter', 'Q2,FY2017');
insert into wm_List_Of_Values (Type_Code, Value_Code) values ('Financial Quarter', 'Q3,FY2017');
insert into wm_List_Of_Values (Type_Code, Value_Code) values ('Financial Quarter', 'Q4,FY2017');
commit;
update wm_Orders
set Financial_Quarter_Code = 'Q1,FY2016'
where Order_Date >= '01-JAN-16'
and Order_Date < '01-APR-16'
;
update wm_Orders
set Financial_Quarter_Code = 'Q2,FY2016'
where Order_Date >= '01-APR-16'
and Order_Date < '01-JUL-16'
;
update wm_Orders
set Financial_Quarter_Code = 'Q3,FY2016'
where Order_Date >= '01-JUL-16'
and Order_Date < '01-OCT-16'
;
update wm_Orders
set Financial_Quarter_Code = 'Q4,FY2016'
where Order_Date >= '01-OCT-16'
and Order_Date < '01-JAN-17'
;
update wm_Orders
set Financial_Quarter_Code = 'Q1,FY2017'
where Order_Date >= '01-JAN-17'
and Order_Date < '01-APR-17'
;
update wm_Orders
set Financial_Quarter_Code = 'Q2,FY2017'
where Order_Date >= '01-APR-17'
and Order_Date < '01-JUL-17'
;
update wm_Orders
set Financial_Quarter_Code = 'Q3,FY2017'
where Order_Date >= '01-JUL-17'
and Order_Date < '01-OCT-17'
;
update wm_Orders
set Financial_Quarter_Code = 'Q4,FY2017'
where Order_Date >= '01-OCT-17'
and Order_Date < '01-JAN-18'
;
commit;
alter pluggable database application wmStore end patch;
````
2. Apply the patch to some but not all of the databases
````
connect system/oracle@localhost:1523/Tulsa
alter pluggable database application wmStore sync to patch 301;
connect system/oracle@localhost:1523/California
alter pluggable database application wmStore sync to patch 301;
connect system/oracle@localhost:1523/NYC
alter pluggable database application wmStore sync to patch 301;
````
## Step 10: DBA Views
In this section we introduce some of the DBA views which are relevant to Application Containers.
The tasks you will accomplish in this lab are:
- Explore various DBA Views
1. DBA_PDBs
````
connect system/oracle@localhost:1523/cdb1
ttitle off
set linesize 180
column c1 heading "Con ID" format 9999
column c2 heading "PDB Name" format a30
column c3 heading "PDB ID" format a10
column c3 noprint
column c4 heading "Con UID" format a12
column c5 heading "Status" format a10
column c6 heading "Root?" format a5
column c7 heading "App PDB?" format a8
column c8 heading "Seed?" format a5
column c9 heading "Root Clone?" format a11
column c10 heading "Proxy?" format a6
column c11 heading "App Container Name" format a30
set echo on
desc DBA_PDBs
set echo off
set echo on
select P.Con_ID c1
, P.PDB_Name c2
, P.PDB_ID c3
, P.CON_UID c4
, P.Status c5
, P.Application_Root c6
, P.Application_PDB c7
, P.Application_Seed c8
, P.Application_Clone c9
, P.Is_Proxy_PDB c10
, AC.PDB_Name c11
from DBA_PDBs P
left outer join DBA_PDBs AC
on AC.Con_ID = P.Application_Root_Con_ID
order by 6 desc
, 9
, 8 desc
, 10 desc
, 7 desc
, 2
, 8
;
set echo off
````
2. DBA_APPLICATIONS
````
connect system/oracle@localhost:1523/wmStore_Master
column CON_UID heading "Con UID" format 9999999999
column APP_NAME heading "Application Name" format a20 truncate
column APP_ID heading "App ID" format 99999
column APP_VERSION heading "Version" format a7
column APP_VERSION_COMMENT heading "Comment" format a50
column APP_STATUS heading "Status" format a12
column APP_IMPLICIT heading "Implicit" format a8
column APP_CAPTURE_SERVICE heading "Capture Svc" format a30
column APP_CAPTURE_MODULE heading "Capture Mod" format a15
column PATCH_NUMBER heading "Patch #" format 999999
column PATCH_MIN_VERSION heading "Min Vers" format a8
column PATCH_STATUS heading "Status" format a10
column PATCH_COMMENT heading "Comment" format a50
column ORIGIN_CON_ID heading "Origin_Con_ID" format 999999999999
column STATEMENT_ID heading "Stmt ID" format 999999
column CAPTURE_TIME heading "Capture TS" format a9
column APP_STATEMENT heading "SQL Statement" format a50 truncate
column ERRORNUM heading "Error #" format 999999
column ERRORMSG heading "Error Message" format a50 truncate
column SYNC_TIME heading "Sync TS" format a9
set echo on
desc DBA_Applications
select * from DBA_Applications;
set echo off
````
3. DBA_APP_VERSIONS
````
set echo on
desc DBA_App_Versions
select * from DBA_App_Versions;
set echo off
````
4. DBA_APP_PATCHES
````
set echo on
desc DBA_App_Patches
select * from DBA_App_Patches;
set echo off
````
5. DBA_APP_PDB_STATUS
````
set echo on
desc DBA_App_PDB_Status
select * from DBA_App_PDB_Status;
set echo off
````
6. DBA_APP_STATEMENTS
````
set echo on
desc DBA_App_Statements
select * from DBA_App_Statements;
set echo off
````
7. DBA_APP_ERRORS
````
set echo on
connect system/oracle@localhost:1523/NYC
desc DBA_App_Errors
select * from DBA_App_Errors;
set echo off
````
## Step 11: Diagnosing, Correcting Problems, and Restarting Sync
In this section we explore the restartability of the patching process.
The tasks you will accomplish in this lab are:
- Deliberately make a manual change to NYC that will conflict with applying patch 301
- Attempt to sync NYC to apply the patch – anticipating a failure
- Query relevant DBA views to identify the problem
- Resolve the problem and re-start the sync, which should now succeed
1. Create an index that will break the sync
````
connect wmStore_Admin/oracle@localhost:1523/NYC
create index wm_Orders_M1 on wm_Orders(Order_Date);
````
2. Try the sync and have it fail
````
connect system/oracle@localhost:1523/NYC
alter pluggable database application wmStore sync;
````
3. Check for errors
````
set linesize 180
column APP_NAME heading "Application Name" format a20 truncate
column APP_STATEMENT heading "SQL Statement" format a50 truncate
column ERRORNUM heading "Error #" format 999999
column ERRORMSG heading "Error Message" format a50 truncate
column SYNC_TIME heading "Sync TS" format a9
select * from DBA_App_Errors;
````
4. Correct the issue and try the sync again.
````
connect wmStore_Admin/oracle@localhost:1523/NYC
drop index wm_Orders_M1;
connect system/oracle@localhost:1523/NYC
alter pluggable database application wmStore sync;
````
## Step 12: Container Map
In this section we explore another location transparency technology: Container Map. Here we follow the expansion of Walt's Malts through the acquisition of a formerly independent distributor of Walt's Malts products. This company is named Terminally Chill, and its niche was selling Walt's Malts products through a number of small kiosks in various airports globally. The Terminally Chill application has a different design from the original wmStore application. Whereas wmStore was originally designed for standalone deployment, Terminally Chill used a single database to manage data for all kiosks in all airports. The application server tiers are designed to connect directly to a single database, with query predicates to retrieve data for the right airport and kiosk. In this lab, we'll see how Container Map can help accommodate applications of this design.
The tasks you will accomplish in this lab are:
- Setup Application PDBs for new Airport franchises
- Install v1 of Application "Terminal"
- Sync Application PDBs
- Create franchise-specific demonstration data
- Perform various queries to see how Container Map can deliver location transparency
1. Create the application root
````
connect system/oracle@localhost:1523/cdb1
create pluggable database Terminal_Master as application container
admin user tc_admin identified by oracle;
alter pluggable database Terminal_Master open;
````
2. Create the Application PDBs
````
connect system/oracle@localhost:1523/Terminal_Master
create pluggable database LHR
admin user tc_admin identified by oracle;
create pluggable database SFO
admin user tc_admin identified by oracle;
create pluggable database JFK
admin user tc_admin identified by oracle;
create pluggable database LAX
admin user tc_admin identified by oracle;
alter session set container=CDB$Root;
alter pluggable database all open;
````
3. Create the 1.0 Terminal Install
````
connect system/oracle@localhost:1523/Terminal_Master
alter pluggable database application Terminal begin install '1.0';
connect system/oracle@localhost:1523/Terminal_Master
create table tc_Kiosk_Map
(Kiosk varchar2(30) not null
)
partition by list (Kiosk)
(partition LHR values ('LHR T1','LHR T4','LHR T5')
,partition SFO values ('SFO INTL','SFO T2')
,partition JFK values ('JFK T1','JFK T2','JFK T3')
,partition LAX values ('LAX INTL','LAX 7/8')
)
;
alter database set Container_Map = 'SYSTEM.tc_Kiosk_Map';
create tablespace Terminal_TBS datafile size 100M autoextend on next 10M maxsize 200M;
create user Terminal_Admin identified by oracle container=all;
grant create session, dba to Terminal_Admin;
alter user Terminal_Admin default tablespace Terminal_TBS;
connect Terminal_Admin/oracle@localhost:1523/Terminal_Master
create table tc_Products
sharing = extended data
(Row_GUID raw(16) default Sys_GUID() primary key
,Name varchar2(30) not null unique
,Local_Product_YN char(1) default 'Y' not null
)
;
alter table tc_Products add constraint Local_Product_Bool
check (Local_Product_YN in ('Y','N'))
;
create table tc_Coupons
sharing = data
(Row_GUID raw(16) default Sys_GUID() primary key
,Coupon_Number number(16,0) generated always as identity not null unique
,Campaign_Code varchar2(30)
,Expiration_Date date default current_date+14
)
;
create table tc_Orders
sharing = metadata
(Row_GUID raw(16) default Sys_GUID() primary key
,Order_Number number(16,0) generated always as identity not null unique
,Order_Date date default current_date not null
,Kiosk_Code varchar2(30) not null
,Coupon_ID raw(16)
,Campaign_Code varchar2(30) null
)
;
alter table tc_Orders add constraint tc_Orders_F1
foreign key (Coupon_ID)
references tc_Coupons(Row_GUID)
disable
;
create table tc_Order_Items
sharing = metadata
(Row_GUID raw(16) default Sys_GUID() primary key
,Order_ID raw(16) not null
,Item_Num number(16,0) not null
,Product_ID raw(16) not null
,Order_Qty number(16,0) not null
)
;
alter table tc_Order_Items add constraint tc_Order_Items_F1
foreign key (Order_ID)
references tc_Orders(Row_GUID)
disable
;
alter table tc_Order_Items add constraint tc_Order_Items_F2
foreign key (Product_ID)
references tc_Products(Row_GUID)
disable
;
create table tc_List_Of_Values
sharing = data
(Row_GUID raw(16) default Sys_GUID() primary key
,Type_Code varchar2(30) not null
,Value_Code varchar2(30) not null
)
;
alter table tc_List_Of_Values add constraint tc_List_Of_Values_U1
unique (Type_Code, Value_Code)
;
insert into tc_List_Of_Values (Type_Code, Value_Code) values ('Type', 'Airport');
insert into tc_List_Of_Values (Type_Code, Value_Code) values ('Airport','LHR');
insert into tc_List_Of_Values (Type_Code, Value_Code) values ('Airport','SFO');
insert into tc_List_Of_Values (Type_Code, Value_Code) values ('Airport','JFK');
insert into tc_List_Of_Values (Type_Code, Value_Code) values ('Airport','LAX');
insert into tc_List_Of_Values (Type_Code, Value_Code) values ('Type', 'Kiosk');
insert into tc_List_Of_Values (Type_Code, Value_Code) values ('Kiosk','LHR T1');
insert into tc_List_Of_Values (Type_Code, Value_Code) values ('Kiosk','LHR T4');
insert into tc_List_Of_Values (Type_Code, Value_Code) values ('Kiosk','LHR T5');
insert into tc_List_Of_Values (Type_Code, Value_Code) values ('Kiosk','SFO INTL');
insert into tc_List_Of_Values (Type_Code, Value_Code) values ('Kiosk','SFO T2');
insert into tc_List_Of_Values (Type_Code, Value_Code) values ('Kiosk','JFK T1');
insert into tc_List_Of_Values (Type_Code, Value_Code) values ('Kiosk','JFK T2');
insert into tc_List_Of_Values (Type_Code, Value_Code) values ('Kiosk','JFK T3');
insert into tc_List_Of_Values (Type_Code, Value_Code) values ('Kiosk','LAX INTL');
insert into tc_List_Of_Values (Type_Code, Value_Code) values ('Kiosk','LAX 7/8');
insert into tc_List_Of_Values (Type_Code, Value_Code) values ('Type', 'Campaign');
insert into tc_List_Of_Values (Type_Code, Value_Code) values ('Campaign','Foreign Getaway');
insert into tc_List_Of_Values (Type_Code, Value_Code) values ('Campaign','Lost Weekend');
insert into tc_List_Of_Values (Type_Code, Value_Code) values ('Campaign','Road Warrior');
insert into tc_List_Of_Values (Type_Code, Value_Code) values ('Campaign','World Citizen');
insert into tc_List_Of_Values (Type_Code, Value_Code) values ('Type', 'Financial Quarter');
insert into tc_List_Of_Values (Type_Code, Value_Code) values ('Financial Quarter', 'Q1,FY2016');
insert into tc_List_Of_Values (Type_Code, Value_Code) values ('Financial Quarter', 'Q2,FY2016');
insert into tc_List_Of_Values (Type_Code, Value_Code) values ('Financial Quarter', 'Q3,FY2016');
insert into tc_List_Of_Values (Type_Code, Value_Code) values ('Financial Quarter', 'Q4,FY2016');
insert into tc_List_Of_Values (Type_Code, Value_Code) values ('Financial Quarter', 'Q1,FY2017');
insert into tc_List_Of_Values (Type_Code, Value_Code) values ('Financial Quarter', 'Q2,FY2017');
insert into tc_List_Of_Values (Type_Code, Value_Code) values ('Financial Quarter', 'Q3,FY2017');
insert into tc_List_Of_Values (Type_Code, Value_Code) values ('Financial Quarter', 'Q4,FY2017');
commit;
insert into tc_Products (Row_GUID, Name, Local_Product_YN) values ('01', 'Tornado Twisted', 'N');
insert into tc_Products (Row_GUID, Name, Local_Product_YN) values ('02', 'Muskogee Magic', 'N');
insert into tc_Products (Row_GUID, Name, Local_Product_YN) values ('03', 'Root 66 Beer Float', 'N');
insert into tc_Products (Row_GUID, Name, Local_Product_YN) values ('04', 'Yokie Dokie Okie Eggnog', 'N');
commit;
alter table tc_Orders enable containers_default;
alter table tc_Orders enable container_map;
alter pluggable database application Terminal end install '1.0';
````
4. Sync the Application databases to install 1.0
````
connect system/oracle@localhost:1523/LHR
alter pluggable database application Terminal sync to '1.0';
connect system/oracle@localhost:1523/SFO
alter pluggable database application Terminal sync to '1.0';
connect system/oracle@localhost:1523/JFK
alter pluggable database application Terminal sync to '1.0';
connect system/oracle@localhost:1523/LAX
alter pluggable database application Terminal sync to '1.0';
````
5. Load the Terminal Data
````
@Terminal_Data_Lab12
````
6. Review the Container Map
````
connect Terminal_Admin/oracle@localhost:1523/Terminal_Master
column c1 format a30 heading "Airport"
column c2 format a30 heading "Kiosk"
column c3 format 999,999 heading "Num Orders"
define Kiosk = "LAX INTL"
ttitle "Order Count in Kiosk &Kiosk in Past Year"
select count(*)
from tc_Orders
where Order_Date > current_date-365
and Kiosk_Code = '&Kiosk'
;
ttitle "Order Count in Subset of Kiosks in Past Year"
select o.Kiosk_Code c2
, count(*) c3
from tc_Orders o
where o.Kiosk_Code in ('SFO INTL','LHR T5')
and o.Campaign_Code = 'Foreign Getaway'
and o.Order_Date > current_date-365
group by o.Kiosk_Code
;
```` | 32.743389 | 955 | 0.674883 | eng_Latn | 0.873818 |
9e02c3b9a44555c0146abb38f23cdf44a5831ae8 | 554 | markdown | Markdown | _posts/2016-01-14-cirque-du-soleil-headquarters.markdown | olimwei/project | 220ce25e2e2a7ea86c27617e7e936df39c3a1cfe | [
"MIT"
] | null | null | null | _posts/2016-01-14-cirque-du-soleil-headquarters.markdown | olimwei/project | 220ce25e2e2a7ea86c27617e7e936df39c3a1cfe | [
"MIT"
] | null | null | null | _posts/2016-01-14-cirque-du-soleil-headquarters.markdown | olimwei/project | 220ce25e2e2a7ea86c27617e7e936df39c3a1cfe | [
"MIT"
] | null | null | null | ---
layout: post
title: Cirque du Soleil Headquarters
date: 2016-01-14 15:22:50.000000000 +09:00
---
`14th Jan 2016`
<center>
<div>
<img src="http://project.olim.ca/assets/images/cirque-du-soleil-1.jpeg" alt="Cirque du soleil太阳马戏团">
</div>
</center>
<div>
Cirque du Soleil, Canada's national-treasure entertainment company, was founded by two Canadian street performers, Guy Laliberté and Gilles Ste-Croix. Its shows run year-round in Las Vegas, while its global headquarters is located in Montreal, the city where the company originated. The headquarters now sits in the Saint-Michel district on the north side of the city and serves as the research and development base for its entertainment productions.
</div>
<center>
<div>
<img src="http://project.olim.ca/assets/images/cirque-du-soleil-2.jpeg" alt="Cirque du soleil太阳马戏团">
</div>
</center>
China's Fosun Group has acquired a 25% stake in Cirque du Soleil.
| 20.518519 | 148 | 0.741877 | yue_Hant | 0.243426 |
9e02c9da8687f9b2b8e74328608c39c102824c6d | 291 | md | Markdown | README.md | rkabadi/pyedipug | 856d7951a29fe7788ccc27841c444473ab63fc7b | [
"MIT"
] | 1 | 2016-01-20T00:34:47.000Z | 2016-01-20T00:34:47.000Z | README.md | rkabadi/pyedipug | 856d7951a29fe7788ccc27841c444473ab63fc7b | [
"MIT"
] | 4 | 2016-01-20T00:31:59.000Z | 2020-12-04T21:04:15.000Z | README.md | rkabadi/pyedipug | 856d7951a29fe7788ccc27841c444473ab63fc7b | [
"MIT"
] | 2 | 2016-03-09T11:22:39.000Z | 2017-09-13T00:18:35.000Z | # pyedimax
Pyedimax is a Python library for interfacing with the Edimax Smart Plug switches SP-1101W and SP-2101W.
The code in pyedimax is adapted from wendlers' [ediplug-py](https://github.com/wendlers/ediplug-py) repository.
# Requirements
- Python 2.7+
- Python 3.3+
- Requests library
| 26.454545 | 111 | 0.776632 | eng_Latn | 0.928195 |
9e034af45f7da09898717e7b1170306007820695 | 16,186 | md | Markdown | lessons/07_rnaseq_workflow.md | hbc/Intro-to-Unix | ffaf3cb0335a1316ecfb283651cea7918fb8ff3f | [
"CC-BY-4.0"
] | 4 | 2016-01-31T04:27:08.000Z | 2016-11-20T17:15:38.000Z | lessons/07_rnaseq_workflow.md | hbc/Intro-to-Unix | ffaf3cb0335a1316ecfb283651cea7918fb8ff3f | [
"CC-BY-4.0"
] | null | null | null | lessons/07_rnaseq_workflow.md | hbc/Intro-to-Unix | ffaf3cb0335a1316ecfb283651cea7918fb8ff3f | [
"CC-BY-4.0"
] | 3 | 2016-08-15T18:32:37.000Z | 2018-10-04T05:07:51.000Z | ---
title: "RNA-Seq workflow"
author: "Meeta Mistry, Bob Freeman"
date: "Thursday, May 5, 2016"
---
Approximate time: 90 minutes
## Learning Objectives:
* Continue through the RNA-Seq workflow to align reads to the reference genome
* Learning intricacies of various tools used in NGS analysis (parameters, usage, etc)
* Assessing input and output filetypes
## Running a Workflow
### Setting up
To get started with this lesson, we will login to the cluster but this time we are going to ask for 6 cores. We will do this by adding `-n 6` to our busb command:
```
ssh username@orchestra.med.harvard.edu
(enter password)
$ bsub -Is -n 6 -q training bash
```
Change directories into the `unix_workshop` directory and copy the `reference_data` folder into your project directory:
```
$ cd unix_workshop
$ cp -r reference_data rnaseq_project/data
```
Now move into the `rnaseq_project` directory.
$ cd rnaseq_project
You should have a directory tree set up similar to that shown below. It is best practice to have all the files you intend to use for your workflow present within the same directory. In our case, we have our original FASTQ files and post-trimming data generated in the previous section. We also have all reference data files that will be used in downstream analyses.
```
rnaseq_project
├── data
│ ├── reference_data
│ │ └── chr1.fa
│ │ └── chr1-hg19_genes.gtf
| ├── untrimmed_fastq
│ │
│ └── trimmed_fastq
│ ├── Irrel_kd_1.qualtrim25.minlen35.fq
│ ├── Irrel_kd_2.qualtrim25.minlen35.fq
│ ├── Irrel_kd_3.qualtrim25.minlen35.fq
│ ├── Mov10_oe_1.qualtrim25.minlen35.fq
│ ├── Mov10_oe_2.qualtrim25.minlen35.fq
│ └── Mov10_oe_3.qualtrim25.minlen35.fq
|
├── meta
├── results
└── logs
```
We previously described a general overview of the steps involved in RNA-Seq analysis, and in this session we will take our clean reads and align them to the reference genome.

We'll first perform the commands for a single sample. Next, we'll create a script for the commands and test it. Finally, we'll modify the script to run on the cluster.
So let's get started by loading up some of the modules for tools we need for this section:
```
$ module load seq/samtools/1.2 seq/htseq/0.6.1p1 seq/STAR/2.4.0j
```
Create an output directory for our alignment files:
```bash
$ mkdir results/STAR
```
In the script, we will eventually loop over all of our files and have the cluster work on each one in serial, then in parallel. For now, we're going to work on just one to set up our workflow. To start we will use the trimmed first replicate in the Mov10 overexpression group, `Mov10_oe_1.qualtrim25.minlen35.fq`
> **NOTE: if you did not follow the last section, please execute the following command:** (this will copy over the required files into your home directory.)
>
> ```bash
> # ONLY run this if you did not follow the last section
> $ cp -r /groups/hbctraining/unix_workshop_other/trimmed_fastq data/
>
> ```
>
### Alignment to genome
The alignment process consists of choosing an appropriate reference genome to map our reads against, and performing the read alignment using one of several splice-aware alignment tools such as [STAR](https://github.com/alexdobin/STAR) or [HISAT2](https://ccb.jhu.edu/software/hisat2/manual.shtml). The choice of aligner is a personal preference and also dependent on the computational resources that are available to you.
For RNA-seq it is important to use a **splice-aware aligner** because we want to **account for reads that span exon-exon junctions**. Other DNA aligners map reads against the reference genome and when an intron is encountered a long gap would be introduced in the mapping. This is not desired and might lead to false mappings. Splice-aware aligners use transcript information/restrictions from the GTF file so that it accounts for these junctions in its mapping algorithm.
For this workshop we will be using STAR (Spliced Transcripts Alignment to a Reference), an aligner designed to specifically address many of the challenges of RNA-seq read mapping by using a novel strategy for spliced alignments. STAR has been shown to have **high accuracy** and to outperform other aligners by more than a **factor of 50 in mapping speed**, but it also requires quite a bit of memory. More details on the algorithm itself can be found in the [STAR publication](http://bioinformatics.oxfordjournals.org/content/early/2012/10/25/bioinformatics.bts635).
Aligning reads using STAR is a two step process:
1. Create a genome index
2. Map reads to the genome.
> A quick note on shared databases for human and other commonly used model organisms. The Orchestra cluster has a designated directory at `/groups/shared_databases/` in which there are files that can be accessed by any user. These files include, but are not limited to, genome indices for various tools, reference sequences, tool-specific data, and data from public databases such as NCBI and PDB. So when using a tool that requires a reference of sorts, it is worth taking a quick look here because chances are it's already been taken care of for you.
Indexing of the reference genome has already been done for you. **You do not need to run this code**. For this step you need to provide a reference genome and an annotation file. For this workshop we are using reads that originate from a small subsection of chromosome 1 (~300,000 reads) and so we are using only chr1 as the reference genome, and have provided the appropriate indices. Depending on the size of your genome, this can take a while.
The basic options to **generate genome indices** using STAR as follows:
* `--runThreadN`: number of threads
* `--runMode`: genomeGenerate mode
* `--genomeDir`: /path/to/store/genome_indices
* `--genomeFastaFiles`: /path/to/FASTA_file
* `--sjdbGTFfile`: /path/to/GTF_file
* `--sjdbOverhang`: readlength -1
```
**Do not run this**
STAR --runThreadN 5 --runMode genomeGenerate --genomeDir ./ --genomeFastaFiles chr1.fa --sjdbGTFfile chr1-hg19_genes.gtf --sjdbOverhang 99
```
> *NOTE:* In case of reads of varying length, the ideal value for `--sjdbOverhang` is max(ReadLength)-1. In most cases, the default value of 100 will work as well as the ideal value.
The basic options for **mapping reads** to the genome using STAR is as follows:
* `--runThreadN`: number of threads
* `--readFilesIn`: /path/to/FASTQ_file
* `--genomeDir`: /path/to/genome_indices
* `--outFileNamePrefix`: prefix for all output files
The STAR aligner first looks for the longest sequence that exactly matches one or more locations on the reference genome (MMP1). The remaining unmapped portion (MMP2) is mapped elsewhere. The separate MMPs are then clustered based on proximity and stitched together to create a complete read. **If STAR does not find an exact matching sequence** for both MMPs, the algorithm chooses to extend MMP1 (allowing for a certain number of mismatches) *or* will soft-clip poor quality sequences (e.g. if quality of the extension is low, or presence of other contaminating sequence).
> *NOTE:* STAR will extract splice junctions from the GTF file and use them to greatly improve accuracy of the mapping. While this is optional, and STAR can be run without annotations, **using annotations is highly recommended** whenever they are available. Without the GTF, the splice junctions are determined based on proximity and donor and acceptor motifs identfied on the two MMPs.

Additionally, default filtering is applied in which the maximum number of multiple alignments allowed for a read is set to 10. If a read exceeds this number there is no alignment output. To change the default you can use `--outFilterMultimapNmax`, but for this lesson we will leave it as default. The advanced parameters that we are going to use are described below:
* `--outSAMtype`: output filetype (SAM default)
* `--outSAMunmapped`: what to do with unmapped reads
More details on STAR and its functionality can be found in the [user manual](https://github.com/alexdobin/STAR/blob/master/doc/STARmanual.pdf), we encourage you to peruse through to get familiar with all available options.
Now let's put it all together! The full STAR command is provided below.
> If you like you can copy-paste it directly into your terminal. Alternatively, you can manually enter the command, but it is advisable to first type out the full command in a text editor (i.e. [Sublime Text](http://www.sublimetext.com/) or [Notepad++](https://notepad-plus-plus.org/)) on your local machine and then copy paste into the terminal. This will make it easier to catch typos and make appropriate changes.
```
$ STAR --runThreadN 6 --genomeDir /groups/hbctraining/unix_workshop_other/reference_STAR \
--readFilesIn data/trimmed_fastq/Mov10_oe_1.qualtrim25.minlen35.fq \
--outFileNamePrefix results/STAR/Mov10_oe_1_ \
--outSAMtype BAM SortedByCoordinate \
--outSAMunmapped Within
```
***
**Exercise**
How many files do you see in your output directory? Using the `less` command take a look at `Mov10_oe_1_Log.final.out` and answer the following questions:
1. How many reads are uniquely mapped?
2. How many reads map to more than 10 locations on the genome?
3. How many reads are unmapped due to read length?
***
### SAM/BAM
The output we requested from STAR is a BAM file, and by default returns a file in SAM format. **BAM is a binary version of the SAM file, also known as Sequence Alignment Map format.** The SAM file is a tab-delimited text file that contains information for each individual read and its alignment to the genome. The file begins with an optional header (which starts with '@'), followed by an alignment section in which each line corresponds to alignment information for a single read. **Each alignment line has 11 mandatory fields** for essential mapping information and a variable number of fields for aligner specific information.
These fields are described briefly below, but for more detailed information the paper by [Heng Li et al](http://bioinformatics.oxfordjournals.org/content/25/16/2078.full) is a good start.


Let's take a quick look at our alignment. To do so we first convert our BAM file into SAM format using samtools and then pipe it to the `less` command. This allows us to look at the contents without having to write it to file (since we don't need a SAM file for downstream analyses).
```
$ samtools view -h results/STAR/Mov10_oe_1_Aligned.sortedByCoord.out.bam | less
```
Scroll through the SAM file and see how the fields correspond to what we expected.
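For a quick numeric summary of the alignments instead of inspecting reads one by one, `samtools flagstat` can be used (shown here for the BAM file we just created; the exact numbers will depend on your data):

```
$ samtools flagstat results/STAR/Mov10_oe_1_Aligned.sortedByCoord.out.bam
```

This reports totals such as the number of mapped reads and secondary alignments, which is a useful sanity check before moving on.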
### Assess the alignment (visualization)
Index the BAM file for visualization with IGV:
$ samtools index results/STAR/Mov10_oe_1_Aligned.sortedByCoord.out.bam
**Transfer files to your laptop using the command line**
We previously used FileZilla to transfer files from Orchestra to your laptop. However, there is another way to do so using the command line interface. Similar to the `cp` command, there is a command that allows you to securely copy files between computers. The command is called `scp` and allows files to be copied to, from, or between different hosts. It uses ssh for data transfer and provides the same authentication and same level of security as ssh.
First, identify the location of the _origin file_ you intend to copy, followed by the _destination_ of that file. Since the origin file is located on Orchestra, this requires you to provide remote host and login information.
The following 2 files need to be moved from Orchestra to your local machine,
`results/STAR/Mov10_oe_1_Aligned.sortedByCoord.out.bam`,
`results/STAR/Mov10_oe_1_Aligned.sortedByCoord.out.bam.bai`
```
$ scp user_name@orchestra.med.harvard.edu:/home/user_name/unix_workshop/rnaseq_project/results/STAR/Mov10_oe_1_Aligned.sortedByCoord.out.bam* /path/to/directory_on_laptop
```
> If you are not comfortable using the command line, we encourage you to continue using FileZilla for the transfer.
**Visualize**
* Start [IGV](https://www.broadinstitute.org/software/igv/download) _You should have this previously installed on your laptop_
* Load the Human genome (hg19) into IGV using the dropdown menu at the top left of your screen. _Note: there is also an option to "Load Genomes from File..." under the "Genomes" pull-down menu - this is useful when working with non-model organisms_
* Load the .bam file using the **"Load from File..."** option under the **"File"** pull-down menu. *IGV requires the .bai file to be in the same location as the .bam file that is loaded into IGV, but there is no direct use for that file.*
### IGV screenshot

***
**Exercise**
Now that we have done this for one sample, let's try using the same commands to perform alignment on one of the control samples. Using `Irrel_kd_1.qualtrim25.minlen35.fq`, walk through the alignment commands above. Copy over the resulting BAM and index file to your laptop and load them into IGV for visualization.
1. How does the MOV10 gene look in the control sample in comparison to the overexpression sample?
2. Take a look at a few other genes by typing into the search bar. For example, PPM1J and PTPN22. How do these genes compare?
***
### Counting reads
Once we have our reads aligned to the genome, the next step is to count how many reads have been mapped to each gene. Counting is done with a tool called [`htseq-count`](http://www-huber.embl.de/users/anders/HTSeq/doc/count.html). The input files required for counting include the BAM file and an associated gene annotation file in GTF format. `htseq-count` works by **taking the alignment coordinates for each read and cross-referencing that to the coordinates for features described in the GTF**.
<img src="../img/count-fig2.png" width="700">
Most commonly a **feature is considered to be a gene**, which is the union of all exons (which is a feature type) that map to that gene. There is no minimum overlap to determine whether or not a read is counted for a particular gene, rather it is the mode that the user chooses.
There are **three modes** available and are listed below in order of stringency, with most conservative at the top:
1. intersection-strict
2. union
3. intersection non-empty
We will be using the **'union' mode as it is default** and most commonly used. To find out more on the different modes and how they affect your output, take a look at the [manual](http://www-huber.embl.de/users/anders/HTSeq/doc/count.html)
Let's start by creating a directory for the output:
```
$ cd ~/unix_workshop/rnaseq_project/
$ mkdir results/counts
```
In its most basic form the `htseq-count` command requires only the alignment (SAM/BAM) file and the GTF file. We will add in a couple of additional parameters: `--stranded reverse`, to specify that we have a stranded library created via the dUTP method, and `--format`, to specify that our input is in BAM format. By default htseq-count will **ignore any reads that map to multiple locations** on the genome. This results in undercounting but also helps reduce false positives. While this handling of multi-mappers cannot be changed, there is a parameter that allows the user to filter reads by specifying a minimum alignment quality.
You will notice that at the end of the command we have added a redirection symbol. Since htseq-count prints its results to the screen, we need to redirect them to a file.
```
$ htseq-count --stranded reverse --format bam results/STAR/Mov10_oe_1_Aligned.sortedByCoord.out.bam data/reference_data/chr1-hg19_genes.gtf > results/counts/Mov10_oe_1.counts
```
***
**Exercise**
Take a look at the end of the file using the `tail` command. You should see a summary of how the reads were classified.
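For example, to peek at those summary lines (your numbers will differ):

```
$ tail -n 5 results/counts/Mov10_oe_1.counts
```

The last lines report the special counters such as `__no_feature`, `__ambiguous`, `__too_low_aQual`, `__not_aligned` and `__alignment_not_unique`.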
1. How many reads were assigned as no_feature? Why would they be classified this way?
2. How many reads were found to map to multiple locations?
***
*To share or reuse these materials, please find the attribution and license details at [license.md](https://github.com/hbc/Intro-to-Unix/blob/master/license.md).*
| 54.498316 | 630 | 0.765476 | eng_Latn | 0.997818 |
9e0445b46c6913492d1f31a4417402027ec08ac1 | 2,328 | md | Markdown | docs/mdx/using-set-expressions.md | strikersree/sql-docs | 9ece10c2970a4f0812647149d3de2c6b75713e14 | [
"CC-BY-4.0",
"MIT"
] | 2 | 2019-02-08T05:59:39.000Z | 2019-02-12T03:27:49.000Z | docs/mdx/using-set-expressions.md | strikersree/sql-docs | 9ece10c2970a4f0812647149d3de2c6b75713e14 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/mdx/using-set-expressions.md | strikersree/sql-docs | 9ece10c2970a4f0812647149d3de2c6b75713e14 | [
"CC-BY-4.0",
"MIT"
] | 1 | 2021-04-05T00:07:53.000Z | 2021-04-05T00:07:53.000Z | ---
title: "Using Set Expressions | Microsoft Docs"
ms.date: 06/04/2018
ms.prod: sql
ms.technology: analysis-services
ms.custom: mdx
ms.topic: reference
ms.author: owend
ms.reviewer: owend
author: minewiskan
manager: kfile
---
# Using Set Expressions
A set consists of an ordered list of zero or more tuples. A set that does not contain any tuples is known as an empty set.
The complete expression of a set consists of zero or more explicitly specified tuples, framed in curly braces:
{ [ { *Tuple_expression* | *Member_expression* } [ , { *Tuple_expression* | *Member_expression* } ] ... ] }
The member expressions specified in a set expression are converted to one-member tuple expressions.
## Example
The following example shows two set expressions used on the Columns and Rows axes of a query:
`SELECT`
`{[Measures].[Internet Sales Amount], [Measures].[Internet Tax Amount]} ON COLUMNS,`
`{([Product].[Product Categories].[Category].&[4], [Date].[Calendar].[Calendar Year].&[2004]),`
`([Product].[Product Categories].[Category].&[1], [Date].[Calendar].[Calendar Year].&[2003]),`
`([Product].[Product Categories].[Category].&[3], [Date].[Calendar].[Calendar Year].&[2004])}`
`ON ROWS`
`FROM [Adventure Works]`
On the Columns axis, the set
{[Measures].[Internet Sales Amount], [Measures].[Internet Tax Amount]}
consists of two members from the Measures dimension. On the Rows axis, the set
{([Product].[Product Categories].[Category].&[4], [Date].[Calendar].[Calendar Year].&[2004]),
([Product].[Product Categories].[Category].&[1], [Date].[Calendar].[Calendar Year].&[2003]),
([Product].[Product Categories].[Category].&[3], [Date].[Calendar].[Calendar Year].&[2004])}
consists of three tuples, each of which contains two explicit references to members on the Product Categories hierarchy of the Product dimension and the Calendar hierarchy of the Date dimension.
For examples of functions that return sets, see [Working with Members, Tuples, and Sets (MDX)](../analysis-services/multidimensional-models/mdx/working-with-members-tuples-and-sets-mdx.md).
## See Also
[Expressions (MDX)](../mdx/expressions-mdx.md)
| 38.163934 | 201 | 0.671821 | eng_Latn | 0.831907 |
9e046459a6c987818ca11da02448d8910bd9cde7 | 1,008 | md | Markdown | README.md | banminkyoz/all-repos | a2349272fb88d5bc74b9331c8e797d97850676a4 | [
"MIT"
] | 1 | 2018-07-22T16:04:56.000Z | 2018-07-22T16:04:56.000Z | README.md | banminkyoz/all-repos | a2349272fb88d5bc74b9331c8e797d97850676a4 | [
"MIT"
] | null | null | null | README.md | banminkyoz/all-repos | a2349272fb88d5bc74b9331c8e797d97850676a4 | [
"MIT"
] | null | null | null | # all-repos
> Get all github repositories by username
[](https://travis-ci.org/banminkyoz/all-repos) [](http://badge.fury.io/js/all-repos) [](https://github.com/xojs/xo)
## Install
```
$ npm install all-repos --save
```
## Usage
```js
const allRepos = require('all-repos');
allRepos('banminkyoz').then(repos => {
console.log(repos);
}).catch(error => {
console.log(error);
});
/* Results:
[
{
name: 'neovim',
fullName: 'banminkyoz/neovim',
description: 'My neovim config XD',
stars: '1',
forks: 0,
forkFrom: '',
lastUpdated: '3 weeks ago',
url: 'https://github.com/banminkyoz/neovim'
},
...
]
*/
```
## Related
- [all-repos-cli](https://github.com/banminkyoz/all-repos-cli) - CLI for this module
## License
MIT © [Kyoz](mailto:banminkyoz@gmail.com) | 20.16 | 317 | 0.647817 | yue_Hant | 0.260155 |
9e04fac61cf78f46c738c1e8979fd5188d94f242 | 3,182 | md | Markdown | docs/ui/professional-ui-components/Chart/Axes/styling.md | shendrekbharath/docs | 70e53678d43702ed522fbe246bcf3728a8400028 | [
"Apache-2.0"
] | null | null | null | docs/ui/professional-ui-components/Chart/Axes/styling.md | shendrekbharath/docs | 70e53678d43702ed522fbe246bcf3728a8400028 | [
"Apache-2.0"
] | null | null | null | docs/ui/professional-ui-components/Chart/Axes/styling.md | shendrekbharath/docs | 70e53678d43702ed522fbe246bcf3728a8400028 | [
"Apache-2.0"
] | null | null | null | ---
title: Axis styling
page_title: Axis Styling | Progress NativeScript UI Documentation
description: This article explains how the visual appearance of Telerik Chart's axis for NativeScript can be customized.
slug: axis-styling
tags: chart, overview, styling, nativescript, professional, ui
position: 8
publish: true
---
# RadChart Axes Styling
Styling the chart axes is done by using the corresponding customization properties exposed by the axes. All axes used in Telerik Chart for NativeScript define the following properties:
- {% typedoc_link classes:CartesianAxis,member:lineColor%} - defines the color of the axis' line
- {% typedoc_link classes:CartesianAxis,member:lineThickness%} - defines the thickness of the axis' line
- {% typedoc_link classes:CartesianAxis,member:lineHidden%} - defines if the axis line is hidden.
- {% typedoc_link classes:CartesianAxis,member:labelTextColor%} - defines the color of the axis' labels
- {% typedoc_link classes:CartesianAxis,member:labelSize%} - defines the text size of the axis' labels
- {% typedoc_link classes:CartesianAxis,member:labelFormat%} - defines the format used to display the axis' labels
- {% typedoc_link classes:CartesianAxis,member:labelMargin%} - defines the margin for the labels
- {% typedoc_link classes:CartesianAxis,member:labelRotationAngle%} - defines the angle of rotation for labels. Requires *Rotate* value for *labelFitMode* property
- {% typedoc_link classes:CartesianAxis,member:labelFitMode%} - defines the fit mode for labels. By default labels are positioned on a single line, but there are {% typedoc_link modules:AxisLabelFitMode,member:Multiline%} and {% typedoc_link modules:AxisLabelFitMode,member:Rotate%} options too.
- {% typedoc_link classes:CartesianAxis,member:labelLayoutMode%} - defines the layout mode for axis labels. With this property you can position labels on the {% typedoc_link modules:AxisLabelFitMode,member:Inner%} or {% typedoc_link modules:AxisLabelFitMode,member:Outer%} side of the chart.
For properties that are not explicitly specified, the default values from the chart palette are used.
#### Example
To better illustrate the usage of Axis properties, we will use a simple scenario in which the Axes are customized:
<snippet id='axis-styling'/>
This is how the chart looks like now:
 
## References
Want to see this scenario in action?
Check our SDK examples repo on GitHub. You will find this and many other practical examples with NativeScript UI.
* [Customization Example](https://github.com/telerik/nativescript-ui-samples/tree/master/chart/app/examples/axes/customization)
Related articles you might find useful:
* [**Linear Axis**]({% slug chart-features-linear %})
* [**Logarithmic Axis**]({% slug chart-features-logarithmic %})
* [**DateTime Continuous Axis**]({% slug chart-features-datetimecontinuous %})
* [**Categorical Axis**]({% slug chart-features-categorical %})
* [**DateTime Categorical Axis**]({% slug chart-features-datetimecategorical %})
* [**Negative Values Axis**]({% slug chart-features-negative-values %}) | 64.938776 | 292 | 0.776556 | eng_Latn | 0.918946 |
9e050d1cfbfde7d6e9a9c2c37070c7a7aeca376d | 1,670 | md | Markdown | application/third_party/sendgrid-php/vendor/sendgrid/php-http-client/CHANGELOG.md | abhishekkadadi/onlineExam | 9331877aab79b8dfc2a13d9c018817067c0b6c23 | [
"MIT"
] | 216 | 2019-08-23T08:42:36.000Z | 2022-03-12T14:58:57.000Z | vendor/sendgrid/php-http-client/CHANGELOG.md | freereaper/mod | 99a84d23b5be8814716701994ace5e27d78d4609 | [
"MIT"
] | 118 | 2016-09-09T13:51:14.000Z | 2016-12-05T04:25:06.000Z | vendor/sendgrid/php-http-client/CHANGELOG.md | freereaper/mod | 99a84d23b5be8814716701994ace5e27d78d4609 | [
"MIT"
] | 93 | 2019-09-06T01:14:29.000Z | 2022-03-27T05:18:27.000Z | # Change Log
All notable changes to this project will be documented in this file.
This project adheres to [Semantic Versioning](http://semver.org/).
## [3.5.1] - 2016-11-17
### Fixed
- Pull request #13, fixed issue #12: [Change from to php union operator to combine curl options](https://github.com/sendgrid/php-http-client/pull/13)
- Thanks to [emil](https://github.com/emilva) for the pull request!
## [3.5.0] - 2016-10-18
### Added
- Pull request #11: [Added curlOptions property to customize curl instance](https://github.com/sendgrid/php-http-client/pull/11)
- Thanks to [Alain Tiemblo](https://github.com/ninsuo) for the pull request!
## [3.4.0] - 2016-09-27
### Added
- Pull request #9: [Add getters for certain properties](https://github.com/sendgrid/php-http-client/pull/9)
- Thanks to [Arjan Keeman](https://github.com/akeeman) for the pull request!
## [3.3.0] - 2016-09-13
### Added
- Pull request #6: [Library refactoring around PSR-2 / PSR-4 code standards](https://github.com/sendgrid/php-http-client/pull/6)
- Thanks to [Alexandr Ivanov](https://github.com/misantron) for the pull request!
## [3.1.0] - 2016-06-10
### Added
- Automatically add Content-Type: application/json when there is a request body
## [3.0.0] - 2016-06-06
### Changed
- Made the Request and Response variables non-redundant. e.g. request.requestBody becomes request.body
## [2.0.2] - 2016-02-29
### Fixed
- Renaming files to conform to PSR-0, git ignored the case in 2.0.1
## [2.0.1] - 2016-02-29
### Fixed
- Renaming files to conform to PSR-0
## [1.0.1] - 2016-02-29
### Fixed
- Composer/Packagist install issues resolved
## [1.0.0] - 2016-02-29
### Added
- We are live!
| 34.081633 | 149 | 0.701198 | eng_Latn | 0.727837 |
9e05c847d54bd8aca8161428c3029ba28f97d4f1 | 899 | md | Markdown | docs/error-messages/compiler-errors-2/compiler-error-c3001.md | kyser/cpp-docs.ru-ru | 085e0717cfcd00870d62803aed74a2d641034138 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/error-messages/compiler-errors-2/compiler-error-c3001.md | kyser/cpp-docs.ru-ru | 085e0717cfcd00870d62803aed74a2d641034138 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/error-messages/compiler-errors-2/compiler-error-c3001.md | kyser/cpp-docs.ru-ru | 085e0717cfcd00870d62803aed74a2d641034138 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Ошибка компилятора C3001 | Документы Microsoft
ms.custom: ''
ms.date: 11/04/2016
ms.technology:
- cpp-diagnostics
ms.topic: error-reference
f1_keywords:
- C3001
dev_langs:
- C++
helpviewer_keywords:
- C3001
ms.assetid: d0e03478-1b44-47e5-8f5b-70415fa1f8bc
author: corob-msft
ms.author: corob
ms.workload:
- cplusplus
ms.openlocfilehash: a4c8275b1fc511ebf4e09b625f64cffae74a3ca6
ms.sourcegitcommit: 76b7653ae443a2b8eb1186b789f8503609d6453e
ms.translationtype: MT
ms.contentlocale: ru-RU
ms.lasthandoff: 05/04/2018
---
# <a name="compiler-error-c3001"></a>Ошибка компилятора C3001
"текст_ошибки": требуется имя директивы OpenMP
За директивой pragma `omp` должна следовать директива.
Следующий пример приводит к возникновению ошибки C3001:
```
// C3001.c
// compile with: /openmp
int main()
{
#pragma omp // C3001 missing token
}
``` | 23.051282 | 61 | 0.733037 | kor_Hang | 0.082575 |
9e06696f640fdfb9073468bfecbeff2787bd06c8 | 2,716 | md | Markdown | README.md | hayes/dom-lite | a827730eec4c4d8c1840802b830cb2a04f5e2bfe | [
"MIT"
] | null | null | null | README.md | hayes/dom-lite | a827730eec4c4d8c1840802b830cb2a04f5e2bfe | [
"MIT"
] | null | null | null | README.md | hayes/dom-lite | a827730eec4c4d8c1840802b830cb2a04f5e2bfe | [
"MIT"
] | null | null | null | [1]: https://secure.travis-ci.org/litejs/dom-lite.png
[2]: https://travis-ci.org/litejs/dom-lite
[3]: https://coveralls.io/repos/litejs/dom-lite/badge.png
[4]: https://coveralls.io/r/litejs/dom-lite
[npm package]: https://npmjs.org/package/dom-lite
[GitHub repo]: https://github.com/litejs/dom-lite
@version 0.1.7
@date 2014-11-02
@stability 2 - Unstable
DOM lite – [![Build][1]][2] [![Coverage][3]][4]
========
A minimal DOM implementation
Examples
--------
```javascript
var document = require("dom-lite").document;
var el = document.createElement("h1");
el.id = 123;
el.className = "large";
var fragment = document.createDocumentFragment();
var text1 = document.createTextNode("hello");
var text2 = document.createTextNode(" world");
fragment.appendChild(text1);
fragment.appendChild(text2);
el.appendChild(fragment);
el.toString();
// <h1 id="123" class="large">hello world</h1>
el.outerHTML;
// <h1 id="123" class="large">hello world</h1>
el.innerHTML;
// hello world
```
Implemented features
--------------------
### Node
- nodeName
- nodeValue
- parentNode
- ownerDocument
- childNodes
- textContent
- firstChild
- lastChild
- previousSibling
- nextSibling
- innerHTML() - Read Only
- outerHTML() - Read Only
- hasChildNodes()
- appendChild()
- insertBefore()
- removeChild()
- replaceChild()
- cloneNode()
### DocumentFragment
Extends Node.
- nodeType
### HTMLElement
Extends Node.
- attributes
- nodeType
- localName
- tagName
- style
- className
- hasAttribute()
- getAttribute()
- setAttribute()
- removeAttribute()
- getElementById()
- getElementsByTagName()
- querySelector() - Only simple selectors
- querySelectorAll() - Only simple selectors
### Text
Extends Node.
- nodeType
- data
### Comment
Extends Node.
- nodeType
- data
### Document
Extends Node.
- nodeType
- createElement()
- createElementNS()
- createTextNode()
- createComment()
- createDocumentFragment()
- getElementById()
- getElementsByTagName()
- querySelector()
- querySelectorAll()
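The query methods listed above can be combined with the node-construction API from the first example. The following is a small illustrative sketch (the values in the comments are what one would expect, given the simple-selector support described above):
```javascript
var list = document.createElement("ul");
["one", "two"].forEach(function (label) {
	var item = document.createElement("li");
	item.className = "item";
	item.appendChild(document.createTextNode(label));
	list.appendChild(item);
});

list.getElementsByTagName("li").length;  // 2
list.querySelectorAll(".item").length;   // 2 (simple selectors only)
list.outerHTML;
// <ul><li class="item">one</li><li class="item">two</li></ul>
```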
Coding Style Guidelines
-----------------------
- Use tabs for indentation, align with spaces
- Use lowerCamelCase for method and variable names
- Use UpperCamelCase for constructor names
- Commit files with Unix-style line endings
- Do not use spaces in file and directory names
Consider substituting a dash (-) where you would normally use spaces.
- Rebase before pushing
- Fix tests before push or pull request
External links
--------------
- [GitHub repo][]
- [npm package][]
- [DOM spec](http://dom.spec.whatwg.org/)
### Licence
Copyright (c) 2014 Lauri Rooden <lauri@rooden.ee>
[The MIT License](http://lauri.rooden.ee/mit-license.txt)
| 17.081761 | 73 | 0.688144 | yue_Hant | 0.464387 |
9e06a3ad8e6ee99c82e6cd19a5f20aba548b1fc1 | 1,172 | md | Markdown | README.md | mcvidomi/poim2motif | 528602f25dff4124d7858a4f55d726887cd96d17 | [
"MIT"
] | 2 | 2015-07-23T08:30:13.000Z | 2021-05-03T09:42:12.000Z | README.md | mcvidomi/poim2motif | 528602f25dff4124d7858a4f55d726887cd96d17 | [
"MIT"
] | null | null | null | README.md | mcvidomi/poim2motif | 528602f25dff4124d7858a4f55d726887cd96d17 | [
"MIT"
] | 1 | 2015-07-23T08:30:16.000Z | 2015-07-23T08:30:16.000Z | # poim2motif
Assessing motifs in Positional Oligomer Importance Matrices (POIMs).
# required installations
1. shogun toolbox<br />
http://www.shogun-toolbox.org/doc/en/3.0.0/installation.html<br />
install with python interface
2. openopt<br />
http://openopt.org/Install
3. R
https://cran.r-project.org/mirrors.html <br />
install the additional Bioconductor package by entering in R:<br />
source("http://bioconductor.org/biocLite.R")<br />
biocLite("seqLogo")<br />
# tutorial
"run_toy.py" is the toy Example
"run_real1.py" is the real Example. For this real experiment you need to download one folder of the real data from: http://www.fml.tuebingen.mpg.de/raetsch/projects/lsmkl and adjust the datapath in "run_real1.py"
1. Create the POIM<br />
by setting CURRENT_TASK to TASK_1
2. Compute the motif<br />
by setting CURRENT_TASK to TASK_2
With "path" you specify the folder path, including all results.
You can choose the Name of the Experiment by experiment_name, which will be the name of the folder including all results.
If you have any questions, please write me a mail at marina.vidovic@tu-berlin.de.
Good luck!
| 31.675676 | 212 | 0.730375 | eng_Latn | 0.963039 |
9e06f07a2fb2343a738737b2505f192faca7e107 | 5,104 | md | Markdown | README.md | zzorba/react-native-menu | d2cec2166f7f62cb9d2d588f02a27e88728e4e9e | [
"MIT"
] | 2 | 2020-08-21T06:56:29.000Z | 2020-08-21T07:43:05.000Z | README.md | zzorba/react-native-menu | d2cec2166f7f62cb9d2d588f02a27e88728e4e9e | [
"MIT"
] | null | null | null | README.md | zzorba/react-native-menu | d2cec2166f7f62cb9d2d588f02a27e88728e4e9e | [
"MIT"
] | null | null | null | <div align="center">
<img src="assets/menu2.gif" alt="Item" height="450px">
</div>
# react-native-side-drawer
[](https://npmjs.org/package/react-native-side-drawer)
[](http://makeapullrequest.com)
[](https://github.com/pedreviljoen/react-native-menu/blob/master/LICENSE)
[](https://npmjs.org/package/react-native-side-drawer)
[](https://github.com/prettier/prettier)
[](https://circleci.com/gh/pedreviljoen/react-native-menu/tree/master)
[](https://greenkeeper.io/)
[](https://snyk.io//test/github/pedreviljoen/react-native-menu?targetFile=package.json)
[](https://www.codefactor.io/repository/github/pedreviljoen/react-native-menu)
[](https://packagequality.com/#?package=react-native-side-drawer)
> Simple & lightweight side menu drawer
## Contents
- [Contents](#contents)
- [Install](#install)
- [Usage](#usage)
- [Props](#props)
- [Contribute](CONTRIBUTING.md)
- [License](#license)
## Install
```sh
yarn add react-native-side-drawer
```
OR
```sh
npm install react-native-side-drawer
```
## Usage
```javascript
import React from 'react'
import { View, Text, StyleSheet, TouchableOpacity } from 'react-native'
import MenuDrawer from 'react-native-side-drawer'
class Example extends React.Component {
constructor(props) {
super(props);
this.state = {
open: false
};
}
toggleOpen = () => {
this.setState({ open: !this.state.open });
};
drawerContent = () => {
return (
<TouchableOpacity onPress={this.toggleOpen} style={styles.animatedBox}>
<Text>Close</Text>
</TouchableOpacity>
);
};
render() {
return (
<View style={styles.container}>
<MenuDrawer
open={this.state.open}
drawerContent={this.drawerContent()}
drawerPercentage={45}
animationTime={250}
overlay={true}
opacity={0.4}
>
<TouchableOpacity onPress={this.toggleOpen} style={styles.body}>
<Text>Open</Text>
</TouchableOpacity>
</MenuDrawer>
</View>
);
}
}
const styles = StyleSheet.create({
container: {
flex: 1,
backgroundColor: "#fff",
alignItems: "center",
justifyContent: "center",
marginTop: 30,
zIndex: 0
},
animatedBox: {
flex: 1,
backgroundColor: "#38C8EC",
padding: 10
},
body: {
flex: 1,
alignItems: 'center',
justifyContent: 'center',
backgroundColor: '#F04812'
}
})
```
## Props
<table width="80%">
<tr>
<th>Property</th>
<th>Description</th>
<th>Type</th>
<th>Default Value</th>
</tr>
<tr>
<td><code>open</code></td>
<td>Value toggling open and close of drawer</td>
<td><code>Boolean</code></td>
<td><code>false (closed)</code></td>
</tr>
<tr>
<td><code>drawerContent</code></td>
<td>Drawer contents</td>
<td><code>React.Component</code></td>
<td><code>Text component: Close</code></td>
</tr>
<tr>
<td><code>drawerPercentage</code></td>
<td>Value between 0 - 100, depicting the percentage of the screen the drawer will open</td>
<td><code>Integer</code></td>
<td><code>45</code></td>
</tr>
<tr>
<td><code>animationTime</code></td>
<td>Value depicting the time (in ms) the menu will slide open & close</td>
<td><code>Integer</code></td>
<td><code>200</code></td>
</tr>
<tr>
<td><code>overlay</code></td>
<td>Value toggling menu overlay or push. When overlay is true, the menu will overlay the background screen. When overlay is false, the menu will push the background screen to the side</td>
<td><code>Boolean</code></td>
<td><code>true</code></td>
</tr>
<tr>
<td><code>opacity</code></td>
<td>Value between 0-1 for the opacity fade of background when the menu is open</td>
<td><code>Float</code></td>
<td><code>0.4</code></td>
</tr>
</table>
## Coming soon
- [x] iOS SafeArea support
- [x] Custom width of drawer and sliding time
- [x] Opacity fade of background screen
## License
MIT | 30.562874 | 206 | 0.638911 | eng_Latn | 0.257406 |
9e07123d973415e066879c0f6f8eea6683c1dd52 | 4,457 | md | Markdown | http-headers/README.md | omarkurt/werdlists | 732c67a01a826c22810b8226dc59ee8e1d2fd523 | [
"Apache-2.0"
] | null | null | null | http-headers/README.md | omarkurt/werdlists | 732c67a01a826c22810b8226dc59ee8e1d2fd523 | [
"Apache-2.0"
] | null | null | null | http-headers/README.md | omarkurt/werdlists | 732c67a01a826c22810b8226dc59ee8e1d2fd523 | [
"Apache-2.0"
] | null | null | null | | _Folder Name_ | _Description of Contents_
|:----------------|--------------------------------------------------------------------------------------------------------------------------------------------------------
| [about-http-headers](about-http-headers.txt) | Descriptions for the most common HTTP header fields
| [access-control-headers](access-control-headers.txt) | Cross Origin Resource Sharing (CORS) header name list
| [amazon-http-headers](amazon-http-headers.txt) | HTTP headers specific to Amazon
| [cloudfront-request-headers](cloudfront-request-headers.txt) | CloudFront HTTP request headers used by Amazon AWS
| [cors-request-headers](cors-request-headers.txt) | list of CORS header names used only in HTTP requests
| [cors-response-headers](cors-response-headers.txt) | list of CORS header names used only in HTTP responses
| [custom-header-names](custom-header-names.txt) | HTTP request and response header names unspecified in RFC's
| [custom-request-headers](custom-request-headers.txt) | non-standard HTTP request headers, i.e. lack RFC specs
| [custom-response-headers](custom-response-headers.txt) | non-standard HTTP response headers, i.e. lack RFC specs
| [envoy-httpconnman-headers](envoy-httpconnman-headers.txt) | headers used by the Envoy HTTP connection manager
| [http-request-headers](http-request-headers.txt) | the names of all standard HTTP request header fields
| [http-response-headers](http-response-headers.txt) | the names of all standard HTTP response header fields
| [iana-headers-list](iana-headers-list.txt) | A detailed list of message headers specified by IANA
| [iana-http-headers](iana-http-headers.txt) | Uniquely Sorted List of IANA Message Headers Assignments
| [meetup-request-headers](meetup-request-headers.txt) | HTTP request headers used by the MeetUp.com API
| [meetup-response-headers](meetup-response-headers.txt) | HTTP response headers used by the MeetUp.com API
| [mozdev-docs-headers](mozdev-docs-headers.txt) | header list from left sidebar of Mozilla Developer Docs
| [permanent-message-headers](permanent-message-headers.csv) | [Permanent Message Header Field Names](https://iana.org/assignments/message-headers/perm-headers.csv)
| [provisional-message-headers](provisional-message-headers.csv) | [Provisional Message Header Field Names](https://iana.org/assignments/message-headers/prov-headers.csv)
| [platform-request-headers](platform-request-headers.txt) | request headers that are specific to certain platforms
| [platform-response-headers](platform-response-headers.txt) | response headers particular to certain platforms
| [rfc-request-headers](rfc-request-headers.txt) | request header names that appear in an IETF RFC document
| [rfc-response-headers](rfc-response-headers.txt) | response header names that appear in an IETF RFC document
| [security-policy-headers](security-policy-headers.txt) | CSP (Content-Security-Policy) header name list
| [ssrf-headers-addr](ssrf-headers-addr.txt) | SSRF (Server-Side Request Forgery) request header names ending with the substring `-Addr`
| [ssrf-headers-address](ssrf-headers-address.txt) | SSRF request header names ending with the substring `-Address`
| [ssrf-headers-all](ssrf-headers-all.txt) | All of the SSRF request header names files' contents combined subsequent to unique sorting
| [ssrf-headers-dns](ssrf-headers-dns.txt) | SSRF request header names ending with the substring `-DNS`
| [ssrf-headers-host](ssrf-headers-host.txt) | SSRF request header names ending with the substring `-Host`
| [ssrf-headers-ip](ssrf-headers-ip.txt) | SSRF request header names ending with the substring `-IP`
| [ssrf-headers-server](ssrf-headers-server.txt) | SSRF request header names ending with the substring `-Server`
| [ssrf-headers-vanilla](ssrf-headers-vanilla.txt) | SSRF request header names without any specific appendage
| [ssrf-http-headers](ssrf-http-headers.txt) | HTTP headers that can be used for Server-Side Request Forgery
| [ssrf-request-headers](ssrf-request-headers.txt) | request headers that can be used in SSRF attacks
| [ssrf-response-headers](ssrf-response-headers.txt) | response headers host names for SSRF can be parsed from
| [tusio-http-headers](tusio-http-headers.txt) | headers used by resumable file transfer protocol, see [tus.io](https://tus.io "Open Protocol for Resumable File Uploads")
* * *
| 106.119048 | 172 | 0.740184 | eng_Latn | 0.840113 |
9e077073c0bd14ee0abf3c2b8c08cba307204b91 | 6,134 | md | Markdown | doc/src/manual/variables.md | andreasvarga/julia | d279aede19db29c5c31696fb213e3101e2230944 | [
"MIT"
] | 1 | 2021-11-10T02:02:26.000Z | 2021-11-10T02:02:26.000Z | doc/src/manual/variables.md | Seanpm2001-languages/julia | bbf762da72f1f7285e21a2bfa01d61b408c5a8b6 | [
"MIT"
] | null | null | null | doc/src/manual/variables.md | Seanpm2001-languages/julia | bbf762da72f1f7285e21a2bfa01d61b408c5a8b6 | [
"MIT"
] | null | null | null | # [Variables](@id man-variables)
A variable, in Julia, is a name associated (or bound) to a value. It's useful when you want to
store a value (that you obtained after some math, for example) for later use. For example:
```julia-repl
# Assign the value 10 to the variable x
julia> x = 10
10
# Doing math with x's value
julia> x + 1
11
# Reassign x's value
julia> x = 1 + 1
2
# You can assign values of other types, like strings of text
julia> x = "Hello World!"
"Hello World!"
```
Julia provides an extremely flexible system for naming variables. Variable names are case-sensitive,
and have no semantic meaning (that is, the language will not treat variables differently based
on their names).
```jldoctest
julia> x = 1.0
1.0
julia> y = -3
-3
julia> Z = "My string"
"My string"
julia> customary_phrase = "Hello world!"
"Hello world!"
julia> UniversalDeclarationOfHumanRightsStart = "人人生而自由,在尊严和权利上一律平等。"
"人人生而自由,在尊严和权利上一律平等。"
```
Unicode names (in UTF-8 encoding) are allowed:
```jldoctest
julia> δ = 0.00001
1.0e-5
julia> 안녕하세요 = "Hello"
"Hello"
```
In the Julia REPL and several other Julia editing environments, you can type many Unicode math
symbols by typing the backslashed LaTeX symbol name followed by tab. For example, the variable
name `δ` can be entered by typing `\delta`-*tab*, or even `α̂⁽²⁾` by `\alpha`-*tab*-`\hat`-
*tab*-`\^(2)`-*tab*. (If you find a symbol somewhere, e.g. in someone else's code,
that you don't know how to type, the REPL help will tell you: just type `?` and
then paste the symbol.)
Julia will even let you redefine built-in constants and functions if needed (although
this is not recommended to avoid potential confusions):
```jldoctest
julia> pi = 3
3
julia> pi
3
julia> sqrt = 4
4
```
However, if you try to redefine a built-in constant or function already in use, Julia will give
you an error:
```jldoctest
julia> pi
π = 3.1415926535897...
julia> pi = 3
ERROR: cannot assign a value to variable MathConstants.pi from module Main
julia> sqrt(100)
10.0
julia> sqrt = 4
ERROR: cannot assign a value to variable Base.sqrt from module Main
```
## [Allowed Variable Names](@id man-allowed-variable-names)
Variable names must begin with a letter (A-Z or a-z), underscore, or a subset of Unicode code
points greater than 00A0; in particular, [Unicode character categories](http://www.fileformat.info/info/unicode/category/index.htm)
Lu/Ll/Lt/Lm/Lo/Nl (letters), Sc/So (currency and other symbols), and a few other letter-like characters
(e.g. a subset of the Sm math symbols) are allowed. Subsequent characters may also include ! and
digits (0-9 and other characters in categories Nd/No), as well as other Unicode code points: diacritics
and other modifying marks (categories Mn/Mc/Me/Sk), some punctuation connectors (category Pc),
primes, and a few other characters.
Operators like `+` are also valid identifiers, but are parsed specially. In some contexts, operators
can be used just like variables; for example `(+)` refers to the addition function, and `(+) = f`
will reassign it. Most of the Unicode infix operators (in category Sm), such as `⊕`, are parsed
as infix operators and are available for user-defined methods (e.g. you can use `const ⊗ = kron`
to define `⊗` as an infix Kronecker product). Operators can also be suffixed with modifying marks,
primes, and sub/superscripts, e.g. `+̂ₐ″` is parsed as an infix operator with the same precedence as `+`.
A space is required between an operator that ends with a subscript/superscript letter and a subsequent
variable name. For example, if `+ᵃ` is an operator, then `+ᵃx` must be written as `+ᵃ x` to distinguish
it from `+ ᵃx` where `ᵃx` is the variable name.
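For instance, a minimal illustrative sketch of treating operators as ordinary function values (REPL output omitted; note that `kron` lives in the LinearAlgebra standard library in recent Julia versions):
```julia
using LinearAlgebra            # kron is provided by the LinearAlgebra standard library
const ⊗ = kron                 # ⊗ can now be used as an infix Kronecker product
[1 0; 0 1] ⊗ [0 1; 1 0]        # equivalent to kron([1 0; 0 1], [0 1; 1 0])
plus = (+)                     # (+) refers to the addition function itself
plus(1, 2)                     # returns 3
```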
A particular class of variable names is one that contains only underscores. These identifiers can only be assigned values but cannot be used to assign values to other variables.
More technically, they can only be used as an [L-value](https://en.wikipedia.org/wiki/Value_(computer_science)#lrvalue), but not as an
[R-value](https://en.wikipedia.org/wiki/R-value):
```julia-repl
julia> x, ___ = size([2 2; 1 1])
(2, 2)
julia> y = ___
ERROR: syntax: all-underscore identifier used as rvalue
```
The only explicitly disallowed names for variables are the names of the built-in [Keywords](@ref base-keywords):
```julia-repl
julia> else = false
ERROR: syntax: unexpected "else"
julia> try = "No"
ERROR: syntax: unexpected "="
```
Some Unicode characters are considered to be equivalent in identifiers.
Different ways of entering Unicode combining characters (e.g., accents)
are treated as equivalent (specifically, Julia identifiers are [NFC](http://www.macchiato.com/unicode/nfc-faq)-normalized).
Julia also includes a few non-standard equivalences for characters that are
visually similar and are easily entered by some input methods. The Unicode
characters `ɛ` (U+025B: Latin small letter open e) and `µ` (U+00B5: micro sign)
are treated as equivalent to the corresponding Greek letters. The middle dot
`·` (U+00B7) and the Greek
[interpunct](https://en.wikipedia.org/wiki/Interpunct) `·` (U+0387) are both
treated as the mathematical dot operator `⋅` (U+22C5).
The minus sign `−` (U+2212) is treated as equivalent to the hyphen-minus sign `-` (U+002D).
## Stylistic Conventions
While Julia imposes few restrictions on valid names, it has become useful to adopt the following
conventions:
* Names of variables are in lower case.
* Word separation can be indicated by underscores (`'_'`), but use of underscores is discouraged
unless the name would be hard to read otherwise.
* Names of `Type`s and `Module`s begin with a capital letter and word separation is shown with upper
camel case instead of underscores.
* Names of `function`s and `macro`s are in lower case, without underscores.
* Functions that write to their arguments have names that end in `!`. These are sometimes called
"mutating" or "in-place" functions because they are intended to produce changes in their arguments
after the function is called, not just return a value.
For more information about stylistic conventions, see the [Style Guide](@ref).
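As a purely illustrative sketch (all names hypothetical) of how these conventions look together:
```julia
module ImageFilters                      # module name: upper camel case
struct GrayImage                         # type name: upper camel case
    pixels::Matrix{Float64}
end
brightness(img) = sum(img.pixels) / length(img.pixels)   # function name: lower case, no underscores
function normalize!(img::GrayImage)      # writes to its argument, so the name ends in `!`
    img.pixels ./= maximum(img.pixels)
    return img
end
end # module
```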
| 37.402439 | 177 | 0.74405 | eng_Latn | 0.998044 |
9e07d3a6dcdde4de6b9a58b330cb668c33701d1f | 6,361 | md | Markdown | node_modules/gemini-scrollbar/README.md | ivyauxilio/mefour | b96aa12b1380cffe86528106c56500ae8c981e43 | [
"Unlicense",
"MIT"
] | null | null | null | node_modules/gemini-scrollbar/README.md | ivyauxilio/mefour | b96aa12b1380cffe86528106c56500ae8c981e43 | [
"Unlicense",
"MIT"
] | 1 | 2021-02-02T18:25:39.000Z | 2021-02-02T18:25:39.000Z | node_modules/gemini-scrollbar/README.md | ivyauxilio/mefour | b96aa12b1380cffe86528106c56500ae8c981e43 | [
"Unlicense",
"MIT"
] | 1 | 2020-03-02T11:11:06.000Z | 2020-03-02T11:11:06.000Z | # gemini-scrollbar
[](https://www.npmjs.com/package/gemini-scrollbar)


Custom overlay-scrollbars with native scrolling mechanism for web applications (if needed).
*There is a __React__ wrapper too — [react-gemini-scrollbar](https://github.com/noeldelgado/react-gemini-scrollbar).*
###### Problem Description
Nowadays, some OS’s provide “overlay-scrollbars” natively. Those scrollbars look nice and work well (mostly mobile browsers and the OSX opt-in). The problem comes when you have to customize the remaining ‘ugly’ scrollbars out there, e.g.: “*having a sidebar with a dark background + native-__non-floating__-scrollbars*” ...ugly. Even though this problem can seem merely visual, fixing it is, for me, a way of enhancing the user experience.
###### Constraints
- Falls back to the native scrollbars when the OS/browser supports “overlay-scrollbars”.
- Mimics the scrollbar behaviour when replaced with the custom ones (click, drag...).
- IE9+ support.
###### Solution Proposal
Check the scrollbar size. If the scrollbar size is zero (which means the scrollbars are already “over the content”) then we **do nothing**. Otherwise we “hide” the native scrollbar and show a custom one in its place.
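A minimal sketch of that size check (illustrative only, not the library’s actual code):
```js
// Create an off-screen element that forces scrollbars, then measure them.
var probe = document.createElement('div');
probe.style.cssText = 'width:100px;height:100px;overflow:scroll;position:absolute;top:-9999px;';
document.body.appendChild(probe);
var scrollbarSize = probe.offsetWidth - probe.clientWidth; // 0 => native overlay-scrollbars
document.body.removeChild(probe);
if (scrollbarSize === 0) {
  // Overlay scrollbars are already "over the content": do nothing.
} else {
  // Hide the native scrollbar and render the custom one in its place.
}
```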
## Demo
https://noeldelgado.github.io/gemini-scrollbar/
## Dependencies
None
## Installation
**NPM**
```sh
npm i gemini-scrollbar --save
```
**Bower**
```sh
bower install gemini-scrollbar --save
```
## Usage
**JS**
```js
var GeminiScrollbar = require('gemini-scrollbar')
var myScrollbar = new GeminiScrollbar({
element: document.querySelector('.my-scrollbar')
}).create();
```
**LESS**
```less
@import (inline) "<path-to-gemini-scrollbar>/gemini-scrollbar.css";
```
**CSS**
```css
@import url(<path-to-gemini-scrollbar>/gemini-scrollbar.css);
```
Or, you can add the relevant files in your document.
```html
<link href="<path-to-gemini-scrollbar>/gemini-scrollbar.css" rel="stylesheet">
<script src="<path-to-gemini-scrollbar>/index.js"></script>
```
## Options
name | type | default | description
|:--- | :--- | :--- | :---
**element *** | HTMLElement | `null` | The element to apply scrollbars
autoshow | Boolean | `false` | Show scrollbars upon hovering
createElements | Boolean | `true` | Create and append the require HTMLElements at runtime.
forceGemini | Boolean | `false` | Force Gemini scrollbars even if native overlay-scrollbars are available. Useful for development.
onResize | Function | `null` | Hook by which clients can be notified of resize events.
minThumbSize | Number `(px)` | `20` | Sets the minimum size of the thumbs.
\* `required`
## Basic Methods
name | description
|:--- | :---
create | Bind the events, create the required elements and display the scrollbars.
update | Recalculate the viewbox and scrollbar dimensions.
destroy | Unbind the events and remove the custom scrollbar elements.
## Other Methods
name | description
|:-- | :--
getViewElement | Returns the scrollable element
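Building on the Usage snippet above, a small hypothetical sketch of how these methods are typically combined:
```js
var scrollbar = new GeminiScrollbar({
  element: document.querySelector('.my-scrollbar'),
  autoshow: true
}).create();
// Recalculate the viewbox and thumb sizes after the content or window changes size.
window.addEventListener('resize', function () {
  scrollbar.update();
});
// Access the scrollable element, e.g. to read scrollTop or attach scroll listeners.
var view = scrollbar.getViewElement();
// Before removing the element from the DOM, unbind events and clean up.
scrollbar.destroy();
```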
## Customization
You can change the styles of the scrollbars using CSS. e.g:
```css
/* override gemini-scrollbar default styles */
/* vertical scrollbar track */
.gm-scrollbar.-vertical {
background-color: #f0f0f0
}
/* horizontal scrollbar track */
.gm-scrollbar.-horizontal {
background-color: transparent;
}
/* scrollbar thumb */
.gm-scrollbar .thumb {
background-color: rebeccapurple;
}
.gm-scrollbar .thumb:hover {
background-color: fuchsia;
}
```
## Notes
- **native overlay-scrollbar:** We check the scrollbar size [using this approach](http://davidwalsh.name/detect-scrollbar-width) by David Walsh. If the scrollbar size is zero (which means the scrollbars are “over the content”) then we do nothing but add the `gm-prevented` class selector to the element, which contains the non-standard `-webkit-overflow-scrolling: touch;` declaration for web devices to use momentum-based scrolling. No event binding, element creation... nothing; in this case, we let the OS/browser do its job. Why? You already have nice-looking scrollbars for free.
- **::-webkit-scrollbar:** If you plan to use gemini-scrollbar in your application, I highly recommend removing any WebKit scrollbar styles you may have. Why? Using the `-webkit-` prefixed pseudo-elements will cause WebKit to turn off its built-in scrollbar rendering, interfering with our scrollbar-size check. You can read a bit more about this issue on [this commit](../../issues/1).
- **create method:** The custom scrollbars will **not** render until you call the `create` method on the instance. i.e: `myScrollbar.create();`
- **required height:** To avoid unexpected results, it is recommended that you specify the `height` property with a value to the element you applying the custom scrollbars (or to its parent).
- **body tag:** If you want to apply custom scrollbars to `body`, make sure to declare a `height` value either to the `:root` pseudo-class or to the `html` element. e.g:
```css
html {
height: 100%;
/* or */
height: 100vh;
overflow: hidden;
}
```
- **createElements option:** The `createElements` option specifies whether or not gemini-scrollbar should create and append the required HTMLElements at runtime. Its default value is `true`. Passing this option as `false` assumes that you have added the required markup with the specific CSS class selectors on it for it to work. i.e:
```html
<!-- (createElements: false) example markup -->
<div class="something-scrollable">
<div class="gm-scrollbar -vertical">
<div class="thumb"></div>
</div>
<div class="gm-scrollbar -horizontal">
<div class="thumb"></div>
</div>
<div class="gm-scroll-view">
All your content goes here.
</div>
</div>
```
This way you can be sure the library will not touch/change your nodes structure. You can read more about the reason of this option [on this commit](https://github.com/noeldelgado/gemini-scrollbar/commit/2bb73c82f9d1588fb267fba08518adfe1170885c).
## Related
- [react-gemini-scrollbar](https://github.com/noeldelgado/react-gemini-scrollbar) - React wrapper
## License
MIT © [Noel Delgado](http://pixelia.me/)
| 36.982558 | 586 | 0.722056 | eng_Latn | 0.916028 |
9e08bd7be4f833d717a288147cf41ccf1cf05379 | 7,149 | md | Markdown | windows-driver-docs-pr/kernel/avoiding-misalignment-of-fixed-precision-data-types.md | iudezeGit/windows-driver-docs | 05d3b2b90a33d27a95bf30e07293ec05f90e06d1 | [
"CC-BY-4.0",
"MIT"
] | 1 | 2022-02-15T02:44:11.000Z | 2022-02-15T02:44:11.000Z | windows-driver-docs-pr/kernel/avoiding-misalignment-of-fixed-precision-data-types.md | iudezeGit/windows-driver-docs | 05d3b2b90a33d27a95bf30e07293ec05f90e06d1 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | windows-driver-docs-pr/kernel/avoiding-misalignment-of-fixed-precision-data-types.md | iudezeGit/windows-driver-docs | 05d3b2b90a33d27a95bf30e07293ec05f90e06d1 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Avoiding Misalignment of Fixed-Precision Data Types
description: Avoiding Misalignment of Fixed-Precision Data Types
ms.assetid: 4e214bd8-b622-447a-b484-bd1d5d239de7
keywords: ["file system control codes WDK 64-bit", "FSCTL WDK 64-bit", "control codes WDK 64-bit", "I/O control codes WDK kernel , 32-bit I/O in 64-bit drivers", "IOCTLs WDK kernel , 32-bit I/O in 64-bit drivers", "pointer precision WDK 64-bit", "fixed-precision data types WDK 64-bit", "misaligned fixed-precision data types"]
ms.date: 06/16/2017
ms.localizationpriority: medium
---
# Avoiding Misalignment of Fixed-Precision Data Types
Unfortunately, it is possible for a data type to have the same size, but different alignment requirements, for 32-bit and 64-bit programming. Thus not all IOCTL/FSCTL buffer misalignment problems can be avoided by changing pointer-precision data types to fixed-precision types. This means that kernel-mode driver IOCTLs and FSCTLs that pass buffers containing certain fixed-precision data types (or pointers to them) may also need to be thunked.
### Which Data Types Are Affected
The problem affects fixed-precision data types that are themselves structures. This is because the rules for determining alignment requirements for structures are platform-specific.
For example, **\_\_int64**, LARGE\_INTEGER, and KFLOATING\_SAVE must be aligned on a 4-byte boundary on x86 platforms. However, on Itanium-based machines, they must be aligned on an 8-byte boundary.
To determine the alignment requirement for a given data type on a particular platform, use the **TYPE\_ALIGNMENT** macro on that platform.
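For instance, a minimal illustrative sketch of such a check (assuming `p` is the user-supplied buffer pointer; the numeric values are platform-dependent):
```cpp
// TYPE_ALIGNMENT expands to the alignment requirement, in bytes, of the type
// on the platform the driver is compiled for (4 on x86, 8 on Itanium for LARGE_INTEGER).
size_t alignment = TYPE_ALIGNMENT(LARGE_INTEGER);
// A pointer is suitably aligned when its low-order bits are zero with respect
// to that requirement.
BOOLEAN aligned = (((ULONG_PTR)p & (alignment - 1)) == 0);
```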
### How To Fix the Problem
In the following example, the IOCTL is a METHOD\_NEITHER IOCTL, so the **Irp->UserBuffer** pointer is passed directly from the user-mode application to the kernel-mode driver. No validation is performed on buffers used in IOCTLs and FSCTLs. Thus a call to [**ProbeForRead**](https://msdn.microsoft.com/library/windows/hardware/ff559876) or [**ProbeForWrite**](https://msdn.microsoft.com/library/windows/hardware/ff559879) is required before the buffer pointer can be safely dereferenced.
Assuming that the 32-bit application has passed a valid value for **Irp->UserBuffer**, the LARGE\_INTEGER structure pointed to by **p->DeviceTime** will be aligned on a 4-byte boundary. **ProbeForRead** checks this alignment against the value passed in its *Alignment* parameter, which in this case is **TYPE\_ALIGNMENT** (LARGE\_INTEGER). On x86 platforms, this macro expression returns 4 (bytes). However, on Itanium-based machines, it returns 8, causing **ProbeForRead** to raise a STATUS\_DATATYPE\_MISALIGNMENT exception.
**Note** Removing the **ProbeForRead** call does not fix the problem, but only makes it harder to diagnose.
```cpp
typedef struct _IOCTL_PARAMETERS2 {
LARGE_INTEGER DeviceTime;
} IOCTL_PARAMETERS2, *PIOCTL_PARAMETERS2;
#define SETTIME_FUNCTION 1
#define IOCTL_SETTIME CTL_CODE(FILE_DEVICE_UNKNOWN, \
SETTIME_FUNCTION, METHOD_NEITHER, FILE_ANY_ACCESS)
...
case IOCTL_SETTIME:
PIOCTL_PARAMETERS2 p = (PIOCTL_PARAMETERS2)Irp->UserBuffer;
try {
if (Irp->RequestorMode != KernelMode) {
ProbeForRead ( p->DeviceTime,
sizeof( LARGE_INTEGER ),
TYPE_ALIGNMENT( LARGE_INTEGER ));
}
status = DoSomeWork(p->DeviceTime);
} except( EXCEPTION_EXECUTE_HANDLER ) {
```
The following sections tell how to fix the problem described above. Note that all code snippets have been edited for brevity.
### Solution 1: Copy the Buffer
The safest way to avoid misalignment problems is to make a copy of the buffer before accessing its contents, as in the following example.
```cpp
case IOCTL_SETTIME: {
PIOCTL_PARAMETERS2 p = (PIOCTL_PARAMETERS2)Irp->UserBuffer;
#if _WIN64
IOCTL_PARAMETERS2 LocalParams2;
RtlCopyMemory(&LocalParams2, p, sizeof(IOCTL_PARAMETERS2));
p = &LocalParams2;
#endif
status = DoSomeWork(p->DeviceTime);
break;
}
```
This solution can be optimized for better performance by first checking whether the buffer contents are correctly aligned. If so, the buffer can be used as is. Otherwise, the driver makes a copy of the buffer.
```cpp
case IOCTL_SETTIME: {
PIOCTL_PARAMETERS2 p = (PIOCTL_PARAMETERS2)Irp->UserBuffer;
#if _WIN64
IOCTL_PARAMETERS2 LocalParams2;
if ( (ULONG_PTR)p & (TYPE_ALIGNMENT(IOCTL_PARAMETERS2)-1)) {
// The buffer contents are not correctly aligned for this
// platform, so copy them into a properly aligned local
// buffer.
RtlCopyMemory(&LocalParams2, p, sizeof(IOCTL_PARAMETERS2));
p = &LocalParams2;
}
#endif
status = DoSomeWork(p->DeviceTime);
break;
}
```
### Solution 2: Use the UNALIGNED Macro
The **UNALIGNED** macro tells the C compiler to generate code that can access the **DeviceTime** field without taking an alignment fault. Note that using this macro on Itanium-based platforms is likely to make your driver significantly larger and slower.
```cpp
typedef struct _IOCTL_PARAMETERS2 {
LARGE_INTEGER DeviceTime;
} IOCTL_PARAMETERS2;
typedef IOCTL_PARAMETERS2 UNALIGNED *PIOCTL_PARAMETERS2;
```
### Pointers Are Also Affected
The misalignment problem described earlier can also occur in buffered I/O requests. In the following example, the IOCTL buffer contains an embedded pointer to a LARGE\_INTEGER structure.
```cpp
typedef struct _IOCTL_PARAMETERS3 {
LARGE_INTEGER *pDeviceCount;
} IOCTL_PARAMETERS3, *PIOCTL_PARAMETERS3;
#define COUNT_FUNCTION 1
#define IOCTL_GETCOUNT CTL_CODE(FILE_DEVICE_UNKNOWN, \
COUNT_FUNCTION, METHOD_BUFFERED, FILE_ANY_ACCESS)
```
Like the METHOD\_NEITHER IOCTL and FSCTL buffer pointers described earlier, pointers embedded in buffered I/O requests are also passed directly from the user-mode application to the kernel-mode driver. No validation is performed on these pointers. Thus a call to [**ProbeForRead**](https://msdn.microsoft.com/library/windows/hardware/ff559876) or [**ProbeForWrite**](https://msdn.microsoft.com/library/windows/hardware/ff559879), enclosed in a **try/except** block, is required before the embedded pointer can be safely dereferenced.
As in the earlier example, assuming that the 32-bit application has passed a valid value for **pDeviceCount**, the LARGE\_INTEGER structure pointed to by **pDeviceCount** will be aligned on a 4-byte boundary. **ProbeForRead** and **ProbeForWrite** check this alignment against the value of the *Alignment* parameter, which in this case is TYPE\_ALIGNMENT (LARGE\_INTEGER). On x86 platforms, this macro expression returns 4 (bytes). However, on Itanium-based machines, it returns 8, causing **ProbeForRead** or **ProbeForWrite** to raise a STATUS\_DATATYPE\_MISALIGNMENT exception.
This problem can be fixed either by making a properly aligned copy of the LARGE\_INTEGER structure, as in Solution 1, or by using the UNALIGNED macro as follows:
```cpp
typedef struct _IOCTL_PARAMETERS3 {
LARGE_INTEGER UNALIGNED *pDeviceCount;
} IOCTL_PARAMETERS3, *PIOCTL_PARAMETERS3;
```
| 48.304054 | 580 | 0.760106 | eng_Latn | 0.896741 |
9e08eb5e6fd86966cd4bc2299fb06d051c631ae5 | 1,701 | md | Markdown | docs/ops/arithmetic/Round_5.md | pazamelin/openvino | b7e8ef910d7ed8e52326d14dc6fd53b71d16ed48 | [
"Apache-2.0"
] | 1,127 | 2018-10-15T14:36:58.000Z | 2020-04-20T09:29:44.000Z | docs/ops/arithmetic/Round_5.md | pazamelin/openvino | b7e8ef910d7ed8e52326d14dc6fd53b71d16ed48 | [
"Apache-2.0"
] | 439 | 2018-10-20T04:40:35.000Z | 2020-04-19T05:56:25.000Z | docs/ops/arithmetic/Round_5.md | pazamelin/openvino | b7e8ef910d7ed8e52326d14dc6fd53b71d16ed48 | [
"Apache-2.0"
] | 414 | 2018-10-17T05:53:46.000Z | 2020-04-16T17:29:53.000Z | # Round {#openvino_docs_ops_arithmetic_Round_5}
**Versioned name**: *Round-5*
**Category**: *Arithmetic unary*
**Short description**: *Round* performs element-wise round operation with given tensor.
**Detailed description**: The operation takes one input tensor and rounds the values element-wise, i.e. it finds the nearest integer for each value. In the case of halves, the rule is to round them to the nearest even integer if the `mode` attribute is `half_to_even`, or to round them away from zero if the `mode` attribute is `half_away_from_zero`.
Input = [-4.5, -1.9, -1.5, 0.5, 0.9, 1.5, 2.3, 2.5]
round(Input, mode = `half_to_even`) = [-4.0, -2.0, -2.0, 0.0, 1.0, 2.0, 2.0, 2.0]
round(Input, mode = `half_away_from_zero`) = [-5.0, -2.0, -2.0, 1.0, 1.0, 2.0, 2.0, 3.0]
**Attributes**:
* *mode*
* **Description**: If set to `half_to_even` then the rule is to round halves to the nearest even integer, if set to `half_away_from_zero` then rounding in such a way that the result heads away from zero.
* **Range of values**: `half_to_even` or `half_away_from_zero`
* **Type**: string
* **Default value**: `half_to_even`
* **Required**: *no*
**Inputs**
* **1**: A tensor of type *T*. **Required.**
**Outputs**
* **1**: The result of element-wise round operation. A tensor of type *T*.
**Types**
* *T*: any numeric type.
**Example**
```xml
<layer ... type="Round">
<data mode="half_to_even"/>
<input>
<port id="0">
<dim>256</dim>
<dim>56</dim>
</port>
</input>
<output>
<port id="1">
<dim>256</dim>
<dim>56</dim>
</port>
</output>
</layer>
```
| 29.327586 | 368 | 0.617284 | eng_Latn | 0.944564 |
9e0932fc188e206a4629c25b6c7e9918167247b2 | 654 | md | Markdown | _publications/2022_pub111.md | auranic/andrei-zinovyev.github.io | b1ab4d9d18b6ac81198acc7905aa43d115e90826 | [
"MIT"
] | null | null | null | _publications/2022_pub111.md | auranic/andrei-zinovyev.github.io | b1ab4d9d18b6ac81198acc7905aa43d115e90826 | [
"MIT"
] | null | null | null | _publications/2022_pub111.md | auranic/andrei-zinovyev.github.io | b1ab4d9d18b6ac81198acc7905aa43d115e90826 | [
"MIT"
] | null | null | null | ---
title: 'Coloring Panchromatic Nighttime Satellite Images: Comparing the Performance of Several Machine Learning Methods'
collection: publications
category: journal
permalink: /publication/2022_pub111
year: 2022
pubtype: 'MLN'
citation: 'Rybnikova N, Portnov BA, Mirkes EM, Zinovyev A, Brook A and Gorban AN <a href=" https://ieeexplore.ieee.org/document/9431102">Coloring Panchromatic Nighttime Satellite Images: Comparing the Performance of Several Machine Learning Methods.</a> IEEE Transactions on Geoscience and Remote Sensing, 2022, 60:4702715,1-15, doi: 10.1109/TGRS.2021.3076011'
paperurl: ' https://ieeexplore.ieee.org/document/9431102'
---
| 54.5 | 360 | 0.795107 | kor_Hang | 0.28593 |
9e096e0a353ff4db2525f62114e9c2031a3a77c8 | 2,677 | md | Markdown | dynamicsax2012-technet/pol-set-up-the-accounts-receivable-parameters-to-calculate-interest.md | RobinARH/DynamicsAX2012-technet | d0d0ef979705b68e6a8406736612e9fc3c74c871 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | dynamicsax2012-technet/pol-set-up-the-accounts-receivable-parameters-to-calculate-interest.md | RobinARH/DynamicsAX2012-technet | d0d0ef979705b68e6a8406736612e9fc3c74c871 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | dynamicsax2012-technet/pol-set-up-the-accounts-receivable-parameters-to-calculate-interest.md | RobinARH/DynamicsAX2012-technet | d0d0ef979705b68e6a8406736612e9fc3c74c871 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: (POL) Set up the accounts receivable parameters to calculate interest
TOCTitle: (POL) Set up the accounts receivable parameters to calculate interest
ms:assetid: 9a412380-af50-4b06-9a74-5cf1fdb2109e
ms:mtpsurl: https://technet.microsoft.com/en-us/library/JJ678309(v=AX.60)
ms:contentKeyID: 49387031
ms.date: 04/18/2014
mtps_version: v=AX.60
audience: Application User
ms.search.region: Poland
---
# (POL) Set up the accounts receivable parameters to calculate interest
_**Applies To:** Microsoft Dynamics AX 2012 R3, Microsoft Dynamics AX 2012 R2_
In Poland, two types of interest rates are used:
- Tax interest rates, also referred to as statutory interest rates, which are specified by the Ministry of Finance.
- Free-hand interest rates, which are negotiated between the vendor and the customer.
> [!NOTE]
> Free-hand interest may or may not be calculated, based on the agreement between the customer and the vendor. However, tax interest calculation is mandatory.
Generally, the vendor sets the settlement period of no longer than 30 days. However, the following situations can arise:
- If the vendor and customer agree on a settlement period longer than 30 days or if the settlement period is not set, then free-hand interest can be calculated from the 31st day until the due date. Alternatively, tax interest can be calculated until the date of payment.
- If the settlement period is less than 30 days, the vendor can calculate tax interest from the due date until the date of payment.
You can define the parameters for interest calculation in **Accounts receivable parameters** form.
> [!NOTE]
> This topic has not been fully updated for Microsoft Dynamics AX 2012 R2.
1. Click **Accounts receivable** \> **Setup** \> **Accounts receivable parameters**.
2. Click **Collections**. In the **Interest calculation** field, select the transactions for which interest is to be calculated.
3. Press CTRL+S or close the form.
## See also
[(POL) Set up customer posting profiles](pol-set-up-customer-posting-profiles.md)
[(POL) Set up interest codes](pol-set-up-interest-codes.md)
[(POL) Set up a number sequence code for the interest note and voucher](pol-set-up-a-number-sequence-code-for-the-interest-note-and-voucher.md)
[(POL) Calculate tax interest and free-hand interest](pol-calculate-tax-interest-and-free-hand-interest.md)
[(POL) Post and print an interest note](pol-post-and-print-an-interest-note.md)
[(POL) View the calculated interest](pol-view-the-calculated-interest.md)
[(POL) Accounts receivable parameters (modified form)](https://technet.microsoft.com/en-us/library/jj678183\(v=ax.60\))
| 38.797101 | 272 | 0.763541 | eng_Latn | 0.986394 |
9e0b236fb74f84889c78d034a44b7f2c3f1db9fd | 2,973 | md | Markdown | README.md | iblazhko/fake-bootstrap | cb8430b431c5778e4cfd106c13a1a83e10ad1916 | [
"MIT"
] | null | null | null | README.md | iblazhko/fake-bootstrap | cb8430b431c5778e4cfd106c13a1a83e10ad1916 | [
"MIT"
] | null | null | null | README.md | iblazhko/fake-bootstrap | cb8430b431c5778e4cfd106c13a1a83e10ad1916 | [
"MIT"
] | null | null | null | # FAKE Bootstrapper for .NET Core
## Description
This is a set of simple bootstrapper scripts for .NET Core projects.
Build system is based on [FAKE](https://fake.build/), a DSL for build tasks.
This bootstrapper can be used as is with no modifications, or use this as a
starting point to create your own build system.
One of the benefits of using this build system is that it can run all the tests across all test projects in the solution in one go, something that .NET Core CLI cannot do at the moment.
## Prerequisites
Solution is expected to have following structure:
src\
SolutionName.sln
Project1\
Project1.csproj
Project1.cs
Project1.Tests\
Project1.Tests.csproj
Project1Tests.cs
Project2\
Project2.fsproj
Project2.fs
Project2.Tests\
Project2.Tests.fproj
Project2Tests.fs
- source code is in the `src` directory
- project directory name matches project file name
- test projects have `Tests` suffix
- test projects are implemeted using `xUnit` or other framework that provides .NET Core - compatible runner (i.e. runnable via `dotnet test`)
- test projects can use either C# or F#
.NET Core needs to be installed so that `dotnet` command is available.
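For example, you can verify that the CLI is available with:
```sh
# Verify that the .NET Core CLI is on the PATH
dotnet --info
```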
If you are on Linux, you can install PowerShell Core:
<https://docs.microsoft.com/en-us/powershell/scripting/setup/installing-powershell-core-on-linux>
and use exactly the same build scripts on both Linux and Windows.
## Installation
Copy `build` directory to the root of your solution, so that it at the same
level with `src`:
\build\
build.fsx
build.ps1
build.sh
\src\
Solution.sln
...
If you are planning on using Linux bash scripts, make them executable:
```sh
chmod u+x ./build/*.sh
```
If you are planning on using only PowerShell scripts, bash scripts (`*.sh`) can be removed.
## Usage
### Platform-specific scripts
#### Windows
In PowerShell propmt:
```PowerShell
build\build.ps1 [-Target Target] \
[-Configuration Configuration] \
[-Runtime Runtime]
```
#### Linux
In shell propmt:
```bash
build/build.sh [-t|--target Target] \
[-c|--configuration Configuration] \
[-r|--runtime Runtime]
```
Alternatively, if you have PowerShell Core installed, run the same command as on Windows in a PowerShell prompt (via `pwsh`).
### Build parameters
- `Target`: name of the build target
- `Clean`: run `dotnet clean`
- `Restore`: run `dotnet restore`
- `Build`: run `dotnet build`
- `Tests`: run `dotnet test` for all `*.Tests` projects
- `FullBuild` *(default)*: `Build` + `Tests`
- `Purge`: special target (implemented as separate script) to perform full cleanup: remove `bin` and `obj` build output directories, Fake CLI and lockfiles
- `Configuration`: `Release`*(default)*|`Debug`
- `Runtime`: `linux-x64`*(default)*|`win-x64`
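For example, a typical invocation that runs only the tests for a Debug build might look like this (parameter values as documented above):
```bash
# Linux: run just the Tests target in Debug configuration
./build/build.sh --target Tests --configuration Debug
# Windows (PowerShell): full build with the default Release configuration
# build\build.ps1 -Target FullBuild -Configuration Release
```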
| 29.147059 | 185 | 0.676757 | eng_Latn | 0.981799 |
9e0bb7293883905e46a12161b176e9b89eacb6b8 | 7,368 | md | Markdown | articles/sql-database/sql-database-connect-query-ssms.md | pravpatel/azure-docs | 28aa432b0c4e20a28e3fa62d6364364ec5d2294e | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/sql-database/sql-database-connect-query-ssms.md | pravpatel/azure-docs | 28aa432b0c4e20a28e3fa62d6364364ec5d2294e | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/sql-database/sql-database-connect-query-ssms.md | pravpatel/azure-docs | 28aa432b0c4e20a28e3fa62d6364364ec5d2294e | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: 'SSMS: Connect and query data in Azure SQL Database | Microsoft Docs'
description: Learn how to connect to SQL Database on Azure by using SQL Server Management Studio (SSMS). Then run Transact-SQL (T-SQL) statements to query and edit data.
keywords: connect to sql database,sql server management studio
services: sql-database
author: CarlRabeler
manager: craigg
ms.service: sql-database
ms.custom: mvc,DBs & servers
ms.topic: quickstart
ms.date: 11/28/2017
ms.author: carlrab
---
# Azure SQL Database: Use SQL Server Management Studio to connect and query data
[SQL Server Management Studio][ssms-install-latest-84g] (SSMS) is an integrated environment for managing any SQL infrastructure, from SQL Server to SQL Database for Microsoft Windows. This quickstart demonstrates how to use SSMS to connect to an Azure SQL database, and then use Transact-SQL statements to query, insert, update, and delete data in the database.
## Prerequisites
This quickstart uses as its starting point the resources created in one of these quickstarts:
[!INCLUDE [prerequisites-create-db](../../includes/sql-database-connect-query-prerequisites-create-db-includes.md)]
#### Install the latest SSMS
Before you start, make sure you have installed the newest version of [SSMS][ssms-install-latest-84g].
## SQL server connection information
[!INCLUDE [prerequisites-server-connection-info](../../includes/sql-database-connect-query-prerequisites-server-connection-info-includes.md)]
## Connect to your database
Use SQL Server Management Studio to establish a connection to your Azure SQL Database server.
> [!IMPORTANT]
> An Azure SQL Database logical server listens on port 1433. If you are attempting to connect to an Azure SQL Database logical server from within a corporate firewall, this port must be open in the corporate firewall for you to successfully connect.
>
1. Open SQL Server Management Studio.
2. In the **Connect to Server** dialog box, enter the following information:
| Setting | Suggested value | Description |
| ------------ | ------------------ | ----------- |
| **Server type** | Database engine | This value is required. |
| **Server name** | The fully qualified server name | The name should be something like this: **mynewserver20170313.database.windows.net**. |
| **Authentication** | SQL Server Authentication | SQL Authentication is the only authentication type that we have configured in this tutorial. |
| **Login** | The server admin account | This is the account that you specified when you created the server. |
| **Password** | The password for your server admin account | This is the password that you specified when you created the server. |
||||

3. Click **Options** in the **Connect to server** dialog box. In the **Connect to database** section, enter **mySampleDatabase** to connect to this database.

4. Click **Connect**. The Object Explorer window opens in SSMS.

5. In Object Explorer, expand **Databases** and then expand **mySampleDatabase** to view the objects in the sample database.
## Query data
Use the following code to query for the top 20 products by category using the [SELECT](https://msdn.microsoft.com/library/ms189499.aspx) Transact-SQL statement.
1. In Object Explorer, right-click **mySampleDatabase** and click **New Query**. A blank query window opens that is connected to your database.
2. In the query window, enter the following query:
```sql
SELECT pc.Name as CategoryName, p.name as ProductName
FROM [SalesLT].[ProductCategory] pc
JOIN [SalesLT].[Product] p
ON pc.productcategoryid = p.productcategoryid;
```
3. On the toolbar, click **Execute** to retrieve data from the Product and ProductCategory tables.

## Insert data
Use the following code to insert a new product into the SalesLT.Product table using the [INSERT](https://msdn.microsoft.com/library/ms174335.aspx) Transact-SQL statement.
1. In the query window, replace the previous query with the following query:
```sql
INSERT INTO [SalesLT].[Product]
( [Name]
, [ProductNumber]
, [Color]
, [ProductCategoryID]
, [StandardCost]
, [ListPrice]
, [SellStartDate]
)
VALUES
('myNewProduct'
,123456789
,'NewColor'
,1
,100
,100
,GETDATE() );
```
2. On the toolbar, click **Execute** to insert a new row in the Product table.
<img src="./media/sql-database-connect-query-ssms/insert.png" alt="insert" style="width: 780px;" />
## Update data
Use the following code to update the new product that you previously added using the [UPDATE](https://msdn.microsoft.com/library/ms177523.aspx) Transact-SQL statement.
1. In the query window, replace the previous query with the following query:
```sql
UPDATE [SalesLT].[Product]
SET [ListPrice] = 125
WHERE Name = 'myNewProduct';
```
2. On the toolbar, click **Execute** to update the specified row in the Product table.
<img src="./media/sql-database-connect-query-ssms/update.png" alt="update" style="width: 780px;" />
## Delete data
Use the following code to delete the new product that you previously added using the [DELETE](https://msdn.microsoft.com/library/ms189835.aspx) Transact-SQL statement.
1. In the query window, replace the previous query with the following query:
```sql
DELETE FROM [SalesLT].[Product]
WHERE Name = 'myNewProduct';
```
2. On the toolbar, click **Execute** to delete the specified row in the Product table.
<img src="./media/sql-database-connect-query-ssms/delete.png" alt="delete" style="width: 780px;" />
## Next steps
- To learn about creating and managing servers and databases with Transact-SQL, see [Learn about Azure SQL Database servers and databases](sql-database-servers-databases.md).
- For information about SSMS, see [Use SQL Server Management Studio](https://msdn.microsoft.com/library/ms174173.aspx).
- To connect and query using the Azure portal, see [Connect and query with the Azure portal SQL Query editor](sql-database-connect-query-portal.md).
- To connect and query using Visual Studio Code, see [Connect and query with Visual Studio Code](sql-database-connect-query-vscode.md).
- To connect and query using .NET, see [Connect and query with .NET](sql-database-connect-query-dotnet.md).
- To connect and query using PHP, see [Connect and query with PHP](sql-database-connect-query-php.md).
- To connect and query using Node.js, see [Connect and query with Node.js](sql-database-connect-query-nodejs.md).
- To connect and query using Java, see [Connect and query with Java](sql-database-connect-query-java.md).
- To connect and query using Python, see [Connect and query with Python](sql-database-connect-query-python.md).
- To connect and query using Ruby, see [Connect and query with Ruby](sql-database-connect-query-ruby.md).
<!-- Article link references. -->
[ssms-install-latest-84g]: https://docs.microsoft.com/sql/ssms/sql-server-management-studio-ssms
| 45.481481 | 362 | 0.729099 | eng_Latn | 0.943778 |
9e0c992502de410182d872ec7a7846edef042a81 | 6,477 | md | Markdown | his/install-and-config-guides/installation-help2.md | AishwaryaVarma/biztalk-docs | 8a6681659fe5740fd6df3ce5aa5be1c392bc7cb8 | [
"CC-BY-4.0",
"MIT"
] | 1 | 2019-06-14T19:45:26.000Z | 2019-06-14T19:45:26.000Z | his/install-and-config-guides/installation-help2.md | AishwaryaVarma/biztalk-docs | 8a6681659fe5740fd6df3ce5aa5be1c392bc7cb8 | [
"CC-BY-4.0",
"MIT"
] | 1 | 2019-11-19T08:49:33.000Z | 2019-11-19T08:49:33.000Z | his/install-and-config-guides/installation-help2.md | OPS-E2E-PPE/biztalk-docs | dfcd48d9ae3142ba3484aac52cb35f6ec8f3881c | [
"CC-BY-4.0",
"MIT"
] | 1 | 2019-08-05T18:53:58.000Z | 2019-08-05T18:53:58.000Z | ---
title: "Installation help | Microsoft Docs"
ms.custom: ""
ms.date: 10/24/2016
ms.prod: "host-integration-server"
ms.reviewer: ""
ms.suite: ""
ms.tgt_pltfrm: ""
ms.topic: "article"
ms.assetid: 4933ab5c-56c4-468c-a203-352eedd90ac6
caps.latest.revision: 6
author: "gplarsen"
ms.author: "hisdocs; plarsen"
manager: "anneta"
robots: noindex,nofollow
---
# Installation guidance
Use this topic to navigate through the Installation user interface.
## Autorun
**Planning**
We recommend that you read the Host Integration Server [Installation Guide](../install-and-config-guides/installation-guide1.md), which covers the following topics:
- System requirements
- Feature installation and configuration prerequisites
- Supported host systems
- Product installation and configuration (upgrades and setup options)
- Installing and configuring HIS and Enterprise Single Sign-on (ESSO)
- Uninstalling HIS and ESSO
**Installation**
Click **Host Integration Server** to launch the Installation Wizard.
**Exit**
Click **Exit** to close the Autorun program.
## Welcome
The Installation Wizard for Host Integration Server is similar to other Windows-based applications. The Installation Wizard calls Windows Installer and coordinates the installation process from beginning to end, terminating when the last component is installed. If you do not have the software prerequisites installed, then the Installation Wizard will install them from the Web or a pre-downloaded CAB file.
## License Agreement
The License agreement dialog box allows you to read and review the license use terms for Host Integration Server. Please read the license use terms.
1. Click **Yes** to accept the license use terms and continue with the installation process. Click **No** to reject the license use terms and close the installation wizard.
2. Click **Cancel** to close the Installation Wizard.
3. Click **Next** to continue with the installation process.
## Component Installation
The Installation Wizard displays components that are available with this version of Host Integration Server.
**Available Components**
1. Select to install either **Server** or **Client** components. By default, the Installation Wizard pre-selects Server components.
When installing Server, you may choose whether to install Enterprise Single Sign-On. The Installation Wizard installs all other server components.
When installing Client, you may not choose whether to install individual components. The Installation Wizard installs all client components.
2. Optionally, click the plus sign (+) to expand the list of available components. If a software prerequisite is not present on the computer, then the Installation Wizard will disable that component and display a disabled, empty checkbox.
**Description**
In the **Description** field, the Installation Wizard displays additional information on the available components.
**Space allocation**
The Installation Wizard displays the space required to install the product. Optionally, click **Space Allocated Details** to view available space on the selected storage drive.
**Install to**
3. Choose an installation path. By default, the Installation Wizard will install the product into the folder `C:\Program Files\Microsoft Host Integration Server 20xx\`. Click **Browse** to select a different installation folder.
4. Click **Back** to return to the previous dialog.
5. Click **Cancel** to close the Installation Wizard.
6. Click **Next** to continue with the installation process.
## Installation Summary
**Summary**
This is the Installation Summary screen. Please review the prerequisites and components that will be installed. Click **Back** to make changes on the previous screen. Click **Install** to continue with Setup.
**Set**
1. Optionally, click **Set** to select automatic logon and enter your credentials. If the Setup Wizard needs to reboot the computer during setup, then the Setup Wizard will use these credentials to automatically log into the computer and continue the installation process.
2. Click **Back** to return to the previous dialog
3. Click **Cancel** to close the Installation Wizard.
4. Click **Next** to continue with the installation process.
## Installation Completed
**Logfile**
Click **Logfile** to review the setup log. The Host Integration Server setup log contains information, warnings, and errors from the installation process. You can use this log file for troubleshooting and when running setup in unattended mode.
**Check for Updates**
Click the **Check for Updates** button to launch Windows Update, and then install any available updates for Host Integration Server.
**Launch Host Integration Server Configuration**
Click **Finish** to close the Host Integration Server Setup Wizard and launch the Host Integration Server Configuration wizard.
## Microsoft Update
**Check for Updates**
Click the **Check for Updates** button to launch Windows Update, and then install any available updates for Host Integration Server.
**Launch Host Integration Server Configuration**
Click **Finish** to close the Host Integration Server Setup Wizard and launch the Host Integration Server Configuration wizard.
## Program Maintenance
**Modify**
This option displays the Component Selection dialog box that you can use to select the features that you want to install.
**Repair**
Repair installation errors in the program. This option fixes missing or corrupt files, shortcuts, and registry entries.
**Remove**
Remove Host Integration Server from your computer.
## Uninstall Complete
**Logfile**
Click **Logfile** to review the setup log. The Host Integration Server setup log contains information, warnings, and errors from the uninstall process. You can use this log file for auditing and troubleshooting.
**Finish**
Click **Finish** to close the Host Integration Server Installation Wizard.
## Installation Cancelled
The Installation Wizard is canceling the installation process.
## Upgrade Summary
The Installation Wizard displays summary information on the upgrade process.
## See Also
[Installing HIS 2013](../install-and-config-guides/installing-his-2013.md) | 42.058442 | 411 | 0.746179 | eng_Latn | 0.970223 |
9e0ca6576369b4925dc310f2fb4d414ab02cff05 | 635 | md | Markdown | works-ml/docs/StringIndexerBuilder-sparksink.md | predictiveworks/cdap-spark | 1916ba5f167b8d118bd7aa1d0d866865dc6f2963 | [
"Apache-2.0"
] | 3 | 2020-02-17T00:59:30.000Z | 2021-02-09T14:54:15.000Z | works-ml/docs/StringIndexerBuilder-sparksink.md | predictiveworks/cdap-spark | 1916ba5f167b8d118bd7aa1d0d866865dc6f2963 | [
"Apache-2.0"
] | 4 | 2020-05-13T13:30:57.000Z | 2022-02-16T01:18:28.000Z | works-ml/docs/StringIndexerBuilder-sparksink.md | predictiveworks/cdap-spark | 1916ba5f167b8d118bd7aa1d0d866865dc6f2963 | [
"Apache-2.0"
] | 3 | 2020-09-15T07:18:39.000Z | 2021-11-24T12:36:20.000Z |
# String Indexer Builder
## Description
This machine learning plugin represents a building stage for an Apache Spark ML "String Indexer model".
## Configuration
**Reference Name**: Name used to uniquely identify this plugin for lineage, annotating metadata, etc.
### Model Configuration
**Model Name**: The unique name of the machine learning model.
**Model Stage***: The stage of the ML model. Supported values are 'experiment', 'staging', 'production'
and 'archived'. Default is 'experiment'.
### Data Configuration
**Input Field**: The name of the field in the input schema that contains the features to build the model from.
| 35.277778 | 110 | 0.75748 | eng_Latn | 0.975007 |
9e0cfd6011600dce4641123b3dfcde9980597be7 | 2,766 | md | Markdown | Docs/Test-PsNetUping.md | tinuwalther/PsNetTools | 41e5ae8f34294456517150df96c3905094a29835 | [
"MIT"
] | 15 | 2019-01-13T16:17:57.000Z | 2021-09-02T11:27:00.000Z | Docs/Test-PsNetUping.md | tinuwalther/PsNetTools | 41e5ae8f34294456517150df96c3905094a29835 | [
"MIT"
] | 12 | 2019-01-19T18:37:25.000Z | 2019-10-03T07:21:21.000Z | Docs/Test-PsNetUping.md | tinuwalther/PsNetTools | 41e5ae8f34294456517150df96c3905094a29835 | [
"MIT"
] | 3 | 2019-12-11T12:29:16.000Z | 2022-02-10T20:53:08.000Z | ---
external help file: PsNetTools-help.xml
Module Name: PsNetTools
online version: https://github.com/tinuwalther/PsNetTools
schema: 2.0.0
---
# Test-PsNetUping
## SYNOPSIS
Test the connectivity over a UDP port
## SYNTAX
```
Test-PsNetUping -Destination <String[]> -UdpPort <Int32[]> [-MinTimeout <Int32>] [-MaxTimeout <Int32>]
[<CommonParameters>]
```
## DESCRIPTION
Test connectivity to an endpoint over the specified Udp port
## EXAMPLES
### EXAMPLE 1
```
Test-PsNetUping -Destination sbb.ch, google.com -UdpPort 53, 139 -MaxTimeout 100
```
### EXAMPLE 2
Test the connectivity to one Destination and one Udp Port with a max. timeout of 100ms
```
Test-PsNetUping -Destination sbb.ch -UdpPort 53 -MaxTimeout 100
```
### EXAMPLE 3
Test the connectivity to two Destinations and one Udp Port with a max. timeout of 100ms
```
Test-PsNetUping -Destination sbb.ch, google.com -UdpPort 53 -MaxTimeout 100
```
### EXAMPLE 4
Test the connectivity to two Destinations and two Udp Ports with a max. timeout of 100ms
```
Test-PsNetUping -Destination sbb.ch, google.com -UdpPort 53, 139 -MaxTimeout 100 | Format-Table
```
## PARAMETERS
### -Destination
A String or an Array of Strings with Names or IP Addresses to test \<string\>
```yaml
Type: String[]
Parameter Sets: (All)
Aliases:
Required: True
Position: Named
Default value: None
Accept pipeline input: False
Accept wildcard characters: False
```
### -UdpPort
An Integer or an Array of Integers with Udp Ports to test \<int\>
```yaml
Type: Int32[]
Parameter Sets: (All)
Aliases: RemotePort
Required: True
Position: Named
Default value: None
Accept pipeline input: False
Accept wildcard characters: False
```
### -MinTimeout
Min. timeout in ms, default is 0
```yaml
Type: Int32
Parameter Sets: (All)
Aliases:
Required: False
Position: Named
Default value: 0
Accept pipeline input: False
Accept wildcard characters: False
```
### -MaxTimeout
Max. timeout in ms, default is 1000
```yaml
Type: Int32
Parameter Sets: (All)
Aliases:
Required: False
Position: Named
Default value: 1000
Accept pipeline input: False
Accept wildcard characters: False
```
### CommonParameters
This cmdlet supports the common parameters: -Debug, -ErrorAction, -ErrorVariable, -InformationAction, -InformationVariable, -OutVariable, -OutBuffer, -PipelineVariable, -Verbose, -WarningAction, and -WarningVariable. For more information, see [about_CommonParameters](http://go.microsoft.com/fwlink/?LinkID=113216).
## INPUTS
### Hashtable
## OUTPUTS
### PSCustomObject
## NOTES
Author: Martin Walther
## RELATED LINKS
[https://github.com/tinuwalther/PsNetTools](https://github.com/tinuwalther/PsNetTools)
| 21.44186 | 316 | 0.714027 | yue_Hant | 0.399398 |
9e0d01fb27c506af01ae53c543d546da577c0f42 | 4,572 | md | Markdown | docs/code-quality/ca2224-override-equals-on-overloading-operator-equals.md | galaxyuliana/visualstudio-docs.ko-kr | 0f07b2bdcdecc134d4f27d7da71521546f4046a6 | [
"CC-BY-4.0",
"MIT"
] | 1 | 2019-12-04T01:36:07.000Z | 2019-12-04T01:36:07.000Z | docs/code-quality/ca2224-override-equals-on-overloading-operator-equals.md | galaxyuliana/visualstudio-docs.ko-kr | 0f07b2bdcdecc134d4f27d7da71521546f4046a6 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/code-quality/ca2224-override-equals-on-overloading-operator-equals.md | galaxyuliana/visualstudio-docs.ko-kr | 0f07b2bdcdecc134d4f27d7da71521546f4046a6 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: 'CA2224: 같음 연산자를 오버로드할 때 Equals를 재정의하세요.'
ms.date: 11/04/2016
ms.topic: reference
f1_keywords:
- CA2224
- OverrideEqualsOnOverloadingOperatorEquals
- OverrideEqualsOnOverridingOperatorEquals
helpviewer_keywords:
- OverrideEqualsOnOverloadingOperatorEquals
- CA2224
ms.assetid: 7312afd9-84ba-417f-923e-7a159b53bf70
author: gewarren
ms.author: gewarren
manager: jillfra
ms.workload:
- multiple
ms.openlocfilehash: 3fa6bfa5b590d330d791eb8c735099e619ffaf3a
ms.sourcegitcommit: 94b3a052fb1229c7e7f8804b09c1d403385c7630
ms.translationtype: MT
ms.contentlocale: ko-KR
ms.lasthandoff: 04/23/2019
ms.locfileid: "62541912"
---
# <a name="ca2224-override-equals-on-overloading-operator-equals"></a>CA2224: 같음 연산자를 오버로드할 때 Equals를 재정의하세요.
|||
|-|-|
|TypeName|OverrideEqualsOnOverloadingOperatorEquals|
|CheckId|CA2224|
|Category|Microsoft.Usage|
|Breaking change|Non-breaking|
## <a name="cause"></a>원인
Public 형식이 같음 연산자를 구현 하지만 재정의 하지 않습니다 <xref:System.Object.Equals%2A?displayProperty=fullName>합니다.
## <a name="rule-description"></a>규칙 설명
같음 연산자의 기능에 액세스 하는 구문이 편리한 방법을 제공지 않습니다는 <xref:System.Object.Equals%2A> 메서드. 해당 논리는 동일 해야 같음 연산자를 구현 하는 경우 <xref:System.Object.Equals%2A>합니다.
코드는이 규칙을 위반 하는 경우 C# 컴파일러는 경고가 발생 합니다.
## <a name="how-to-fix-violations"></a>위반 문제를 해결하는 방법
이 규칙 위반 문제를 해결 하는 같음 연산자 구현의 제거 하거나 재정의 <xref:System.Object.Equals%2A> 두 메서드에 동일한 값을 반환 합니다. 같음 연산자는 일관 되지 않은 동작을 제공 하지 않습니다, 경우의 구현을 제공 하 여 위반을 해결할 수 있습니다 <xref:System.Object.Equals%2A> 를 호출 하는 <xref:System.Object.Equals%2A> 기본 클래스의 메서드.
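As a minimal illustrative sketch (not taken from this rule's sample files), the following type keeps the operator and `Equals` consistent by making the operator defer to `Equals`:
```csharp
public sealed class Point
{
    public int X { get; }
    public int Y { get; }
    public Point(int x, int y) => (X, Y) = (x, y);
    public override bool Equals(object obj) =>
        obj is Point other && X == other.X && Y == other.Y;
    public override int GetHashCode() => (X, Y).GetHashCode();
    // The operator defers to Equals, so the two stay consistent.
    public static bool operator ==(Point left, Point right) =>
        left is null ? right is null : left.Equals(right);
    public static bool operator !=(Point left, Point right) => !(left == right);
}
```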
## <a name="when-to-suppress-warnings"></a>경고를 표시 하는 경우
같음 연산자의 상속 된 구현으로 동일한 값을 반환 하는 경우이 규칙에서 경고를 표시 하지 않아도 안전 합니다 <xref:System.Object.Equals%2A>합니다. 이 문서의 예제에서는이 규칙에서 경고를 표시 안전 하 게 수 유형을 포함 합니다.
## <a name="examples-of-inconsistent-equality-definitions"></a>일관성 없는 같음 정의의 예
다음 예제에서는 일치 하지 않는 같음 정의 사용 하 여 형식을 보여 줍니다. `BadPoint` 같음 연산자의 사용자 지정 구현을 제공 하 여 같음의 의미를 변경 하지만 재정의 하지 않습니다 <xref:System.Object.Equals%2A> 동일 하 게 동작 하 게 합니다.
[!code-csharp[FxCop.Usage.OperatorEqualsRequiresEquals#1](../code-quality/codesnippet/CSharp/ca2224-override-equals-on-overloading-operator-equals_1.cs)]
The following code tests the behavior of `BadPoint`.
[!code-csharp[FxCop.Usage.TestOperatorEqualsRequiresEquals#1](../code-quality/codesnippet/CSharp/ca2224-override-equals-on-overloading-operator-equals_2.cs)]
This example produces the following output:
```txt
a = ([0] 1,1) and b = ([1] 2,2) are equal? No
a == b ? No
a1 and a are equal? Yes
a1 == a ? Yes
b and bcopy are equal ? No
b == bcopy ? Yes
```
The following example shows a type that technically violates this rule but does not behave in an inconsistent manner.
[!code-csharp[FxCop.Usage.ValueTypeEquals#1](../code-quality/codesnippet/CSharp/ca2224-override-equals-on-overloading-operator-equals_3.cs)]
The following code tests the behavior of `GoodPoint`.
[!code-csharp[FxCop.Usage.TestValueTypeEquals#1](../code-quality/codesnippet/CSharp/ca2224-override-equals-on-overloading-operator-equals_4.cs)]
This example produces the following output:
```txt
a = (1,1) and b = (2,2) are equal? No
a == b ? No
a1 and a are equal? Yes
a1 == a ? Yes
b and bcopy are equal ? Yes
b == bcopy ? Yes
```
## <a name="class-example"></a>클래스 예제
다음 예제에서는이 규칙을 위반 하는 클래스 (참조 형식)를 보여 줍니다.
[!code-csharp[FxCop.Usage.OverrideEqualsClassViolation#1](../code-quality/codesnippet/CSharp/ca2224-override-equals-on-overloading-operator-equals_5.cs)]
The following example fixes the violation by overriding <xref:System.Object.Equals%2A?displayProperty=fullName>.
[!code-csharp[FxCop.Usage.OverrideEqualsClassFixed#1](../code-quality/codesnippet/CSharp/ca2224-override-equals-on-overloading-operator-equals_6.cs)]
## <a name="structure-example"></a>구조 예제
다음 예제에서는이 규칙을 위반 하는 구조체 (값 형식)를 보여 줍니다.
[!code-csharp[FxCop.Usage.OverrideEqualsStructViolation#1](../code-quality/codesnippet/CSharp/ca2224-override-equals-on-overloading-operator-equals_7.cs)]
The following example fixes the violation by overriding <xref:System.ValueType.Equals%2A?displayProperty=fullName>.
[!code-csharp[FxCop.Usage.OverrideEqualsStructFixed#1](../code-quality/codesnippet/CSharp/ca2224-override-equals-on-overloading-operator-equals_8.cs)]
## <a name="related-rules"></a>관련된 규칙
[CA1046: 참조 형식에 같음 연산자 오버 로드 하지 마십시오.](../code-quality/ca1046-do-not-overload-operator-equals-on-reference-types.md)
[CA2225: 연산자 오버 로드는 명명 된 대체](../code-quality/ca2225-operator-overloads-have-named-alternates.md)
[CA2226: 연산자에는 대칭 오버 로드가 있어야 합니다.](../code-quality/ca2226-operators-should-have-symmetrical-overloads.md)
[CA2218: Equals GetHashCode를 재정의 합니다.](../code-quality/ca2218-override-gethashcode-on-overriding-equals.md)
[CA2231: ValueType.Equals를 재정의할 때 같음 연산자를 오버로드하십시오.](../code-quality/ca2231-overload-operator-equals-on-overriding-valuetype-equals.md) | 37.47541 | 238 | 0.7535 | kor_Hang | 0.999461 |
9e0d4762da01a263d45493074887556242d763f9 | 2,341 | md | Markdown | v2.3/getting-started/mesos/installation/dc-os/framework.md | asincu/calico | 094e15ac0f388006b86fa50d0fa673a56c3d41c6 | [
"Apache-2.0"
] | 2 | 2015-03-06T14:26:51.000Z | 2019-09-04T15:00:43.000Z | v2.3/getting-started/mesos/installation/dc-os/framework.md | asincu/calico | 094e15ac0f388006b86fa50d0fa673a56c3d41c6 | [
"Apache-2.0"
] | 1 | 2018-08-27T21:53:14.000Z | 2018-08-27T21:53:14.000Z | v2.3/getting-started/mesos/installation/dc-os/framework.md | asincu/calico | 094e15ac0f388006b86fa50d0fa673a56c3d41c6 | [
"Apache-2.0"
] | null | null | null | ---
title: Calico DC/OS Installation Guide
---
The following guide walks through installing Calico for DC/OS using the Universe
package repository.
#### Installing etcd
To get started, first install etcd from Universe:

#### Installing Calico
Then install Calico from Universe.

It will take a few minutes for Calico to finish
installing on your cluster. You can check the status of the installation by
visiting Calico's web status interface:
- Go to the **Services** tab
- Select "calico-install-framework" in the list of running services
(note that it may take a few minutes for Calico
to appear).
- Once the Calico service is `Healthy`,
Select the "calico-install-framework" task.
- Click the Endpoint URL to open the Calico status page in a new tab.

## Further Reading
This concludes the installation of Calico for DC/OS! Before you start
launching IP-per-container applications with Calico policy,
review the following information which may apply to your deployment.
#### AWS
DC/OS users on Amazon Web Services should view
[Calico's AWS reference]({{site.baseurl}}/{{page.version}}/reference/public-cloud/aws)
for information on how to configure AWS networking for use with Calico.
#### Note on Cluster Impact
The Installation method detailed above will affect availability of all Agents
in the cluster in order to work around two limitations in DC/OS 1.8:
1. [Mesos-Agents require a restart to detect newly added CNI networks](https://issues.apache.org/jira/browse/MESOS-6567).
2. [DC/OS does not configure Docker with a Cluster-Store](https://dcosjira.atlassian.net/browse/DCOS-155)
a requirement for Multi-host docker networking.
Because of these two limitations, Calico-DC/OS will restart each agent process
and restart each Docker daemon. Learn how to handle these installation steps manually
and prevent cluster availability impact by viewing the [Custom Install Guide](custom).
#### Deploying Applications
Once installed, see the [standard usage guides]({{site.baseurl}}/{{page.version}}/getting-started/mesos#tutorials)
| 37.758065 | 121 | 0.774883 | eng_Latn | 0.968874 |
9e0d4aae3da37b6f15c6e6eba00290234a624bff | 1,590 | md | Markdown | index.md | pdirmeyer/Python_gizmos | 47dd89e238cecafaa835823becaa372afdf2111a | [
"MIT"
] | null | null | null | index.md | pdirmeyer/Python_gizmos | 47dd89e238cecafaa835823becaa372afdf2111a | [
"MIT"
] | null | null | null | index.md | pdirmeyer/Python_gizmos | 47dd89e238cecafaa835823becaa372afdf2111a | [
"MIT"
] | null | null | null | # Python Gizmos
#### _Paul Dirmeyer_
## A place for useful and fun Python code I've written
**[Soundings](https://github.com/pdirmeyer/Python_gizmos/tree/main/Soundings)** - Code to produce detailed skew-T log-P thermodynamic diagrams. Uses the data server at the University of Wyoming as a source of current and historical sounding data. Customizable to include shading of CAPE and CIN, stability regimes, printing of key quantities.
* [Module](https://github.com/pdirmeyer/Python_gizmos/blob/main/Soundings/sounding_plotter.py)
* [Notebook](https://github.com/pdirmeyer/Python_gizmos/blob/main/Soundings/Sounding_Plotter.ipynb) showing examples.
**[Gradient_maker](https://github.com/pdirmeyer/Python_gizmos/tree/main/Gradient_maker)** - Module to generate custom color palettes and colormaps for use in `matplotlib` - with a number of samples and examples.
* [Documentation](https://github.com/pdirmeyer/Python_gizmos/blob/main/Gradient_maker/Gradient_maker_documentation.md) including the background, motivation and color theory.
* [Module](https://github.com/pdirmeyer/Python_gizmos/blob/main/Gradient_maker/gradient_maker.py)
* [Notebook](https://github.com/pdirmeyer/Python_gizmos/blob/main/Gradient_maker/Gradient_maker_example.ipynb) showing examples.
**[Games](https://github.com/pdirmeyer/Python_gizmos/tree/main/Games)** - A place for games.
* [SoccerLeague](https://github.com/pdirmeyer/Python_gizmos/blob/main/Games/SoccerLeague.ipynb) - Create and run your own pseudo Premier League football season. Generates realistic match and table results based on team ratings.
| 88.333333 | 342 | 0.801258 | eng_Latn | 0.589307 |
9e0d80b2bf54365d71c132b98772caf5a675cf17 | 2,031 | md | Markdown | README.md | Data-Science-for-Linguists-2019/Sentiment-Analysis-of-Russian-Language-News | 24ca483e015beef552f6641344c24f20aed91db8 | [
"MIT"
] | 3 | 2019-09-23T13:30:46.000Z | 2022-01-17T18:48:28.000Z | README.md | Data-Science-for-Linguists-2019/Sentiment-Analysis-of-Russian-Language-News | 24ca483e015beef552f6641344c24f20aed91db8 | [
"MIT"
] | null | null | null | README.md | Data-Science-for-Linguists-2019/Sentiment-Analysis-of-Russian-Language-News | 24ca483e015beef552f6641344c24f20aed91db8 | [
"MIT"
] | null | null | null | # Sentiment-Analysis-of-Russian-Language-News
## Patrick Stoyer; pjs96@pitt.edu; 10/26/2019
The intention of this project was to analyze the sentiment of news sources in Russian, both from inside and outside of Russia. The data was scraped from the following five news sites: Reuters, BBC, Kommersant, Radio Svoboda, Tass.
[Here](https://github.com/Data-Science-for-Linguists-2019/Class-Plaza/blob/master/guestbooks/guestbook_patrick.md) is my guestbook. [Here](https://github.com/Data-Science-for-Linguists-2019/Sentiment-Analysis-of-Russian-Language-News/blob/master/progress_report.md) is my progress report. [Here](https://github.com/Data-Science-for-Linguists-2019/Sentiment-Analysis-of-Russian-Language-News/blob/master/final_report.md) is my final report.
- [This folder](https://github.com/Data-Science-for-Linguists-2019/Sentiment-Analysis-of-Russian-Language-News/tree/master/data/info) contains csv files with the url, name and date of each article scraped.
- [This folder](https://github.com/Data-Science-for-Linguists-2019/Sentiment-Analysis-of-Russian-Language-News/tree/master/data/data_sample) contains folders with a few sample articles from the overall data set.
- [This folder](https://github.com/Data-Science-for-Linguists-2019/Sentiment-Analysis-of-Russian-Language-News/tree/master/gensim/outputs) contains output from lda and lsa topic modeling on various data sets.
- [This Jupyter notebook](https://github.com/Data-Science-for-Linguists-2019/Sentiment-Analysis-of-Russian-Language-News/blob/master/Topic_Modeling.ipynb) contains code that creates topic models of the data and looks at some scores associated with each of them.
- [This notebook](https://github.com/Data-Science-for-Linguists-2019/Sentiment-Analysis-of-Russian-Language-News/blob/master/data_overview.ipynb) gives an overview of the data.
- [This notebook](https://github.com/Data-Science-for-Linguists-2019/Sentiment-Analysis-of-Russian-Language-News/blob/master/overview_2.ipynb) continues the previous notebook's overview of data.
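To give a feel for what the Topic_Modeling notebook above does, here is a minimal, hypothetical gensim LDA sketch. The tokenized articles, topic count, and training settings are assumptions for illustration and are not taken from the notebook.

```python
# Hypothetical sketch of LDA topic modeling with gensim.
from gensim import corpora, models

# Assume `tokenized_articles` holds one token list per scraped article.
tokenized_articles = [
    ["россия", "экономика", "санкции", "рубль"],
    ["выборы", "политика", "россия", "президент"],
    ["экономика", "нефть", "рубль", "цены"],
]

dictionary = corpora.Dictionary(tokenized_articles)
corpus = [dictionary.doc2bow(doc) for doc in tokenized_articles]

# Train a small LDA model and print the discovered topics
lda = models.LdaModel(corpus, num_topics=2, id2word=dictionary, passes=10)
for topic_id, topic in lda.print_topics():
    print(topic_id, topic)
```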
| 126.9375 | 439 | 0.805022 | eng_Latn | 0.476464 |
9e0f9423fbe9f7c0767a9dc8061868c64402b233 | 1,919 | md | Markdown | docs/BUILD_WASM.md | pdf-ist-internal/pdfium-lib | 8aa8a1e289f230418bde2cc8239ca081b9a4c8c4 | [
"MIT"
] | null | null | null | docs/BUILD_WASM.md | pdf-ist-internal/pdfium-lib | 8aa8a1e289f230418bde2cc8239ca081b9a4c8c4 | [
"MIT"
] | null | null | null | docs/BUILD_WASM.md | pdf-ist-internal/pdfium-lib | 8aa8a1e289f230418bde2cc8239ca081b9a4c8c4 | [
"MIT"
] | null | null | null | # Build for WASM
1. Execute all **general** steps
2. Get Emscripten SDK:
```python3 make.py run build-emsdk```
3. Get PDFium:
```python3 make.py run build-pdfium-wasm```
4. Patch:
```python3 make.py run patch-wasm```
5. PDFium Linux dependencies
```./build/linux/pdfium/build/install-build-deps.sh```
6. Compile:
```python3 make.py run build-wasm```
7. Install libraries:
```python3 make.py run install-wasm```
8. Test:
```python3 make.py run test-wasm```
9. Generate javascript libraries:
```python3 make.py run generate-wasm```
Obs:
- The **make.py** file needs to be executed with Python version 3.
- You need to run all steps on a Linux machine (real, VM, or Docker) for it to work.
- With Docker you can skip steps 2 and 3.
## Docker
You can use Docker to build and test on your local machine before deploying.
Build the image with the command:
```docker build -t pdfium-wasm -f docker/wasm/Dockerfile docker/wasm```
Test it with the command:
```docker run -v ${PWD}:/app -it pdfium-wasm echo "test"```
Now you can execute any command following the pattern:
```docker run -v ${PWD}:/app -it pdfium-wasm [COMMAND]```
Obs: This is the recommended way to build and is the one used on the CD server.
## Run on browser
You can test the sample using commands:
```
python3 make.py run test-wasm
python -m http.server --directory sample-wasm/build
```
or with docker you can use:
```
docker run -v ${PWD}:/app -it pdfium-wasm python3 make.py run test-wasm
python3 -m http.server --directory sample-wasm/build
```
## Run on terminal
You can test the sample using commands:
```
python3 make.py run test-wasm
node sample-wasm/build/index.js
```
or with docker you can use:
```
docker run -v ${PWD}:/app -it pdfium-wasm python3 make.py run test-wasm
docker run -v ${PWD}:/app -it pdfium-wasm node sample-wasm/build/index.js
```
## Web demo
You can test pdfium on web browser here:
https://pdfviewer.github.io/
| 21.322222 | 77 | 0.692548 | eng_Latn | 0.850824 |
9e0fde49f17e876a966271c9aa5e60a6b085c4ee | 296 | md | Markdown | readme.md | sstutz/php-project-template | beade154d7f08b9caa0b915de5d2610506aa39b5 | [
"MIT"
] | null | null | null | readme.md | sstutz/php-project-template | beade154d7f08b9caa0b915de5d2610506aa39b5 | [
"MIT"
] | null | null | null | readme.md | sstutz/php-project-template | beade154d7f08b9caa0b915de5d2610506aa39b5 | [
"MIT"
] | 1 | 2019-03-04T01:56:47.000Z | 2019-03-04T01:56:47.000Z | # Modern, QA Focused PHP Project Template
A framework-agnostic PHP project template with sensible defaults, focused on code quality.
## Requirements
To make use of this setup, some additional software is required.
* [docker](https://www.docker.com/)
* [make](https://www.gnu.org/software/make/)
| 26.909091 | 88 | 0.760135 | eng_Latn | 0.916295 |
9e104f2be3e5dacfc8471bd7ab038ee3c16870cd | 5,060 | md | Markdown | README.md | ethan-nelson/bartiromo | b767a15c440274a176b4dc33b353ec34af062eed | [
"MIT"
] | null | null | null | README.md | ethan-nelson/bartiromo | b767a15c440274a176b4dc33b353ec34af062eed | [
"MIT"
] | 2 | 2017-04-25T03:06:17.000Z | 2017-09-24T21:59:15.000Z | README.md | ethan-nelson/bartiromo | b767a15c440274a176b4dc33b353ec34af062eed | [
"MIT"
] | null | null | null |
This is a web application designed to assist in crowdsourced filtering and identification of features. The current focus is on weather and climate systems, but it can be applied to other situations as well.
## Design Structure
The application is built with [Flask](http://flask.pocoo.org), a Python web framework. Overall, the application is designed to be lean (in terms of codebase) and lightweight (in terms of user browser load).
* `app.py` holds the app initialization, database and form class definitions, and the app views (i.e. the queries used to populate pages).
* `static/` contains design files and eventually Javascript functions that will be used in image click classification projects.
* `templates/` holds html templates for all pages. The `default.html` file is the base template and that controls most of the branding.
## Installation
It is ideal to use a Python virtual environment to maintain a consistent library environment. Additionally, pip can be used to install all the required libraries with one command. Thus, once the repository has been cloned or downloaded and you have navigated into the directory:
~~~
$ virtualenv ./env
$ env/bin/pip install -r requirements.txt
~~~
The Python portion is now installed.
## Database
Next you need to tackle setting up a database to store all information for the website. The database can be MySQL, PostgreSQL, or another kind that is compatible with [SQLAlchemy](http://www.sqlalchemy.org/). Please follow the installation instructions for the database you want to use, as they vary greatly by operating system and database system. For Ubuntu 16.04, this entails `sudo apt-get install postgresql postgresql-server`.
Once it is configured, ensure you have a user and an empty database created. As an example for PostgreSQL,
~~~
$ sudo -u postgres psql
# CREATE USER "micro" PASSWORD 'micro';
# CREATE DATABASE "micro" OWNER "micro";
# \q
~~~
Database connection information is accessed via the environment variable `DATABASE_URL`, which takes the following form:
~~~
$ export DATABASE_URL="mysql://user:pass@host/db"
~~~
### Database Schema
Tables and default user credentials must also be created for the app to run. Included in `app.py` is a function that will generate the tables and create a default user with the name `admin` and password `micro`. This is called by:
~~~
$ env/bin/python -c "import app; app.create_database()"
~~~
*Make sure you log in to the app and change the admin credentials.*
Alternatively, you can call the individual functions yourself in case you want to delete existing tables or populate the tables with your own set of users. These capabilities are accessible by:
~~~
$ env/bin/python
>>> import app
>>> app.db.drop_all() # To delete existing tables in the database
>>> app.db.create_all() # To add the required tables and columns
~~~
and so on. The namespace `app.db` provides you access to [Flask SQLAlchemy](http://flask-sqlalchemy.pocoo.org/) functions. See the `create_database()` function in `app.py` for an example of adding a user to the database programmatically.
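As a sketch of what adding a user programmatically might look like (assuming a hypothetical `User` model with `name` and `password` columns, which may not match the actual schema in `app.py`):

~~~
# Hypothetical example; the real model and password handling live in app.py.
from werkzeug.security import generate_password_hash

import app

new_user = app.User(
    name="editor",                                  # assumed column name
    password=generate_password_hash("change-me"),   # assumed hashing scheme
)
app.db.session.add(new_user)
app.db.session.commit()
~~~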
## Running the Application
With the Python modules and database configured, you are now ready to run the application.
The simplest way to run the app is to leverage Flask's internal web server, Werkzeug, using the python environment you installed:
~~~
$ env/bin/python app.py
~~~
By default, the app is served on 0.0.0.0:5432 using this mechanism, but that can be altered on the last line of the `app.py` file.
For deploying to Heroku, a PaaS, a Procfile is included in the repository that uses gunicorn to serve the website.
## Data Structure
* Projects are created with a given goal. Projects have a name, a description visible on the home page, and instructions displayed on the task page.
* Tasks are individual components of a project that are to be classified. In the case of image identification, a task is a single image.
* Results are user-chosen entries for a given task. In the case of image identification, a result would be "yes" in response to the instructions for a given project.
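A rough sketch of how these three entities might be expressed as Flask-SQLAlchemy models is shown below. The actual class and column definitions in `app.py` may differ; treat every name here as illustrative.

~~~
# Illustrative models only; the real definitions live in app.py.
from flask_sqlalchemy import SQLAlchemy

db = SQLAlchemy()

class Project(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    name = db.Column(db.String(120))
    description = db.Column(db.Text)       # shown on the home page
    instructions = db.Column(db.Text)      # shown on the task page
    tasks = db.relationship('Task', backref='project')

class Task(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    project_id = db.Column(db.Integer, db.ForeignKey('project.id'))
    image_url = db.Column(db.String(255))  # the item to be classified

class Result(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    task_id = db.Column(db.Integer, db.ForeignKey('task.id'))
    user_id = db.Column(db.Integer)        # who submitted the classification
    value = db.Column(db.String(255))      # e.g. "yes"
~~~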
## URL Paths
* `/` will lead to the homepage of the project.
* `/login/`, `/logout/`, and `/register/` provide their respectively-named functions.
* `/user/` accesses the logged in user's profile and includes a link to the password change page at `/user/password/`.
* `/leaderboard/` provides an overview of user contributions to all projects.
* `/project/<integer>/` will provide a task from the respective project.
* `/admin/` is the administrative interface homepage, providing links to project creation, project results, task addition pages, and options to hide projects.
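To make the mapping concrete, here is a hypothetical sketch of how a couple of these paths could be declared as Flask views; the real view functions in `app.py` almost certainly differ in name and body.

~~~
# Hypothetical route sketch; see the views in app.py for the real queries.
from flask import Flask

app = Flask(__name__)

@app.route('/')
def home():
    # Homepage: list the available projects.
    return 'project list'

@app.route('/project/<int:project_id>/')
def project_task(project_id):
    # Serve a single task from the requested project.
    return 'task for project %d' % project_id

@app.route('/leaderboard/')
def leaderboard():
    # Overview of user contributions to all projects.
    return 'leaderboard'
~~~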
## Feedback and Forthcoming Improvements
There's a lot on the docket still to be added to this. First in line are the inclusion of customizable image sizes and the ability to change the type of classification (namely only yes-no and free text).
If you have any immediate feedback or run into problems, feel free to open an issue on GitHub or send me an email: git@ethan-nelson.com
| 52.164948 | 439 | 0.767391 | eng_Latn | 0.997928 |
9e105b45d5be6117ce25e9cdfa2f51fa80709744 | 1,513 | md | Markdown | README.md | gajus/timeout-idle-promise | 302a7ba2d051af18c96c66771a515840f9d676bd | [
"BSD-3-Clause"
] | 7 | 2019-11-01T15:15:25.000Z | 2020-11-26T13:20:29.000Z | README.md | gajus/timeout-idle-promise | 302a7ba2d051af18c96c66771a515840f9d676bd | [
"BSD-3-Clause"
] | null | null | null | README.md | gajus/timeout-idle-promise | 302a7ba2d051af18c96c66771a515840f9d676bd | [
"BSD-3-Clause"
] | null | null | null | # timeout-idle-promise
[](https://travis-ci.org/gajus/timeout-idle-promise)
[](https://coveralls.io/github/gajus/timeout-idle-promise)
[](https://www.npmjs.org/package/timeout-idle-promise)
[](https://github.com/gajus/canonical)
[](https://twitter.com/kuizinas)
Detects when a promise is idle (does not create asynchronous events) for longer than a permitted amount of time.
## API
```js
import {
timeoutIdlePromise,
TimeoutError,
} from 'timeout-idle-promise';
/**
* @param {Function} promiseFactory
* @param {number} maximumIdleTime Idle timeout in milliseconds.
* @throws TimeoutError
*/
timeoutIdlePromise(promiseFactory, maximumIdleTime);
```
## Example Usage
```js
// Rejected with Idle promise timeout.
timeoutIdlePromise(() => {
return new Promise((resolve) => {
});
}, 1000);
// Resolved.
timeoutIdlePromise(() => {
return new Promise((resolve) => {
setTimeout(() => {
setTimeout(() => {
setTimeout(() => {
resolve();
}, 500);
}, 500);
}, 500);
});
}, 1000);
```
| 29.096154 | 160 | 0.693325 | yue_Hant | 0.283349 |
9e112f3d01a4ed1f72cbda735cead834c61d1c0c | 4,159 | md | Markdown | _posts/2016-11-21-bash-in-windows10.md | fredrikaverpil/fredrikaverpil.github.io | 667203d34ed64be5e00e8e804b4ed854adc292b7 | [
"MIT"
] | 13 | 2015-11-30T08:46:28.000Z | 2021-11-19T03:46:17.000Z | _posts/2016-11-21-bash-in-windows10.md | fredrikaverpil/fredrikaverpil.github.io | 667203d34ed64be5e00e8e804b4ed854adc292b7 | [
"MIT"
] | 16 | 2015-11-28T11:14:55.000Z | 2021-11-19T10:11:51.000Z | _posts/2016-11-21-bash-in-windows10.md | fredrikaverpil/fredrikaverpil.github.io | 667203d34ed64be5e00e8e804b4ed854adc292b7 | [
"MIT"
] | 24 | 2016-02-22T00:21:09.000Z | 2021-11-19T03:58:56.000Z | ---
layout: post
title: Bash on Ubuntu on Windows
tags: [windows, linux, bash]
---
This is a quick intro to – and some personal notes on working with – Bash in Windows 10 (Anniversary Update or Insider build required). This will be updated on a sporadic basis.
<!--more-->
## Information on Bash
### What is Bash on Ubuntu on Windows?
Bash on Ubuntu on Windows is part of the "Windows Subsystem for Linux" (WSL). Read more over at the [WSL MSDN page](https://msdn.microsoft.com/en-us/commandline/wsl/about). This page also covers installation guide, command reference, account permissions, interoperability, FAQ and release notes.
### WSL developments and news
- [Windows Subsystem for Linux blog](https://blogs.msdn.microsoft.com/wsl/)
- [Posts in Windows Insider Program](https://blogs.windows.com/blog/tag/windows-insider-program/)
- [Release notes](https://msdn.microsoft.com/en-us/commandline/wsl/release_notes)
Recent (October, 2016) noteworthy news:
**Official Ubuntu 16.04 support.** Ubuntu 16.04 (Xenial) is installed for all new Bash on Ubuntu on Windows instances starting in build 14951. This replaces Ubuntu 14.04 (Trusty). Existing user instances will not be upgraded automatically. Users on the Windows Insider program can upgrade manually from 14.04 to 16.04 using the do-release-upgrade command.
**Windows / WSL interoperability.** Users can now launch Windows binaries directly from a WSL command prompt. This is the number one request from our users on the WSL User Voice page. Some examples include:
```bash
export PATH=$PATH:/mnt/c/Windows/System32
notepad.exe
ipconfig.exe | grep IPv4 | cut -d: -f2
ls -la | findstr.exe foo.txt
cmd.exe /c dir
```
### Report issues and vote for new features
- Report issues at [Github](https://github.com/Microsoft/BashOnWindows)
- Vote on new features via [UserVoice](https://wpdev.uservoice.com/forums/266908-command-prompt-console-bash-on-ubuntu-on-windo/category/161892-bash)
## Using Bash
### Enter bash
You can run `bash` in a terminal window to enter the Linux subsystem. Or you can launch the "Bash on Ubuntu on Windows" application from the start menu.
### Using sensible colors
I don't know if I'm just not enough of an old-timer, but the default color scheme in bash is simply [hideous and quite painful to look at](https://github.com/Microsoft/vscode/issues/7556).
[Here's](https://medium.com/@iraklis/fixing-dark-blue-colors-on-windows-10-ubuntu-bash-c6b009f8b97c#.sjuyltkek) one guy's solution to this problem. I'll update this page with whatever solution I find most suitable.
### Issues I've come across
In short, interoperability (except launching applications) between WSL and Windows doesn't seem to work:
- Symlinking files between WSL and /mnt/c won't work
- [Modifying files in WSL from Windows will break things](https://blogs.msdn.microsoft.com/commandline/2016/11/17/do-not-change-linux-files-using-windows-apps-and-tools/)
However, it's fine to modify files stored in your Windows filesystem from within bash. So if you were in `/mnt/c/dev/project` and launched `code.exe ./`, Visual Studio Code would open the current folder.
### Managing the WSL installation
#### Re-install the Linux subsystem
From cmd.exe with Administrator privileges:
```
# Uninstall
lxrun /uninstall /full
# Reinstall
lxrun /install
```
#### Update Ubuntu
This equals an `apt update && apt dist-upgrade -y`:
```
lxrun /update
```
#### Set default user
```
lxrun /setdefaultuser <userName>
```
#### Which version of Ubuntu am I running?
```bash
$ lsb_release -a
No LSB modules are available
Distributor ID: Ubuntu
Description: Ubuntu 16.04.1 LTS
Release: 16.04
Codename: xenial
```
### Access WSL from Windows (read-only access)
```
C:\Users\<windows_username>\AppData\Local\lxss\home\<linux_username> # user home
C:\Users\<windows_username>\AppData\Local\lxss\rootfs # root
```
### Access Windows from WSL (write access)
```
/mnt/c
```
### Run bash command from cmd.exe
Runs the command, prints the output and exits back to the Windows command prompt.
```
bash -c "<command>"
```
### Python development
Install pip: `apt-get install python-pip`
| 33.007937 | 358 | 0.742486 | eng_Latn | 0.877528 |
9e113f247f3db8cb44e12da0f460467e83553d3f | 9,017 | md | Markdown | articles/active-directory-b2c/trustframeworkpolicy.md | changeworld/azure-docs.cs-cz | cbff9869fbcda283f69d4909754309e49c409f7d | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/active-directory-b2c/trustframeworkpolicy.md | changeworld/azure-docs.cs-cz | cbff9869fbcda283f69d4909754309e49c409f7d | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/active-directory-b2c/trustframeworkpolicy.md | changeworld/azure-docs.cs-cz | cbff9869fbcda283f69d4909754309e49c409f7d | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: TrustFrameworkPolicy - Azure Active Directory B2C | Microsoft Docs
description: Specify the TrustFrameworkPolicy element of a custom policy in Azure Active Directory B2C.
services: active-directory-b2c
author: msmimart
manager: celestedg
ms.service: active-directory
ms.workload: identity
ms.topic: reference
ms.date: 01/31/2020
ms.author: mimart
ms.subservice: B2C
ms.openlocfilehash: c964a7bde0b7db9357c73fc79d2df3170075fcc1
ms.sourcegitcommit: 2ec4b3d0bad7dc0071400c2a2264399e4fe34897
ms.translationtype: MT
ms.contentlocale: cs-CZ
ms.lasthandoff: 03/28/2020
ms.locfileid: "78186382"
---
# <a name="trustframeworkpolicy"></a>TrustFrameworkPolicy
[!INCLUDE [active-directory-b2c-advanced-audience-warning](../../includes/active-directory-b2c-advanced-audience-warning.md)]
A custom policy is represented as one or more XML-format files that refer to each other in a hierarchical chain. The XML elements define the elements of the policy, such as the claims schema, claims transformations, content definitions, claims providers, technical profiles, the user journey, and orchestration steps. Each policy file is defined within the top-level **TrustFrameworkPolicy** element of the policy file.
```XML
<TrustFrameworkPolicy
xmlns:xsi="https://www.w3.org/2001/XMLSchema-instance"
xmlns:xsd="https://www.w3.org/2001/XMLSchema"
xmlns="http://schemas.microsoft.com/online/cpim/schemas/2013/06"
PolicySchemaVersion="0.3.0.0"
TenantId="mytenant.onmicrosoft.com"
PolicyId="B2C_1A_TrustFrameworkBase"
PublicPolicyUri="http://mytenant.onmicrosoft.com/B2C_1A_TrustFrameworkBase">
...
```
The **TrustFrameworkPolicy** element contains the following attributes:
| Attribute | Required | Description |
|---------- | -------- | ----------- |
| PolicySchemaVersion | Yes | The schema version to be used to execute the policy. The value must be `0.3.0.0`. |
| TenantObjectId | No | The unique object identifier of the Azure Active Directory B2C (Azure AD B2C) tenant. |
| TenantId | Yes | The unique identifier of the tenant to which this policy belongs. |
| PolicyId | Yes | The unique identifier for the policy. This identifier must be prefixed with *B2C_1A_*. |
| PublicPolicyUri | Yes | The URI for the policy, which is a combination of the tenant ID and the policy ID. |
| DeploymentMode | No | Possible values: `Production` or `Development`. `Production` is the default. Use this property to debug your policy. For more information, see [Collecting logs](troubleshoot-with-application-insights.md). |
| UserJourneyRecorderEndpoint | No | The endpoint that is used when **DeploymentMode** is set to `Development`. The value must be `urn:journeyrecorder:applicationinsights`. For more information, see [Collecting logs](troubleshoot-with-application-insights.md). |
The following example shows how to specify the **TrustFrameworkPolicy** element:
``` XML
<TrustFrameworkPolicy
xmlns:xsi="https://www.w3.org/2001/XMLSchema-instance"
xmlns:xsd="https://www.w3.org/2001/XMLSchema"
xmlns="http://schemas.microsoft.com/online/cpim/schemas/2013/06"
PolicySchemaVersion="0.3.0.0"
TenantId="mytenant.onmicrosoft.com"
PolicyId="B2C_1A_TrustFrameworkBase"
PublicPolicyUri="http://mytenant.onmicrosoft.com/B2C_1A_TrustFrameworkBase">
```
## <a name="inheritance-model"></a>Inheritance model
The following types of policy files are typically used in a user journey:
- A **Base** file that contains most of the definitions. To help with troubleshooting and long-term maintenance of your policies, we recommend making a minimal number of changes to this file.
- An **Extensions** file that holds the unique configuration changes for your tenant. This policy file is derived from the Base file. Use this file to add new functionality or override existing functionality. For example, use this file to federate with new identity providers.
- A **Relying party (RP)** file, which is the task-focused file that is invoked directly by the relying party application, such as a web, mobile, or desktop application. Each unique task, such as sign-up or sign-in, password reset, or profile editing, requires its own RP policy file. This policy file is derived from the Extensions file.
The relying party application calls the RP policy file to perform a specific task, for example to initiate the sign-in flow. The Identity Experience Framework in Azure AD B2C adds all of the elements first from the Base file, then from the Extensions file, and finally from the RP policy file to assemble the current policy in effect. Elements of the same type and name in the RP file override those in the Extensions file, and the Extensions file overrides the Base file. The following diagram shows the relationship between the policy files and the relying party applications.
*(Figure: the trust framework policy inheritance model.)*
The inheritance model is as follows:
- The parent policy and the child policy are of the same schema.
- A child policy at any level can inherit from the parent policy and extend it by adding new elements.
- There is no limit on the number of levels.
For more information, see [Get started with custom policies](custom-policy-get-started.md).
## <a name="base-policy"></a>Base policy
To inherit a policy from another policy, a **BasePolicy** element must be declared under the **TrustFrameworkPolicy** element of the policy file. The **BasePolicy** element is a reference to the base policy from which this policy is derived.
The **BasePolicy** element contains the following elements:
| Element | Occurrences | Description |
| ------- | ----------- | --------|
| TenantId | 1:1 | The identifier of your Azure AD B2C tenant. |
| PolicyId | 1:1 | The identifier of the parent policy. |
The following example shows how to specify a base policy. This **B2C_1A_TrustFrameworkExtensions** policy is derived from the **B2C_1A_TrustFrameworkBase** policy.
``` XML
<TrustFrameworkPolicy
xmlns:xsi="https://www.w3.org/2001/XMLSchema-instance"
xmlns:xsd="https://www.w3.org/2001/XMLSchema"
xmlns="http://schemas.microsoft.com/online/cpim/schemas/2013/06"
PolicySchemaVersion="0.3.0.0"
TenantId="mytenant.onmicrosoft.com"
PolicyId="B2C_1A_TrustFrameworkExtensions"
PublicPolicyUri="http://mytenant.onmicrosoft.com/B2C_1A_TrustFrameworkExtensions">
<BasePolicy>
<TenantId>yourtenant.onmicrosoft.com</TenantId>
<PolicyId>B2C_1A_TrustFrameworkBase</PolicyId>
</BasePolicy>
...
</TrustFrameworkPolicy>
```
## <a name="policy-execution"></a>Policy execution
A relying party application, such as a web, mobile, or desktop application, calls the [relying party (RP) policy](relyingparty.md). The RP policy file executes a specific task, such as signing in, resetting a password, or editing a profile. The RP policy configures the list of claims that the relying party application receives as part of the issued token. Multiple applications can use the same policy. All applications receive the same token with the same claims, and the user goes through the same user journey. A single application can use multiple policies.
Inside the RP policy file, you specify a **DefaultUserJourney** element, which points to a [UserJourney](userjourneys.md). The user journey is usually defined in the Base or Extensions policy.
The B2C_1A_signup_signin policy:
```XML
<RelyingParty>
<DefaultUserJourney ReferenceId="SignUpOrSignIn">
...
```
B2C_1A_TrustFrameWorkBase or B2C_1A_TrustFrameworkExtensionPolicy:
```XML
<UserJourneys>
<UserJourney Id="SignUpOrSignIn">
...
```
A user journey defines the business logic of what the user goes through. Each user journey is a set of orchestration steps that performs a series of actions, in order, for authenticating the user and gathering information.
The **SocialAndLocalAccounts** policy file in the [starter pack](custom-policy-get-started.md#custom-policy-starter-pack) contains the SignUpOrSignIn, ProfileEdit, and PasswordReset user journeys. You can add more user journeys for other scenarios, such as changing an email address or linking and unlinking a social account.
An orchestration step can call a [technical profile](technicalprofiles.md). A technical profile provides a framework with a built-in mechanism to communicate with different types of parties. A technical profile can, among other things, perform these actions:
- Render a user experience.
- Let users sign in with a social or enterprise account, such as Facebook, a Microsoft account, Google, Salesforce, or any other identity provider.
- Set up phone verification for multi-factor authentication.
- Read and write data to and from the Azure AD B2C identity store.
- Call your own RESTful API service.
*(Figure: a user journey composed of orchestration steps.)*
The **TrustFrameworkPolicy** element contains the following elements:
- BasePolicy, as described above
- [BuildingBlocks](buildingblocks.md)
- [ClaimsProviders](claimsproviders.md)
- [UserJourneys](userjourneys.md)
- [RelyingParty](relyingparty.md)
| 56.35625 | 536 | 0.793058 | ces_Latn | 0.999379 |
9e131a5f7ae69be48f5ed242cfa42fcee570dd1f | 5,465 | md | Markdown | README.md | kgryte/shields-badge-url-coveralls | b5570dd756c969c4ded0c5aa1d0653bd1e095bba | [
"MIT"
] | 1 | 2016-01-04T21:14:28.000Z | 2016-01-04T21:14:28.000Z | README.md | kgryte/shields-badge-url-coveralls | b5570dd756c969c4ded0c5aa1d0653bd1e095bba | [
"MIT"
] | null | null | null | README.md | kgryte/shields-badge-url-coveralls | b5570dd756c969c4ded0c5aa1d0653bd1e095bba | [
"MIT"
] | null | null | null | Coveralls Badge URLs
===
[![NPM version][npm-image]][npm-url] [![Build Status][build-image]][build-url] [![Coverage Status][coverage-image]][coverage-url] [![Dependencies][dependencies-image]][dependencies-url]
> Creates [Shields.io][shields] badge URLs for [Coveralls][coveralls].
## Installation
``` bash
$ npm install shields-badge-url-coveralls
```
## Usage
``` javascript
var urls = require( 'shields-badge-url-coveralls' );
```
#### urls( opts )
Creates [Shields.io][shields] badge URLs for [Coveralls][coveralls].
``` javascript
var opts = {
'owner': 'dstructs',
'repo': 'matrix'
};
var out = urls( opts );
/*
{
"image": "https://img.shields.io/coveralls/dstructs/matrix/master.svg?style=flat",
"url": "https://coveralls.io/r/dstructs/matrix?branch=master"
}
*/
```
The `function` accepts the following `options`:
* __owner__: repository owner (*required*).
* __repo__: repository name (*required*).
* __branch__: repository branch. Default: `master`.
* __style__: badge style. Default: `flat`.
* __format__: badge format. Default: `svg`.
## Examples
``` javascript
var getKeys = require( 'object-keys' ).shim();
var url = require( 'url' );
var list = require( 'npm-list-author-packages' );
var repoUrls = require( 'npm-repo-url' );
var badgeUrls = require( 'shields-badge-url-coveralls' );
// Generate badge URLs for all of an author's packages...
list( {'username': 'kgryte'}, onList );
function onList( error, list ) {
var opts;
if ( error ) {
throw error;
}
if ( !list.length ) {
return;
}
opts = {
'packages': list
};
repoUrls( opts, onUrls );
}
function onUrls( error, results ) {
var badge;
var parts;
var urls;
var pkgs;
var path;
var i;
if ( error ) {
throw error;
}
urls = results.data;
pkgs = getKeys( urls );
// Note: we assume all repository URLs are of the form: git://github.com/{{owner}}/{{repo}}.git
for ( i = 0; i < pkgs.length; i++ ) {
parts = url.parse( urls[ pkgs[i] ] );
path = parts.pathname.split( '/' );
badge = badgeUrls({
'owner': path[ 1 ],
'repo': path[ 2 ].slice( 0, -4 )
});
console.log( badge );
}
}
```
To run the example code from the top-level application directory,
``` bash
$ node ./examples/index.js
```
---
## CLI
### Installation
To use the module as a general utility, install the module globally
``` bash
$ npm install -g shields-badge-url-coveralls
```
### Usage
``` bash
Usage: shields-coveralls --owner=<owner> --repo=<repo> [options]
Options:
-h, --help Print this message.
-V, --version Print the package version.
--owner owner Repository owner.
--repo repo Repository name.
--branch branch Repository branch. Default: 'master'.
--style style Badge style. Default: 'flat'.
--format format Badge format. Default: 'svg'.
```
### Examples
``` bash
$ shields-coveralls --owner=dstructs --repo=matrix
# => {"image":"https://img.shields.io/coveralls/dstructs/matrix/master.svg?style=flat","url":"https://coveralls.io/r/dstructs/matrix?branch=master"}
```
---
## Tests
### Unit
This repository uses [tape][tape] for unit tests. To run the tests, execute the following command in the top-level application directory:
``` bash
$ make test
```
All new feature development should have corresponding unit tests to validate correct functionality.
### Test Coverage
This repository uses [Istanbul][istanbul] as its code coverage tool. To generate a test coverage report, execute the following command in the top-level application directory:
``` bash
$ make test-cov
```
Istanbul creates a `./reports/coverage` directory. To access an HTML version of the report,
``` bash
$ make view-cov
```
### Browser Support
This repository uses [Testling][testling] for browser testing. To run the tests in a (headless) local web browser, execute the following command in the top-level application directory:
``` bash
$ make test-browsers
```
To view the tests in a local web browser,
``` bash
$ make view-browser-tests
```
<!-- [![browser support][browsers-image]][browsers-url] -->
---
## License
[MIT license](http://opensource.org/licenses/MIT).
## Copyright
Copyright © 2016. Athan Reines.
[npm-image]: http://img.shields.io/npm/v/shields-badge-url-coveralls.svg
[npm-url]: https://npmjs.org/package/shields-badge-url-coveralls
[build-image]: http://img.shields.io/travis/kgryte/shields-badge-url-coveralls/master.svg
[build-url]: https://travis-ci.org/kgryte/shields-badge-url-coveralls
[coverage-image]: https://img.shields.io/codecov/c/github/kgryte/shields-badge-url-coveralls/master.svg
[coverage-url]: https://codecov.io/github/kgryte/shields-badge-url-coveralls?branch=master
[dependencies-image]: http://img.shields.io/david/kgryte/shields-badge-url-coveralls.svg
[dependencies-url]: https://david-dm.org/kgryte/shields-badge-url-coveralls
[dev-dependencies-image]: http://img.shields.io/david/dev/kgryte/shields-badge-url-coveralls.svg
[dev-dependencies-url]: https://david-dm.org/dev/kgryte/shields-badge-url-coveralls
[github-issues-image]: http://img.shields.io/github/issues/kgryte/shields-badge-url-coveralls.svg
[github-issues-url]: https://github.com/kgryte/shields-badge-url-coveralls/issues
[tape]: https://github.com/substack/tape
[istanbul]: https://github.com/gotwarlost/istanbul
[testling]: https://ci.testling.com
[coveralls]: https://coveralls.io
[shields]: http://shields.io/
| 24.397321 | 185 | 0.686368 | eng_Latn | 0.466155 |
9e1331d36e95b88f269a38b36696a0fe170ca589 | 5,553 | md | Markdown | README.md | UJA-Desarrollo-Agil/d-agil-2021-2022-practica-2-vrivas | d2154266754b8abe688ff01d5d40a57f85809ebb | [
"MIT"
] | null | null | null | README.md | UJA-Desarrollo-Agil/d-agil-2021-2022-practica-2-vrivas | d2154266754b8abe688ff01d5d40a57f85809ebb | [
"MIT"
] | null | null | null | README.md | UJA-Desarrollo-Agil/d-agil-2021-2022-practica-2-vrivas | d2154266754b8abe688ff01d5d40a57f85809ebb | [
"MIT"
] | null | null | null | # Víctor Rivas
Undum is a game framework for building a sophisticated form of
hypertext interactive fiction.
If that means nothing to you, then let's go back a few steps. Remember
those Choose Your Own Adventure, or Fighting Fantasy books? Where you
got to choose what your character does next? Well if you think of that
in a web-page you have hypertext interactive fiction, or HIF. Instead
of turning to a particular page, you click a link, and the next bit of
content appears.
The problem is that those kinds of games are pretty limited. Every
time the player does something, the story could go in different
directions. So the author has to either write masses of branches, or
else the decisions you make as a player have to be relatively short
lived. If you played CYOA books you'll know that the wrong move either
ended the story pretty quickly, or else it didn't really matter what
you did because you'd end up at the same place.
To beat this limitation, Undum allows you to make the output
dynamic. It allows you to keep track of what has happened to the
character (any kinds of data, in fact), and to then change the text
that gets output accordingly. Effectively it is like writing a CYOA
page that is different each time you read it. This allows for far
richer and more rewarding game design.
Undum is a pure client-side library. It consists of an HTML file
and three Javascript files. The HTML file uses a nice bit of styling,
so there's a bunch of CSS and images in the default package too, but
that can be replaced if you want. To create your own game, you edit
the HTML file a little (mainly just changing the title and author),
and edit one of the Javascript files.
Because the game is written in Javascript, you get the full power of a
dynamic and efficient programming language. This isn't a CYOA
scripting system with limited functionality. You can take control of
anything you want. Or you can just keep things simple using a bunch of
simple functions provided by Undum.
## Compatibility
Undum is designed for HTML5 and CSS3 browsers. It has been tested on
Firefox 3.6, Chrome 5, and Safari 5. Older browsers may work okay too,
but some of the animation won't work, the styles may render poorly,
and saving and loading of games is unlikely to work. Anyone who wants
to hack around with it and make it work more widely is welcome. Just
fork this project on Github.
The local storage system on some browsers does not work when loading a
page from your hard drive. To test your game when developing it, you
may want to start up a simple local webserver. I have found that
Chrome seems to reliably provide local storage for local
development. It also has excellent Javascript debugging tools.
## Getting Started
1. Download Undum. Use the 'download zip' link in the right column of
this page.
2. Unzip Undum somewhere on your hard-drive.
3. Open games/tutorial.html in your browser, and play through the tutorial.
4. Copy games/tutorial.html to a file that reflects your game name.
5. Edit your HTML file and add the title, author and description of
the game you want to write. At the bottom of the file change the
name of `tutorial.game.js` to something else (by convention
*your-game-name*`.game.js`.
6. Copy `tutorial.game.js` to the file name you chose in the last
step. Open it and begin creating your game.
Reference documentation, including full API details, is at
[http://idmillington.github.io/undum/](http://idmillington.github.io/undum/),
and is also included in the repository.
The source code for all the files is also heavily commented, so if you
get stuck, go in and read it.
## Deploying
To deploy your game, just upload your HTML file and the `media` folder
to your webserver. You can serve several games with the same look and
feel from the same directory. You need a different HTML file for each
game, and each one should load the correct `.game.js` file at the
end. Add any media you need for your game (images, audio, video), and
the remaining files will be reused.
For example, if you had 3 games: `episode1`, `episode2`, and
`christmas-special`. You'd have a directory structure:
episode1.html
episode2.html
christmas-special.html
media/
css/ ...
img/ ...
js/
jquery-1.4.2.min.js
undum.js
games/
episode1/
episode1.game.js
... media for episode 1 ...
episode2/
episode2.game.js
            ... media for episode 2 ...
christmas-special/
christmas-special.game.js
... media for christmas special ...
This assumes you use the same directory layout that I do. You are
welcome to change things around, of course, as long as you work and
change the references.
## Undum
The name `undum` came from a little project that preceded this code
base. In 2008 I put together a simple browser based game. It was
narrative, but used the grind-based mechanics of games such as
Farmville and Mafia Wars. Because of the grinding, I called it
Carborundum, which I found I couldn't type at speed, so it became
Undum. The code has changed out of all recognition since them, as the
grind-based game moved to Flash. But the name stuck for the Javascript
framework.
## License
The code, documentation, styles, design and images are all distributed
under the MIT license. This permits you to modify and use them, even
for commercial use. A copy of the MIT license is found in the LICENSE
file.
| 39.382979 | 77 | 0.745903 | eng_Latn | 0.999778 |
9e139b427df5b5042e993f64b6a6faf790a78f83 | 78 | md | Markdown | _includes/03-links.md | gokdag/markdown-portfolio | f5dc2c60e1efc55134ec2eafeadc990a3d4bc2d0 | [
"MIT"
] | null | null | null | _includes/03-links.md | gokdag/markdown-portfolio | f5dc2c60e1efc55134ec2eafeadc990a3d4bc2d0 | [
"MIT"
] | 5 | 2020-11-03T13:03:27.000Z | 2020-11-03T13:36:49.000Z | _includes/03-links.md | gokdag/markdown-portfolio | f5dc2c60e1efc55134ec2eafeadc990a3d4bc2d0 | [
"MIT"
] | null | null | null | [LinkedIn](https://www.linkedin.com/in/kerim-can-g%C3%B6kda%C4%9F-a580a21ab/)
| 39 | 77 | 0.74359 | kor_Hang | 0.173061 |
9e142dfc53110bb6ddca582c2be239973cfce143 | 8,273 | md | Markdown | README.md | emernet-eins/system | 891b7465aee779d7e46eb2cfda1595b55f7c7ea4 | [
"MIT"
] | 1 | 2019-11-11T11:10:16.000Z | 2019-11-11T11:10:16.000Z | README.md | emernet-eins/system | 891b7465aee779d7e46eb2cfda1595b55f7c7ea4 | [
"MIT"
] | 21 | 2019-11-16T12:33:39.000Z | 2019-12-10T07:56:41.000Z | README.md | emernet-eins/system | 891b7465aee779d7e46eb2cfda1595b55f7c7ea4 | [
"MIT"
] | 1 | 2019-11-11T11:14:32.000Z | 2019-11-11T11:14:32.000Z | <div align="center">
<a href="https://github.com/webpack/webpack">
<img width="200" height="200" src="https://i.imgur.com/9hvPtsv.png">
</a>
<br>
<br>





<br>



<h1>EMERNET E.I.N.S - system</h1>
<p>
EMERNET E.I.N.S (Emergency Information Network System) was created to provide important information in case of a complete communication network failure.
EMERNET provides information like important Telephone numbers, emergency news applications (ex. NINA) and emergency services (ex. shelter locations).
All information provided is managed by the Open Source Community on GitHub and EMERNET-EINS.org.
EMERNET is frequently updated to provide the latest information.
This repository hosts the frontend for EMERNET. It holds emergency numbers for different countries across the whole world.
</p>
</div>
## Table of Contents
1. [Installation](#install)
1. [Updating](#updating)
2. [Introduction](#introduction)
3. [Concepts](#concepts)
4. [Contributing](#contributing)
5. [Support](#support)
6. [Core Team](#core-team)
7. [Sponsoring](#sponsoring)
8. [Special Thanks](#special-thanks-to)
<h2 align="center" id="install">Install</h2>
### Installing EMERNET System
If you want to host your own instance of emernet, you have two options:
- run the installer provided <a href="https://github.com/emernet-eins/server/releases">here</a>. This will
- install Apache2
- install unzip
- **remove Apache's 000-default.conf file, and replace it with emernet.conf**
- download this repository and move it to `/var/www/emernet/`
- set owner rights of `/var/www/emernet/` to www-data:www-data
or
- download and install EMERNET yourself:
- download this repository to your webserver and place it in its respective document root directory
- create a virtual host file
- configure emernet to be reachable (<a href="https://github.com/emernet-eins/server/blob/master/emernet.conf">example Apache2 config</a>)
If you don't have any webserver installed, option one is recommended. If you already have a webserver installed, and perhaps already configured, option two is what you want.
<h2 align="center" id="updating">Updating</h2>
If you installed EMERNET via the installer, it should automatically update itself if you didn't terminate the .jar file. If you did or want to trigger the update by yourself, simply run <a href="https://github.com/emernet-eins/server/releases/">emernet_runtime_version.jar</a> again.
If you installed EMERNET by hand, you can simply download the latest <a href="https://github.com/emernet-eins/system/releases">release</a> and override it with the contents of your webserver.
<h2 align="center" id="introduction">Introduction</h2>
As mentioned above, EMERNET E.I.N.S is a system that provides emergency information in case of an emergency.
**What is being provided?**
* Important numbers for multiple countries
* Automatically updated files
* Direct call links
### Browser Compatibility
EMERNET E.I.N.S works in pretty much every browser. Even mobile browsers are able to use the links provided.
<h2 align="center" id="concepts">Concepts</h2>
### Intent and purpose
We created this project to provide the really important information everybody could need in an emergency. We know that there are enough other ways to get to this information, but we are planning big for this project.
### Future
We are planning to extend this project into a Raspberry Pi-only version that even installs everything required for a WiFi hotspot, so everybody can access the information by just connecting to the open WiFi.
### Monetization
We are not really planning to monetize this project. The information provided is free for everybody to view. However, if you would like to buy us a coffee, feel free to do so.
<h2 align="center" id="contributing">Contributing</h2>
**We want contributing to EMERNET E.I.N.S to be fun, enjoyable, and educational for anyone, and everyone.** Due to our project being split into multiple parts, there is more than one repository on our organization. However, feel free to take part in developing every single one of them if you'd like to.
Contributions are highly appreciated! If you want to add / change / delete something, have a look at <a href="https://github.com/emernet-eins/system/blob/master/CONTRIBUTING.md">our contributing.md</a>.
Contributions go far beyond pull requests and commits. Although we love giving you the opportunity to put your stamp on EMERNET, we also are thrilled to receive a variety of other contributions including:
* Documentation updates, enhancements, designs, or bugfixes
* Spelling or grammar fixes
* README.md corrections or redesigns
* Adding features
* Triaging GitHub issues -- especially determining whether an issue still persists or is reproducible.
* Spreading the word about EMERNET and helping someone else who needs help
* Teaching others how to contribute to our repos!
* Blogging, speaking about, or creating tutorials about EMERNET E.I.N.S.
If you are worried or don't know where to start, you can **always** reach out to us by simply submitting an issue and a maintainer can help give you guidance!
_Looking to speak about EMERNET?_ We'd **love** to review your talk abstract/CFP! You can email it to emernet@rustige.me and we can give pointers or tips!!!
<h3 align="center" id="flavour">Creating your own flavours of EMERNET</h3>
If you'd like a different color, or something else does not meet your liking, feel free to fork the repository and make it your own.
<h2 align="center" id="support">Support</h2>
If you have discovered a 🐜 or have a feature suggestion, feel free to create an issue on Github.
### License
This project is licensed under the MIT License.
<h2 align="center" id="team">Core Team</h2>
<table>
<tbody>
<tr>
<td align="center" valign="top">
<img width="150" height="150" src="https://avatars0.githubusercontent.com/u/52698477?s=460">
<br>
<a href="https://github.com/miit0o">Christoph</a>
<p>EMERNET Backend</p>
<br>
<p>Founder</p>
</td>
<td align="center" valign="top">
<img width="150" height="150" src="https://avatars0.githubusercontent.com/u/33719652?s=460">
<br>
<a href="https://github.com/CodeF0x">CodeF0x</a>
<p>EMERNET Frontend</p>
<br>
<p>Developer</p>
</td>
</tr>
</tbody>
</table>
<h2 align="center" id="sponsoring">Sponsoring</h2>
Most of the core team members, EMERNET contributors and contributors in the ecosystem do this open source work in their free time. If you use EMERNET for a serious task, and you'd like us to invest more time on it, please donate. This project increases your income/productivity too. It makes development and applications faster and it reduces the required bandwidth.
This is how we use the donations:
* Allow the core team to work on EMERNET
* Thank contributors if they invested a large amount of time in contributing
* Support projects in the ecosystem that are of great value for users
* Support projects that are voted most (work in progress)
* Infrastructure cost
* Fees for money handling
<h2 align="center" id="specialthanks">Special Thanks to</h2>
<p align="center">(In chronological order)</p>
* @W3schools for [W3CSS](https://www.w3schools.com/w3css/), which provides our (beautiful) design
* Fontawesome for well... [Fontawesome](https://fontawesome.com)
* Everyone who has helped making EMERNET what it is today
* Everyone I forgot to mention here, but also influenced EMERNET.
| 45.707182 | 366 | 0.746887 | eng_Latn | 0.983321 |
9e1441a530fa257437d6e280b6da37904e9ecb53 | 328 | md | Markdown | _annotations/35f9ef4d-e877-4169-b220-cb973965bf49.md | jcmundy/test4_20210106_dev | 75f50af6cb786d38d36918e638a692bb2b633cb1 | [
"MIT"
] | null | null | null | _annotations/35f9ef4d-e877-4169-b220-cb973965bf49.md | jcmundy/test4_20210106_dev | 75f50af6cb786d38d36918e638a692bb2b633cb1 | [
"MIT"
] | null | null | null | _annotations/35f9ef4d-e877-4169-b220-cb973965bf49.md | jcmundy/test4_20210106_dev | 75f50af6cb786d38d36918e638a692bb2b633cb1 | [
"MIT"
] | null | null | null | ---
annotation_id: 35f9ef4d-e877-4169-b220-cb973965bf49
author: Joannas Github
tei_target: cfa55e7c-ef8e-4d2d-aa57-3cf1fee74dfe
annotated_page: https://readux.ecdsdev.org/iiif/15210893.5622.emory.edu/canvas/15210893.5622.emory.edu$5
page_index: 4
target: cfa55e7c-ef8e-4d2d-aa57-3cf1fee74dfe
---
<p>Descrizione is the word!</p> | 32.8 | 104 | 0.801829 | kor_Hang | 0.139798 |
9e14515eacce860c3d2ebb440d96fbfa5c78ecd8 | 392 | md | Markdown | markdown/org/docs/patterns/teagan/options/acrossbackfactor/fr.md | nicholasdower/freesewing | f7bd8c62043502ce08a3b524d9516597d32f1482 | [
"MIT"
] | 174 | 2018-08-25T13:46:07.000Z | 2022-03-13T22:34:10.000Z | markdown/org/docs/patterns/teagan/options/acrossbackfactor/fr.md | nicholasdower/freesewing | f7bd8c62043502ce08a3b524d9516597d32f1482 | [
"MIT"
] | 1,029 | 2018-08-13T08:44:55.000Z | 2022-03-31T20:35:42.000Z | markdown/org/docs/patterns/teagan/options/acrossbackfactor/fr.md | andyssundaypink/freesewing | 25cdf7d3aeaa5b1603922fe82e7fd3c085e87731 | [
"MIT"
] | 100 | 2018-09-18T18:11:38.000Z | 2022-03-31T17:55:09.000Z | 
Controls the width of your back as a factor of your shoulder-to-shoulder measurement
## Effect of this option on the pattern
 | 56 | 201 | 0.798469 | fra_Latn | 0.989736 |
9e146da7b741db794a7122e9638142cd6d495984 | 273 | md | Markdown | docs/website/docs/integrations/cli.md | forjagames/fg-api | b95dc4024fcc06db7bc19e41efb02f71dd0f00e2 | [
"MIT"
] | null | null | null | docs/website/docs/integrations/cli.md | forjagames/fg-api | b95dc4024fcc06db7bc19e41efb02f71dd0f00e2 | [
"MIT"
] | null | null | null | docs/website/docs/integrations/cli.md | forjagames/fg-api | b95dc4024fcc06db7bc19e41efb02f71dd0f00e2 | [
"MIT"
] | null | null | null | ---
sidebar_position: 1
---
# FG-Api CLI _[Deprecated]_
_The current integration is deprecated._
## Command-Line Interface
The `FG-Api CLI` is a command-line interface tool that you use to interact with the API (with Administrator rights) directly from a command shell.
| 24.818182 | 146 | 0.761905 | eng_Latn | 0.995713 |