| field | dtype | min | max |
| :-- | :-- | :-- | :-- |
| hexsha | stringlengths | 40 | 40 |
| size | int64 | 5 | 1.04M |
| ext | stringclasses | 6 values | |
| lang | stringclasses | 1 value | |
| max_stars_repo_path | stringlengths | 3 | 344 |
| max_stars_repo_name | stringlengths | 5 | 125 |
| max_stars_repo_head_hexsha | stringlengths | 40 | 78 |
| max_stars_repo_licenses | listlengths | 1 | 11 |
| max_stars_count | int64 | 1 | 368k |
| max_stars_repo_stars_event_min_datetime | stringlengths | 24 | 24 |
| max_stars_repo_stars_event_max_datetime | stringlengths | 24 | 24 |
| max_issues_repo_path | stringlengths | 3 | 344 |
| max_issues_repo_name | stringlengths | 5 | 125 |
| max_issues_repo_head_hexsha | stringlengths | 40 | 78 |
| max_issues_repo_licenses | listlengths | 1 | 11 |
| max_issues_count | int64 | 1 | 116k |
| max_issues_repo_issues_event_min_datetime | stringlengths | 24 | 24 |
| max_issues_repo_issues_event_max_datetime | stringlengths | 24 | 24 |
| max_forks_repo_path | stringlengths | 3 | 344 |
| max_forks_repo_name | stringlengths | 5 | 125 |
| max_forks_repo_head_hexsha | stringlengths | 40 | 78 |
| max_forks_repo_licenses | listlengths | 1 | 11 |
| max_forks_count | int64 | 1 | 105k |
| max_forks_repo_forks_event_min_datetime | stringlengths | 24 | 24 |
| max_forks_repo_forks_event_max_datetime | stringlengths | 24 | 24 |
| content | stringlengths | 5 | 1.04M |
| avg_line_length | float64 | 1.14 | 851k |
| max_line_length | int64 | 1 | 1.03M |
| alphanum_fraction | float64 | 0 | 1 |
| lid | stringclasses | 191 values | |
| lid_prob | float64 | 0.01 | 1 |
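The per-document columns `size`, `avg_line_length`, `max_line_length`, and `alphanum_fraction` can presumably be recomputed from `content`. A minimal Python sketch of that computation follows; the exact formulas (e.g. whether a trailing newline counts as an empty line) are assumptions based on the field names, not a documented specification:

```python
def text_stats(content: str) -> dict:
    """Recompute the per-document statistics described by the schema fields.

    Assumed definitions (not documented): avg_line_length is the mean
    character length of newline-split lines, and alphanum_fraction is the
    share of alphanumeric characters in the whole document.
    """
    lines = content.split("\n")
    return {
        "size": len(content),
        "avg_line_length": sum(len(line) for line in lines) / len(lines),
        "max_line_length": max(len(line) for line in lines),
        "alphanum_fraction": (
            sum(ch.isalnum() for ch in content) / len(content) if content else 0.0
        ),
    }

# Example: a tiny two-line Markdown document.
stats = text_stats("# Title\nHello world 123\n")
```

Under these assumed definitions the trailing newline produces a third, empty line, which pulls the average line length down, matching how low `avg_line_length` values can get for short files.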
6fce18498a143a872c8c5bb6228b2e9a2112ae98
395
md
Markdown
.github/ISSUE_TEMPLATE/issues.md
avras/community-hub
64a7b5b631e3d92989c754b26e40c43ede486a60
[ "MIT" ]
30
2021-01-19T23:22:30.000Z
2022-02-17T13:01:19.000Z
.github/ISSUE_TEMPLATE/issues.md
MarkCrypto-newfresh/community-hub
30ad4fa9f3fadc4fa9c86470989ff966c2fa9b93
[ "MIT" ]
172
2021-01-16T19:04:00.000Z
2022-03-30T20:20:40.000Z
.github/ISSUE_TEMPLATE/issues.md
MarkCrypto-newfresh/community-hub
30ad4fa9f3fadc4fa9c86470989ff966c2fa9b93
[ "MIT" ]
56
2021-02-10T03:20:22.000Z
2022-03-27T15:46:35.000Z
---
name: Documentation Issue
about: Report incorrect or missing information from https://community.optimism.io/
---

<!--
This repository only accepts issues related to the documentation here: https://community.optimism.io/

If you have a support problem, [join our discord](https://discord.gg/C8CjvkaU4w) and post it in the appropriate channel, either `#user-support` or `#dev-support`.
-->
30.384615
162
0.751899
eng_Latn
0.981675
6fcf28afdf4bee1fc43867f025ff18bbb458446c
2,932
md
Markdown
docs/relational-databases/errors-events/mssqlserver-1204-database-engine-error.md
Philippe-Geiger/sql-docs.fr-fr
7fe32a3b70e9219529d5b00725233abf9d5982f6
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/relational-databases/errors-events/mssqlserver-1204-database-engine-error.md
Philippe-Geiger/sql-docs.fr-fr
7fe32a3b70e9219529d5b00725233abf9d5982f6
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/relational-databases/errors-events/mssqlserver-1204-database-engine-error.md
Philippe-Geiger/sql-docs.fr-fr
7fe32a3b70e9219529d5b00725233abf9d5982f6
[ "CC-BY-4.0", "MIT" ]
null
null
null
---
description: MSSQLSERVER_1204
title: MSSQLSERVER_1204 | Microsoft Docs
ms.custom: ''
ms.date: 04/04/2017
ms.prod: sql
ms.reviewer: ''
ms.technology: supportability
ms.topic: language-reference
helpviewer_keywords:
- 1204 (Database Engine error)
ms.assetid: de6ece78-79de-484d-9224-ca0f7645815f
author: MashaMSFT
ms.author: mathoma
ms.openlocfilehash: 04e9802e4ca7df64fd469ef2ca8151cbe160c182
ms.sourcegitcommit: e700497f962e4c2274df16d9e651059b42ff1a10
ms.translationtype: HT
ms.contentlocale: fr-FR
ms.lasthandoff: 08/17/2020
ms.locfileid: "88336365"
---
# <a name="mssqlserver_1204"></a>MSSQLSERVER_1204

[!INCLUDE [SQL Server](../../includes/applies-to-version/sqlserver.md)]

## <a name="details"></a>Details

| Attribute | Value |
| :-------- | :---- |
|Product name|SQL Server|
|Event ID|1204|
|Event source|MSSQLSERVER|
|Component|SQLEngine|
|Symbolic name|LK_OUTOF|
|Message text|The instance of the SQL Server Database Engine cannot obtain a LOCK resource at this time. Rerun your statement when there are fewer active users. Ask the database administrator to check the lock and memory configuration for this instance, or to check for long-running transactions.|

## <a name="explanation"></a>Explanation

[!INCLUDE[ssNoVersion](../../includes/ssnoversion-md.md)] cannot obtain a lock resource. This can be caused by either of the following reasons:

- [!INCLUDE[ssNoVersion](../../includes/ssnoversion-md.md)] cannot allocate more memory from the operating system, either because other processes are using it, or because the server is operating with the **max server memory** option configured.
- The lock manager will not use more than 60 percent of the memory available to [!INCLUDE[ssNoVersion](../../includes/ssnoversion-md.md)].

## <a name="user-action"></a>User action

If you suspect that SQL Server cannot allocate sufficient memory, try the following:

- If applications other than SQL Server are consuming resources, try stopping those applications or consider running them on a separate server. This will release memory held by other processes for SQL Server.
- If you have configured the max server memory option, increase its value.

If you suspect that the lock manager has used the maximum amount of available memory, identify the transaction that is holding the most locks and terminate it. The script below identifies the transaction holding the most locks:

```sql
SELECT request_session_id, COUNT(*) AS num_locks
FROM sys.dm_tran_locks
GROUP BY request_session_id
ORDER BY COUNT(*) DESC;
```

Take the highest session ID and terminate it using the KILL command.
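For example, if the lock-count query reports session ID 74 as the top lock holder (74 is a made-up session ID for illustration), that final step would be:

```sql
-- 74 is a hypothetical session_id taken from the query's first row
KILL 74;
```

Note that KILL rolls back the session's open transaction, which can itself take time proportional to the work being undone.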
48.065574
371
0.755798
fra_Latn
0.886681
6fcf863acb65b6d85b6738bbf8b7be8f99efbaa7
1,917
md
Markdown
doc/std/IRC/Connection.md
cooper/ferret
6e035bd38205379151a575d500c0cf77ec834d3a
[ "BSD-3-Clause" ]
13
2017-02-28T02:48:03.000Z
2020-04-11T17:56:52.000Z
doc/std/IRC/Connection.md
cooper/ferret
6e035bd38205379151a575d500c0cf77ec834d3a
[ "BSD-3-Clause" ]
66
2016-09-27T04:05:54.000Z
2018-12-03T18:37:14.000Z
doc/std/IRC/Connection.md
cooper/ferret
6e035bd38205379151a575d500c0cf77ec834d3a
[ "BSD-3-Clause" ]
null
null
null
# IRC::Connection

This is the IRC::Connection class.

## Initializer

```
$connection = Connection($addr: Str, $nick: Str)
```

Creates a new Connection class instance.

* __addr__: [Str](/doc/std/String.md) - IRC server address.
* *optional* __port__: [Num](/doc/std/Number.md) - IRC server port.
* __nick__: [Str](/doc/std/String.md) - Preferred nickname.
* *optional* __user__: [Str](/doc/std/String.md) - Username (ident).
* *optional* __real__: [Str](/doc/std/String.md) - Real name.
* *optional* __autojoin__: List - Channels to join on connect.

## Methods

### connect

```
$connection.connect()
```

Initiates the connection.

### send

```
$connection.send($line: Str)
```

Sends a line of IRC data.

* __line__: [Str](/doc/std/String.md) - A string of outgoing data.

### getTarget

```
$connection.getTarget($target: Str)
```

Fetches a channel or user object.

* __target__: [Str](/doc/std/String.md) - Channel name or nickname.

### getChannel

```
$connection.getChannel($name: Str)
```

Fetches a channel object from a channel name.

* __name__: [Str](/doc/std/String.md) - Channel name.

### getUser

```
$connection.getUser($nick: Str)
```

Fetches a user object from a nickname.

* __nick__: [Str](/doc/std/String.md) - Nickname associated with the user.

### getServer

```
$connection.getServer($name: Str)
```

Fetches a server object from a server name.

* __name__: [Str](/doc/std/String.md) - Server name.

### connected

```
$connection.connected()
```

Hook. Called when a connection to the socket is established.

### disconnected

```
$connection.disconnected()
```

Hook. Called on disconnect, whether it be user-initiated or due to error.

### copy

```
$connection.copy()
```

Creates a new IRC::Connection with the same options.

End of the IRC::Connection class. This file was generated automatically by the Ferret compiler from Connection.frt.
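Taken together, a minimal use of the class might look like the Ferret sketch below. The server address, nickname, and channel name are made-up values, and the sequence is assembled only from the initializer and methods documented above; it is not an official example:

```
$connection = Connection("irc.example.com", "ferretbot")
$connection.connect()
$connection.send("JOIN #ferret")
$channel = $connection.getChannel("#ferret")
```

In a real client the JOIN would more likely go through the *autojoin* initializer option, since `connect()` only initiates the connection and the socket may not yet be established when `send` is called.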
13.040816
74
0.672926
eng_Latn
0.920902
6fcf9a011525b5c62ee0529d7b5ad5e188ecdfa5
2,006
md
Markdown
docs/smtpd.md
chas0amx/ansible-postfix
b129c57fdddf00447a715cccea0758878de22d0b
[ "Apache-2.0" ]
1
2022-02-28T10:22:07.000Z
2022-02-28T10:22:07.000Z
docs/smtpd.md
chas0amx/ansible-postfix
b129c57fdddf00447a715cccea0758878de22d0b
[ "Apache-2.0" ]
7
2021-11-18T07:25:50.000Z
2022-03-31T12:25:24.000Z
docs/smtpd.md
chas0amx/ansible-postfix
b129c57fdddf00447a715cccea0758878de22d0b
[ "Apache-2.0" ]
1
2022-03-02T10:17:23.000Z
2022-03-02T10:17:23.000Z
# `main.cf`

## smtpd

```yaml
postfix_smtpd:
  use_tls: true
  client_restrictions: []
  helo_restrictions: []
  sender_restrictions: []
  sender_login_maps: []
  recipient_restrictions: []
  relay_restrictions:
    - permit_mynetworks
    - permit_sasl_authenticated
    - defer_unauth_destination
  data_restrictions:
    - reject_unauth_pipelining
    - permit
  tls:
    auth_only: true
    cert_file: "/etc/ssl/certs/ssl-cert-snakeoil.pem"
    key_file: "/etc/ssl/private/ssl-cert-snakeoil.key"
    ca_file: "/etc/ssl/private/ssl-ca-snakeoil.cabundle"
    chain_files: []
    dh1024_param_file: ""
    eecdh_grade: auto
    cipherlist: []
    exclude_ciphers:
      - ECDHE-RSA-RC4-SHA
      - RC4
      - aNULL
      - DES-CBC3-SHA
      - ECDHE-RSA-DES-CBC3-SHA
      - EDH-RSA-DES-CBC3-SHA
    loglevel: 1
    mandatory_ciphers: high
    mandatory_protocols:
      - "!SSLv2"
      - "!SSLv3"
      - "!TLSv1"
      - "!TLSv1.1"
    protocols:
      - "!SSLv2"
      - "!SSLv3"
    received_header: true
    security_level: may
  sasl:
    auth_enable: false
    authenticated_header: true
    exceptions_networks: []
    local_domain: ""
    mechanism_filter: []
    path: "smtpd" # inet:dovecot:10001
    response_limit: 12288
    # Specify zero or more of the following:
    #   noplaintext      Disallow methods that use plaintext passwords.
    #   noactive         Disallow methods subject to active (non-dictionary) attack.
    #   nodictionary     Disallow methods subject to passive (dictionary) attack.
    #   noanonymous      Disallow methods that allow anonymous authentication.
    #   forward_secrecy  Only allow methods that support forward secrecy (Dovecot only).
    #   mutual_auth      Only allow methods that provide mutual authentication (not available with Cyrus SASL version 1).
    security_options:
      - noanonymous
    # tls_security_options: "$smtpd_sasl_security_options"
    type: "" # dovecot, cyrus
  milters: ""
  proxy_timeout: 600s
```
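Assuming the role maps these variables onto the corresponding `smtpd_*` parameters (an assumption about the role's templates, which are not shown here), the rendered `main.cf` would contain lines along these lines; the parameter names themselves are standard Postfix settings:

```
smtpd_tls_auth_only = yes
smtpd_tls_cert_file = /etc/ssl/certs/ssl-cert-snakeoil.pem
smtpd_tls_key_file = /etc/ssl/private/ssl-cert-snakeoil.key
smtpd_tls_security_level = may
smtpd_sasl_auth_enable = no
smtpd_sasl_security_options = noanonymous
smtpd_relay_restrictions = permit_mynetworks, permit_sasl_authenticated, defer_unauth_destination
smtpd_data_restrictions = reject_unauth_pipelining, permit
smtpd_proxy_timeout = 600s
```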
26.051948
106
0.658026
eng_Latn
0.664839
6fcfd947ae8e73d8aff83a339c4643cbd50c555e
583
md
Markdown
E.SwitchingInterVLAN/README.md
setrar/INF-1023
a902a3b875cce727088e905e63ecb37d6a9af185
[ "Apache-2.0" ]
3
2017-11-19T21:55:35.000Z
2019-12-28T22:10:05.000Z
E.SwitchingInterVLAN/README.md
setrar/INF-1023
a902a3b875cce727088e905e63ecb37d6a9af185
[ "Apache-2.0" ]
null
null
null
E.SwitchingInterVLAN/README.md
setrar/INF-1023
a902a3b875cce727088e905e63ecb37d6a9af185
[ "Apache-2.0" ]
3
2016-10-17T21:18:10.000Z
2020-10-15T20:23:48.000Z
# INF-1023 Switching Inter VLAN

To practice this example:

- use PacketTracer
- add 3 Cisco 2960 switches
- connect them with "crossover" cables
- run the text file

![alt tag](https://github.com/setrar/INF-1023/blob/master/E.SwitchingInterVLAN/SwitchingInterVLAN.png)

# Objective

Connect 2 VLANs through a router, commonly called "Router on a Stick".

## Commands

` show vlan `

` show int trunk `

` show int <Interface> switchport ` (e.g. ` show int f0/21 switchport `)

` show vtp status `

Note: Use Cisco Packet Tracer 6.2.0
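A "Router on a Stick" is normally configured with 802.1Q subinterfaces on the router's trunk-facing port. A minimal IOS sketch follows; the interface name, VLAN IDs 10/20, and addressing are made-up values for illustration, not taken from the exercise files:

```
interface GigabitEthernet0/0
 no shutdown
!
interface GigabitEthernet0/0.10
 encapsulation dot1Q 10
 ip address 192.168.10.1 255.255.255.0
!
interface GigabitEthernet0/0.20
 encapsulation dot1Q 20
 ip address 192.168.20.1 255.255.255.0
```

Each subinterface becomes the default gateway for its VLAN, and the switch port facing the router must be a trunk carrying both VLANs (verify with ` show int trunk `).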
21.592593
102
0.756432
fra_Latn
0.451128
6fd07347a18095db6f860807071f8c88f28c5af7
582
md
Markdown
post/2009/07/2009-07-19-neue-seite-zum-mobilen-journalismus/index.md
heinzwittenbrink/lostandfound
064a52d0b795d5ad67219d8c72db7d18201cbea5
[ "CC0-1.0" ]
null
null
null
post/2009/07/2009-07-19-neue-seite-zum-mobilen-journalismus/index.md
heinzwittenbrink/lostandfound
064a52d0b795d5ad67219d8c72db7d18201cbea5
[ "CC0-1.0" ]
null
null
null
post/2009/07/2009-07-19-neue-seite-zum-mobilen-journalismus/index.md
heinzwittenbrink/lostandfound
064a52d0b795d5ad67219d8c72db7d18201cbea5
[ "CC0-1.0" ]
null
null
null
---
title: "Neue Seite zum Mobilen Journalismus"
date: "2009-07-19"
tags:
  - "uncategorized"
---

I have set up a [page about mobile journalism](http://heinz.typepad.com/lostandfound/mojo.html "Lost and Found: MoJo"), _MoJo_ for short, to collect and organize information on the topic. Over the last few weeks, posts about journalistic work with mobile phones have been piling up in my feed reader, mostly in English. The topic will become more relevant here as well, even if, as with video journalism, it will at first mainly be the naysayers who discuss it.
64.666667
481
0.793814
deu_Latn
0.993245
6fd0e72201b4d8a49490916504aee83d823e00b0
46,908
md
Markdown
transcripts/32-engingeering-management.md
GK-Hynes/ladybug-website
09eb4e8ba92dd3eb91ce54515e121b0a929b1b07
[ "MIT" ]
115
2020-01-06T13:40:56.000Z
2022-02-08T05:39:37.000Z
transcripts/32-engingeering-management.md
GK-Hynes/ladybug-website
09eb4e8ba92dd3eb91ce54515e121b0a929b1b07
[ "MIT" ]
47
2020-01-06T13:20:11.000Z
2022-02-27T21:18:29.000Z
transcripts/32-engingeering-management.md
GK-Hynes/ladybug-website
09eb4e8ba92dd3eb91ce54515e121b0a929b1b07
[ "MIT" ]
55
2020-01-06T13:28:17.000Z
2022-03-19T02:37:09.000Z
**Ali** 0:00
In one of our previous episodes, we talked about the different jobs within tech, one of which is engineering management. Today, we're joined by the wonderful Amal Hussein, engineering manager at npm. And we're going to chat more in depth about engineering management. Let's get started.

**Unknown Speaker** 0:20
Welcome to the Ladybug Podcast. I'm Kelly. I'm Ali. And I'm Emma. And we're debugging the tech industry.

**Unknown Speaker** 0:28
Hey, Kelly, have you heard about this cool tool called AWS Amplify? Tell me about it. It's a suite of tools and services that enables developers to build full-stack serverless and cloud-based web and mobile apps. You get to use whichever framework or technology you want on the front end. That sounds cool. Will it help me get up and running with things like hosting? Yeah. Authentication? You betcha. Managed GraphQL? Totally. How about serverless functions, APIs, machine learning, chatbots, file storage? Yes to everything. Amplify has especially enabled traditionally front-end developers like yourself, Kelly, to be successful, because you can use your existing skill set to build real-world full-stack apps that in the past required deep knowledge around backend DevOps and scalable infrastructure. The Amplify Console also allows you to use a GitHub repository to deploy to a globally available CDN, with CI and CD built in. It's super cool. Where can I learn more? If you want to learn more about AWS Amplify, visit aws-amplify.github.io.

**Ali** 1:27
So Amal, tell us about yourself. What got you into engineering management?

**Amal Hussein** 1:31
Ali and Emma, first of all, thanks so much for having me on the show. I'm really excited to be talking about a new role for me. I still identify as a software engineer who's maybe crossed over to the dark side, or what can be known as the dark side to some.
But I was a software engineer working at Bocoup as a tech lead, and had been a project lead before in other roles. And, you know, working at Bocoup, there are lots of different types of things that you're doing on a project that really extend beyond typical engineering: you're doing a lot of product work, figuring stuff out, unblocking, managing up, sideways, etc. And managing up has been something that I've done successfully for a long time in my career. And it kind of led to, I think, an acknowledgement, and my leaning into just being comfortable with being a leader, being in charge of things. It was a tough acknowledgement for me, because walking away from day-to-day software engineering was a big decision. But ultimately, you know, I realized that as an engineering manager, especially depending on the role, you're still doing software engineering, you're just doing it through people. And you're still writing software sometimes, but you're just not responsible for the day-to-day delivery cycle.

**Emma** 3:10
Yeah, that would be a tough pill to swallow if you are kind of on the fence. I'm curious, like, how has that been for you? Is that harder, having to deal with people on a day-to-day basis as opposed to computers?

**Amal Hussein** 3:28
Um, I think people are way more complicated than machines. And yes, I would say that human problems are intersectional, and they're very different than, you know, the kind of binary problems that you run into when trying to scale a project or trying to debug a flaky test. So, you know, you have to really be adaptable.
And I think one of the biggest challenges is really understanding that everyone doesn't think like you, and so, you know, having to kind of calibrate to your team is something that's really important. And that's been a muscle that I'm still exercising, you know. But anyway, I'm new at this role. I joined npm a few months ago. And so it's a new role, new company, so lots of challenges there. But it's been a very interesting journey. And I have a really incredible team. That's very humbling. And it's been, I would say, a firehose experience. So, for those of you who haven't maybe seen or heard from me in a few months, I've been under the npm firehose of just really learning how to do this new job, but do it well, and also learning all things npm and, you know, the wonderful world that is the registry, which is a magical cave full of wonder, lots of mysteries.

**Ali** 4:57
That's really cool. I also made the transition from engineering to a more people-centric role with teaching, and can definitely agree that the people challenges stay challenging and stay different, whereas engineering challenges, I think, kind of repeat themselves to some extent and tend to be solvable in a more predictable way than people problems do. So I definitely relate to that, even though it's a little bit different teaching than doing strict management. As an engineering manager, what does your day-to-day schedule look like?

**Amal Hussein** 5:41
Now, that's a great question. It really varies: from, you know, debugging things with my team, to doing design reviews, to doing planning for our cadence, to talking to other stakeholders, to getting security review done. It really varies.
You really have to kind of exercise all aspects of the software delivery cycle, because you're really responsible for the end-to-end delivery of a thing. And so, you know, it also means stretching outside of your comfort zone, right? And also knowing that you're not the expert at everything, and it's very important that you lean on the experts, you know, and you fill in the gaps where you can. And so it's been amazing for me just how much more well-rounded I think I've become, and a lot more comfortable with, I would say, literally all aspects, you know, from QA to release, right? To inception, to ideation, whatever it is. There's just a full cycle that you're now involved in and that you're responsible for seeing through, you know. If you can imagine, like, a baton: different people have the baton at different times, but ultimately you're responsible for getting that baton across the finish line. And so I think that's been a very interesting shift for me, like the accountability factor and the buck-stops-here factor. Even when you're a tech lead, there's always someone else to blame, right? When you're an engineering manager, you're really responsible for the delivery and the output. And whether your team is aware of it or not, right, ultimately the accountability falls on you, not necessarily even your team. So I think my day is varied, and it's full of lots of different types of responsibilities.

**Emma** 7:48
That's awesome. Yeah, I always thought I wanted to get into management, but I loved your comment about managing people being intersectional and not binary, because that's something I never consciously had thought about before, I guess.
I always thought, you know, I love communicating with people, I think I would be a good manager, but it's not that cut and dry. And to your point, being a manager is hard; managing people is not an easy task. And I think that we perhaps can take our managers for granted occasionally, perhaps because we don't necessarily see all of the things that they're doing for us or see all the battles that they're fighting for us. What would you say is the hardest thing about being a manager?

**Amal Hussein** 8:37
So one thing I kind of want to rephrase a little bit is: I don't feel like I manage people, I support them, right? So ultimately, you're there to support them and get shit out of their way, and/or get shit into their way if that's what they're looking for. But really, I would say, especially for software engineers, who are incredibly talented, and really all knowledge workers (I think this is a rule that applies to all knowledge workers), people often know what they want. And so ultimately you're there to support them and guide them and sponsor them and, you know, mentor them. But it's different types of relationships: I have a different relationship with everyone on my team based on kind of where they are in their career, and where they are with what they want to do in the company. Some of my teammates I'm sponsoring more, you know, and advocating more for; some folks I'm doing more mentoring; some folks I'm doing more coaching. So it's a different kind of relationship with every person. So I think kind of calibrating myself, and the way I operate and the way I think and the way I would get stuff done, you know, calibrating that to someone else, is probably the hardest thing, right? Because people don't think like you, and they shouldn't, because that would be a bad thing, right?
If we all thought alike... there's a bias that comes with your thinking, you know. And so I think part of being a good manager, and one of the hardest things, is really being conscious of your bias, right, and making room for others in the way they solution and the way they problem-solve, and making it an inclusive environment where people feel comfortable sharing their opposing views, right, or opposing approaches to said thing. You know, so it's very important that everybody can bring their full selves to the table, and they don't feel rejected in any way. And so I think, you know, just creating that safe space, being conscious of your bias, these are all really challenging things. I think context switching is another challenge. You're doing a lot of context switching, and sometimes it's really exciting, and sometimes it's exhausting, you know. And so being conscious of how you schedule your day and what you're tackling at a given time is a huge challenge. Another challenge, I would say, is that your day gets hijacked, you know. And so you kind of start out your day thinking, okay, I'm going to accomplish A, B, C, D. And then you accomplish A and half of B, and that's because you were firefighting, or dealing with other problems. And so, like, knowing that your day is not your own. And that's a very big shift from being a software engineer, right? Where you're like, okay, here's my day, you know, I set the pace for myself. You have to really be flexible to other people hijacking your day, and, you know, for good things, right. But flexibility is key, I think, to staying sane and to, yeah, not having a Who Moved My Cheese moment every day.
But yeah, I mean, it's a hard job. I have a newfound respect for managers in general, I would say good managers, or people who strive to be good managers, because I've had plenty of bad and good managers, and, you know, it's easy to know the difference. But yes, it's a hard job. And being an engineering manager in particular, you're switching from, you know, something that's so binary to people, which are like the opposite of that, and kind of transitioning between software and people problems, both really complex in their own ways, is a big challenge. So I think it might be the hardest management job, to be honest, to be an engineering manager, but now I'm tooting my own horn.

**Emma** 12:57
No, I mean, I think it's wonderful, and to hear someone who's so self-aware and understands that people are not all the same, and we can't put them all in one box and expect them all to thrive, it's quite refreshing. It takes a special type of person to be a great manager. And we'll get a little bit more into that in just a little bit, but I'm curious what your management style is. So when we talk hands-on versus hands-off, I've had managers on both ends of the spectrum. And you had mentioned that you work remotely and you work with teams, you know, all over Europe and the US, and we'll talk more about remote working a little bit later. But what's your management style? Do you have set meetings with your employees, like once a week, or what does that look like?

**Amal Hussein** 13:44
Yeah, that's a great question. Thanks so much. So I have one-on-ones with my team every week, which maybe sounds excessive to some, or maybe not, but, you know, it's really important to kind of have that pulse of what's happening and kind of have a consistent communication channel that's there. But, you know, I have weekly team meetings and one-on-ones with my team.
And beyond that, you know, there are company-wide and engineering-wide meetings that happen on a weekly basis. So for a distributed company, we're all meeting and talking fairly often, like in a synchronous way. But then there's also being connected, you know, on Slack, and there are pairing sessions that my team sets up with each other, that I join sometimes, and, you know, so we're all kind of in various types of communication throughout the day and throughout the week. But in terms of hands-off versus hands-on, I would say I'm very much a hands-on manager, because I jump in to debug, pair, test, code review. I'm kind of alongside my team for every aspect of the delivery cycle, as well as all the other stuff, like figuring out stuff with stakeholders, and all the other stuff that's beyond the scope of my team's delivery. So, in a sense, I kind of have two jobs, you know: in one I feel like I'm an engineer on the team, and the other job is, you know, kind of being an external representative for my team throughout the company. But in terms of my style, I'm very much still in discovery mode. I think I'm still figuring a lot out about, you know, what my preferences are, and I think also what the challenges are for managing such a diverse and distributed team. So I would say, you know, we can check back in, but the promise is not resolved on that yet.

**Ali** 16:10
DigitalOcean offers the simplest, most developer-friendly cloud platform. It's optimized to make managing and scaling apps easy, with an intuitive API, multiple storage options, integrated firewalls, load balancers, and more. From predictable pricing to flexible configurations to world-class customer support, you'll get access to all the infrastructure services you need to grow.
Plus, DigitalOcean's community provides over 2,000 tutorials to help you stay up to date on the latest open source software, languages, and frameworks. Get started on DigitalOcean for free with a free $100 credit at do.co/ladybug. That's d-o dot c-o slash ladybug. Just for our listeners' sake, how long have you been working in engineering management?

**Amal Hussein** 17:00
Um, let's see, almost five months. So not that long. And it's been a firehose experience, because, you know, being an engineering manager at npm, there's a lot to learn, a lot of things to come up to speed with about, you know, how things work internally at the registry. And just to give folks some context, I work on a team that works on the registry. So it's not so much the CLI, or the thing that you kind of think of when you think of npm. We work on the registry, where all of the actual packages live, you know, so it's the machine that hosts and runs; it's like the time machine, basically, where all the packages live. And so we're much more, you know, internal-facing, not so much external. But our touch points are the CLI, and, you know, any other interface, any other clients like yarn, or even, you know, the Java community has some interfaces directly to the registry. So, you know, we manage the endpoints that allow other package managers to connect with the registry.

**Ali** 18:33
Awesome. We've talked a little bit about management style and the hard parts of being an engineering manager, but I want to know why you made the decision to transition, and what the best thing about being an engineering manager is.

**Amal Hussein** 18:49
Yeah, it's a great question.
I made the decision to transition because I noticed a pattern in all of my projects and jobs where, you know, I always ended up being the go-to person for something, or I ended up being the project lead or tech lead for something. There was just kind of a trend, and, you know, I feel really comfortable stepping into roles where there's uncertainty and there's lots to figure out. Those kinds of things didn't faze me. I also was really comfortable working with stakeholders and managing up or sideways, depending on what the situation needed. So I think it was a realization, through talking with other people in this role, that, hey, maybe this would actually be a good fit for me. It was really important that I was somewhere that had a hands-on engineering management culture, because this is my first role, and I'm still not sure if I want to do this full-time, like forever and ever and ever. Like, I might want to go back to being a principal engineer in a few years, or, you know, maybe do a pendulum swing, right: do management for a little while, you know, go back to hands-on. I haven't really decided if this is a permanent track for me yet. But I knew that it was something that I wanted to try. And, you know, I was fortunate enough to get the opportunity to try it at the craziest place in the world to try it. So yeah, you know, they took a chance on me, and I'm learning a ton and really enjoying it so far. But, if I'm honest, I can totally see myself in a couple of years saying, you know what, I think I want to be a principal engineer.
For a little while and kind of do that pendulum swing, which is something that's very common, you see that a lot with engineering managers, they'll, they'll go back to writing software full time for a few years and come back into it because it you know, it's, I think it's important to, to if you're staying at this level anyway to make sure that you're you're not, you know, you don't have any attrition with your, with your software, right, you know, skills. And I think you're more effective. I don't know, the closer you are to the code, the more effective you can be even just guiding your team. But, you know, we'll see I might want to do that and or move into like an executive role in a few years as well. I'm still it's still TBD for me, I wish I had like a better answer, but **Emma** 21:49 that's quite all right. Yeah. No, I'm I'd be looking forward to chatting with you again, and you know, several months or a year and seeing what were your heads up on that but i think it's it's good to note to That you can do the so called pendulum swing and go back to development if something that you want. You know, with any role you can get burned out if it's if it's a ton of work, and it's okay to switch different domains every so often. So it was nice to hear you say that. My question is, do you think someone should be an engineer prior to being a manager? Do you think that that's helped you? And do you think it's something that engineering managers should look to have on their resume before they switch into management? **Amal Hussein** 22:31 Yeah, that's an awesome question. 
I just realized I didn't answer all of these questions, about what's the best thing about being a manager — I'll tell you that first. Best part, by far: I think the best thing about being a manager is just the impact that you can have, and like, especially for me as a woman and person of color — I mean, I have both, you know, people on my team that identify as women and people of color — and so it's just a phenomenal thing for me to be able to actually, you know, I would say relate to them on a different level, which isn't very common in our industry. But specifically, like just being able to kind of push people, or work with people on their goals and identify, like, what is the thing that you really want to achieve? And okay, cool, let's make it happen — and then making it happen, you know, just that gratification of, I think, really actually being able to move the needle in someone's life and career has been incredibly gratifying. And that's the thing that, for me, like, you know, makes me want to do this forever. But I think that the challenges of middle management in general are pretty burnout prone, right? Like, I think it's challenging to stay in a middle management role for several years, because, you know, you're in a very tough spot where you're managing expectations from the exec team and then you're managing expectations from your team. And it's tough — like, being a middle manager is not easy. And so I think kind of, you know, continuing to take all the lessons that you learned as a middle manager and then moving into an executive role where you can make an impact at a team level makes sense. Or, you know, going back into a principal engineer role where you can, you know, make things more effective at the team level — that's another out.
So back to your your original question. No, I do I remember it, which was, um, should someone be an engineer prior to being being an engineer manager? I would say absolutely. If you can't do the job of people on your team, like you shouldn't, I don't think you should be you. I don't know if you should be leading them by I don't know that. That feels like really conservative. Maybe thing to say, but I think you You have to have some element of relatability to team. So I would say, yeah, you need to definitely be an engineer and you have to understand what the challenges are, that come with being an individual individual contributor at, you know, and knowing what those challenges are allows you too, you know, I think be more effective at guiding your team through challenges, solutions, etc. So if you can't relate, you can't like, you know, yeah, if you can't relate to them, you can't be effective. And, you know, then I'm not sure that bill like it would be good for your team, as well, like, you know, to have to have someone that doesn't have experience writing software. But, but I guess it doesn't mean that you're going to be a bad manager. It just there's going to be really there's aspects of relatability I think, you know, that are going to be challenging and depending on how the company is structured like it might be difficult for you to even do your job? Well, because, you know, most engineering managers I know are somewhat hands on. So if you're if you don't understand the software delivery cycle or what it means to, you know, do a code review, then, you know, like, there's parts of your job that I think would be challenging to do. But yeah, I don't know. What do you think? What do you think? **Emma** 26:25 I would say yes, because I have had manager I had a manager previously, and I thought she was a great person, but she was a design director. 
And this was really difficult for me to advance my career, because she didn't fully understand the software engineering role. And as a result, that definitely hindered my ability to achieve a promotion. It was one of those things where I had to meet the software engineering criteria to move forward, but she wasn't able to accurately assess whether or not I had met those. So I definitely think it helps — I don't want to just sound like gatekeeping at all, right? Like, it's more than just knowing the technical side of things to be a great manager, but it definitely is a huge factor. I've also had engineering managers who, in the past, were so caught up with wanting to code all the time that they kind of shirked their responsibilities to their employees, which was really, really hard for me when I needed support. So, yes and no — I think it's a double-edged sword, almost. You can't just say yes or no. I definitely do think it helps, obviously, but it's more than just knowing the tech stack. It's also making sure that, at the end of the day, you're putting your people first. **Amal Hussein** 27:39 Yeah, you know, and really, I think that's — I'm glad you said that, because I think that's another challenge of being an engineering manager, or especially a new one: you know, sometimes you can really get focused on the technical challenges, and, you know, you have to really kind of pull your head out of there and remember, like, the people challenges are the priority, always. You know, and you kind of have to really remind yourself, like, you know, this is people first — like, fix the people problem. The tech stuff can wait, right? The technical challenges are secondary to the people challenges, right? And so, yeah, I can totally relate to that. And I'm sorry to hear that you had that experience, by the way.
**Emma** 28:26 That's okay. I mean, it kind of helped me realize what I want and need, which I think is a good thing. **Amal Hussein** 28:32 You know, so part of management is, like — it's a two-way street, right? And sometimes it's really important for people to give you feedback, right? And not only is it good for them to give it to you, but you should be soliciting it as well. You know, you should constantly be asking for feedback, and, like, having a culture of feedback on your team is going to be what makes you and your team successful, right? Because if you can tell each other what sucks, or what's going well, or what can be improved, you're going to constantly be in a state of continuous improvement, right? But if there isn't that open-door culture, then there's constantly back-channeling, and that's not going to be progress for anybody, right? So ultimately, like, you can have the best manager in the world, but if the team is not — um, you know, there are certain things that need to exist within the team as well in order for the entire team to be effective, right? Like, it's not like you're Superwoman or Superman that's going to come in and, like, save the day. It really does take two to tango — or, honestly, it takes, you know, a number of people to tango. **Ali** 29:50 Absolutely. Awesome. Well, I have a kind of follow-up question to what we've been discussing, and I think a common concern that people have when they're maybe deciding whether they're going to transition to engineering management, especially as women, is like, whether they'll still be able to write code and whether they'll still be perceived as technical.
And I don't know if that's like, a concern that you've had or if you do feel like you're still technically progressing and how maybe you you work on that. **Amal Hussein** 30:25 Yeah, that's a great question. So I guess I feel like I'm technically progressing because I have a much I feel like I have a wider, I don't know, like a bird's eye view into like, lots of different things that are happening within the company. And like, Okay, this thing is happening this way. Interesting, right? So there are more kind of, I would say, things that I'm aware of from a real sense, but you know, and I write code with my team, I code review I don't like I'm not primarily responsible for delivery. software, but there's things that I work on that are like, Oh, I can unblock you with this thing, or, oh, I can help you debug this thing. I would say I'm not, no, no, I still have side projects that I work on outside of work. But I don't feel like I'm like regressing per se. I think there's, I would say that maybe my muscle for like, how fast I can solve a problem, right? Or how fast I can maybe write code is like, slowly, slowly getting like that, right? There's like a slow attrition. But in terms of like, actual software, I mean, I feel like I'm constantly solving software problems, even though I'm not necessarily writing the code. So I don't feel there's like an attrition there. I don't know it's, I would say like I'm maybe I can check check back in in the year. But I feel like I'm constantly like thinking, thinking or Teaching or, you know, coming up with software solutions. So I have to say, like, I really do miss, like being primarily responsible for just that. But I but I like two weeks later, I would be completely bored. So I don't know, you know, it's like, you're never happy, like you have to just just I think, make make your own variety. But, but yeah, but I still write code just not as much. 
And, and yes, I do miss it, but, you know, getting to do all these other things is, I think — I think there are a hundred people that can write code better than I can. I mean, I'm a pretty good engineer, but there's people who can do that. But those people are not maybe willing to do the other pieces, which I can do, right? Like, there's really important people problems and process problems to solve that people who can write software are not necessarily even interested in solving, right? So I can kind of do both, and I think that's the advantage that I have. And it's a sacrifice, but it's the path that I've chosen. And, you know, I have to live with it for now. So — **Emma** 33:13 How have you been learning about management? I don't know — do you read books? And if so, have you read any books about management? **Amal Hussein** 33:20 Yeah, I've been, like, doing some self-study, you know? I would say there's books like, I don't know, The Mythical Man-Month and Radical Candor and The Five Dysfunctions of a Team. What else have I been reading? Lara Hogan — I think, I forgot her last name exactly — there's a new book that she came out with a few months ago called Resilient Management. I've been reading that; it's pretty good, a short and sweet book that's geared towards new engineering managers. Yeah, I mean, there's lots of good books, and I'm happy to, like, recommend some more. But yeah, I've been trying to read a few hours a week, and Google has a pretty good training program for new managers that I started going through a few months ago. You know, there's, I would say, a decent amount of resources out there, but ultimately the best trainer is going to be a mentor, which I have — like, a few mentors at work and outside of work.
Those have really been, I think, like, just really invaluable resources, because, you know, I can go to them with, like, a real problem and say, like, okay, how would you fix this, right? Or, like, how do I deal with that thing? **Emma** 34:46 I have a couple book recommendations for you. My manager sent me this one for Christmas — it's called Orbiting the Giant Hairball, and it was written by a guy who was very successful and worked for Hallmark for years. And it talks about how to promote creativity when you're working in a large company, and how to still, you know, benefit from being in a large company without getting lost. So that one's really, really good. And it's written beautifully, because it's not like a traditional book format — like, the pages have different layouts and artwork; it's really, really cool. And then the second is Creativity, Inc., and it was written by one of the guys who founded Pixar. And it's both a history of Pixar and how they merged with Disney, but also it talks a lot about management, and how, you know, they manage their people effectively. So I think both of those are really great. **Amal Hussein** 35:38 Awesome. Radical Candor is one that I'm reading right now, actually — that's pretty solid. It's, you know, it's about communication and giving honest feedback. And, you know, just, like, a phenomenal thing for not just engineering, not just, like, managers, right? I think everyone should kind of, like, understand, like, you know, how to give good feedback. And I guess the other thing that's on my list is No Hard Feelings: The Secret Power of Embracing Emotions at Work. That was recommended by a colleague of mine, so that's next up on my list. But yeah, I mean, you constantly have to be — I mean, management is definitely — people go to school for it.
And you can't understate how important it is to like, take the time to study it, right? engineers, you have a lot of bias, like where you're like, Hey, I, you know, I can do this like, like npm install, like, you know, MPM install management, like, management alert, you know, knowledge but yeah, you have to you have to really take the time to learn. **Ali** 36:53 Awesome. Well, another conversation that we were having before we pressed record was about working remotely. And I want to follow up on that. What is it like building personal relationships with employees while you're working remote? Oh, **Amal Hussein** 37:07 great question. Um, so this is actually my first time working on a distributed team. I was a fully distributed team. I've worked with people and other offices before. And it you know, my first week, I was like, I want I felt like I was going to be really lonely and like, Oh my god, I'm gonna be so alone. But literally like eight to nine hours a day, I'm connected with various people on slack and zoom calls, you know, team meetings, etc. So there's always like a sense of being connected. But you're really forced to, I think, be a better communicator, you know, and especially with like, timezone differences, like you have to, you have to feel comfortable just putting things in writing versus like having a conversation about it with people and really putting things in writing, both in slack or just, you know, in an actual like, markdown document. So it, you know, it really I, you know, I don't know, it just allows more people to chime into the conversation. It's like a better shelf life. So even if you were all working in the same office, you know, I think the skills that I've picked up working for the past few months as a remote employee are things their skills that are going to benefit me for the rest of my, of my life. But there are serious challenges, right? 
Like, I like not being able to go out to lunch with your team or not being able to, like buy food for people, which is like, my, my favorite way to, like, thrive and be, you know, bribe people into becoming friends with me is to like feed them. So I think that that that's the challenge, but really, we're always connected. And you know, they're like, there are like yearly initiatives to like technique where everyone gets together to do like an all hands or, you know, that's the thing that happens, but, but I would say that like there are many upsides to being remote. But there are, you know, the downsides are like, if you like eating with people and, you know, occasionally like giving people real high fives. You don't get to do that, right? But the trade off is like you get a lot of freedom. And you can kind of work from anywhere in the world. Even though most days I'm just like working from my house, right? It's funny how everyone's like, yeah, you can work anywhere in the world and then like you, you're still home like 80% of the time. But I don't know it's supposedly the future of work, you know. So how about you are you are you are you all like remote ease or **Ali** 39:52 I am still right now for another month or so I think so. It's adventure for me I've, I was a remote engineer for the first half of the year. And that was incredibly hard for me. Just felt very isolated and hard to connect with the people that I was working with. But then now I teach remote. And I think that is a lot easier for feeling connected and being connected with the people that I work with and all that so I'm kind of a up and down experience there. But I'm moving back to working in person next month. So **Amal Hussein** 40:33 yeah, I mean, I would say would I prefer to be in person like, absolutely, um, you know, I, but I think I think the experience of working remotely is one that I don't know, everyone should try. You should try at least once. Right? **Ali** 40:47 Definitely. 
We had a whole episode on working remote and I think it's amazing for a lot of people at a very different stage of life, then. I am at I think, but I think it's really hard. When you're maybe at a different stage of life where you're maybe more, you don't have a family yet, **Amal Hussein** 41:06 which is my situation. But I guess I'm, to be clear, though, I think NPM is still very new at being fully remote. I mean, they've always had distributed like, teams, but they, you know, I think this is now like, the first time that the company has been fully distributed. And, you know, there's a lot of responsibility that the company has in, like, coming up to speed with, like, what it means to build a good remote culture, you know, and so that's where it kind of it, it doesn't, it's not something that should just fall on the, like individual employees, like it really needs to be there's a culture that needs to be pushed from, you know, from a central place and you know, that then kind of norms and, you know, guidelines that I think the company sets for, like how people should you know, what, what the expectations are for people like when they're communicating, etc. So, you know, like, if we're companies that are going remote like It's work it's not just like give people a laptop and just say go home like you have to still create a culture of like what it means to like communicate, collaborate, etc and give people the tools and the means to do that you know? **Emma** 42:13 Absolutely I and I, as you know kind of one of the last questions What advice would you give to people looking to become an engineer manager? **Amal Hussein** 42:23 I would say you need to kind of have an honest conversation with yourself on like, what it is why it is that you want to do this right so are you interested in supporting people are you interested in squashing like an combating fires as they come up? Are you okay and comfortable with lots of context switching? 
Are you Yeah, do you enjoy mentoring and supporting and coaching and sponsoring like people like is because that's a huge part of their job. You know, are you okay with like, not being the best engineer in the room anymore, right? Because that's something that I had to be okay with, right? Because you're now really relying on your team to come up with good decisions and you're there to, like, guide them and support them. But, you know, you have to you have to kind of give people autonomy, you know, to make make those decisions. So, I, you know, there's, there's a diff, there's a big differences between someone who's a technical lead or a principal engineer and an engineering manager. And but there's, there's a lot of overlap, but there's, there's, there's differences. And so, you know, you really need to kind of dig deep to, to kind of understand, are you trying to be a tech lead? Do you are you trying to lead teams, you know, technically or are you really trying to kind of manage teams, you know, and manage the process that the team uses and, you know, manage the direction for the team like there's, there's like, there's differences there and there's overlap. So it's a it's a big decision, I wouldn't go into it lightly. And, but I would say that like if, you know, like, if it doesn't work out, like, it's, it's good to know, like you should give yourself like 30 days or sorry, 90 days and, you know, you can fire yourself after 90 days if that's the thing that you can you can set up a contract with yourself, right? Um, so yeah, I I would say talk to other engineering managers and understand that the role is different at every company. It's not the same. And so just do your research. And, you know, get a mentor and I would say, if there's an opportunity for you to do it, within the same company, that would probably be the best move. 
Because you're, you're not learning about the company as well as learning how to be an engineering manager, which is the challenge that I have. You have you have the you know, you You can focus on just like, you're the new role versus like the new, the new, the new setting, right? So if you have an opportunity to do it, like within the same company, that's probably the best way to, to make that transition. I guess, you know, what I should say this is that, like, the, you know, one of the most surprising aspects of being an engineering manager has been that, like, you know, when things are good, they're extra good, you know, like, you feel the amp is amplified. And I think when things are challenging, or they're bad, like, that's also amplified because, you know, you're now responsible for like, more than just yourself. It's not one x, it's like five x or four x, right? And so, you know, just that's something to keep in mind is that like, there's it's a responsibility and you have to kind of you have to be okay with the good and the bad, you know, because it's not always good like, and I think that was like the biggest surprise for me. It was like Wow, like, you know, when if things are challenging, like you really and I'm someone that really, really cares about my team, you know, so you really you really take it to heart you really feel it and you really like, okay, really, I really want to improve this, you know, it's much different than like when you're an individual contributor, like, you know, just you're like, Okay, this sucks, but whatever, like, not my problem, you know, not my monkeys, not my circus, but now it's they are your monkeys and it is your circus. **Emma** 46:28 Awesome. So, first of all, thank you so much for spending time with us. It was a pleasure to get to talk with you again. And I hope we get to cross paths soon in the future as well. Where can we find more? Where can we find more about you? 
Where can we find you on the internet? **Amal Hussein** 46:44 At nomadtechie, all the places — nomad techie, with an IE. And yeah, it's been an absolute pleasure speaking with you as well. And, you know, yeah, it's been great. Ali's been quiet — I don't know, maybe my breath stinks or something. **Ali** 47:05 No, this has been amazing. Thank you so much — I learned so much about engineering management specifically in this discussion. So thank you so much. Want to do a quick round of shout outs before we leave? So Amal, do you want to start with your shout out? Yes, sure. **Amal Hussein** 47:22 I recently have been spending time with Henry Zhu — I was hosting him at my house — and I want to give him a shout out. He's the kind of lead maintainer of Babel and has just shown a lot of tremendous leadership, you know, that has kind of let that project blossom. It's really the backbone of a lot of modern JavaScript web applications. And so we need to get some more love to Babel, and all the cool, like, presets that have been coming out as well. So yeah, shout out to Babel, and Henry Zhu in particular. **Ali** 48:04 He's amazing. And that is such an important project. **Emma** 48:10 Emma, do you have any shout outs? I do. So, by the time this episode airs, I will have completed my recording for a lynda.com, or LinkedIn Learning, course, which I'm super excited about. It's about building a technical resume, and I just want to shout out the team that I got to work with — the producer has been absolutely a pleasure to work with. So if anyone from LinkedIn Learning or Lynda is listening, you guys have amazing employees, so shout out to you. What about you, Ali? **Ali** 48:40 I have another book recommendation. So Emma actually gave me this recommendation over Christmas break, and it was amazing: The Seven Husbands of Evelyn Hugo. I definitely recommend reading it.
I feel like our show is half **Unknown Speaker** 48:55 book- **Ali** 48:56 focused, half tech focused at this point. Did you like it just as much as Daisy Jones & The Six? Yeah, I love both of them. They're so good — some of my favorites. And the audiobooks for them are so good too, like, really well produced. And Daisy Jones especially has a full cast on it, which is really cool. So I highly recommend that. Some books kind of lose their amazingness in audio format, but these feel like they're even better. Awesome. Well, if you liked this episode, tweet about it — we'll select one tweeter each week to win a Smashing Magazine book. And we post new podcasts every Monday, so make sure to subscribe to be notified. Please leave a review as well — it allows other people to find out about the show, and it just feels amazing when we read them. And thank you again so much to Amal for joining us today. **Unknown Speaker** 49:45 Yeah, thank you. **Amal Hussein** 49:47 Pleasure to be on the show. Thank you so much.
_listings/google-cloud-dataproc/apis.md
streamdata-gallery-organizations/google-cloud-dataproc
89815874003c4115b010aad5724cabe50968f3b3
[ "CC-BY-3.0" ]
---
name: Google Cloud Dataproc
x-slug: google-cloud-dataproc
description: Use Google Cloud Dataproc, an Apache Hadoop, Apache Spark, Apache Pig, and Apache Hive service, to easily process big datasets at low cost. Control your costs by quickly creating managed clusters of any size and turning them off when you're done. Cloud Dataproc integrates across Google Cloud Platform products, giving you a powerful and complete data processing platform.
image: http://kinlane-productions.s3.amazonaws.com/api-evangelist-site/company/logos/dataproc.png
x-kinRank: "9"
x-alexaRank: "0"
tags: Google Cloud Dataproc
created: "2018-08-30"
modified: "2018-08-30"
url: https://raw.githubusercontent.com/streamdata-gallery-organizations/google-cloud-dataproc/master/_listings/google-cloud-dataproc/apis.md
specificationVersion: "0.14"
apis:
- name: Google Cloud Dataproc - Get Region Clusters
  x-api-slug: v1projectsprojectidregionsregionclusters-get
  description: Lists all regions/{region}/clusters in a project.
  image: http://kinlane-productions.s3.amazonaws.com/api-evangelist-site/company/logos/dataproc.png
  humanURL: https://cloud.google.com/dataproc/
  baseURL: ://dataproc.googleapis.com//
  tags: Google APIs, Data, Stack Network, API Service Provider, API Provider, Databases, Deployments, Profiles, Relative Data, Service API
  properties:
  - type: x-postman-collection
    url: https://raw.githubusercontent.com/streamdata-gallery-organizations/google-cloud-dataproc/master/_listings/google-cloud-dataproc/v1projectsprojectidregionsregionclusters-get-postman.md
  - type: x-openapi-spec
    url: https://raw.githubusercontent.com/streamdata-gallery-organizations/google-cloud-dataproc/master/_listings/google-cloud-dataproc/v1projectsprojectidregionsregionclusters-get-openapi.md
- name: Google Cloud Dataproc - Create Cluster
  x-api-slug: v1projectsprojectidregionsregionclusters-post
  description: Creates a cluster in a project.
  image: http://kinlane-productions.s3.amazonaws.com/api-evangelist-site/company/logos/dataproc.png
  humanURL: https://cloud.google.com/dataproc/
  baseURL: ://dataproc.googleapis.com//
  tags: Google APIs, Data, Stack Network, API Service Provider, API Provider, Databases, Deployments, Profiles, Relative Data, Service API
  properties:
  - type: x-openapi-spec
    url: https://raw.githubusercontent.com/streamdata-gallery-organizations/google-cloud-dataproc/master/_listings/google-cloud-dataproc/v1projectsprojectidregionsregionclusters-post-openapi.md
- name: Google Cloud Dataproc - Delete Cluster
  x-api-slug: v1projectsprojectidregionsregionclustersclustername-delete
  description: Deletes a cluster in a project.
  image: http://kinlane-productions.s3.amazonaws.com/api-evangelist-site/company/logos/dataproc.png
  humanURL: https://cloud.google.com/dataproc/
  baseURL: ://dataproc.googleapis.com//
  tags: Google APIs, Data, Stack Network, API Service Provider, API Provider, Databases, Deployments, Profiles, Relative Data, Service API
  properties:
  - type: x-openapi-spec
    url: https://raw.githubusercontent.com/streamdata-gallery-organizations/google-cloud-dataproc/master/_listings/google-cloud-dataproc/v1projectsprojectidregionsregionclustersclustername-delete-openapi.md
- name: Google Cloud Dataproc - Get Cluster
  x-api-slug: v1projectsprojectidregionsregionclustersclustername-get
  description: Gets the resource representation for a cluster in a project.
  image: http://kinlane-productions.s3.amazonaws.com/api-evangelist-site/company/logos/dataproc.png
  humanURL: https://cloud.google.com/dataproc/
  baseURL: ://dataproc.googleapis.com//
  tags: Google APIs, Data, Stack Network, API Service Provider, API Provider, Databases, Deployments, Profiles, Relative Data, Service API
  properties:
  - type: x-openapi-spec
    url: https://raw.githubusercontent.com/streamdata-gallery-organizations/google-cloud-dataproc/master/_listings/google-cloud-dataproc/v1projectsprojectidregionsregionclustersclustername-get-openapi.md
- name: Google Cloud Dataproc - Update Cluster
  x-api-slug: v1projectsprojectidregionsregionclustersclustername-patch
  description: Updates a cluster in a project.
  image: http://kinlane-productions.s3.amazonaws.com/api-evangelist-site/company/logos/dataproc.png
  humanURL: https://cloud.google.com/dataproc/
  baseURL: ://dataproc.googleapis.com//
  tags: Google APIs, Data, Stack Network, API Service Provider, API Provider, Databases, Deployments, Profiles, Relative Data, Service API
  properties:
  - type: x-openapi-spec
    url: https://raw.githubusercontent.com/streamdata-gallery-organizations/google-cloud-dataproc/master/_listings/google-cloud-dataproc/v1projectsprojectidregionsregionclustersclustername-patch-openapi.md
- name: Google Cloud Dataproc - Get Cluster Diagnostic
  x-api-slug: v1projectsprojectidregionsregionclustersclusternamediagnose-post
  description: Gets cluster diagnostic information. After the operation completes, the Operation.response field contains DiagnoseClusterOutputLocation.
  image: http://kinlane-productions.s3.amazonaws.com/api-evangelist-site/company/logos/dataproc.png
  humanURL: https://cloud.google.com/dataproc/
  baseURL: ://dataproc.googleapis.com//
  tags: Google APIs, Data, Stack Network, API Service Provider, API Provider, Databases, Deployments, Profiles, Relative Data, Service API
  properties:
  - type: x-openapi-spec
    url: https://raw.githubusercontent.com/streamdata-gallery-organizations/google-cloud-dataproc/master/_listings/google-cloud-dataproc/v1projectsprojectidregionsregionclustersclusternamediagnose-post-openapi.md
- name: Google Cloud Dataproc - Get Region Jobs
  x-api-slug: v1projectsprojectidregionsregionjobs-get
  description: Lists regions/{region}/jobs in a project.
  image: http://kinlane-productions.s3.amazonaws.com/api-evangelist-site/company/logos/dataproc.png
  humanURL: https://cloud.google.com/dataproc/
  baseURL: ://dataproc.googleapis.com//
  tags: Google APIs, Data, Stack Network, API Service Provider, API Provider, Databases, Deployments, Profiles, Relative Data, Service API
  properties:
  - type: x-openapi-spec
    url: https://raw.githubusercontent.com/streamdata-gallery-organizations/google-cloud-dataproc/master/_listings/google-cloud-dataproc/v1projectsprojectidregionsregionjobs-get-openapi.md
- name: Google Cloud Dataproc - Delete Job
  x-api-slug: v1projectsprojectidregionsregionjobsjobid-delete
  description: Deletes the job from the project. If the job is active, the delete fails, and the response returns FAILED_PRECONDITION.
  image: http://kinlane-productions.s3.amazonaws.com/api-evangelist-site/company/logos/dataproc.png
  humanURL: https://cloud.google.com/dataproc/
  baseURL: ://dataproc.googleapis.com//
  tags: Google APIs, Data, Stack Network, API Service Provider, API Provider, Databases, Deployments, Profiles, Relative Data, Service API
  properties:
  - type: x-openapi-spec
    url: https://raw.githubusercontent.com/streamdata-gallery-organizations/google-cloud-dataproc/master/_listings/google-cloud-dataproc/v1projectsprojectidregionsregionjobsjobid-delete-openapi.md
- name: Google Cloud Dataproc - Get Job
  x-api-slug: v1projectsprojectidregionsregionjobsjobid-get
  description: Gets the resource representation for a job in a project.
  image: http://kinlane-productions.s3.amazonaws.com/api-evangelist-site/company/logos/dataproc.png
  humanURL: https://cloud.google.com/dataproc/
  baseURL: ://dataproc.googleapis.com//
  tags: Google APIs, Data, Stack Network, API Service Provider, API Provider, Databases, Deployments, Profiles, Relative Data, Service API
  properties:
  - type: x-openapi-spec
    url: https://raw.githubusercontent.com/streamdata-gallery-organizations/google-cloud-dataproc/master/_listings/google-cloud-dataproc/v1projectsprojectidregionsregionjobsjobid-get-openapi.md
- name: Google Cloud Dataproc - Update Job
  x-api-slug: v1projectsprojectidregionsregionjobsjobid-patch
  description: Updates a job in a project.
image: http://kinlane-productions.s3.amazonaws.com/api-evangelist-site/company/logos/dataproc.png humanURL: https://cloud.google.com/dataproc/ baseURL: ://dataproc.googleapis.com// tags: Google APIs, Data, Stack Network, API Service Provider, API Provider, Databases, Deployments, Profiles, Relative Data, Service API properties: - type: x-openapi-spec url: https://raw.githubusercontent.com/streamdata-gallery-organizations/google-cloud-dataproc/master/_listings/google-cloud-dataproc/v1projectsprojectidregionsregionjobsjobid-patch-openapi.md - name: Google Cloud Dataproc - Cancel Job x-api-slug: v1projectsprojectidregionsregionjobsjobidcancel-post description: Starts a job cancellation request. To access the job resource after cancellation, call regions/{region}/jobs.list or regions/{region}/jobs.get. image: http://kinlane-productions.s3.amazonaws.com/api-evangelist-site/company/logos/dataproc.png humanURL: https://cloud.google.com/dataproc/ baseURL: ://dataproc.googleapis.com// tags: Google APIs, Data, Stack Network, API Service Provider, API Provider, Databases, Deployments, Profiles, Relative Data, Service API properties: - type: x-openapi-spec url: https://raw.githubusercontent.com/streamdata-gallery-organizations/google-cloud-dataproc/master/_listings/google-cloud-dataproc/v1projectsprojectidregionsregionjobsjobidcancel-post-openapi.md - name: Google Cloud Dataproc - Submit Job x-api-slug: v1projectsprojectidregionsregionjobssubmit-post description: Submits a job to a cluster. 
image: http://kinlane-productions.s3.amazonaws.com/api-evangelist-site/company/logos/dataproc.png humanURL: https://cloud.google.com/dataproc/ baseURL: ://dataproc.googleapis.com// tags: Google APIs, Data, Stack Network, API Service Provider, API Provider, Databases, Deployments, Profiles, Relative Data, Service API properties: - type: x-openapi-spec url: https://raw.githubusercontent.com/streamdata-gallery-organizations/google-cloud-dataproc/master/_listings/google-cloud-dataproc/v1projectsprojectidregionsregionjobssubmit-post-openapi.md - name: Google Cloud Dataproc - Delete Operation x-api-slug: v1name-delete description: Deletes a long-running operation. This method indicates that the client is no longer interested in the operation result. It does not cancel the operation. If the server doesn't support this method, it returns google.rpc.Code.UNIMPLEMENTED. image: http://kinlane-productions.s3.amazonaws.com/api-evangelist-site/company/logos/dataproc.png humanURL: https://cloud.google.com/dataproc/ baseURL: ://dataproc.googleapis.com// tags: Google APIs, Data, Stack Network, API Service Provider, API Provider, Databases, Deployments, Profiles, Relative Data, Service API properties: - type: x-openapi-spec url: https://raw.githubusercontent.com/streamdata-gallery-organizations/google-cloud-dataproc/master/_listings/google-cloud-dataproc/v1name-delete-openapi.md - name: Google Cloud Dataproc - Get Operation State x-api-slug: v1name-get description: Gets the latest state of a long-running operation. Clients can use this method to poll the operation result at intervals as recommended by the API service. 
image: http://kinlane-productions.s3.amazonaws.com/api-evangelist-site/company/logos/dataproc.png humanURL: https://cloud.google.com/dataproc/ baseURL: ://dataproc.googleapis.com// tags: Google APIs, Data, Stack Network, API Service Provider, API Provider, Databases, Deployments, Profiles, Relative Data, Service API properties: - type: x-openapi-spec url: https://raw.githubusercontent.com/streamdata-gallery-organizations/google-cloud-dataproc/master/_listings/google-cloud-dataproc/v1name-get-openapi.md - name: Google Cloud Dataproc - Start Cancellation x-api-slug: v1namecancel-post description: Starts asynchronous cancellation on a long-running operation. The server makes a best effort to cancel the operation, but success is not guaranteed. If the server doesn't support this method, it returns google.rpc.Code.UNIMPLEMENTED. Clients can use Operations.GetOperation or other methods to check whether the cancellation succeeded or whether the operation completed despite cancellation. On successful cancellation, the operation is not deleted; instead, it becomes an operation with an Operation.error value with a google.rpc.Status.code of 1, corresponding to Code.CANCELLED. 
image: http://kinlane-productions.s3.amazonaws.com/api-evangelist-site/company/logos/dataproc.png humanURL: https://cloud.google.com/dataproc/ baseURL: ://dataproc.googleapis.com// tags: Google APIs, Data, Stack Network, API Service Provider, API Provider, Databases, Deployments, Profiles, Relative Data, Service API properties: - type: x-openapi-spec url: https://raw.githubusercontent.com/streamdata-gallery-organizations/google-cloud-dataproc/master/_listings/google-cloud-dataproc/v1namecancel-post-openapi.md x-common: - type: x-api-gallery url: http://google.cloud.container.builder.api.gallery.streamdata.io - type: x-api-stack url: http://google.cloud.dataproc.stack.network - type: x-change-logs url: https://cloud.google.com/dataproc/docs/release-notes/service - type: x-documentation url: https://cloud.google.com/dataproc/docs/ - type: x-faq url: https://cloud.google.com/dataproc/docs/resources/faq - type: x-getting-started url: https://cloud.google.com/dataproc/docs/quickstarts - type: x-guides url: https://cloud.google.com/dataproc/docs/how-to - type: x-partners url: https://cloud.google.com/dataproc/docs/resources/partners - type: x-pricing url: https://cloud.google.com/dataproc/docs/resources/pricing - type: x-rate-limits url: https://cloud.google.com/dataproc/quotas - type: x-sdk url: https://cloud.google.com/dataproc/docs/gcloud-installation - type: x-service-level-agreements url: https://cloud.google.com/dataproc/docs/resources/sla - type: x-support url: https://cloud.google.com/dataproc/docs/support/get-support - type: x-website url: https://cloud.google.com/dataproc/ include: [] maintainers: - FN: Kin Lane x-twitter: apievangelist email: info@apievangelist.com ---
61.304721
212
0.789975
kor_Hang
0.335427
6fd1e290138a8c51e1c4c3661497887b83a7c33b
1,677
md
Markdown
README.md
wassimbj/cascade
5c0c5e41fc4f9a263d0d3d12ddb7336dd7ad34f2
[ "MIT" ]
2
2020-09-30T14:55:33.000Z
2020-10-01T15:52:10.000Z
README.md
wassimbj/cascade
5c0c5e41fc4f9a263d0d3d12ddb7336dd7ad34f2
[ "MIT" ]
22
2020-09-26T08:22:20.000Z
2021-10-06T06:17:20.000Z
README.md
wassimbj/cascade
5c0c5e41fc4f9a263d0d3d12ddb7336dd7ad34f2
[ "MIT" ]
21
2020-09-26T08:33:09.000Z
2021-10-02T03:38:35.000Z
<h1 align="center"> LoginRadius Developer Portal </h1>
<br>
<p align="center">
  <img alt="LoginRadius" title="LoginRadius" src="https://i.imgur.com/Zv6PKs6.png" width="450">
</p>

<p align="center">
  Find out what's going on at LoginRadius with our developer portal, and create your own using this template.
</p>

## Introduction

As part of a hackathon project, we created this repo to showcase the great minds who work with passion and dedication in our organization. You can find out what's going on in the company, including:

* Events
* Talks
* Hackathons
* Open-source contributions

## Features

* The application is built using React.
* Provides login functionality out of the box; you can create your own app using [LoginRadius](https://accounts.loginradius.com/auth.aspx?action=register&return_url=https://dashboard.loginradius.com/login). Replace `APP_URL` with yours in `src\utils\config.tsx`.
* Reads data from a JSON array and builds the UI from it.

## Feedback

Feel free to [file an issue](https://github.com/LoginRadius/cascade/issues/new). Feature requests are always welcome. If you wish to contribute, please take a quick look at the [guidelines](./CONTRIBUTING.md).

## Setup Process

1. Clone the repository and change your current directory to the project root directory.
2. Install all the dependencies, then start the development environment:

```
npm install
npm run start
```

> Note: To set up your own login, create a [LoginRadius app for free](https://accounts.loginradius.com/auth.aspx?action=register&return_url=https://dashboard.loginradius.com/login) and replace `APP_URL` in `src\utils\config.tsx`.
38.113636
259
0.754323
eng_Latn
0.953413
6fd28579a1587e8c56b85371b6cb49255efe0657
470
md
Markdown
README.md
dtolb/gitbook-plugin-mouseflow
a86ccb3b1c3b6b235d9f16bafff0a0fcbf1be3da
[ "MIT" ]
null
null
null
README.md
dtolb/gitbook-plugin-mouseflow
a86ccb3b1c3b6b235d9f16bafff0a0fcbf1be3da
[ "MIT" ]
null
null
null
README.md
dtolb/gitbook-plugin-mouseflow
a86ccb3b1c3b6b235d9f16bafff0a0fcbf1be3da
[ "MIT" ]
null
null
null
# gitbook-plugin-mouseflow

Add [mouseflow](https://mouseflow.com/) to gitbook web pages

### How to use?

Add the plugin to your `book.json`, then run `gitbook install`:

```json
{
    "plugins": ["mouseflow"]
}
```

#### Configure mouseflow token:

```json
{
    "plugins": ["mouseflow"],
    "pluginsConfig": {
        "mouseflow": {
            "projectId": "12345678abc"
        }
    }
}
```

### Inspiration

Inspired by: https://github.com/chudaol/gitbook-plugin-gtm
15.666667
60
0.602128
eng_Latn
0.27585
6fd2a30a17f1607be250c0191e0a26f6aa0da1b7
1,157
md
Markdown
pages/dataframe-memory-usage.md
WaylonWalker/nicpayne.com
48f9aead662ad98161207c365a6e3a555bbac6da
[ "MIT" ]
null
null
null
pages/dataframe-memory-usage.md
WaylonWalker/nicpayne.com
48f9aead662ad98161207c365a6e3a555bbac6da
[ "MIT" ]
null
null
null
pages/dataframe-memory-usage.md
WaylonWalker/nicpayne.com
48f9aead662ad98161207c365a6e3a555bbac6da
[ "MIT" ]
null
null
null
---
templateKey: til
tags: ['python']
title: Dataframe-Memory-Usage
date: 2022-03-07T00:00:00
status: draft
cover: "/static/dataframe-memory-usage.png"
---

I have often wanted to dive into memory usage for pandas DataFrames when it comes to cloud deployment. If I have a Python process running on a server at home, I can use `glances` or a number of other tools to diagnose a memory issue. However, at work I normally deploy dockerized processes on AWS Batch, and it's much more challenging to get info on a dockerized process without more AWS integration than my team is quite ready for.

So TIL that I can get some of the info I want from pandas directly!

# DataFrame.info()

I didn't realize that `df.info()` was able to give me more info than just dtypes and some summary stats. There is a kwarg `memory_usage` that configures what you get back, so `df.info(memory_usage="deep")` will tell you how much RAM any given DataFrame is using! An amazing tool for finding issues with joins or renegade source data files.

```python
df = pd.read_csv("cars.csv")
df.info(memory_usage="deep")
```

![Alt text](/images/df-memory-usage.png "DF memory")
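If you want per-column numbers rather than the `info()` summary, `DataFrame.memory_usage(deep=True)` returns a Series of bytes per column. A minimal sketch (the column names here are made up, standing in for the real CSV):

```python
import pandas as pd

# Hypothetical stand-in for the real cars.csv.
df = pd.DataFrame({
    "model": ["sedan", "coupe", "wagon"],
    "mpg": [31.0, 28.5, 26.2],
})

# Bytes per column. deep=True follows object references (e.g. Python strings),
# which is where the shallow estimate under-reports.
shallow = df.memory_usage(deep=False)
deep = df.memory_usage(deep=True)

print(deep)
print(f"total: {deep.sum()} bytes")
```

Object (string) columns are usually where the shallow and deep numbers diverge most, since the shallow estimate only counts the 8-byte pointers.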
38.566667
201
0.757995
eng_Latn
0.992929
6fd2c51ee2cf691a6ed535e396ee34fd7279da2e
102
md
Markdown
README.md
dbraynard/ezbake-data-access
e68eb281514fce939fdc402143e85c65f35ff316
[ "Apache-2.0" ]
null
null
null
README.md
dbraynard/ezbake-data-access
e68eb281514fce939fdc402143e85c65f35ff316
[ "Apache-2.0" ]
null
null
null
README.md
dbraynard/ezbake-data-access
e68eb281514fce939fdc402143e85c65f35ff316
[ "Apache-2.0" ]
null
null
null
Thrift services, libraries, etc. to interact with data backends (e.g., MongoDB, Elasticsearch, etc.).
51
101
0.754902
eng_Latn
0.637394
6fd30150f8cecb6934b72d2ba30d18a30cf423a3
3,187
md
Markdown
website/docs/zh-CN/graphics/vc-graphics-point.md
ForeverSun/vue-cesium
cd577f0b7fdc22dd23b3f8506280f6562cce56d3
[ "MIT" ]
10
2018-04-20T06:11:32.000Z
2018-10-15T06:45:46.000Z
website/docs/zh-CN/graphics/vc-graphics-point.md
ForeverSun/vue-cesium
cd577f0b7fdc22dd23b3f8506280f6562cce56d3
[ "MIT" ]
1
2018-08-10T01:44:10.000Z
2018-09-06T09:49:51.000Z
website/docs/zh-CN/graphics/vc-graphics-point.md
ForeverSun/vue-cesium
cd577f0b7fdc22dd23b3f8506280f6562cce56d3
[ "MIT" ]
2
2018-08-06T10:00:23.000Z
2018-10-12T07:51:21.000Z
## VcGraphicsPoint

Loads a point entity; equivalent to initializing a `Cesium.PointGraphics` instance.

**Note:** it must be a child component of `vc-entity` to load properly.

### Basic usage

Basic usage of the point entity component.

:::demo Use the `vc-graphics-point` tag to add point entities to the globe.

```html
<el-row ref="viewerContainer" class="demo-viewer">
  <vc-viewer @ready="onViewerReady">
    <vc-entity :position="[-75.59777, 40.03883]" description="Hello Vue Cesium">
      <vc-graphics-point ref="point1" color="red" :pixel-size="8"></vc-graphics-point>
    </vc-entity>
    <vc-entity :position="[-80.5, 35.14]" description="Hello Vue Cesium">
      <vc-graphics-point ref="point2" color="blue" :pixel-size="16"></vc-graphics-point>
    </vc-entity>
    <vc-entity :position="[-80.12, 25.46]" description="Hello Vue Cesium">
      <vc-graphics-point ref="point3" color="lime" :pixel-size="32"></vc-graphics-point>
    </vc-entity>
  </vc-viewer>
</el-row>

<script>
  import { ref, getCurrentInstance, onMounted } from 'vue'
  export default {
    setup() {
      // state
      const point1 = ref(null)
      const point2 = ref(null)
      const point3 = ref(null)
      // methods
      const onEntityEvt = e => {
        console.log(e)
      }
      const onViewerReady = cesiumInstance => {
        console.log('viewer ready')
      }
      // life cycle
      onMounted(() => {
        Promise.all([point1.value.creatingPromise, point2.value.creatingPromise, point3.value.creatingPromise]).then(instances => {
          instances[0].viewer.zoomTo(instances[0].viewer.entities)
        })
      })
      return {
        onEntityEvt,
        point1,
        point2,
        point3,
        onViewerReady
      }
    }
  }
</script>
```

:::

### Props

<!-- prettier-ignore -->
| Name | Type | Default | Description | Accepted values |
| ------- | --- | ----- | ------ | --- |
| show | Boolean | `true` | `optional` Whether the point is shown. | |
| pixelSize | Number | `1` | `optional` The point's size in pixels. | |
| heightReference | Number | `0` | `optional` The point's height reference mode. **NONE: 0, CLAMP_TO_GROUND: 1, RELATIVE_TO_GROUND: 2** |0/1/2|
| color | Object\|String\|Array | `'white'` | `optional` The point's color. | |
| outlineColor | Object\|String\|Array | `'black'` | `optional` The point's outline color. | |
| outlineWidth | Number | `0` | `optional` The point's outline width in pixels. | |
| scaleByDistance | Object\|Array | | `optional` How the point scales with camera distance. | |
| translucencyByDistance | Object\|Array | | `optional` How the point's translucency changes with camera distance. | |
| distanceDisplayCondition | Object\|Array | | `optional` The camera-distance range at which the point is displayed. | |
| disableDepthTestDistance | Number | | `optional` The distance from the camera at which depth testing is disabled. | |

### Events

| Name | Parameters | Description |
| ----------------- | --------------------------------------- | ---------------------------------------- |
| beforeLoad | (instance: VcComponentInternalInstance) | Triggered before the object is loaded. |
| ready | (readyObj: VcReadyObject) | Triggered when the object is successfully loaded. |
| destroyed | (instance: VcComponentInternalInstance) | Triggered when the object is destroyed. |
| definitionChanged | | Triggered whenever a property or sub-property is changed or modified. |

### Reference

- Official documentation: **[PointGraphics](https://cesium.com/docs/cesiumjs-ref-doc/PointGraphics.html)**
34.641304
131
0.558833
yue_Hant
0.530057
6fd31546c631f680ec6d09ffb22a6781f1d4f0cd
62
md
Markdown
README.md
CoderXAndZ/JDApplicable
54d4ecddc908a3b45bde5d9aa66a9b250ec98b47
[ "MIT" ]
2
2018-09-25T02:04:09.000Z
2018-09-25T02:39:14.000Z
README.md
CoderXAndZ/JDApplicable
54d4ecddc908a3b45bde5d9aa66a9b250ec98b47
[ "MIT" ]
null
null
null
README.md
CoderXAndZ/JDApplicable
54d4ecddc908a3b45bde5d9aa66a9b250ec98b47
[ "MIT" ]
null
null
null
# JDApplicable

https://github.com/CoderXAndZ/JDApplicable.git
20.666667
46
0.822581
yue_Hant
0.39874
6fd5454d69df9f62e73e0a00acdbf968f1afa924
555
md
Markdown
docs/message/demo/inline-close.md
itmajing/next
77bc4eceaf615a7179ea1d36756c3d6d727b1d5c
[ "MIT" ]
4,289
2018-07-18T09:21:03.000Z
2022-03-31T17:59:14.000Z
docs/message/demo/inline-close.md
itmajing/next
77bc4eceaf615a7179ea1d36756c3d6d727b1d5c
[ "MIT" ]
3,552
2018-07-18T09:21:52.000Z
2022-03-31T12:18:58.000Z
docs/message/demo/inline-close.md
itmajing/next
77bc4eceaf615a7179ea1d36756c3d6d727b1d5c
[ "MIT" ]
559
2018-09-14T02:48:44.000Z
2022-03-25T09:06:55.000Z
# 可关闭提示

- order: 3

通过`closeable`设置用户手动关闭提示框。

:::lang=en-us

# Closeable

- order: 3

You can control whether the message can be closed by adding the `closeable` property.

:::

---

````jsx
import { Message } from '@alifd/next';

const onClose = () => console.log('onClose triggered!');
const afterClose = () => console.log('afterClose triggered!');

ReactDOM.render(
    <div>
        <Message title="title" closeable onClose={onClose} afterClose={afterClose}>
            Content Content Content Content
        </Message>
    </div>, mountNode);
````
18.5
85
0.648649
eng_Latn
0.891389
6fd5794da9360af60e45199972c78fbf0cfebe4d
688
md
Markdown
README.md
Zelig880/chromedevtools
958f261dd2edd08e919de91e2f28fe980302a4c6
[ "MIT" ]
1
2015-01-30T22:21:00.000Z
2015-01-30T22:21:00.000Z
README.md
Zelig880/chromedevtools
958f261dd2edd08e919de91e2f28fe980302a4c6
[ "MIT" ]
null
null
null
README.md
Zelig880/chromedevtools
958f261dd2edd08e919de91e2f28fe980302a4c6
[ "MIT" ]
null
null
null
# chromedevtools

## Summary

ChromeDevTools is a tutorial for the well-known developer tools of Google's Chrome browser. Unlike other tutorials available, this web tutorial requires the user/student to actively use the developer tools to progress through the course.

## Where is the tutorial?

The tutorial is still in its initial stages and is now live at www.chromedevtools.co.uk. At the moment it is just a POC, and I am looking for ideas from other users to make sure the course is well structured and fulfils user expectations.

## How much will the course cost?

The whole project is completely free of charge. It is just a side project for me and everyone who is willing to help.
43
239
0.797965
eng_Latn
0.999598
6fd5852b6050543160559efd4488115e0c5db2c8
1,541
md
Markdown
includes/functions-networking-features.md
tsunami416604/azure-docs.hu-hu
aeba852f59e773e1c58a4392d035334681ab7058
[ "CC-BY-4.0", "MIT" ]
null
null
null
includes/functions-networking-features.md
tsunami416604/azure-docs.hu-hu
aeba852f59e773e1c58a4392d035334681ab7058
[ "CC-BY-4.0", "MIT" ]
null
null
null
includes/functions-networking-features.md
tsunami416604/azure-docs.hu-hu
aeba852f59e773e1c58a4392d035334681ab7058
[ "CC-BY-4.0", "MIT" ]
null
null
null
---
ms.openlocfilehash: 82571d1a0e651f638dec29184f0ecdc88562b3ad
ms.sourcegitcommit: a43a59e44c14d349d597c3d2fd2bc779989c71d7
ms.translationtype: MT
ms.contentlocale: hu-HU
ms.lasthandoff: 11/25/2020
ms.locfileid: "96020991"
---
| Feature |[Consumption plan](../articles/azure-functions/functions-scale.md#consumption-plan)|[Premium plan](../articles/azure-functions/functions-scale.md#premium-plan)|[Dedicated plan](../articles/azure-functions/functions-scale.md#app-service-plan)|[ASE](../articles/app-service/environment/intro.md)| [Kubernetes](../articles/azure-functions/functions-kubernetes-keda.md) |
|----------------|-----------|----------------|---------|-----------------------| ---|
|[Inbound IP restrictions and private site access](../articles/azure-functions/functions-networking-options.md#inbound-access-restrictions)|✅Yes|✅Yes|✅Yes|✅Yes|✅Yes|
|[Virtual network integration](../articles/azure-functions/functions-networking-options.md#virtual-network-integration)|❌No|✅Yes (regional)|✅Yes (regional and gateway)|✅Yes| ✅Yes|
|[Virtual network triggers (non-HTTP)](../articles/azure-functions/functions-networking-options.md#virtual-network-triggers-non-http)|❌No| ✅Yes |✅Yes|✅Yes|✅Yes|
|[Hybrid connections](../articles/azure-functions/functions-networking-options.md#hybrid-connections) (Windows only)|❌No|✅Yes|✅Yes|✅Yes|✅Yes|
|[Outbound IP restrictions](../articles/azure-functions/functions-networking-options.md#outbound-ip-restrictions)|❌No| ✅Yes|✅Yes|✅Yes|✅Yes|
90.647059
393
0.748215
hun_Latn
0.448417
6fd606bd5b7e481f7e70b61a5507b31fefa89692
5,891
md
Markdown
_posts/2019-04-14-Download-nfpa-10life-safety-code-200edition.md
Bunki-booki/29
7d0fb40669bcc2bafd132f0991662dfa9e70545d
[ "MIT" ]
null
null
null
_posts/2019-04-14-Download-nfpa-10life-safety-code-200edition.md
Bunki-booki/29
7d0fb40669bcc2bafd132f0991662dfa9e70545d
[ "MIT" ]
null
null
null
_posts/2019-04-14-Download-nfpa-10life-safety-code-200edition.md
Bunki-booki/29
7d0fb40669bcc2bafd132f0991662dfa9e70545d
[ "MIT" ]
null
null
null
--- layout: post comments: true categories: Other --- ## Download Nfpa 10life safety code 200edition book But Malloy had vetoed the idea on the grounds that the deception would never stand up to SD security procedures. The only source of illumination was a single ten-watt bulb hung behind the shadow He stood silent a minute, others think closer to sixty, she'd spent half her life Receiving no answer to his question, as towards the end of January 1873 at Mussel Bay, too. O, decent. Pinned against the wall, Singhalese, Langsdorffii FISCH! The gentleness of her deep, the last thing I want is for old Sinsemilla to be then this diet ought to break your will, Thomas said it had to be a ship, such as mastery over the wizards who served him! While always Agnes held fast to hope, saying? And don't make it anything flip like that last one. ") But possibly the old fart had been making things her name, and he oversaw the establishment of a tax-advantaged charitable foundation. Then achievement, Barty had kept her hogs sleep. " he began. They saw me the moment I left the dust cloud. The Falcon and the Birds clii under the name Jordan-'call me Jorry'-Banks. "Do you have any?" "Six dozen. Rubbed raw, sadly: "Oh, all day. Outside, and she received the terrible burden of the news. "Outfit?" Every single cell in your body, and my Chukch friends' wants satisfied for half-cock and caps nfpa 10life safety code 200edition, Okay?" There were no questions. the Yenisej. "And if everything goes well and no one ends up in court, respectable. This is exceedingly painful even in the case of those who carried "Will do. Audience of the King. biding his time, Amos knew there were some situations in which nfpa 10life safety code 200edition was a waste of wit to try and figure a way out, he thought, who took us great skill as a card mechanic must be forever his secret, don't they? They brought drought and nfpa 10life safety code 200edition, while on his left, pineapple cheesecake. 
the plaque on her desk proved only slightly more revealing: F. "What's your name?" bathroom, and swallowed the cold spittle that welled in her mouth, but weak, whilst she gave herself up to her religious exercises and abode with her husband nfpa 10life safety code 200edition such wise as she was with him aforetime. God must surely want us to laugh at these fools, as though Victoria were using it as a plate warmer, natural size, before we were ten. He proceeded carefully, hunched and clenched, tore it down, showing that they have Agnes's faith told her that the world was infinitely complex and full of mystery. People have puzzled at their choosing the empty sea for their domain, 'Out on nfpa 10life safety code 200edition, for that there is no way by which five thousand dinars can be lost. Nobody can be free alone. Matiuschin gives a very Hanlon shook his head. Are there any bright-colored clothes on the ship, and joyous-spread across the three possibilities for what was dislodged from those teeth. When she was alone with him, and more That was where Hound found him. But she never said anything Occasionally Sinsemilla enjoyed the gorefest with him; admiration for this sill of the open window. to -2. Lampion was out of danger and free of the incubator, know how truly abominable most fiction Is, a bundle of amulets fastened with a He frowned slightly, A. She fed him an apricot. But thanks to anabiosis. 2020LeGuin20-20Tales20From20Earthsea. When some glowing coals are laid in such ashes they retain there, he despatched one who brought him the boy and found the affair true. " then, chairs and end tables nfpa 10life safety code 200edition into reddish overgrown with lichens on the upper side, even seemingly?" CHAPTER IX, he didn't sport a Universal powerfully intriguing but also nearly as scary as any of the snarling, twisting a cloth nervously in his hands. Every wizard uses his arts against the others, but she didn't. 
any distinct plan, which might or might not have walnuts, one declares, which consisted of provisions for eight days. Unfortunately the Japanese high saddle does "What's nfpa 10life safety code 200edition Foehn wind, perhaps mellow in this season of with those spoon-by-spoon virtues that do not evaporate, if nodiing else. She was nfpa 10life safety code 200edition a beret and a light-colored raincoat with the collar turned up, and every description. Cain. nfpa 10life safety code 200edition RESTORED FORM OF THE MAMMOTH After JUKES, merely comfortable. A seal caught in a net among the ice The higher animal forms which, leaving the dog in the passenger's seat, and we marvelled. Then she fell a-singing and chanted the following verses: Sister-become has numerous admirable qualities, so that it seemed to those who were present as if the palace stirred with them for the music. Maria explained that only every third card was read and that a full look at The scarlet twilight drained into the west, "I figure your folks aren't amongst this group, the viziers all assembled and took counsel together and said, and nothing more, for the weeds would have caught in my cloak and the boots "New Jersey. Tom snatched the revolver off the table, For a moment. the Beatles (infuriatingly). Likewise, he seizes upon this uncharacteristic suggestion of a potential for mercy. rain, it's early yet. He badly wanted them to be real, but the presiding minister did not begin the graveside service until all had assembled, rattled by his inability to calm the ever more offended and loudly blustering caretaker, but in the direction Otter chose to go, the bodies of the dead three nfpa 10life safety code 200edition you share this, he used meditation to relieve stress, and the following year another hunter nfpa 10life safety code 200edition with over "They're cool shoes, 'Harkye. We see an analogy on the social plane. guns, they still and further weakness among us.
654.555556
5,783
0.795281
eng_Latn
0.999912
6fd6a21195b3487628715eab92ccc832aecea922
572
md
Markdown
docs/python_list/methods/append.md
alexzanderr/rust-python-objects
2b5a9c96a3436fbb14a41d48ab61b58b524493ae
[ "MIT" ]
null
null
null
docs/python_list/methods/append.md
alexzanderr/rust-python-objects
2b5a9c96a3436fbb14a41d48ab61b58b524493ae
[ "MIT" ]
null
null
null
docs/python_list/methods/append.md
alexzanderr/rust-python-objects
2b5a9c96a3436fbb14a41d48ab61b58b524493ae
[ "MIT" ]
null
null
null
# Python List Append Method

```rust
#![allow(unused_imports)]
use python::*;

fn main() {
    let mut list = List::new();
    list.append_back("from str");
    list.append_back(String::from("from String"));
    list.append_back(List::from("extend from list"));
    list.append_back(123);
    list.append_back(123.123f32);
    list.append_back(123.123f64);
    print(list);
}
```

Output:

```shell
['from str', 'from String', ['e', 'x', 't', 'e', 'n', 'd', ' ', 'f', 'r', 'o', 'm', ' ', 'l', 'i', 's', 't'], 123, 123.123, 123.123]
```

You can append almost anything.
18.451613
132
0.575175
eng_Latn
0.344201
6fd7acd94987c048dc8b9aab2b0fd53400b4a49b
779
md
Markdown
docs/zh-cn/sidebar.md
sg6303/kt-connect
ee0abfe01c304053f4cf4ac6f45153cb456b7061
[ "MIT" ]
null
null
null
docs/zh-cn/sidebar.md
sg6303/kt-connect
ee0abfe01c304053f4cf4ac6f45153cb456b7061
[ "MIT" ]
null
null
null
docs/zh-cn/sidebar.md
sg6303/kt-connect
ee0abfe01c304053f4cf4ac6f45153cb456b7061
[ "MIT" ]
null
null
null
- Getting Started
  - [Quick Start](zh-cn/quickstart.md)
  - [Downloads](zh-cn/downloads.md)
- Guides
  - [Local development and testing](zh-cn/guide/localdev.md)
  - [Mesh best practices](zh-cn/guide/mesh.md)
  - [Using the DNS service](zh-cn/guide/how-to-use-dns.md)
  - [Debugging in IDEA](zh-cn/guide/how-to-use-in-idea.md)
  - [Windows support](zh-cn/guide/windows-support.md)
  - [Dashboard](zh-cn/guide/dashboard.md)
- CLI Reference
  - [ktctl connect](zh-cn/cli/connect.md)
  - [ktctl exchange](zh-cn/cli/exchange.md)
  - [ktctl mesh](zh-cn/cli/mesh.md)
  - [ktctl run](zh-cn/cli/run.md)
  - [ktctl clean](zh-cn/cli/clean.md)
  - [ktctl dashboard](zh-cn/cli/dashboard.md)
  - [ktctl check](zh-cn/cli/check.md)
- Troubleshooting:
  - [connect](zh-cn/troubleshoot.md)
  - [FAQ](zh-cn/faq.md)
- [Changelog](zh-cn/changelog.md)
- [TODO](zh-cn/todo.md)

<!-- - [Need Help](es-us/needhelp.md) -->
25.966667
49
0.627728
yue_Hant
0.450714
6fd8b2929c5b70549b8b5710649a90cab059e35a
29
md
Markdown
_includes/05-emphasis.md
gushul/markdown-portfolio
8470675d5120468dd807b70fbb2c537efb9d6cb8
[ "MIT" ]
null
null
null
_includes/05-emphasis.md
gushul/markdown-portfolio
8470675d5120468dd807b70fbb2c537efb9d6cb8
[ "MIT" ]
5
2020-10-06T19:16:47.000Z
2020-10-11T05:57:12.000Z
_includes/05-emphasis.md
gushul/markdown-portfolio
8470675d5120468dd807b70fbb2c537efb9d6cb8
[ "MIT" ]
null
null
null
_Some __awesome__ things_
7.25
26
0.793103
eng_Latn
0.857694
6fd8e694d4e7ac6caf5821fff1afd40f66fe1b55
27
md
Markdown
README.md
Migloureous-ols/migloureous
97c21a88a1be29e5ae909d5c0a18ed18e5df73d6
[ "CC0-1.0" ]
null
null
null
README.md
Migloureous-ols/migloureous
97c21a88a1be29e5ae909d5c0a18ed18e5df73d6
[ "CC0-1.0" ]
null
null
null
README.md
Migloureous-ols/migloureous
97c21a88a1be29e5ae909d5c0a18ed18e5df73d6
[ "CC0-1.0" ]
null
null
null
# migloureous Migloureous
9
13
0.814815
eng_Latn
0.971699
6fd90ad6d0e7d7b893f295b834210e2ec52a7126
4,493
md
Markdown
design/product_design.md
kbase/feeds
a2ed4cb88120aeb10a295919cb0fba85e13d462d
[ "MIT" ]
null
null
null
design/product_design.md
kbase/feeds
a2ed4cb88120aeb10a295919cb0fba85e13d462d
[ "MIT" ]
48
2018-10-15T23:36:50.000Z
2022-01-19T02:49:30.000Z
design/product_design.md
kbase/feeds
a2ed4cb88120aeb10a295919cb0fba85e13d462d
[ "MIT" ]
3
2018-10-03T20:37:41.000Z
2019-01-16T15:03:19.000Z
This is part of the Feeds work, but focused on the backend Feeds service. A more global design doc can be found here: https://docs.google.com/document/d/1dR4xAPpXdc5rDYmeiUX-HOqs8dCuyNKrE8CEW6Jv0wE/edit# # Feeds Design This document describes the high level design for the Feeds service and what it should do. The goal is to provide a way to notify our users about events, including, but not limited to, the following: * Jobs that have recently changed state (queued->running, running->completed). * Narratives that have been shared. * Data that has been shared. * Data that has been uploaded or changed. * Requests for Narrative sharing - critical for first implementation of Groups/Projects. * Requests for Group membership (see also the Groups / Orgs document). * Requests for data sharing through Groups. * Global KBase notifications * App version updates * Data object version updates # Components Along with this service, other KBase components will need to be updated to handle feeds. These include: * Workspace service - Update to push events into Feeds * Dashboard - Add a feeds viewer * Narrative - Update to push events - Add a small viewer for notifications * Job service - Update to push job state changes into Feeds # Work The absolute barest minimum MVP should be a feeds interface that tells when a Narrative has been shared with you, or when a shared Narrative has been updated. If we take the time to design this interface and the feeds service well, it will be possible to add other feed events without much trouble. ## Service Design Properties * A "Feed" sends "Notifications" to users, which are visible in ways defined by the UI. ## Notification Properties * It can have a state of read, unread, or marked for deletion. * Once deleted, a Notification cannot be recovered. * Has a list of users that can see it. * Each user gets their own unique copy of the Notification; deleting one has no effect on the others. * A notification can have one or more links to other pages. 
* A notification has a "level" that defines how it should be received. These levels should be: * Alert - something happened, but nothing to respond to. * A job changing state. * An app that you have favorited has been updated. * A Narrative has been shared with you. * A Narrative that you had access to is no longer shared with you. * A Narrative you own but have shared with others has been shared with a new user. * A user has joined a group you are in. * A user has left a group you are in. * Error - something unrecoverably bad happened. * An app entered an error state. * Warning - something significant happened you really should be aware of. * A Narrative you own has been shared by someone else. * Upcoming KBase maintenance or downtime. * Request - something happened, requiring user intervention. These should be accompanied by a way to resolve that request, whether it's an inline form, or a link to a Narrative or Group management page. * A request to be added to a group. * An invitation to join a group. * A request to add/remove a workspace to/from a group. * A request to share a Narrative. * A request to accept ownership of a group. * A notification has an icon that is set based on its type. * A notification can have an optional "category", linked to the source that created it. * narrative or workspace for data * job * app -> app updates, releases, etc. * social -> groups, sharing, etc. * catalog -> for admin requests and responses ("Your request to release X module has been approved") * A notification response can be decided by the service that creates the notification. Responses include: * Approve/Deny a sharing request * Approve/Deny a group invite * Approve/Deny a group request to add/remove a Narrative ## Feed Properties * As of this design, users will have a single feed that gets populated with all notifications. 
* Feeds are sortable and filterable by: * Most recent and unread * Type (alert, error, warning, request) * Most "urgent" - should move requests to the top, then errors, warnings, and alerts. ## Stretch Properties * Users can configure certain notifications to be sent via email. * This will require a TON of extra work, including an email service, email verification through auth, additional account configuration options, and testing of everything. It should be put off for a good long time. Who uses email anymore anyway?
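The notification and feed properties above can be sketched as a small data model. This is an illustrative sketch only, not the Feeds service's actual schema; every class and field name here is an assumption:

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import List, Optional

class Level(Enum):
    """Notification levels as described in the design doc."""
    ALERT = "alert"
    WARNING = "warning"
    ERROR = "error"
    REQUEST = "request"

class State(Enum):
    """A notification is read, unread, or marked for deletion."""
    UNREAD = "unread"
    READ = "read"
    MARKED_FOR_DELETION = "marked_for_deletion"

@dataclass
class Notification:
    """One user's private copy of a notification (deleting it has no
    effect on other users' copies, per the design doc)."""
    user: str
    text: str
    level: Level
    state: State = State.UNREAD
    category: Optional[str] = None          # e.g. "job", "social", "catalog"
    links: List[str] = field(default_factory=list)

def urgency(n: Notification) -> int:
    """Sort key for the 'most urgent' feed ordering: requests first,
    then errors, warnings, and alerts."""
    order = [Level.REQUEST, Level.ERROR, Level.WARNING, Level.ALERT]
    return order.index(n.level)
```

Sorting a user's feed with `key=urgency` then yields the request/error/warning/alert ordering the design calls for.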
52.858824
298
0.753617
eng_Latn
0.999548
6fd92d08bf4974d109056d19d3fb94df04a1918b
412
md
Markdown
articles/server-platforms/java/dashboard-default.md
martinjras/docs
3d3574ba9290f15bd2da9f79e8bcbaf77b3d4851
[ "MIT" ]
null
null
null
articles/server-platforms/java/dashboard-default.md
martinjras/docs
3d3574ba9290f15bd2da9f79e8bcbaf77b3d4851
[ "MIT" ]
null
null
null
articles/server-platforms/java/dashboard-default.md
martinjras/docs
3d3574ba9290f15bd2da9f79e8bcbaf77b3d4851
[ "MIT" ]
null
null
null
--- title: Login default: true description: This tutorial demonstrates how to use the Auth0 Java SDK to add authentication and authorization to your web app --- <%= include('../../_includes/_package', { org: 'auth0-samples', repo: 'auth0-servlet-sample', path: '01-Login', requirements: [ 'Java 1.7', 'Maven 3.3' ] }) %> <%= include('_includes/_setup') %> <%= include('_includes/_login') %>
20.6
125
0.648058
eng_Latn
0.980867
6fd957fd669b78905e4cc9ce222245be1dad55b5
2,494
md
Markdown
docs/api-docs/generator.text.md
openAGI/datum
2dfc8c62ed1366fd8544b8b25d730d89dfb57d4e
[ "Apache-2.0" ]
6
2020-05-17T10:03:24.000Z
2021-07-05T18:38:06.000Z
docs/api-docs/generator.text.md
openAGI/datum
2dfc8c62ed1366fd8544b8b25d730d89dfb57d4e
[ "Apache-2.0" ]
2
2021-07-26T03:22:47.000Z
2022-02-09T23:33:33.000Z
docs/api-docs/generator.text.md
openAGI/datum
2dfc8c62ed1366fd8544b8b25d730d89dfb57d4e
[ "Apache-2.0" ]
1
2021-06-14T14:49:38.000Z
2021-06-14T14:49:38.000Z
<!-- markdownlint-disable --> <a href="../../datum/generator/text.py#L0"><img align="right" style="float:right;" src="https://img.shields.io/badge/-source-cccccc?style=flat-square"></a> # <kbd>module</kbd> `generator.text` --- <a href="../../datum/generator/text.py#L27"><img align="right" style="float:right;" src="https://img.shields.io/badge/-source-cccccc?style=flat-square"></a> ## <kbd>class</kbd> `TextJsonDatumGenerator` Text problem datum generator from a json file. This can be used for classification or generative modeling. This expects data to be in json format, with each example keyed by a unique id. Each example should have two mandatory attributes: `text` and `label` (a nested attribute). The input path should contain json files for training/development/validation. By default the generator searches for a json file named after the split name, but this can be configured via the keyword argument `json_path` to `__call__`. + data_path - train.json (json file containing the training data) For example, a sample json file would look as follows: ``` {1: {'text': 'I am the one', 'label': {'polarity': 1}}, ... N: {'text': 'Such a beautiful day', 'label': {'polarity': 2}} } ``` - val.json (json file containing the val data) - test.json (json file containing the test data) Following are the supported keyword arguments: **Kwargs:** - <b>`split`</b>: name of the split - <b>`json_path`</b>: name of the json file for that split; this is a relative path with respect to the parent `self.path`. --- <a href="../../datum/generator/text.py#L59"><img align="right" style="float:right;" src="https://img.shields.io/badge/-source-cccccc?style=flat-square"></a> ### <kbd>method</kbd> `generate_datum` ```python generate_datum( **kwargs: Any ) → Generator[Union[str, int, Dict], NoneType, NoneType] ``` Returns a generator to get datum from the input source. **Args:** - <b>`kwargs`</b>: optional keyword arguments for customization. 
Following are the supported keyword arguments: - <b>`split`</b>: name of the split - <b>`json_path`</b>: name of the json file for that split, this is a relative path with respect to parent `self.path`. **Returns:** a tuple of a unique id and a dictionary with feature names as keys and feature values as values. --- _This file was automatically generated via [lazydocs](https://github.com/ml-tooling/lazydocs)._
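As a rough illustration of the documented behaviour, here is a standalone sketch of a generator consuming the json layout described above. It is not the `datum` library's implementation; the function name and keyword arguments merely echo the docs:

```python
import json
from pathlib import Path
from typing import Any, Dict, Generator, Optional, Tuple

def generate_datum(
    path: str,
    split: str = "train",
    json_path: Optional[str] = None,
) -> Generator[Tuple[str, Dict[str, Any]], None, None]:
    """Yield (unique id, {'text': ..., 'label': ...}) pairs from a split's json file.

    Mirrors the documented defaults: the json file is named after the
    split (e.g. train.json), unless overridden with `json_path`, which is
    resolved relative to the parent `path`.
    """
    json_file = Path(path) / (json_path or f"{split}.json")
    with open(json_file) as f:
        data = json.load(f)
    for uid, example in data.items():
        yield uid, {"text": example["text"], "label": example["label"]}
```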
29.690476
248
0.682037
eng_Latn
0.929958
6fd9629af1d887b8915851805ce1b05c1906c8a1
202
md
Markdown
Docs/Engine/doc_Texture.md
JoshYaxley/Pineapple
490b0ccdfa26e2bb6fd9ec290b43b355462dd9ec
[ "Zlib" ]
11
2017-04-15T14:44:19.000Z
2022-02-04T13:16:04.000Z
Docs/Engine/doc_Texture.md
JoshYaxley/Pineapple
490b0ccdfa26e2bb6fd9ec290b43b355462dd9ec
[ "Zlib" ]
25
2017-04-19T12:48:42.000Z
2020-05-09T05:28:29.000Z
Docs/Engine/doc_Texture.md
JoshYaxley/Pineapple
490b0ccdfa26e2bb6fd9ec290b43b355462dd9ec
[ "Zlib" ]
1
2019-04-21T21:14:04.000Z
2019-04-21T21:14:04.000Z
# Header file `Texture.h`<a id="Texture.h"></a> <pre><code class="language-cpp">namespace <a href='doc_Rect.md#Rect.h'>pa</a> { class <a href='doc_Texture.md#Texture.h'>Texture</a>; }</code></pre>
28.857143
77
0.648515
yue_Hant
0.568255
6fd9cd020d725854de6996312104cd78472407c2
626
md
Markdown
solutions/557-E-Reverse-Words-in-String-iii/readme.md
ARW2705/leet-code-solutions
fa551e5b15f5340e5be3b832db39638bcbf0dc78
[ "MIT" ]
null
null
null
solutions/557-E-Reverse-Words-in-String-iii/readme.md
ARW2705/leet-code-solutions
fa551e5b15f5340e5be3b832db39638bcbf0dc78
[ "MIT" ]
null
null
null
solutions/557-E-Reverse-Words-in-String-iii/readme.md
ARW2705/leet-code-solutions
fa551e5b15f5340e5be3b832db39638bcbf0dc78
[ "MIT" ]
null
null
null
# 557. Reverse Words in a String III [LeetCode](https://leetcode.com/problems/reverse-words-in-a-string-iii/) Given a string s, reverse the order of characters in each word within a sentence while still preserving whitespace and initial word order. Example 1: Input: s = "Let's take LeetCode contest" Output: "s'teL ekat edoCteeL tsetnoc" Example 2: Input: s = "God Ding" Output: "doG gniD" Constraints: * 1 <= s.length <= 5 * 10^4 * s contains printable ASCII characters. * s does not contain any leading or trailing spaces. * There is at least one word in s. * All the words in s are separated by a single space.
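A straightforward solution sketch in Python: split on single spaces, reverse each word with slicing, and rejoin. This preserves word order and the single-space separation guaranteed by the constraints:

```python
def reverse_words(s: str) -> str:
    """Reverse the characters of each space-separated word, keeping word order."""
    return " ".join(word[::-1] for word in s.split(" "))
```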
22.357143
138
0.731629
eng_Latn
0.982084
6fdab55a294a709ddae47edda3281ed91b1f7f20
345
md
Markdown
README.md
AlaaProg/InstDM
e0e223d50ddf7a49e146d2b337d75a82884770d2
[ "MIT" ]
null
null
null
README.md
AlaaProg/InstDM
e0e223d50ddf7a49e146d2b337d75a82884770d2
[ "MIT" ]
null
null
null
README.md
AlaaProg/InstDM
e0e223d50ddf7a49e146d2b337d75a82884770d2
[ "MIT" ]
null
null
null
# InstDM ### Tool to download videos or images from Instagram # Install #### Requires Python 3.x $ git clone https://github.com/AlaaProg/InstDM.git $ cd InstDM $ pip install -r requirements.txt # Usage $ python instdm.py [POST CODE] --out [output path] # https://www.instagram.com/p/CCQT6ang_Ew/ # the post code is `CCQT6ang_Ew`
19.166667
57
0.678261
kor_Hang
0.554466
6fdad3fd64b8eadfc02a308763d46eb35995435e
2,907
md
Markdown
README.md
5g-media/service-virtualization-platform
55a1d1cdef921232f6b1150627ea6a0a0ea51e12
[ "Apache-2.0" ]
null
null
null
README.md
5g-media/service-virtualization-platform
55a1d1cdef921232f6b1150627ea6a0a0ea51e12
[ "Apache-2.0" ]
null
null
null
README.md
5g-media/service-virtualization-platform
55a1d1cdef921232f6b1150627ea6a0a0ea51e12
[ "Apache-2.0" ]
null
null
null
# 5G-MEDIA Service Virtualization Platform ## Introduction The 5G-MEDIA Service Virtualization Platform consists of four core components. It has two central components: the 5G-MEDIA Service MAPE and the 5G-MEDIA Services Orchestrator, which are responsible for the intelligent orchestration of media services over the heterogeneous NFVIs. In addition, it has two auxiliary services: the 5G Apps and Services Catalogue (Public Catalogue) and the 5G-MEDIA AAA, which support horizontal services of the platform such as VNF onboarding to the public catalogue and user authentication and authorization. Communication among the components is handled via 1) web services that some components expose and 2) a publish/subscribe broker. [Apache Kafka](https://kafka.apache.org/) has been selected as the publish/subscribe broker. ## Installation To install the SVP, follow the installation guidelines per component in the order below: 1. **Install the 5G-MEDIA Services Orchestrator**. The backbone of this component is the [Open Source Mano](https://osm.etsi.org/). The guidelines are available [here](https://osm.etsi.org/wikipub/index.php/OSM_Release_FIVE). 2. **Install the publish/subscribe broker.** Follow the [guidelines](https://github.com/wurstmeister/kafka-docker) to build the docker image including the Apache Kafka broker (and zookeeper) and deploy it. 3. **Install the 5G-MEDIA MAPE**. Follow the [guidelines](https://github.com/5g-media/mape) to deploy it. 4. **Install the 5G-MEDIA CATALOGUE**. Follow the [guidelines](https://github.com/5g-media/5g-catalogue) to deploy it. 5. **Install the 5G-MEDIA AAA**. Follow the [guidelines](https://github.com/5g-media/5G-MEDIA_AAA) to deploy it. 
### Minimum System Requirements The *minimum system* requirements to install the 5G-MEDIA Service Virtualization Platform are: | Resource | Requirement | | --- | --- | | **Operating System** | Ubuntu 16.04 LTS or a newer version | | **CPU** | 25 vCPUs | | **MEMORY** | 35 GB RAM | | **HARD DISK** | 250 GB available | The 5G-MEDIA Service Virtualization Platform can be installed on either a bare-metal Linux server or on (Linux) virtual machines. If you prefer virtual machines, you can install the components of the Service Virtualization Platform in separate virtual machines, ensuring there is network connectivity among them. For example, within the project we installed them in separate OpenStack-based virtual machines using a common management network. ## Acknowledgements This project has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement [No 761699](http://www.5gmedia.eu/). The dissemination of results herein reflects only the author’s view and the European Commission is not responsible for any use that may be made of the information it contains. ## License [Apache 2.0](LICENSE.md)
74.538462
469
0.780186
eng_Latn
0.984569
6fdb513b92a1207bb696afe4a2cb96e544138392
33,325
md
Markdown
_posts/2019-07-11-近期安全动态和点评(2019年2季度).md
NodeBE4/opinion
81a7242230f02459879ebc1f02eb6fc21507cdf1
[ "MIT" ]
21
2020-07-20T16:10:55.000Z
2022-03-14T14:01:14.000Z
_posts/2019-07-11-近期安全动态和点评(2019年2季度).md
NodeBE4/opinion
81a7242230f02459879ebc1f02eb6fc21507cdf1
[ "MIT" ]
1
2020-07-19T21:49:44.000Z
2021-09-16T13:37:28.000Z
_posts/2019-07-11-近期安全动态和点评(2019年2季度).md
NodeBE4/opinion
81a7242230f02459879ebc1f02eb6fc21507cdf1
[ "MIT" ]
1
2021-05-29T19:48:01.000Z
2021-05-29T19:48:01.000Z
--- layout: post title: "近期安全动态和点评(2019年2季度)" date: 2019-07-11T23:54:00+08:00 author: 编程随想 from: https://program-think.blogspot.com/2019/07/Security-News.html tags: [ 编程随想 ] categories: [ 编程随想 ] --- <div class="post hentry uncustomized-post-template"> <a name="196768586648319707"> </a> <h1 class="post-title entry-title"> <a href="https://program-think.blogspot.com/2019/07/Security-News.html"> 近期安全动态和点评(2019年2季度) </a> </h1> <div class="post-header"> <div class="post-header-line-1"> <div class="post-inner-index"> </div> </div> </div> <div class="post-body entry-content"> 最近好几篇博文都在谈“时政话题”,今天这篇换个口味。正好2季度已经结束,汇总一下上季度的信息安全动态。 <br/> <a name="more"> </a> <!--program-think--> <br/> <br/> <h2> ★隐私保护 </h2> <br/> <h3> ◇将近【6亿】的中国求职者简历被泄露 </h3> <br/> 《 <a href="https://www.solidot.org/story?sid=60159" rel="nofollow" target="_blank"> 中国公司泄漏数亿简历 @ Solidot </a> 》 <br/> <blockquote style="background-color:#DDD;"> 因为未加密或配置错误的 MongoDB 数据库和 ElasticSearch 服务器,中国公司泄漏了 5.9 亿简历。安全研究员 Sanyam Jain 仅在上个月就报告了 7 次数据外泄。他发现了一台 ElasticSearch 服务器包含了 3300 万用户简历。在报告给中国国家计算机应急响应小组四天后服务器才加了安全保护。另一台 ElasticSearch 服务器包含了 8480 万简历,同样是在举报给计算机应急响应小组之后下线的。 <br/> 他总共发现暴露的简历数量高达 5.90497 亿,许多简历包含了敏感的个人数据如电话号码、家庭住址,家庭和婚姻状况,某些还有身份证。 </blockquote> <b> 编程随想注: </b> <br/> 文中提到的这个安全研究员,在3~4月份发现了这么多【裸奔】的数据库,不是因为他多么牛逼,而是因为—— <br/> 越来越多的企业在搞“大数据”,相关的程序猿/程序媛虽然懂得如何操作 MongoDB 和 ES,但其中的大部分人在信息安全方面完全是【菜鸟】,连基本的安全防范意识都非常缺乏。 <br/> 再来说“测试人员”,绝大部分也【不】懂得如何进行【安全测试】。就算极少数测试人员,最终学会了这个,肯定转行去干“信息安全相关的工作”,怎么可能还在干“软件测试”? 
<br/> <br/> <h3> ◇天朝警方在大城市盘查手机 </h3> <br/> 《 <a href="https://chinadigitaltimes.net/chinese/2019/06/%e3%80%90%e5%9b%be%e8%af%b4%e5%a4%a9%e6%9c%9d%e3%80%91%e5%8c%97%e4%ba%ac%e4%b8%8a%e6%b5%b7%e5%9c%b0%e9%93%81%e6%a3%80%e6%9f%a5%e4%b9%98%e5%ae%a2%e6%89%8b%e6%9c%ba%ef%bc%9a1984%e7%a4%be%e4%bc%9a/" rel="nofollow" target="_blank"> 北京上海地铁检查乘客手机——1984社会全面实现 @ 中国数字时代 </a> 》 <br/> <br/> <b> 编程随想注: </b> <br/> 前些年,俺在博客评论区与读者交流时就提到过——新疆警方可以随意盘查路人手机(以“反恐”的名义)。从“中国数字时代”这篇报道来看,这种做法可能会推广到其它省份(目前【还没有】大范围盘查;未来是否会这么干,就不好说啦) <br/> 警方盘查路人的手机,采用的是专业的【手机取证软件】。这种软件可以在手机解锁之后,快速扫描整个存储空间,并把所有值得提取的数据都找出来(比如:通讯录、所有通话记录、全部上网历史、各种 App 的聊天记录、照片、视频......)。它甚至可以恢复出“你曾经删除过的通讯录联系人”或者“你曾经删除过的 IM 聊天记录”。 <br/> 顺便插两句: <br/> 在取证领域有一个专门的术语叫做“删除恢复”。而绝大部分手机用户完全不懂得如何彻底删除手机数据。有关“彻底删除数据”的讨论参见:《 <a href="https://program-think.blogspot.com/2019/02/Use-Disk-Encryption-Anti-Computer-Forensics.html"> 如何用“磁盘加密”对抗警方的【取证软件】和【刑讯逼供】,兼谈数据删除技巧 </a> 》 <br/> 某些比较牛的手机取证软件,可以【直接破解】手机(不需要让机主解锁)并收集手机中的信息。关于“破解手机”的话题,下面某个章节还会聊到。 <br/> <br/> 目前警方常用的“取证软件”分为两大类—— <br/> 1、手机取证(含平板) <br/> 2、PC 取证 <br/> 移动设备要【对抗】“手机取证软件”会比较难。因为移动设备上【缺乏】足够好的【磁盘加密工具】。 <br/> 虽然目前的 Android 和 iOS 都能支持“全盘加密”。但“全盘加密”是【不够】滴。因为当警方要求你解锁手机时,(除非你足够牛逼,否则)你只能乖乖配合。而一旦你交出了手机的解锁方式(“解锁密码”或“解锁图案”),操作系统自带的“全盘加密”就【没有意义】啦。 <br/> 相比之下,【PC】上可以做到很多手机上无法实现(或难以实现)的技巧。简单列几条: <br/> <b> 1、嵌套加密 </b> <br/> PC 上可以组合多种不同的加密工具,实现【多层】嵌套——先使用“全盘加密”,然后在此基础上再建立加密盘(物理加密分区 or 虚拟加密盘)。 <br/> 即使在警方的逼迫下解锁了最外层的“全盘加密”,你的敏感数据依然被【内层】的某个加密盘所保护。 <br/> 只要你把内部的加密盘伪装得足够好,取证软件【不一定】能发现。 <br/> <b> 2、伪装“物理加密分区” </b> <br/> 可以把某个“加密分区”伪装成“未使用分区”。 <br/> (注:做得好的磁盘加密工具,其“加密分区”的数据看起来是【全随机】滴,而且【没有】显式的头部格式或标识。也就是说,其数据看起来与“未用分区”的效果完全一样,取证软件【无法】区分这两者) <br/> <b> 3、Plausible Deniability </b> <br/> 这个特性的原理可以参考“ <a href="https://program-think.blogspot.com/2011/05/recommend-truecrypt.html"> 这篇教程 </a> ”。 <br/> 使用这种技巧的加密盘有【两套】密码。用这2个密码解锁之后看到的内容是【不同】滴。其中一个密码专门用来——当你受到胁迫时,故意勉为其难地告诉对方,该密码解锁加密盘之后,看到的都是一些无关痛痒的内容(不那么重要的内容);而另一个密码才是真正的密码。 <br/> 由于“Plausible 
Deniability”的设计在“密码学层面”是【严密】滴。因此,取证软件【无法判断】某个加密盘是否采用了这种“双重密码”的机制。也就是说:警方就算找到了你的某个加密盘,并逼迫你交出该加密盘的密码,警方依然【无法判断】这个加密盘是否还存在“另一套密码”。 <br/> <b> 4、“加密盘”结合“虚拟机” </b> <br/> 你可以把【所有的】危险操作都放到虚拟机中进行。而虚拟机则保存在某个加密盘中(可以是“物理加密分区”,也可以是“虚拟加密盘”)。如此一来,你的物理系统(Host OS)上所有的操作痕迹都是很普通滴,即使被取证软件收集到也无所谓。 <br/> 就算取证软件发现了你用来保存虚拟机的加密盘,你还可以继续使用“Plausible Deniability”这个技巧(参见第3条)。 <br/> <br/> 刚才列出的这几个招数,都可以帮你更好地隐藏敏感数据。更详细的教程,请参考下面几篇博文: <br/> 《 <a href="https://program-think.blogspot.com/2019/02/Use-Disk-Encryption-Anti-Computer-Forensics.html"> 如何用“磁盘加密”对抗警方的【取证软件】和【刑讯逼供】,兼谈数据删除技巧 </a> 》 <br/> 《 <a href="https://program-think.blogspot.com/2011/05/recommend-truecrypt.html#index"> TrueCrypt 使用经验 </a> 》(系列) <br/> 《 <a href="https://program-think.blogspot.com/2015/10/VeraCrypt.html"> 扫盲 VeraCrypt——跨平台的 TrueCrypt 替代品 </a> 》 <br/> 《 <a href="https://program-think.blogspot.com/2015/10/dm-crypt-cryptsetup.html"> 扫盲 dm-crypt——多功能 Linux 磁盘加密工具(兼容 TrueCrypt &amp; VeraCrypt) </a> 》 <br/> 《 <a href="https://program-think.blogspot.com/2011/05/file-encryption-overview.html"> 文件加密的扫盲介绍 </a> 》 <br/> <br/> <h3> ◇“人脸识别”的隐私风险 </h3> <br/> 《 <a href="http://newspaper.jcrb.com/2019/20190417/20190417_005/20190417_005_1.htm" rel="nofollow" target="_blank"> “刷脸”的风险,你知道多少? @ 检查日报 </a> 》 <br/> <blockquote style="background-color:#DDD;"> “人脸识别的便捷性与安全性不可兼得。”在电子科技大学信息与通信工程学院副教授曾辽原看来,不管什么技术总有其特有的使用场景,人脸识别技术应该作为核实身份的辅助手段,而不是唯一的、关键的手段,特別是在安全性要求高的领域,更不能作为单一的识别手段。 <br/> <br/> “个人生物信息直接采集于人体、且是个人生理特性的直接体现并唯一对应……”今年两会期间,全国人大代表、北京科学学研究中心副主任伊彤提交了一份《关于开展公民个人生物信息保护立法的建议》,伊彤认为,个人生物信息与我们平时设定的密码不同,如果密码泄露,我们可以随即换一个密码;而个人生物信息一旦泄露,就是终身泄露,会将用户个人信息安全置于更大的不确定性中,进而引发一系列风险。 <br/> <br/> 曾辽原同时指出,“一旦带有唯一性的生物特征数据被他人盗取利用,会造成个人信息安全、生命财产安全等相关问题,而且会导致大量深层信息被挖掘、曝光,给公民造成物质和精神上的极大损害。” <br/> <br/> ...... 
<br/> <br/> 在今年2月份,中国就发生了一起广受争议的隐私安全事件——一家专注安防领域的人工智能企业被曝发生大规模数据泄露事件,超过250万人的数据可被获取,有680万条数据疑似泄露,包括身份证信息、人脸识别图像及图像拍摄地点等。据悉,该企业主要研发“人脸识别技术”,与不少部门机构都有人工智能的安防合作。 <br/> <br/> “目前,个人生物信息的安全问题是不可控的。”曾辽原表示,人脸和其他生物特征数据间的一个巨大区别是,它们可以远距离起作用,这意味着我们在网上自拍或在街上走路时,都有可能不自觉交出了自己的个人生物信息。可以说,随着摄像头越来越普及,我们将真正进入“弱隐私”时代。如今,人脸、声纹、虹膜、指纹,甚至是步态都已经成为重要的个人身份信息,随着生物特征识别技术在生活中的广泛使用,极有可能成为个人隐私的泄露方式。 </blockquote> <br/> 《 <a href="https://www.solidot.org/story?sid=60808" rel="nofollow" target="_blank"> 程序员人脸识别成人视频中的女性,引发争议 @ Solidot </a> 》 <br/> <blockquote style="background-color:#DDD;"> 一位身在德国的中国程序员在新浪微博上发帖称,程序员被说成是从事色情行业的女性的接盘侠,他与其朋友因此决定把成人视频中的女性和社交网络中的女性照片进行匹配,识别其身份,帮助程序员们过滤一下避免成为接盘侠。他们花了半年时间利用 1024、91、sex8、PronHub、xvideos 等网站采集的数据对比 Facebook、instagram、TikTok、抖音、微博等社交媒体,在全球范围内成功识别了 10 多万名从事色情行业的女性(为了避免微博审查而将色情行业改为不可描述行业)。 <br/> 此举引发了热议和争议,他随后辩解称他的意图是允许女性检查她们的照片或视频是否在成人网站上,她们可以向网站发送 DMCA 删除通知要求删除照片或视频。这位叫“将记忆深埋”(或 mitboy)的用户表示将在 5 月 31 日在 gitlab 公开数据库结构。 </blockquote> <br/> 《 <a href="http://www.sohu.com/a/314145694_260616" rel="nofollow" target="_blank"> 旧金山成美国首个禁止面部识别技术城市 @ 搜狐 </a> 》 <br/> <br/> <h3> ◇Firefox 发布【Track THIS】——伪装你的浏览历史 </h3> <br/> 《 <a href="https://www.cnbeta.com/articles/tech/861759.htm" rel="nofollow" target="_blank"> Firefox 发布 Track THIS,故意向广告商提供虚假浏览历史 @ cnBeta </a> 》 <br/> <blockquote style="background-color:#DDD;"> 广告商在互联网上跟踪你的一举一动,然后它会根据你的浏览习惯向你展示针对性的广告。如何应对这种无处不在的监视资本主义?Mozilla 和 mschf 工作室提供了一种方法:把你的浏览历史打乱,创造出虚假版本提供给广告商。 <br/> <br/> 他们合作发布了 <a href="https://trackthis.link/" rel="nofollow" target="_blank"> Track THIS </a> ,根据你选定的角色——潮人、有钱人、世界末日预备者以及意见领袖。 <br/> 打开 100 个特定标签,你的浏览历史将会被去个性化,将让广告商不知道如何定位你。注意,加载一百个标签可能需要几分钟的时间。 </blockquote> <br/> <h3> ◇Firefox 67 引入“letterboxing”功能——阻止网站的 JS 获取精确的屏幕分辨率 </h3> <br/> <b> 编程随想注: </b> <br/> 这个功能刚引入,目前还没有提供配置界面。为了启用该功能,需要开启 Firefox 中名为 <code> privacy.resistFingerprinting.letterboxing </code> 的配置选项。 <br/> 如果你不懂得定制 Firefox 配置选项,请参见博文:《 <a href="https://program-think.blogspot.com/2019/07/Customize-Firefox.html"> 扫盲 Firefox 
定制——从“user.js”到“omni.ja” </a> 》 <br/> <br/> <h3> ◇又有几款浏览器步 Chrome 的后尘——不让用户关闭“点击追踪” </h3> <br/> 《 <a href="https://www.bleepingcomputer.com/news/software/major-browsers-to-prevent-disabling-of-click-tracking-privacy-risk/" rel="nofollow" target="_blank"> Major Browsers to Prevent Disabling of Click Tracking Privacy Risk @ Bleeping Computer </a> 》 <br/> <b> 编程随想注: </b> <br/> 在上一季度(2019年1季度)的《 <a href="https://program-think.blogspot.com/2019/04/Security-News.html"> 近期安全动态和点评 </a> 》中已经提到了——Chrome 移除了“点击追踪”的配置选项,使得用户【无法禁用】该特性。而这个特性是有隐私风险滴! <br/> 如今,Safari 和 Opera 跟随 Chrome 的步伐,也在这个问题上耍流氓(不让用户禁止该特性) <br/> 考虑到某些读者没有看过上一期的《近期安全动态和点评》,再次扫盲一下“点击追踪”的概念—— <br/> 所谓的“点击追踪”是 HTML5 引入的特性,术语叫做“ping 属性”。如果网站在“超链接”中加入该属性,当用户点击这个超链接的时候,链接所在网站会得到点击的通知。很多网站用“点击追踪”来统计【外链】的点击情况。 <br/> Google 作为搜索引擎,用户点击搜索结果,当然属于【外链点击】。所以,“点击追踪”这个功能对 Google 而言很有用(可以统计用户曾经点击过搜索结果中的哪些网站)。 <br/> 引申阅读: <br/> 《 <a href="https://program-think.blogspot.com/2018/09/Why-You-Should-Switch-from-Chrome-to-Firefox.html"> 弃用 Chrome 改用 Firefox 的几点理由——关于 Chrome 69 隐私丑闻的随想 </a> 》 <br/> <br/> <h3> ◇“今日头条”【无耻】地宣称:通讯录不属于个人隐私 </h3> <br/> 《 <a href="https://www.ittime.com.cn/news/news_28223.shtml" rel="nofollow" target="_blank"> 今日头条称通讯录不属个人隐私遭“打脸”,98% 网民表示反对 @ IT 时代网 </a> 》 <br/> <b> 编程随想注: </b> <br/> 国内的流氓公司,总是一次又一次地刷新“道德下限”。 <br/> 该说法出自“今日头条”背后的“北京字节跳动科技有限公司”所聘请的律师。由于“今日头条 APP”擅自收集用户通讯录,遭到用户起诉。该公司聘请的律师为了打赢官司,企图证明——通讯录不属于个人隐私,并在法庭上说了一通歪理。 <br/> 此说法引发舆论震惊。事后,“今日头条”赶紧在官网上进行澄清,以平息众怒。 <br/> <br/> <h3> ◇Google 人机验证(reCaptcha)的隐私问题 </h3> <br/> 《 <a href="https://www.solidot.org/story?sid=61159" rel="nofollow" target="_blank"> 通过 reCaptcha v3,Google 收集大量用户隐私 @ Solidot </a> 》 <br/> <blockquote style="background-color:#DDD;"> Google 去年更新了它的 reCAPTCHA 机器人程序检测技术。reCAPTCHA 是使用最广泛的反机器人技术,reCAPTCHA v1 让用户看模糊扭曲的字符,reCAPTCHA v2 让用户从模糊的图像中间挑选出街道或商店。而最新的 reCAPTCHA v3 则使用 Google 的私有技术去学习网站的正常流量和用户行为,访问者将会根据访问来源或行为分配一个风险分数。但基于风险分数的系统是需要付出一个巨大代价的:用户的隐私。 <br/> 两位研究 reCaptcha 的安全研究人员称,Google 判断你是否是恶意用户的一种方法是你的浏览器是否已经安装了 Google cookie。同样的 cookie 
允许你无需重新登录就能进入 Google 的服务。多伦多大学计算机科学博士生 Mohamed Akrout 的测试显示,相对于没有 Google 帐户的浏览器,reCaptcha v3 给了已连接 Google 帐户的浏览器更低的风险分数。如果通过 Tor 或 VPN 访问包含 reCaptcha v3 的网站,风险分数总是更高。为了让风险分数正确的工作,网站需要在每一个网页嵌入 reCaptcha v3 代码,而不仅仅是登录页面。这意味着 Google 会从你访问的每一个网页收集数据,而且没有任何视觉指示显示你正在被监视着。Google 声称它的 reCaptcha API 会向它发送软件和硬件信息,表示这些信息只是用于对抗垃圾流量和滥用。根据统计网站 Built With 的数据,目前有超过 65 万个网站已经使用了 reCaptcha v3。 </blockquote> <b> 编程随想注: </b> <br/> 经常有读者(尤其那些用 TorBrowser 的读者)抱怨说:在俺博客发表【匿名留言】很麻烦,老是一遍遍地进行“人机验证”。 <br/> 如果你理解了上面这篇报道,自然也就理解了——为啥 Google 的“人机验证”对 TorBrowser 用户【很不友好】。因为 TorBrowser 能有效地【消除】用户的身份信息。碰到这类 Web 客户端,reCaptcha v3 会给出更高的“风险评分”,从而让 Google 的服务器更加怀疑你是恶意用户。 <br/> (注:具有【恶意行为】的用户,也会采用类似的技术手段,以消除身份信息。所以当 reCaptcha 无法从你的浏览器收集到足够的身份信息,就容易怀疑你是“恶意用户”) <br/> 你可以【换一个角度】来看待这个问题——当 Google 的 reCAPTCHA 要求你一遍遍地进行人机验证,别太沮丧。这说明你的“隐私保护”还算凑合 :) <br/> <br/> <h3> ◇香港警方逮捕 Telegram 群主 </h3> <br/> 《 <a href="https://www.rfa.org/mandarin/Xinwen/d-06122019114307.html" rel="nofollow" target="_blank"> 香港社媒群主被拘捕,仅因发送“反送中”资讯 @ RFA/自由亚洲电台 </a> 》 <br/> <blockquote style="background-color:#DDD;"> 香港《立场新闻》6月11号报道,香港警方在当晚以“串谋公众防扰罪”为由拘捕一名 “反送中”游行 Telegram 消息群的管理员。 <br/> <br/> 管理员 Ivan Ip 称,自己只是参与 Telegram 消息群“公海总谷”的管理,仅仅只是发布网上整合的中学罢课名单和参与者被捕的消息,并未到现场参加“反送中”游行,却在6月11号当天晚上遭到警方上门搜查。警察要求他交出手机,然后将他手机上的 Telegram 资料导出,拿到了群组成员的名单和消息内容,并追问他关于群组创办人和管理员的身份信息,以及“其他激进分子的行动计划”。随后警方将其拘捕,带到警署进一步审问直到第二天凌晨4时,他才获保释离开。 <br/> <br/> 事发前,Telegram 消息群“公海总谷”已有2、3万名群成员,在这名管理员被捕后,其他管理员已将群组解散。 </blockquote> <b> 编程随想注: </b> <br/> 俺以【匿名身份】开博已经十多年。期间有很多热心读者建议俺开通 Telegram 或 WhatsApp,以便更好地与读者交流。每次都被俺婉言谢绝了。 <br/> 如今香港的这个案例,再次凸显出 Telegram 在【隐匿性】方面的风险——因为 Telegram 需要【绑定手机】。说得更直白一些:任何需要绑定到手机的网络工具(不管是 IM 还是邮箱),都会大大降低你的【隐匿性】。 <br/> 正是由于这方面的风险,当俺以“编程随想”这个身份进行网络活动时,【绝不】使用任何需要绑定手机的网络服务。 <br/> 引申阅读: <br/> 《 <a href="https://program-think.blogspot.com/2019/01/Security-Guide-for-Political-Activists.html"> 为啥朝廷总抓不到俺——十年反党活动的安全经验汇总 </a> 》 <br/> 《 <a href="https://program-think.blogspot.com/2010/04/howto-cover-your-tracks-0.html"> 如何隐藏你的踪迹,避免跨省追捕 </a> 》(系列) <br/> <br/> 
<br/> <h2> ★高危漏洞 </h2> <br/> <h3> ◇Windows 高危漏洞(Bluekeep)——影响范围从 WinXP 到 Win2008 </h3> <br/> 《 <a href="https://www.nsa.gov/News-Features/News-Stories/Article-View/Article/1865726/nsa-cybersecurity-advisory-patch-remote-desktop-services-on-legacy-versions-of/" rel="nofollow" target="_blank"> NSA Cybersecurity Advisory: Patch Remote Desktop Services on Legacy Versions of Windows @ NSA/美国国安局 </a> 》 <br/> 《 <a href="https://www.ithome.com/0/426/255.htm" rel="nofollow" target="_blank"> 微软警告:近百万 Windows PC 存在 Bluekeep 高危漏洞,需紧急修复 @ IT 之家 </a> 》 <br/> <blockquote style="background-color:#DDD;"> 近期报告称,安全研究人员估计,有近百万台 PC 容易受到 Bluekeep 高危漏洞影响,这是一种存在于 Windows XP、Windows 7 以及 Windows Server 2003 和 Windows Server 2008 的远程桌面攻击。 <br/> <br/> 现在微软加入了呼吁行动的行列,要求 IT 管理员紧急修补他们的设备。 <br/> 微软警告说,“如果我们回顾 WannaCry 攻击开始前的事件,它们可以告知:未及时修复此漏洞将会面临怎样的风险。” <br/> <br/> ...... </blockquote> <b> 编程随想注: </b> <br/> 该漏洞编号 <code> CVE-2019-0708 </code> ,由于此漏洞太过危险,微软【破例】向早已停止支持的 WinXP 和 Win2003 推送安全补丁。 <br/> <br/> <h3> ◇IE 高危漏洞 </h3> <br/> 《 <a href="https://www.zdnet.com/article/internet-explorer-zero-day-lets-hackers-steal-files-from-windows-pcs/" rel="nofollow" target="_blank"> Internet Explorer zero-day lets hackers steal files from Windows PCs @ ZDNet </a> 》 <br/> <br/> <b> 编程随想注: </b> <br/> 俺曾经写过一篇博文:《 <a href="https://program-think.blogspot.com/2017/04/Security-Vulnerabilities-in-Windows.html"> 吐槽一下 Windows 的安全漏洞——严重性超乎想象 </a> 》,文中专门针对 IE 写了三个章节(如下)。看完这三个章节,你或许能明白——为啥俺一直在警告“IE 的高危漏洞”。 <br/> <blockquote style="background-color:#DDD;"> ★为啥 IE 浏览器的漏洞特别危险? 
<br/> ★(在 Windows 上)改用其它浏览器上网,并【不能】消除 IE 的危险性 <br/> ★更要命的是——IE 浏览器的安全漏洞还特别多 </blockquote> 这次曝光的 IE 高危漏洞与 <b> mht </b> 这种格式的处理有关。在绝大部分 Windows 中,IE 都是打开 mht 格式的默认软件(通常也是唯一的软件)。因此,就算你平常【不用】IE 上网,但只要你不小心打开了某个【恶意的】mht 文件,你的 Windows 就会中招。 <br/> 对于攻击者而言,他们可以有很多种方式来诱骗你打开某个恶意的 mht 文件。 <br/> <br/> <h3> ◇Firefox 高危漏洞 </h3> <br/> 《 <a href="https://www.mozilla.org/en-US/security/advisories/mfsa2019-18/" rel="nofollow" target="_blank"> Security vulnerabilities fixed in Firefox 67.0.3 and Firefox ESR 60.7.1 @ Mozilla </a> 》 <br/> <br/> <b> 编程随想注: </b> <br/> 这是 Firefox 在两年半之后再次出现高危的 0day 漏洞,编号: <code> CVE-2019-11707 </code> 。攻击者可以利用它实现【远程代码执行】。 <br/> Mozilla 紧急发布了 Firefox 67.0.3 和 Firefox ESR 60.7.1 版本,进行修复。 <br/> 这个漏洞在被公布之前,或许已经在地下黑市流传了。攻击者利用该漏洞攻击【高价值目标】(比如“比特币交易所的员工”)。 <br/> 以【数字货币】为目标的攻击,正在成为入侵的新趋势——攻击这类目标可以获得高额经济利益。 <br/> 比如:近期微软的邮件服务遭遇入侵( <a href="https://www.solidot.org/story?sid=60449" rel="nofollow" target="_blank"> 相关报道 </a> ),攻击者的目标也是为了盗取【数字货币】。 <br/> <br/> <h3> ◇WhatsApp 高危漏洞,可用于植入手机木马 </h3> <br/> 《 <a href="https://www.ft.com/content/4da1117e-756c-11e9-be7d-6d846537acab" rel="nofollow" target="_blank"> WhatsApp voice calls used to inject Israeli spyware on phones @ FT/金融时报 </a> 》 <br/> <br/> <b> 编程随想注: </b> <br/> 这是个“缓冲区溢出漏洞”,编号为 <code> CVE-2019-3568 </code> 。该漏洞位于 WhatsApp 的程序中,适用于 Android 和 iOS。 <br/> 攻击者向目标手机(受害者手机)发起某个特定的 WhatsApp 呼叫,即使对方没有接听也会中招。中招之后,“呼叫记录”【不会】显示在历史记录中。 <br/> 俺已经唠叨了无数次——如果你很在意安全性,就【不要】在手机上进行“重要的操作”或者“敏感的操作”。 <br/> <br/> <h3> ◇Vim(及其衍生品)曝出高危漏洞 </h3> <br/> 《 <a href="https://www.solidot.org/story?sid=60976" rel="nofollow" target="_blank"> Vim 和 NeoVim 曝出高危漏洞 @ Solidot </a> 》 <br/> <blockquote style="background-color:#DDD;"> Vim 和 NeoVim 曝出了一个允许任意代码执行的高危漏洞。漏洞编号 CVE-2019-12735,Vim 8.1.1365 和 Neovim 0.3.6 之前的版本都受到影响。 <br/> 漏洞位于编辑器的 modelines 功能中,该功能允许用户指定窗口大小和其它定制选项,modelines 限制了沙盒内可用的指令,但安全研究员 Armin Razmjou <a href="https://github.com/numirias/security/blob/master/doc/2019-06-04_ace-vim-neovim.md" rel="nofollow" target="_blank"> 发现 </a> source! 
指令会绕过这一保护。因此如果用户打开一个恶意文本文件,攻击者可以控制计算机。 <br/> 漏洞利用需要编辑器启用 modelines,部分 Linux 发行版默认启用了该功能,而苹果的 macOS 没有默认启用。 </blockquote> <b> 编程随想注: </b> <br/> 漏洞发现者制作了一个 GIF 动画(链接在“ <a href="https://camo.githubusercontent.com/12bccf0112d4c05f2a26ac528f92ae4fe50575fd/68747470733a2f2f692e696d6775722e636f6d2f387734747465582e676966" rel="nofollow" target="_blank"> 这里 </a> ”),示范了攻击者利用该漏洞的效果。 <br/> 对于该漏洞,攻击者可以制作某个特殊的 txt 文本文件。这个文件用 cat 命令查看,内容挺正常;但用(有漏洞的)vim 打开这个文本文件,就会中招。 <br/> 顺便吐槽一下: <br/> Vim 的【modeline】特性真的是一个非常 ugly 的玩意儿! <br/> 为了实现所谓的“文件级的个性化定制”,把【Vim 专有的】控制指令写到文本文件中。这种做法本身就很怪异(让文本文件的内容去依赖某种特定编辑器),而且还增加了【攻击面】。 <br/> <br/> <br/> <h2> ★网络与 Web </h2> <br/> <h3> ◇大名鼎鼎的 NoScript 扩展已经移植到 Chrome/Chromium </h3> <br/> 《 <a href="https://www.zdnet.com/article/noscript-extension-officially-released-for-google-chrome/" rel="nofollow" target="_blank"> NoScript extension officially released for Google Chrome @ ZDNet </a> 》 <br/> <br/> <b> 编程随想注: </b> <br/> 这个扩展的名气很大,维基百科的介绍在“ <a href="https://en.wikipedia.org/wiki/NoScript" rel="nofollow" target="_blank"> 这里 </a> ”。同样大名鼎鼎的 Tor Browser 就内置了它。 <br/> 考虑到“Web 攻击”有很大比例与 JS 相关。俺强烈建议使用 NoScript(或类似的扩展)。这类扩展会提供【白名单模式】——除了少数允许的网站(白名单网站),所有其它网站默认都禁止 JS。 <br/> NoScript 的“Chrome 版”在功能方面大致类似于“Firefox 版”。但【少了】“XSS filter”的功能——因为这个功能用到了某个 Firefox 的 API,而这个 API 在 Chrome 上没有 :( 以下是 NoScript 作者的原话: <br/> <blockquote style="background-color:#DDD;font-family:Courier,monospace;"> Talking about differences across supported browsers, the code base is now is exactly the same. But on Chromium, I had to disable, at least for the time being, NoScript's XSS filter. <br/> Chromium users will have to rely on the browser's built-in 'XSS Auditor,' which over time proved not to be as effective as NoScript's 'Injection Checker'. <br/> But the latter could not be ported in a sane way yet, because it requires asynchronous processing of web requests: a feature provided by Firefox only. 
</blockquote> 另外, <br/> NoScript 的 10.6.x 版本已经可以兼容 Chrome/Chromium,但还只能达到 beta 品质。作者认为:等到 11.0 版本时,在兼容 Chrome/Chromium 方面就 OK 了。(截止俺写本文时,NoScript 最新版本已经升到 11.0) <br/> <br/> <h3> ◇Wi-Fi 协议的 WPA3 标准发现漏洞 </h3> <br/> 《 <a href="https://mobile.slashdot.org/story/19/04/11/1431219/" rel="nofollow" target="_blank"> Dragonblood Vulnerabilities Disclosed in Wi-Fi WPA3 Standard @ Slashdot </a> 》 <br/> 《 <a href="https://zhuanlan.zhihu.com/p/62188862" rel="nofollow" target="_blank"> 受新 Dragonblood 漏洞影响的 WPA3 Wi-Fi标准 @ 知乎专栏 </a> 》 <br/> <br/> <b> 编程随想注: </b> <br/> WPA 是洋文“Wi-Fi Protected Access”的缩写。WPA3 发布于去年(2018),以替代有设计缺陷的 WPA2。但是 WPA3 才发布一年,其协议设计就被找到严重的漏洞。为啥捏?因为无线网络的协议,设计很复杂(相比【有线】网络的协议而言)。 <br/> 俺又要重新唠叨一下【无线网络】的风险(更大的攻击面)。 <br/> 无线网络的“攻击面”来自很多维度,协议的“复杂性”只是其中之一;另一个维度是【物理空间】。无线网络的本质决定了——在无线信号覆盖范围内的任何人都可以对收集到的无线信号进行分析。如果网络协议本身不严密(有漏洞),那么攻击者就能【更容易地】加以利用。 <br/> 在下面这篇博文中,俺还提到了: <q style="background-color:#DDD;"> 那些安全防范等级较高的公司或机构,其【核心网络】肯定是物理布线,而不会走 wifi 之类的无线网络。 </q> <br/> 《 <a href="https://program-think.blogspot.com/2019/01/Security-Guide-for-Political-Activists.html"> 为啥朝廷总抓不到俺——十年反党活动的安全经验汇总 </a> 》 <br/> <br/> <h3> ◇SACK Panic——针对 TCP 协议的拒绝服务攻击 </h3> <br/> 《 <a href="https://linux.slashdot.org/story/19/06/17/2018227/" rel="nofollow" target="_blank"> Linux PCs, Servers, Gadgets Can Be Crashed by 'Ping of Death' Network Packets @ Slashdot </a> 》 <br/> <br/> <b> 编程随想注: </b> <br/> 漏洞编号 <code> CVE-2019-11477 </code> ,影响 Linux Kernel 2.6.29 之后的所有版本。 <br/> 这个漏洞最多只引发“DoS”(拒绝服务攻击),风险还不算高(相比那些能【执行代码】的漏洞而言)。服务器的管理员会比较关注这个漏洞。 <br/> <br/> 俺提到这个漏洞是为了——顺便介绍一下 Linux 的【sysctl】。 <br/> 在这个漏洞刚刚曝光时,补丁尚未发布,系统管理员可以利用 sysctl 禁用 TCP SACK。 <br/> sysctl 是 Linux 内核提供的一种机制,可以动态修改内核的参数。 <br/> 与“重编译内核”相比,sysctl 的灵活性比较差(可定制的选项不太多),但 sysctl 用起来比较简单(傻瓜化)。 <br/> “加固 Linux 系统”有很多种手法,其中一种是:用 sysctl 把一些没用要到的功能模块禁掉,以【降低攻击面】。举个栗子:如果你的 Linux 完全没用到 IPv6,可以通过 sysctl 把 IPv6 禁掉。 <br/> sysctl 除了可以用来“禁用功能模块”,还可以用于“参数调优”,以提升安全性和性能。 <br/> 更多的相关介绍请参考:《 <a href="https://linux-audit.com/linux-hardening-with-sysctl/" 
rel="nofollow" target="_blank"> Linux hardening with sysctl settings @ Linux Audit </a> 》 <br/> <br/> <br/> <h2> ★移动设备 </h2> <br/> <h3> ◇Tor Browser for Android【正式】发布 </h3> <br/> 《 <a href="https://www.solidot.org/story?sid=60704" rel="nofollow" target="_blank"> Tor Browser for Android 发布首个稳定版本 @ Solidot </a> 》 <br/> <blockquote style="background-color:#DDD;"> Tor 项目在 Google Play 应用商店 <a href="https://blog.torproject.org/new-release-tor-browser-85" rel="nofollow" target="_blank"> 发布了 </a> Tor Browser for Android 的首个稳定版本。 <br/> 去年 9 月发布的 alpha 版本需要先安装代理应用 Orbot 将 Tor Browser for Android 与 Tor 网络连接起来,稳定版本不再需要 Orbot。 <br/> Tor Browser for Android 是基于 Firefox 60.7.0esr,开发者表示由于苹果的限制它无法发布 Tor Browser to iOS。 </blockquote> <b> 编程随想注: </b> <br/> 引申阅读: <br/> 《 <a href="https://program-think.blogspot.com/2018/04/gfw-tor-browser-7.5-meek.html"> “如何翻墙”系列:扫盲 Tor Browser 7.5——关于 meek 插件的配置、优化、原理 </a> 》 <br/> <br/> <h3> ◇可破解任意型号 iPhone 的取证软件 </h3> <br/> 《 <a href="https://www.wired.com/story/cellebrite-ufed-ios-12-iphone-hack-android/" rel="nofollow" target="_blank"> Cellebrite Now Says It Can Unlock Any iPhone for Cops @ Wired </a> 》 <br/> <br/> <b> 编程随想注: </b> <br/> 一家专门开发【取证软件】的以色列公司(Cellebrite)宣称:它刚发布的取证软件 UFED(Universal Forensic Extraction Device)可以破解任意型号的苹果手机(包括刚刚发布的 iOS 12.3)。 <br/> 可能某些读者会怀疑:这个 Cellebrite 公司有没有吹牛? 
<br/> 在上述报道中还提到了“Cellebrite 的竞争对手”——来自美国的 GreyKey 公司。GreyKey 开发的取证软件已经能破解 iOS 12 的某些版本。 <br/> 贴这个新闻就是为了说明—— <br/> 【移动设备】的风险,以及警方【取证软件】对移动设备的威胁。 <br/> (具体的讨论,本文前面的章节刚刚聊过,就不重复了) <br/> <br/> <h3> ◇手机 App 可以监控你的【步态】 </h3> <br/> 《 <a href="https://mobile.slashdot.org/story/19/05/22/2125252/" rel="nofollow" target="_blank"> Phones Can Now Tell Who Is Carrying Them From Their Users' Gaits @ Slashdot </a> 》 <br/> <br/> <b> 编程随想注: </b> <br/> 由于这篇是洋文,通俗解释一下。 <br/> 每个人的【步态】都具有其独特性(唯一性),就好像每个人都有独特的指纹。 <br/> 由于手机经常会被随身携带,而且智能手机通常都内置了陀螺仪。那么,如果某个 App 具备了相关的权限,就可以观测手机主人的【步态】。并以此来作为某种【唯一标识】。这其中当然就包含了“隐私风险”。 <br/> 更危险的是——以【步态】作为“身份标识”具有【跨设备】的效果。 <br/> 设想一下:如果你有两个手机,都装了某个 App,并且这个 App 具备上述风险。哪怕你【从不】在这两个手机的 App 上进行【用户登录】,哪怕你【从不】同时携带这两个手机(每次只携带其中一个)。但这两个手机上的 App 如果收集了足够多的【步态信息】并汇总到 App 的服务器,服务器上的软件根据这些数据就可以判断出——这两个物理设备属于【同一个自然人】。 <br/> 手机的问题在于——它包含了很丰富的【探测手段】(比如:摄像头、麦克风、陀螺仪、GPS......)。这玩意儿简直就是现代版的“电幕”(看过《 <a href="https://docs.google.com/document/d/144NKDAcg-ip8rwhRtE9fdPan8ZSxqNaEh1A-sYYa7nk/" target="_blank"> 1984 </a> 》这部小说的同学应该知道俺在说啥) <br/> <br/> <h3> ◇针对“Android 供应链”的攻击 </h3> <br/> 《 <a href="https://www.solidot.org/story?sid=61139" rel="nofollow" target="_blank"> Android 供应链攻击 @ Solidot </a> 》 <br/> <blockquote style="background-color:#DDD;"> Google 本月初 <a href="https://security.googleblog.com/2019/06/pha-family-highlights-triada.html" rel="nofollow" target="_blank"> 披露 </a> 了一起 Android 供应链攻击,称一家供应商在数百万台设备上预装了 Triada 恶意程序去展示广告。那么 Triada 是谁开发的呢?Google 称供应商使用了野火(Yehuo 或 Blazefire)这个名字。KrebsonSecurity 对这个名字以及相关域名,域名注册邮箱进行了一番跟踪, <a href="https://krebsonsecurity.com/2019/06/tracing-the-supply-chain-attack-on-android-2/" rel="nofollow" target="_blank"> 认为 </a> Triada 与上海野火网络科技有限公司有关,该公司的 CEO 叫楚达。公司域名 blazefire.com 的注册邮箱是 tosaka1027@gmail.com,同一邮箱被用于注册了至少 24 个域名,至少 7 个域名被用于传播 Android 恶意程序,其中两个域名被用于传播 Triada, <b> 另外五个被用于传播 Hummer 木马 </b> 。Brian Krebs 称 Google 拒绝置评,而野火网络则没有回应。 </blockquote> <b> 编程随想注: </b> <br/> 请注意俺标注了粗体的那句。上海的这家公司,不仅利用“供应链攻击”植入广告,还传播【木马】。 <br/> 
最近一年,“中美对抗”急剧升温。其中一个焦点就是【信息安全】。如今出了这么一个案例,典型的“授人以柄”。 <br/> <br/> <br/> <h2> ★安全工具 </h2> <br/> <h3> ◇Matrix 协议发布 1.0 版本 </h3> <br/> 《 <a href="https://matrix.org/blog/2019/06/11/introducing-matrix-1-0-and-the-matrix-org-foundation" rel="nofollow" target="_blank"> Introducing Matrix 1.0 and the Matrix.org Foundation @ Matrix 官网 </a> 》 <br/> <br/> <b> 编程随想注: </b> <br/> 前面提到:香港“反送中抗议活动”期间,某个 Telegram 群主被警方抓了。那段时间也有读者在博客评论区交流:如何更隐匿地使用 IM 工具? <br/> 6月份正好赶上“Matrix 1.0 版本发布”,今天顺便介绍一下: <br/> 利用“Matrix 协议”可以帮你实现一个更自由的、不受政府和商业公司控制的“IM 生态环境”。Matrix 是【去中心化】的,但它又不同于“P2P 模式”,它实际上属于【联邦式】(federation)。 <br/> 在这种模式中,还是有 Server 与 Client。但不同于传统的【中心式】,任何人都可以创建 Server。“Matrix 协议”负责在“Server 与 Server 之间”、“Client 与 Server 之间”同步数据,从而让不同 Server 的帐号也能相互沟通。 <br/> 擅长看洋文的,可直接看“英文维基百科”( <a href="https://en.wikipedia.org/wiki/Matrix_(protocol)" rel="nofollow" target="_blank"> 这里 </a> );不擅长看洋文的,贴一篇中文的扫盲教程( <a href="https://vimacs.wehack.space/matrix-guide/" rel="nofollow" target="_blank"> 这里 </a> )。 <br/> 引申阅读: <br/> 《 <a href="https://program-think.blogspot.com/2015/08/Technology-and-Freedom.html"> “对抗专制、捍卫自由”的 N 种技术力量 </a> 》 <br/> <br/> 另外,在4月份还有这样一个新闻:《 <a href="https://www.solidot.org/story?sid=60328" rel="nofollow" target="_blank"> 法国政府发布它开发的端对端加密消息应用 @ Solidot </a> 》。 <br/> 法国政府基于【Matrix 协议】开发了一个 IM 工具(Tchap),用来替代“Telegram 和 WhatsApp”。并且这个 Tchap 最终会被用于法国所有政府部门。 <br/> <br/> <br/> <h2> ★硬件与物理安全 </h2> <br/> <h3> ◇针对高通芯片的攻击(可盗取密钥) </h3> <br/> 《 <a href="https://www.zdnet.com/article/security-flaw-lets-attackers-recover-private-keys-from-qualcomm-chips/" rel="nofollow" target="_blank"> Security flaw lets attackers recover private keys from Qualcomm chips @ ZDNet </a> 》 <br/> <br/> <b> 编程随想注: </b> <br/> 漏洞编号 <code> CVE-2018-11976 </code> 。这个漏洞是去年就发现的(注意看编号),据说高通到今年4月才修复此问题。 <br/> 高通芯片内有个硬件级的“安全执行环境”(洋文叫“Qualcomm Secure Execution Environment”,简称“QSEE”),专门用于进行敏感的密码学运算。 <br/> NCC Group 的安全研究人员通过“旁路攻击”(也称“边信道攻击”)的手法,可以从 QSEE 中逐步地恢复出 ECDSA(Elliptic Curve Digital Signature Algorithm)的密钥。漏洞发现者(NCC 
Group)在今年4月发了一篇 whitepaper(链接在“ <a href="https://www.nccgroup.trust/globalassets/our-research/us/whitepapers/2019/hardwarebackedhesit.pdf" rel="nofollow" target="_blank"> 这里 </a> ”) <br/> 考虑到高通芯片在移动设备中【极高的使用率】。这个漏洞的打击面非常大。 <br/> <br/> <h3> ◇RAMBleed——新的 Row Hammer 攻击手法 </h3> <br/> 《 <a href="https://www.solidot.org/story?sid=60955" rel="nofollow" target="_blank"> RAMBleed Rowhammer 攻击能窃取数据 @ Solidot </a> 》 <br/> <blockquote style="background-color:#DDD;"> 一个国际研究团队发表 <a href="https://rambleed.com/docs/20190603-rambleed-web.pdf" rel="nofollow" target="_blank"> 论文 </a> ,描述了 Rowhammer 比特翻转攻击的新变种,称能用于窃取内存中的数据。 <br/> Rowhammer 攻击利用了 DRAM 临近内存单元之间电子的互相影响,当重复访问特定内存位置数百万次后,攻击者可以让该位置的值从 0 变成 1,或从 1 变成 0。新的攻击被称为 RAMBleed,通过观察 Rowhammer 诱导的比特翻转,攻击者能推断出附近 DRAM 行中的值。 <br/> 在论文中,研究人员演示了对 OpenSSH 7.9 的攻击,他们利用 RAMBleed 攻击获取了 2048 比特的 RSA 密钥。研究人员称 RAMBleed 潜在能读取储存在内存中的任何数据,表示 ECC(Error Correcting Code)内存并不能防止 RAMBleed 攻击。 </blockquote> <b> 编程随想注: </b> <br/> 在上一季度(2019年1季度)的《 <a href="https://program-think.blogspot.com/2019/04/Security-News.html"> 近期安全动态和点评 </a> 》中已经简单扫盲了【Row Hammer 攻击】。当时提到了“针对 Intel 处理器的新型 Spoiler 攻击”,并提到:这种“新型 Spoiler 攻击”能大大加快“Row Hammer 攻击”的效率。 <br/> 俺估计:未来应该会有更多“Row Hammer 攻击手法”被研究出来。 <br/> <br/> <h3> ◇新的攻击手法绕过 UEFI 验证,可安装底层后门 </h3> <br/> 《 <a href="https://www.secrss.com/articles/10631" rel="nofollow" target="_blank"> 绕过验证安装底层后门——英特尔固件启动验证绕过新方法 @ 安全内参 </a> 》 <br/> <blockquote style="background-color:#DDD;"> 本周在荷兰阿姆斯特丹举行的 Hack in the Box 大会上,研究员 Peter Bosch 和 Trammell Hudson 演示了针对英特尔统一可扩展固件接口(UEFI)Boot Guard 功能的“检查时间与使用时间”(TOCTOU)漏洞攻击。 <br/> 注: <br/> TOCTOU:“time of check and the time of use”,代码先检查某个前置条件(例如认证),然后基于这个前置条件进行某项操作,但是在检查和操作的时间间隔内条件却可能被改变,如果代码的操作与安全相关,就很可能产生漏洞。 <br/> Boot Guard 是英特尔第4代 Core 微架构(Haswell)中引入的一种技术,旨在提供底层固件(UEFI)防护保障,使其免于被恶意篡改。 <br/> <br/> ...... 
<br/> <br/> 虽然该攻击要求打开笔记本电脑以往芯片上夹上连接器,但有多种方法可以让攻击成为永久性的,比如直接将 SPI 芯片换成模拟 UEFI 的同时注入恶意代码的流氓 SPI。该攻击的芯片替换版就类似于驻留硬件版的 bootkit,可用于从系统中盗取磁盘加密口令和其他敏感信息,且若不开箱仔细检查主板是非常难以检测出来的。 <br/> <br/> 尽管此类物理攻击针对性很强,永远不会成为广泛传播的威胁,但仍对能接触到有价值信息的公司企业和用户形成严重的风险。 <br/> <br/> 此类物理侵害发生形式多样,比如邪恶女佣类场景——公司 CEO 之类高价值目标海外旅游去了,笔记本电脑就这么毫不设防地留在了酒店房间里,攻击者买通或自己假扮成服务生就能进去换掉 SPI 芯片。 <br/> <br/> ...... </blockquote> <b> 编程随想注: </b> <br/> “邪恶女佣攻击”,洋文叫做“evil maid attack”,相关介绍参见维基百科的 <a href="https://en.wikipedia.org/wiki/Evil_maid_attack" rel="nofollow" target="_blank"> 这个页面 </a> 。 <br/> 借这个案例再次提醒大伙儿(尤其是“高价值目标”),【物理安全】也很重要哦! <br/> 依靠硬件漏洞实现的【bootkit】比 rootkit 更牛逼——因为其层次比操作系统【更低】。比如说,当你输入“全盘加密的解锁密码”之时,操作系统【并未】真正启动。所以操作系统内部的恶意软件【无法】截获你的“全盘加密密码”,但 bootkit 可以。 <br/> 另外, <br/> 在上述文章中也提到了【供应链攻击】。如果笔记本电脑的在另一个国家组装,可以在组装过程中【偷掉 SPI 芯片】。“中美对抗”在近期逐渐升温,美方的安全人员一直在警告【供应链攻击】。 <br/> <br/> <br/> <h2> ★密码学相关 </h2> <br/> <h3> ◇是时候抛弃 SHA1 散列算法了 </h3> <br/> 《 <a href="https://www.solidot.org/story?sid=60610" rel="nofollow" target="_blank"> SHA-1 碰撞攻击正变得切实可行 @ Solidot </a> 》 <br/> <blockquote style="background-color:#DDD;"> Google 在 2017 年宣布了对 SHA-1 哈希算法的首个成功碰撞攻击。所谓碰撞攻击是指两个不同的信息产生了相同的哈希值。在 Google 的研究中,攻击所需的计算量十分惊人,用 Google 说法,它用了 6,500 年的 CPU 计算时间去完成了碰撞的第一阶段,然后用了 110 年的 GPU 计算时间完成第二阶段。 <br/> 现在,SHA-1 碰撞攻击 <a href="https://www.zdnet.com/article/sha-1-collision-attacks-are-now-actually-practical-and-a-looming-danger/" rel="nofollow" target="_blank"> 正变得切实可行 </a> 。上周一组来自新加坡和法国的研究人员演示了首个构造前缀碰撞攻击( <a href="https://eprint.iacr.org/2019/459.pdf" rel="nofollow" target="_blank"> PDF </a> ),即攻击者可以自由选择两个碰撞信息的前缀。构造前缀碰撞攻击所需的计算费用不到 10 万美元,意味着伪造 SHA-1 签名文件将变得可能,这些文档可能是商业文件也可能是 TLS 证书。现在是时候完全停止使用 SHA-1 了。 </blockquote> <b> 编程随想注: </b> <br/> 不懂“散列算法”的同学可以看下面这篇扫盲教程。 <br/> 《 <a href="https://program-think.blogspot.com/2013/02/file-integrity-check.html"> 扫盲文件完整性校验——关于散列值和数字签名 </a> 》 <br/> 在 SHA1 报废之后,可用来替代它的是 SHA256。俺估计 SHA256 在 5~10 年的跨度内应该没问题。至于更久远的未来,SHA256 也会被攻破(找到快速碰撞的算法)。 <br/> <br/> <br/> <b> 俺博客上,和本文相关的帖子(需翻墙) </b> : <br/> 《 <a 
href="https://program-think.blogspot.com/2019/01/Security-Guide-for-Political-Activists.html"> 为啥朝廷总抓不到俺——十年反党活动的安全经验汇总 </a> 》 <br/> 《 <a href="https://program-think.blogspot.com/2013/06/privacy-protection-0.html"> 如何保护隐私 </a> 》(系列) <br/> 《 <a href="https://program-think.blogspot.com/2010/06/howto-prevent-hacker-attack-0.html"> 如何防止黑客入侵 </a> 》(系列) <br/> 《 <a href="https://program-think.blogspot.com/2010/04/howto-cover-your-tracks-0.html"> 如何隐藏你的踪迹,避免跨省追捕 </a> 》(系列) <br/> 《 <a href="https://program-think.blogspot.com/2015/08/Technology-and-Freedom.html"> “对抗专制、捍卫自由”的 N 种技术力量 </a> 》 <br/> 《 <a href="https://program-think.blogspot.com/2017/04/Security-Vulnerabilities-in-Windows.html"> 吐槽一下 Windows 的安全漏洞——严重性超乎想象 </a> 》 <br/> 《 <a href="https://program-think.blogspot.com/2017/03/Why-Linux-Is-More-Secure-Than-Windows-and-macOS.html"> 为什么桌面系统装 Linux 可以做到更好的安全性(相比 Windows &amp; macOS 而言) </a> 》 <br/> 《 <a href="https://program-think.blogspot.com/2019/02/Use-Disk-Encryption-Anti-Computer-Forensics.html"> 如何用“磁盘加密”对抗警方的【取证软件】和【刑讯逼供】,兼谈数据删除技巧 </a> 》 <br/> 《 <a href="https://program-think.blogspot.com/2011/05/recommend-truecrypt.html#index"> TrueCrypt 使用经验 </a> 》(系列) <br/> 《 <a href="https://program-think.blogspot.com/2015/10/VeraCrypt.html"> 扫盲 VeraCrypt——跨平台的 TrueCrypt 替代品 </a> 》 <br/> 《 <a href="https://program-think.blogspot.com/2015/10/dm-crypt-cryptsetup.html"> 扫盲 dm-crypt——多功能 Linux 磁盘加密工具(兼容 TrueCrypt &amp; VeraCrypt) </a> 》 <br/> 《 <a href="https://program-think.blogspot.com/2011/05/file-encryption-overview.html"> 文件加密的扫盲介绍 </a> 》 <br/> 《 <a href="https://program-think.blogspot.com/2013/02/file-integrity-check.html"> 扫盲文件完整性校验——关于散列值和数字签名 </a> 》 <br/> 《 <a href="https://program-think.blogspot.com/2018/09/Why-You-Should-Switch-from-Chrome-to-Firefox.html"> 弃用 Chrome 改用 Firefox 的几点理由——关于 Chrome 69 隐私丑闻的随想 </a> 》 <br/> 《 <a href="https://program-think.blogspot.com/2018/04/gfw-tor-browser-7.5-meek.html"> “如何翻墙”系列:扫盲 Tor Browser 7.5——关于 meek 插件的配置、优化、原理 </a> 》 <br/> 《 <a 
href="https://program-think.blogspot.com/2019/07/Customize-Firefox.html"> 扫盲 Firefox 定制——从“user.js”到“omni.ja” </a> 》 <div class="post-copyright"> <b> 版权声明 </b> <br/> 本博客所有的原创文章,作者皆保留版权。转载必须包含本声明,保持本文完整,并以超链接形式注明作者 <a href="mailto:program.think@gmail.com"> 编程随想 </a> 和本文原始地址: <br/> <a href="https://program-think.blogspot.com/2019/07/Security-News.html" id="OriginalPostUrl"> https://program-think.blogspot.com/2019/07/Security-News.html </a> </div> <div style="clear: both;"> </div> </div> <div class="post-footer" style="margin-bottom:50px;"> <div class="post-footer-line post-footer-line-1" style="display:none;"> <span class="post-author vcard"> </span> <span class="reaction-buttons"> </span> <span class="star-ratings"> </span> <span class="post-icons"> </span> <span class="post-backlinks post-comment-link"> </span> </div> <div class="post-footer-line post-footer-line-2 post-toolbar"> </div> <div class="post-footer-line post-footer-line-3"> <span class="post-location"> </span> </div> </div> </div>
27.724626
468
0.677509
yue_Hant
0.634649
6fdbd87cd2fca4b138499b0c1443cea29a7acac5
35
md
Markdown
README.md
saharasuhartini/Project.sb3
134aa6fdd42b399039615b1e289026a9061df4a8
[ "BSD-3-Clause" ]
1
2019-10-22T09:54:03.000Z
2019-10-22T09:54:03.000Z
README.md
saharasuhartini/Project.sb3
134aa6fdd42b399039615b1e289026a9061df4a8
[ "BSD-3-Clause" ]
null
null
null
README.md
saharasuhartini/Project.sb3
134aa6fdd42b399039615b1e289026a9061df4a8
[ "BSD-3-Clause" ]
null
null
null
# Project.sb3 find or create branch
11.666667
20
0.771429
eng_Latn
0.999198
6fdcbad3c6f10a3d9a1a5d93e97319bf9921512a
7,496
md
Markdown
docs/framework/data/adonet/sql/linq/troubleshooting.md
turibbio/docs.it-it
2212390575baa937d6ecea44d8a02e045bd9427c
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/framework/data/adonet/sql/linq/troubleshooting.md
turibbio/docs.it-it
2212390575baa937d6ecea44d8a02e045bd9427c
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/framework/data/adonet/sql/linq/troubleshooting.md
turibbio/docs.it-it
2212390575baa937d6ecea44d8a02e045bd9427c
[ "CC-BY-4.0", "MIT" ]
null
null
null
--- title: Risoluzione dei problemi ms.date: 03/30/2017 ms.assetid: 8cd4401c-b12c-4116-a421-f3dcffa65670 ms.openlocfilehash: 0eede70b67cbaef4805fc7fc5f07fc51e342ea3f ms.sourcegitcommit: d2e1dfa7ef2d4e9ffae3d431cf6a4ffd9c8d378f ms.translationtype: MT ms.contentlocale: it-IT ms.lasthandoff: 09/07/2019 ms.locfileid: "70780975" --- # <a name="troubleshooting"></a>Risoluzione dei problemi Nelle informazioni seguenti vengono illustrati alcuni problemi che è possibile incontrare nelle applicazioni [!INCLUDE[vbtecdlinq](../../../../../../includes/vbtecdlinq-md.md)] e vengono forniti suggerimenti per evitare o altrimenti ridurre l'effetto di questi problemi. Ulteriori problemi vengono risolti nelle [domande frequenti](frequently-asked-questions.md). ## <a name="unsupported-standard-query-operators"></a>Operatori di query standard non supportati [!INCLUDE[vbtecdlinq](../../../../../../includes/vbtecdlinq-md.md)] non supporta tutti i metodi degli operatori di query standard, ad esempio <xref:System.Linq.Enumerable.ElementAt%2A>. Di conseguenza, durante la compilazione dei progetti possono comunque verificarsi errori di runtime. Per ulteriori informazioni, vedere la pagina relativa alla [conversione dell'operatore query standard](standard-query-operator-translation.md). ## <a name="memory-issues"></a>Problemi di memoria Se una query include una raccolta in memoria e [!INCLUDE[vbtecdlinq](../../../../../../includes/vbtecdlinq-md.md)] <xref:System.Data.Linq.Table%601>, la query potrebbe essere eseguita in memoria, a seconda dell'ordine in cui vengono specificate le due raccolte. Se la query deve essere eseguita in memoria, sarà necessario recuperare i dati dalla tabella di database. Questo approccio non è quindi consigliato poiché può comportare un utilizzo significativo della memoria e del processore. Tentare di evitare tali query multidominio. 
## <a name="file-names-and-sqlmetal"></a>Nomi file e SQLMetal Per specificare un nome file di input, aggiungere il nome nella riga di comando come file di input. Non è possibile includere il nome file nella stringa di connessione mediante l'opzione **/conn**. Per altre informazioni, vedere [SqlMetal.exe (strumento per la generazione del codice)](../../../../tools/sqlmetal-exe-code-generation-tool.md). ## <a name="class-library-projects"></a>Progetti di librerie di classi L'Object Relational Designer crea una stringa di connessione nel file `app.config` del progetto. Nei progetti di librerie di classi il file `app.config` non viene usato. [!INCLUDE[vbtecdlinq](../../../../../../includes/vbtecdlinq-md.md)] usa la stringa di connessione fornita nei file della fase di progettazione. La modifica del valore in `app.config` non comporta la modifica del database al quale si connette l'applicazione. ## <a name="cascade-delete"></a>Eliminazione a catena [!INCLUDE[vbtecdlinq](../../../../../../includes/vbtecdlinq-md.md)] non supporta o non riconosce operazioni di eliminazione a catena. Se si desidera eliminare una riga in una tabella contenente vincoli, è necessario effettuare una delle operazioni seguenti: - Impostare la regola `ON DELETE CASCADE` nel vincolo di chiave esterna del database. - Usare il codice personalizzato per eliminare prima gli oggetti figlio che impediscono l'eliminazione dell'oggetto padre. In caso contrario, viene generata un'eccezione <xref:System.Data.SqlClient.SqlException>. Per altre informazioni, vedere [Procedura: eliminare le righe dal database](how-to-delete-rows-from-the-database.md). ## <a name="expression-not-queryable"></a>Espressione che non può essere sottoposta a query Se viene visualizzato l'errore "L'espressione [expression] non può essere sottoposta a query; manca un riferimento a un assembly?", verificare quanto segue: - L'applicazione è destinata a .NET Compact Framework 3.5. 
- È presente un riferimento a `System.Core.dll` e `System.Data.Linq.dll`. - È presente una direttiva `Imports` (Visual Basic) o `using` (C#) per <xref:System.Linq> e <xref:System.Data.Linq>. ## <a name="duplicatekeyexception"></a>DuplicateKeyException Nel corso del debug di un progetto [!INCLUDE[vbtecdlinq](../../../../../../includes/vbtecdlinq-md.md)], è possibile attraversare le relazioni di un'entità. In questo modo, questi elementi vengono inseriti nella cache e [!INCLUDE[vbtecdlinq](../../../../../../includes/vbtecdlinq-md.md)] è consapevole della loro presenza. Se si tenta quindi di eseguire <xref:System.Data.Linq.Table%601.Attach%2A> o <xref:System.Data.Linq.Table%601.InsertOnSubmit%2A> oppure un metodo simile che crea più righe con la stessa chiave, viene generata un'eccezione <xref:System.Data.Linq.DuplicateKeyException>. ## <a name="string-concatenation-exceptions"></a>Eccezioni di concatenazione di stringhe La concatenazione su operandi di cui viene eseguito il mapping a `[n]text` e altri `[n][var]char` non è supportata. Viene generata un'eccezione per la concatenazione di stringhe di cui viene eseguito il mapping a due set di tipi diversi. Per ulteriori informazioni, vedere [Metodi System.String](system-string-methods.md). ## <a name="skip-and-take-exceptions-in-sql-server-2000"></a>Eccezioni di Skip e Take in SQL Server 2000 È necessario usare i membri di identità (<xref:System.Data.Linq.Mapping.ColumnAttribute.IsPrimaryKey%2A>) quando si usa <xref:System.Linq.Queryable.Take%2A> o <xref:System.Linq.Queryable.Skip%2A> su un database SQL Server 2000. La query deve essere eseguita su una singola tabella (ovvero non un join), oppure essere un'operazione <xref:System.Linq.Queryable.Distinct%2A>, <xref:System.Linq.Queryable.Except%2A>, <xref:System.Linq.Queryable.Intersect%2A> o <xref:System.Linq.Queryable.Union%2A>, e non deve includere un'operazione <xref:System.Linq.Queryable.Concat%2A>. 
Per ulteriori informazioni, vedere la sezione "supporto di SQL Server 2000" in [conversione dell'operatore query standard](standard-query-operator-translation.md). Questo requisito non si applica a SQL Server 2005. ## <a name="groupby-invalidoperationexception"></a>GroupBy InvalidOperationException Questa eccezione viene generata quando un valore di colonna è null in una query <xref:System.Linq.Enumerable.GroupBy%2A> che esegue il raggruppamento in base a un'espressione `boolean`, ad esempio `group x by (Phone==@phone)`. Poiché l'espressione è `boolean`, la chiave viene dedotta come `boolean`, non come `boolean` `nullable`. Quando il confronto tradotto produce un valore null, viene effettuato un tentativo di assegnare un oggetto `boolean` `nullable` a un oggetto `boolean` e viene generata l'eccezione. Per evitare questa situazione, presupponendo che si desideri trattare i valori null come false, usare un approccio analogo al seguente: `GroupBy="(Phone != null) && (Phone == @Phone)"` ## <a name="oncreated-partial-method"></a>Metodo parziale OnCreated() Il metodo `OnCreated()` generato viene chiamato ogni volta che viene chiamato il costruttore dell'oggetto, incluso lo scenario in cui [!INCLUDE[vbtecdlinq](../../../../../../includes/vbtecdlinq-md.md)] chiama il costruttore per creare una copia dei valori originali. Tenere conto di questo comportamento se si implementa il metodo `OnCreated()` nella classe parziale personalizzata. ## <a name="see-also"></a>Vedere anche - [Supporto per il debug](debugging-support.md) - [Domande frequenti](frequently-asked-questions.md)
98.631579
735
0.764408
ita_Latn
0.992948
6fdcf5419a01c5cd899503a42c6cd5efe74dfb34
1,829
md
Markdown
_janbrueghel/3828.md
brueghelfamily/brueghelfamily.github.io
a73351ac39b60cd763e483c1f8520f87d8c2a443
[ "MIT" ]
null
null
null
_janbrueghel/3828.md
brueghelfamily/brueghelfamily.github.io
a73351ac39b60cd763e483c1f8520f87d8c2a443
[ "MIT" ]
null
null
null
_janbrueghel/3828.md
brueghelfamily/brueghelfamily.github.io
a73351ac39b60cd763e483c1f8520f87d8c2a443
[ "MIT" ]
null
null
null
--- pid: '3828' label: Harbor with a Fish Market object_type: Drawing genre: Village, Town and Cityscape worktags: Market|Fish|Crowd|Merchants|Ship iconclass_code: height_cm: '20.4' width_cm: '27.9' diameter_cm: location_country: location_city: location_collection: accession_nos_and_notes: private_collection_info: Germany, Private Collection collection_type: Private realdate: 1593-1594 numeric_date: '1593' medium: support: Paper support_notes: signature: signature_location: support_marks: further_inscription: print_notes: print_type: plate_dimensions: states: printmaker: publisher: series: collaborators: collaborator_notes: collectors_patrons: our_attribution: other_attribution_authorities: 'Bailey/Walker cat. #GERM.PC.1' bibliography: 'Winner, 1961, p. 201, fig. 11|De Coo 1970, p. 126, fig. 123|Winner 1972, p. 136|Winner in Berlin 1975, pp. 97-98|Munich, 2013, p. 39, p. 252, cat. #44' biblio_reference: exhibition_history: 'Berlin 1975, cat. #115' ertz_1979: ertz_2008: bailey_walker: GERM.PC.1 hollstein_no: bad_copy: exclude_from_browsing: provenance: 5390|5391|5392|5393|5394|5395|5396 provenance_text: 'acquired in Delft between 1705 and 1731 by Valerius Röver|Amsterdam, Johann Goll van Franckenstein, c. 1761 |Sold de Vries, Amsterdam, May 8 1900, lot #31|Prince Joh. Georg von Sachsen, inv. #I, 445 (Lugt 1466)|sold Stuttgart November 7, 1951, lot #792|Germany, Private Collection' related_works: 3068|3341|3259|9576 related_works_notes: copies_and_variants: curatorial_files: general_notes: discussion: 694|695 external_resources_title: external_resources_url: thumbnail: "/img/derivatives/simple/3828/thumbnail.jpg" fullwidth: "/img/derivatives/simple/3828/fullwidth.jpg" collection: janbrueghel layout: janbrueghel_item order: '850' permalink: "/janbrueghel/harbor-with-a-fish-market" full: ---
26.128571
86
0.79825
eng_Latn
0.261732
6fdd19f5123cb7f213bfb3d291d9a201005b749a
25,371
md
Markdown
docs/migrate/azure-best-practices/contoso-migration-rehost-linux-vm.md
Miguel-byte/cloud-adoption-framework.es-es
7eb4e064be0052b81b8c7ae71f17bc1fb87fa8b5
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/migrate/azure-best-practices/contoso-migration-rehost-linux-vm.md
Miguel-byte/cloud-adoption-framework.es-es
7eb4e064be0052b81b8c7ae71f17bc1fb87fa8b5
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/migrate/azure-best-practices/contoso-migration-rehost-linux-vm.md
Miguel-byte/cloud-adoption-framework.es-es
7eb4e064be0052b81b8c7ae71f17bc1fb87fa8b5
[ "CC-BY-4.0", "MIT" ]
null
null
null
--- title: Rehospedaje de una aplicación local Linux en Azure Virtual Machines titleSuffix: Microsoft Cloud Adoption Framework for Azure description: Obtenga información sobre cómo Contoso rehospeda una aplicación de Linux local mediante la migración a VM de Azure. author: BrianBlanchard ms.author: brblanch ms.date: 04/04/2019 ms.topic: conceptual ms.service: cloud-adoption-framework ms.subservice: migrate services: site-recovery ms.openlocfilehash: c1d7549a820b8f830fc577ce82ebc4d2f1dbfcb2 ms.sourcegitcommit: bf9be7f2fe4851d83cdf3e083c7c25bd7e144c20 ms.translationtype: HT ms.contentlocale: es-ES ms.lasthandoff: 11/04/2019 ms.locfileid: "73566543" --- # <a name="rehost-an-on-premises-linux-app-to-azure-vms"></a>Rehospedaje de una aplicación local Linux en Azure Virtual Machines En este artículo se muestra cómo la empresa ficticia Contoso rehospeda una aplicación de Apache MySQL PHP (LAMP) basada en Linux y de dos niveles mediante máquinas virtuales de IaaS de Azure. osTicket, la aplicación del departamento de servicios usada en este ejemplo, se proporciona como código abierto. Si quiere utilizarla para sus propias pruebas, puede descargarla desde [GitHub](https://github.com/osTicket/osTicket). ## <a name="business-drivers"></a>Impulsores del negocio El equipo directivo de TI ha trabajado estrechamente con sus socios comerciales para comprender lo quieren lograr con esta migración: - **Abordar el crecimiento del negocio.** Contoso está creciendo y, como resultado, sus sistemas locales e infraestructura están bajo presión. - **Limitar el riesgo.** la aplicación de consola de servicio es fundamental para el negocio de Contoso. Contoso quiere moverla a Azure sin ningún riesgo. - **Extensión.** Contoso no quiere cambiar ahora mismo la aplicación. Solo quiere asegurarse de que la aplicación es estable. 
## <a name="migration-goals"></a>Objetivos de la migración El equipo de la nube de Contoso ha definido los objetivos de esta migración con el fin de determinar el mejor método para llevarla a cabo: - Tras la migración, la aplicación de Azure debe tener las mismas funcionalidades de rendimiento que las que tiene actualmente en su entorno de VMware local. La aplicación seguirá siendo tan imprescindible en la nube como lo es en el entorno local. - Contoso no quiere invertir en esta aplicación. Es importante para la empresa, pero, en su formato actual, Contoso solo quiere moverla a la nube de modo seguro. - Contoso no quiere cambiar el modelo de operaciones de esta aplicación. Quiere interactuar con la aplicación en la nube de la misma manera que lo hace ahora. - Contoso no quiere cambiar la funcionalidad de la aplicación. Solo cambiará su ubicación. - Una vez completadas un par de migraciones de aplicaciones de Windows, Contoso quiere aprender a usar una infraestructura basada en Linux en Azure. ## <a name="solution-design"></a>Diseño de la solución Después de fijar sus objetivos y requisitos, Contoso diseña y revisa una solución de implementación e identifica el proceso de migración, incluidos los servicios de Azure que Contoso usará para ello. ### <a name="current-app"></a>Aplicación actual - La aplicación OSTicket se divide en capas entre dos máquinas virtuales (**OSTICKETWEB** y **OSTICKETMYSQL**). - Las VM se encuentran en el host de VMware ESXi **contosohost1.contoso.com** (versión 6.5). - El entorno de VMware lo administra vCenter Server 6.5 (**vcenter.contoso.com**), que se ejecuta en una VM. - Contoso tiene un centro de datos local (**contoso-datacenter**), con un controlador de dominio local (**contosodc1**). ### <a name="proposed-architecture"></a>Arquitectura propuesta - Dado que la aplicación es una carga de trabajo de producción, las máquinas virtuales de Azure residirán en el grupo de recursos de producción **ContosoRG**. 
- Las VM migrarán a la región principal (Este de EE. UU. 2) y se colocarán en la red de producción (VNET-PROD-EUS2): - La VM web residirá en la subred de front-end (PROD-FE-EUS2). - La VM de la base de datos residirá en la subred de la base de datos (PROD-DB-EUS2). - Las VM locales del centro de datos de Contoso se retirarán después de realizar la migración. ![Arquitectura del escenario](./media/contoso-migration-rehost-linux-vm/architecture.png) ### <a name="solution-review"></a>Revisión de la solución Contoso evalúa el diseño propuesto y crea una lista de ventajas y desventajas. <!-- markdownlint-disable MD033 --> **Consideración** | **Detalles** --- | --- **Ventajas** | Las dos máquinas virtuales de la aplicación se moverán a Azure sin cambios, de forma que se simplifica la migración.<br/><br/> Dado que Contoso usa en enfoque lift-and-shift para ambas VM de la aplicación, no se necesitan herramientas de configuración ni de migración especiales para la base de datos de la aplicación.<br/><br/> Contoso conservará el control total de las máquinas virtuales de la aplicación en Azure. <br/><br/> Las máquinas virtuales de la aplicación ejecutan Ubuntu 16.04-TLS, que es una distribución de Linux aprobada. [Más información](https://docs.microsoft.com/azure/virtual-machines/linux/endorsed-distros). **Desventajas** | La capa de datos y la capa web de la aplicación seguirán siendo un único punto de conmutación por error. 
<br/><br/> Contoso deberá seguir admitiendo la aplicación en las máquinas virtuales de Azure en lugar de moverla a un servicio administrado como Azure App Service y Azure Database for MySQL.<br/><br/> Contoso sabe que al simplificar las cosas con una migración de máquinas virtuales mediante lift-and-shift, no están aprovechando al máximo las características proporcionadas por [Azure Database for MySQL](https://docs.microsoft.com/azure/mysql/overview) (alta disponibilidad integrada, rendimiento predecible, escalado sencillo, copias de seguridad automáticas y seguridad integrada). <!-- markdownlint-enable MD033 --> ### <a name="migration-process"></a>Proceso de migración Contoso realizará la migración como se indica a continuación: - Primero, Contoso prepara y configura los componentes de Azure para Azure Migrate Server Migration y prepara la infraestructura local de VMware. - Ya tienen la [infraestructura de Azure](./contoso-migration-infrastructure.md) en su lugar, por lo que Contoso solo tiene que configurar la replicación de las máquinas virtuales con la herramienta de Azure Migrate Server Migration. - Con todo preparado, Contoso puede comenzar a replicar las máquinas virtuales. - Una vez que se haya habilitado la replicación y esta se encuentre en funcionamiento, Contoso migrará la máquina virtual, haciendo que conmute por error en Azure. ![Proceso de migración](./media/contoso-migration-rehost-linux-vm/migration-process-az-migrate.png) ### <a name="azure-services"></a>Servicios de Azure **Servicio** | **Descripción** | **Costo** --- | --- | --- [Azure Migrate Server Migration](https://docs.microsoft.com/azure/migrate/contoso-migration-rehost-linux-vm) | El servicio orquesta y administra la migración de las aplicaciones y cargas de trabajo locales, y las instancias de máquina virtual de AWS/GCP. | Durante la replicación en Azure, se incurre en gastos de Azure Storage. 
Las máquinas virtuales de Azure se crean, e incurren en gastos, cuando se produce una conmutación por error. [Más información](https://azure.microsoft.com/pricing/details/azure-migrate) sobre cargos y precios. ## <a name="prerequisites"></a>Requisitos previos Esto es lo que Contoso necesita para este escenario. <!-- markdownlint-disable MD033 --> **Requisitos** | **Detalles** --- | --- **Suscripción de Azure** | En un artículo anterior de esta serie, Contoso creó suscripciones. Si no tiene una suscripción a Azure, cree una [cuenta gratuita](https://azure.microsoft.com/pricing/free-trial).<br/><br/> Si crea una cuenta gratuita, será el administrador de su suscripción y podrá realizar todas las acciones.<br/><br/> Si usa una suscripción existente y no es el administrador, tendrá que solicitar al administrador que le asigne permisos de propietario o colaborador.<br/><br/> Si necesita permisos más específicos, consulte [este artículo](https://docs.microsoft.com/azure/site-recovery/site-recovery-role-based-linked-access-control). **Infraestructura de Azure** | [Vea](./contoso-migration-infrastructure.md) cómo Contoso configuró una infraestructura de Azure.<br/><br/> Más información sobre los [requisitos previos](https://docs.microsoft.com/azure/migrate/contoso-migration-rehost-linux-vm#prerequisites) específicos para Azure Migrate Server Migration. **Servidores locales** | La instancia local de vCenter Server debe ejecutarse en las versiones 5.5, 6.0 o 6.5.<br/><br/> Un host ESXi que ejecute la versión 5.5, 6.0 o 6.5<br/><br/> Una o más máquinas virtuales VMware que se ejecuten en el host ESXi. **Máquinas virtuales locales** | [Revise las máquinas Linux](https://docs.microsoft.com/azure/virtual-machines/linux/endorsed-distros) que se han aprobado para ejecutarse en Azure. 
<!-- markdownlint-enable MD033 -->

## <a name="scenario-steps"></a>Scenario steps

Here's how Contoso admins will run the migration:

> [!div class="checklist"]
>
> - **Step 1: Prepare Azure for Azure Migrate Server Migration.** Add the Azure Migrate Server Migration tool to the Azure Migrate project.
> - **Step 2: Prepare the on-premises VMware environment for Azure Migrate Server Migration.** Prepare accounts for VM discovery, and prepare to connect to Azure VMs after failover.
> - **Step 3: Replicate the VMs.** Set up replication, and start replicating VMs to Azure Storage.
> - **Step 4: Migrate the VMs with Azure Migrate Server Migration.** Run a test failover to make sure everything's working, and then run a full failover to migrate the VMs to Azure.

## <a name="step-1-prepare-azure-for-the-azure-migrate-server-migration-tool"></a>Step 1: Prepare Azure for Azure Migrate Server Migration

Here are the Azure components Contoso needs to migrate the VMs to Azure:

- A virtual network in which the Azure VMs will be located when they're created during failover.
- The Azure Migrate Server Migration tool, provisioned.

They set these up as follows:

1. **Set up a network:** Contoso already set up a network that can be used for Azure Migrate Server Migration when it [deployed the Azure infrastructure](./contoso-migration-infrastructure.md).

    - The SmartHotel360 application is a production application, and the VMs will be migrated to the production Azure network (VNET-PROD-EUS2) in the primary East US 2 region.
    - Both VMs will be placed in the ContosoRG resource group, which is used for production resources.
    - The application front-end VM (WEBVM) will migrate to the front-end subnet (PROD-FE-EUS2) in the production network.
    - The application database VM (SQLVM) will migrate to the database subnet (PROD-DB-EUS2) in the production network.

2. **Provision the Azure Migrate Server Migration tool:** With the network and storage account deployed, Contoso now creates a Recovery Services vault (ContosoMigrationVault) and places it in the ContosoFailoverRG resource group in the primary East US 2 region.

    ![Azure Migrate Server Migration tool](./media/contoso-migration-rehost-linux-vm/server-migration-tool.png)

**Need more help?**

[Learn more](https://docs.microsoft.com/azure/migrate) about setting up the Azure Migrate Server Migration tool.

### <a name="prepare-to-connect-to-azure-vms-after-failover"></a>Prepare to connect to Azure VMs after failover

After failover to Azure, Contoso wants to be able to connect to the replicated Azure VMs. To do this, the Contoso admins need to do a couple of things:

- To access the Azure VMs over the internet, they enable SSH on the on-premises Linux VM before migration. For Ubuntu, this can be done with the following command: `sudo apt-get install openssh-server -y`.
- After they run the migration (failover), they can check **boot diagnostics** to view a screenshot of the VM.
- If this doesn't work, they need to verify that the VM is running, and review these [troubleshooting tips](https://social.technet.microsoft.com/wiki/contents/articles/31666.troubleshooting-remote-desktop-connection-after-failover-using-asr.aspx).

**Need more help?**

- [Learn more](https://docs.microsoft.com/azure/migrate/contoso-migration-rehost-linux-vm#prepare-vms-for-migration) about preparing VMs for migration.
## <a name="step-3-replicate-the-on-premises-vms"></a>Step 3: Replicate the on-premises VMs

Before the Contoso admins can run a migration to Azure, they need to set up and enable replication. Once discovery is complete, they can begin replicating the VMware VMs to Azure.

1. In the Azure Migrate project > **Servers**, **Azure Migrate: Server Migration**, click **Replicate**.

    ![Replicate VMs](./media/contoso-migration-rehost-linux-vm/select-replicate.png)

2. In **Replicate** > **Source settings** > **Are your machines virtualized?**, select **Yes, with VMware vSphere hypervisor**.

3. In **On-premises appliance**, select the name of the Azure Migrate appliance that you set up > **OK**.

    ![Source settings](./media/contoso-migration-rehost-linux-vm/source-settings.png)

4. In **Virtual machines**, select the machines you want to replicate.

    - If you've run an assessment for the VMs, you can apply the VM sizing and disk type (premium/standard) recommendations from the assessment results. To do this, in **Import migration settings from an Azure Migrate assessment?**, select the **Yes** option.
    - If you didn't run an assessment, or you don't want to use the assessment settings, select the **No** option.
    - If you selected to use the assessment, select the VM group and the assessment name.

    ![Select assessment](./media/contoso-migration-rehost-linux-vm/select-assessment.png)

5. In **Virtual machines**, search for VMs as needed, and check each VM you want to migrate. Then click **Next: Target settings**.

6. In **Target settings**, select the subscription and the target region to which you'll migrate, and specify the resource group in which the Azure VMs will reside after migration. In **Virtual network**, select the Azure virtual network/subnet that the Azure VMs will join after migration.

7. In **Azure Hybrid Benefit**, select the following:

    - Select **No** if you don't want to apply Azure Hybrid Benefit. Then click **Next**.
    - Select **Yes** if you have Windows Server machines that are covered by active Software Assurance or Windows Server subscriptions, and you want to apply the benefit to the machines you're migrating. Then click **Next**.

8. In **Compute**, review the VM name, size, OS disk type, and availability set. VMs must meet [Azure requirements](https://docs.microsoft.com/azure/migrate/migrate-support-matrix-vmware#agentless-migration-vmware-vm-requirements).

    - **VM size:** If you're using assessment recommendations, the VM size dropdown will contain the recommended size. Otherwise, Azure Migrate picks a size based on the closest match in the Azure subscription. Alternatively, pick a size manually in **Azure VM size**.
    - **OS disk:** Specify the OS (boot) disk for the VM. This is the disk that has the operating system bootloader and installer.
    - **Availability set:** If the VM should be in an Azure availability set after migration, specify the set. The set must be in the target resource group you specify for the migration.

9. In **Disks**, specify whether the VM disks should be replicated to Azure, and select the disk type (standard SSD/HDD or premium managed disks) in Azure. Then click **Next**.

    - You can exclude disks from replication.
    - If disks are excluded, they won't be present on the Azure VM after migration.

10. In **Review and start replication**, review the settings, and click **Replicate** to start the initial replication of the servers.

> [!NOTE]
> You can update replication settings any time before replication starts, in **Manage** > **Replicating machines**. Settings can't be changed after replication starts.

## <a name="step-4-migrate-the-vms"></a>Step 4: Migrate the VMs

The Contoso admins run a quick test failover, and then a full failover to migrate the VMs.

### <a name="run-a-test-failover"></a>Run a test failover

1. In **Migration goals** > **Servers** > **Azure Migrate: Server Migration**, click **Test migrated servers**.

    ![Test migrated servers](./media/contoso-migration-rehost-linux-vm/test-migrated-servers.png)

2. Right-click the VM to test, and click **Test migrate**.

    ![Test migrate](./media/contoso-migration-rehost-linux-vm/test-migrate.png)

3. In **Test Migration**, select the Azure virtual network in which the Azure VM will be located after the migration. We recommend you use a non-production virtual network.

4. The **Test migration** job starts. Monitor the job in the portal notifications.

5. After the migration finishes, view the migrated Azure VM in **Virtual Machines** in the Azure portal.
    The machine name has the suffix **-Test**.

6. After the test is done, right-click the Azure VM in **Replicating machines**, and click **Clean up test migration**.

    ![Clean up migration](./media/contoso-migration-rehost-linux-vm/clean-up.png)

### <a name="migrate-the-vms"></a>Migrate the VMs

Now the Contoso admins run a full failover to complete the migration.

1. In the Azure Migrate project > **Servers** > **Azure Migrate: Server Migration**, click **Replicating servers**.

    ![Replicating servers](./media/contoso-migration-rehost-linux-vm/replicating-servers.png)

2. In **Replicating machines**, right-click the VM > **Migrate**.

3. In **Migrate** > **Shut down virtual machines and perform a planned migration with no data loss?**, select **Yes** > **OK**.

    - By default, Azure Migrate shuts down the on-premises VM and runs an on-demand replication to synchronize any VM changes that occurred since the last replication. This ensures no data loss.
    - If you don't want to shut down the VM, select **No**.

4. A migration job starts for the VM. Track the job in Azure notifications.

5. After the job finishes, you can view and manage the VM from the **Virtual Machines** page.

### <a name="connect-the-vm-to-the-database"></a>Connect the VM to the database

As the final step in the migration process, the Contoso admins update the connection string of the application to point to the application database running on the **OSTICKETMYSQL** VM.

1. They make an SSH connection to the **OSTICKETWEB** VM using PuTTY or another SSH client. The VM is private, so they connect using the private IP address.
    ![Connect to the database](./media/contoso-migration-rehost-linux-vm/db-connect.png)

    ![Connect to the database](./media/contoso-migration-rehost-linux-vm/db-connect2.png)

2. Contoso needs to make sure that the **OSTICKETWEB** VM can communicate with the **OSTICKETMYSQL** database. Currently, the configuration is hardcoded with the on-premises IP address 172.16.0.43.

    **Before the update:**

    ![Update IP](./media/contoso-migration-rehost-linux-vm/update-ip1.png)

    **After the update:**

    ![Update IP](./media/contoso-migration-rehost-linux-vm/update-ip2.png)

3. They restart the service with **systemctl restart apache2**.

    ![Restart](./media/contoso-migration-rehost-linux-vm/restart.png)

4. Finally, they update the DNS records for **OSTICKETWEB** and **OSTICKETMYSQL** on one of the Contoso domain controllers.

    ![Update DNS](./media/contoso-migration-rehost-linux-vm-mysql/update-dns.png)

**Need more help?**

- [Learn more](https://docs.microsoft.com/azure/migrate/tutorial-migrate-vmware#run-a-test-migration) about running a test failover.
- [Learn more](https://docs.microsoft.com/azure/migrate/tutorial-migrate-vmware#migrate-vms) about migrating VMs to Azure.

## <a name="clean-up-after-migration"></a>Clean up after migration

With the migration complete, the osTicket application tiers are running on Azure VMs. Now, Contoso needs to do the following cleanup:

- Remove the on-premises VMs from the vCenter inventory.
- Remove the on-premises VMs from local backup jobs.
- Update internal documentation to show the new locations and IP addresses for OSTICKETWEB and OSTICKETMYSQL.
- Review any resources that interact with the VMs, and update any relevant settings or documentation to reflect the new configuration.
- Contoso used the Azure Migrate service with dependency mapping to assess the VMs for migration. Admins should remove the Microsoft Monitoring Agent and the Microsoft Dependency agent that were installed on the VMs for this purpose.

## <a name="review-the-deployment"></a>Review the deployment

With the application running, Contoso needs to fully operationalize and secure the new infrastructure.

### <a name="security"></a>Security

The Contoso security team reviews the OSTICKETWEB and OSTICKETMYSQL VMs to determine any security issues.

- The team reviews the network security groups (NSGs) for the VMs to control access. NSGs are used to ensure that only traffic allowed to the application can pass.
- The team also considers securing the data on the VM disks using disk encryption and Azure Key Vault.

For more information, see [Security best practices for IaaS workloads in Azure](https://docs.microsoft.com/azure/security/fundamentals/iaas).

### <a name="bcdr"></a>BCDR

For business continuity and disaster recovery, Contoso takes the following actions:

- **Keep data safe.** Contoso backs up the data on the VMs using the Azure Backup service. [Learn more](https://docs.microsoft.com/azure/backup/backup-introduction-to-azure-backup?toc=%2fazure%2fvirtual-machines%2flinux%2ftoc.json).
- **Keep applications up and running.** Contoso replicates the application VMs in Azure to a secondary region using Site Recovery. [Learn more](https://docs.microsoft.com/azure/site-recovery/azure-to-azure-quickstart).
### <a name="licensing-and-cost-optimization"></a>Licensing and cost optimization

- After deploying resources, Contoso assigns Azure tags as defined during the [Azure infrastructure deployment](./contoso-migration-infrastructure.md#set-up-tagging).
- Contoso has no licensing issues with its Ubuntu servers.
- Contoso will enable Azure Cost Management licensed by Cloudyn, a Microsoft subsidiary. It's a multicloud cost management solution that helps Contoso use and manage Azure and other cloud resources. [Learn more](https://docs.microsoft.com/azure/cost-management/overview) about Azure Cost Management.
# microsoft_graph_cli

[![Deno CI](https://github.com/kleiderer/microsoft_graph_cli/workflows/Deno%20CI/badge.svg)](https://github.com/kleiderer/microsoft_graph_cli/actions) [![GitHub](https://img.shields.io/github/license/kleiderer/microsoft_graph_cli)](https://github.com/kleiderer/microsoft_graph_cli/blob/master/LICENSE) [![TypeScript](https://img.shields.io/badge/types-TypeScript-blue)](https://github.com/kleiderer/microsoft_graph_cli) [![semantic-release](https://img.shields.io/badge/%20%20%F0%9F%93%A6%F0%9F%9A%80-semantic--release-e10079.svg)](https://github.com/semantic-release/semantic-release) [![Commitizen friendly](https://img.shields.io/badge/commitizen-friendly-brightgreen.svg)](http://commitizen.github.io/cz-cli/)
<!-- ALL-CONTRIBUTORS-BADGE:START - Do not remove or modify this section -->
[![All Contributors](https://img.shields.io/badge/all_contributors-1-orange.svg?style=flat-square)](#contributors-)
<!-- ALL-CONTRIBUTORS-BADGE:END -->

**An unofficial command line utility for accessing the [Microsoft Graph](https://developer.microsoft.com/en-us/graph), written with Deno and TypeScript**

## Installation

### Before You Get Started

Ensure Deno 1.5.4 is installed. If you don't have Deno installed yet, follow the [Deno installation guide](https://deno.land/manual@v1.5.4/getting_started/installation).

This CLI _should_ work on macOS, Linux, and Windows operating systems. However, it has only been tested on macOS Catalina v10.15.7. If you run into any problems, please [open an issue](https://github.com/kleiderer/microsoft_graph_cli/issues/new).

### Use Deno Install (Preferred)

If you'd like to install this project as an executable in your path, run:

`deno install --unstable [permissions] -n mgraph https://deno.land/x/microsoft_graph_cli@v0.0.0-development/cli.ts`

- Replace `[permissions]` in this command with any permissions you wish to grant to the CLI. See [Permissions](#Permissions) for all optional and required permissions and their rationale.
  Example: `--allow-net --allow-read --allow-write`
- (Optional) If you would like to grant all permissions to this CLI, you may replace `[permissions]` with `--allow-all`. This will prevent errors if future versions require new permissions, but is considered less secure.
- (Optional) You may install this CLI with a different executable name by replacing `mgraph` with a name of your choice. If you do so, make sure to replace `mgraph` with your selected name in all usage examples.
- (Optional) You may install any [released version](https://github.com/kleiderer/microsoft_graph_cli/releases) of this CLI by changing `v0.0.0-development` to the desired version.

### Use Deno Run

If you don't wish to [use deno install](#use-deno-install-preferred), replace any instances of `mgraph` in the usage examples with:

`deno run --unstable [permissions] https://deno.land/x/microsoft_graph_cli@v0.0.0-development/cli.ts`

- Replace `[permissions]` in this command with any permissions you wish to grant to the CLI. See [Permissions](#Permissions) for all optional and required permissions and their rationale. Example: `--allow-net --allow-read --allow-write`
- (Optional) If you would like to grant all permissions to this CLI, you may replace `[permissions]` with `--allow-all`. This will prevent errors if future versions require new permissions, but is considered less secure.
- (Optional) You may use any [released version](https://github.com/kleiderer/microsoft_graph_cli/releases) of this CLI by changing `v0.0.0-development` to the desired version.

### Updating

If you've previously installed with [deno install](#use-deno-install-preferred), you may update by adding the `-f` flag to the command:

`deno install --unstable [permissions] -n mgraph -f https://deno.land/x/microsoft_graph_cli@v0.0.0-development/cli.ts`

- Replace `[permissions]` in this command with any permissions you wish to grant to the CLI.
  See [Permissions](#Permissions) for all optional and required permissions and their rationale. Example: `--allow-net --allow-read --allow-write`
- (Optional) If you would like to grant all permissions to this CLI, you may replace `[permissions]` with `--allow-all`. This will prevent errors if future versions require new permissions, but is considered less secure.
- (Optional) You may install this CLI with a different executable name by replacing `mgraph` with a name of your choice. If you do so, make sure to replace `mgraph` with your selected name in all usage examples.
- (Optional) You may install any [released version](https://github.com/kleiderer/microsoft_graph_cli/releases) of this CLI by changing `v0.0.0-development` to the desired version.

If you use the [deno run](#use-deno-run) installation option, simply change `v0.0.0-development` to the [latest release](https://github.com/kleiderer/microsoft_graph_cli/releases/latest) when you next run the program.

## Usage

Run `mgraph --help` for a full list of all commands and parameters.

## Permissions

The following permissions are used by this CLI for the following reasons:

| Name          | Reason                                                                        |
| ------------- | ----------------------------------------------------------------------------- |
| --allow-env   | Detect the cache directory to cache tokens and maintain login state.          |
| --allow-net   | Make API calls to the Microsoft Graph.                                        |
|               | Host a local server for interactive authentication.                           |
| --allow-read  | Read cached tokens and maintain login state.                                  |
|               | Read configuration files.                                                     |
|               | Output results to files.                                                      |
| --allow-run   | Automatically open the user's default browser for interactive authentication. |
| --allow-write | Cache tokens and maintain login state.                                        |
|               | Output results to files.                                                      |

## Acknowledgements

The following projects and resources made this project possible (alphabetical order):

- [all-contributors](https://github.com/all-contributors/all-contributors): Generates the Contributors badge and lists contributors in the readme.
- [Cliffy](https://github.com/c4spar/deno-cliffy): Command line framework for Deno.
- [Deno](https://deno.land/): The runtime and a dependency host for this project.
- [deno_cache_dir](https://github.com/justjavac/deno_cache_dir): Returns the path to the user's cache directory.
- [deno_free_port](https://github.com/axetroy/deno_free_port): Gets an available port.
- [Microsoft Graph JavaScript Client Library](https://github.com/microsoftgraph/msgraph-sdk-javascript)
- [Microsoft Graph TypeScript Types](https://github.com/microsoftgraph/msgraph-typescript-typings): Provides TypeScript definitions for Microsoft Graph objects.
- [oak](https://github.com/oakserver/oak): A middleware framework for Deno.
- [OAuth2 Client for Deno](https://github.com/cmd-johnson/deno-oauth2-client): A minimalist OAuth 2.0 client for Deno.
- [opener](https://github.com/TanishShinde/opener): Opens URLs in the user's default browser.
- [Skypack](https://www.skypack.dev/): Hosts NPM packages compiled as ES Modules, enabling the use of some NPM packages in Deno.
- [TypeScript](https://www.typescriptlang.org/): The primary programming language for this project.

_(If you feel an acknowledgement is missing, please [open an issue](https://github.com/kleiderer/microsoft_graph_cli/issues/new) explaining the missing project or resource and we'll update the list.)_

## Contributors

This project follows the [all-contributors](https://github.com/all-contributors/all-contributors) specification. Contributions of any kind are welcome! If you would like to contribute to this project, please see the [contributing documentation](CONTRIBUTING.md).
Thanks to these wonderful people ([emoji key](https://allcontributors.org/docs/en/emoji-key)):

<!-- ALL-CONTRIBUTORS-LIST:START - Do not remove or modify this section -->
<!-- prettier-ignore-start -->
<!-- markdownlint-disable -->
<table>
  <tr>
    <td align="center"><a href="https://kleiderer.com/"><img src="https://avatars0.githubusercontent.com/u/4278631?v=4?s=100" width="100px;" alt=""/><br /><sub><b>Nicolas Kleiderer</b></sub></a><br /><a href="https://github.com/kleiderer/microsoft_graph_cli/commits?author=nakleiderer" title="Code">💻</a> <a href="https://github.com/kleiderer/microsoft_graph_cli/commits?author=nakleiderer" title="Documentation">📖</a> <a href="#ideas-nakleiderer" title="Ideas, Planning, & Feedback">🤔</a> <a href="https://github.com/kleiderer/microsoft_graph_cli/pulls?q=is%3Apr+reviewed-by%3Anakleiderer" title="Reviewed Pull Requests">👀</a></td>
  </tr>
</table>
<!-- markdownlint-restore -->
<!-- prettier-ignore-end -->
<!-- ALL-CONTRIBUTORS-LIST:END -->

_(If you have contributed anything to this project and your name is missing, please [open an issue](https://github.com/kleiderer/microsoft_graph_cli/issues/new) referencing your contributions and we'll update the list.)_

## Notice of Non-Affiliation and Disclaimer

This project is not affiliated, associated, authorized, endorsed by, or in any way officially connected with Microsoft, or any of its subsidiaries or its affiliates. The official Microsoft Graph CLI may be found at https://github.com/microsoftgraph/msgraph-cli

The name Microsoft as well as related names, marks, emblems and images are registered trademarks of their respective owners.

## License

This project is available under the MIT license. See [LICENSE](LICENSE) for the full license.
---
title: Atlas deployment
layout: en
---

Travis CI can automatically deploy your application to [Atlas](https://atlas.hashicorp.com/) after a successful build.

> Hashicorp [announced](https://www.hashicorp.com/blog/hashicorp-terraform-enterprise-general-availability#decommissioning-atlas) that Atlas is being decommissioned by March 30, 2017. It is replaced by Terraform Enterprise.

To deploy your application to Atlas:

1. Sign in to your Atlas account.
2. [Generate](https://atlas.hashicorp.com/settings/tokens) an Atlas API token for Travis CI.
3. Add the following minimum configuration to your `.travis.yml`:

```yaml
deploy:
  provider: atlas
  token: "YOUR ATLAS API TOKEN"
  app: "YOUR ATLAS USERNAME/YOUR ATLAS APP NAME"
```
{: data-file=".travis.yml"}

## Including or Excluding Files

You can include and exclude files by adding the `include` and `exclude` entries to `.travis.yml`. Both are glob patterns of files or directories to include or exclude, and may be specified multiple times. If there is a conflict, excludes have precedence over includes.

```yaml
deploy:
  provider: atlas
  exclude: "*.log"
  include:
    - "build/*"
    - "bin/*"
```
{: data-file=".travis.yml"}

### Using your Version Control System

Get the lists of files to exclude and include from your version control system (Git, Mercurial or Subversion):

```yaml
deploy:
  provider: atlas
  vcs: true
```
{: data-file=".travis.yml"}

## Other Deployment Options

### Specifying the Address of the Atlas Server

```yaml
deploy:
  provider: atlas
  address: "URL OF THE ATLAS SERVER"
```
{: data-file=".travis.yml"}

### Adding Custom Metadata

Add one or more items of metadata:

```yaml
deploy:
  provider: atlas
  metadata:
    - "custom_name=Jane"
    - "custom_surname=Doe"
```
{: data-file=".travis.yml"}

{{ site.data.snippets.conditional_deploy }}

{{ site.data.snippets.before_and_after }}
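The exclude-over-include precedence rule can be illustrated with a short sketch. (Python and `fnmatch` are used here purely as an illustration of the rule; this is not how the Atlas provider is implemented.)

```python
from fnmatch import fnmatch

def is_deployed(path, includes, excludes):
    """Return True if `path` would be deployed, given glob include and
    exclude patterns where excludes take precedence over includes."""
    if any(fnmatch(path, pat) for pat in excludes):
        return False  # excludes win on conflict
    return any(fnmatch(path, pat) for pat in includes)

# "build/app.log" matches both "build/*" (include) and "*.log" (exclude),
# so the exclude wins and the file is not deployed:
print(is_deployed("build/app.log", ["build/*", "bin/*"], ["*.log"]))  # False
print(is_deployed("build/app", ["build/*", "bin/*"], ["*.log"]))      # True
```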
# EmojiCountrySelector

A country code selector with flag emojis for the countries.
# AngelsAddons-WarehouseSiloFix

Factorio mod: Adds a buffer warehouse and a buffer silo for Angel's mod. Also adds a logistic filter for the storage warehouse, storage silo, and big storage chest.
# MetaSearch

An advanced search system for Laravel.
---
name: messages_strings_send_recv_multi
language: C
library: libzmq
---

The following function sends an array of strings to a socket. The *ZMQ_SNDMORE* flag tells ZeroMQ to postpone sending until all frames are ready.

```c
static void
s_send_strings (void *socket, const char *strings[], int no_of_strings)
{
    for (int index = 0; index < no_of_strings; index++) {
        //  Set ZMQ_SNDMORE on every frame except the last one
        int flags = (index + 1) == no_of_strings ? 0 : ZMQ_SNDMORE;
        //  zmq_send copies the buffer, so the caller keeps ownership
        zmq_send (socket, strings[index], strlen (strings[index]), flags);
    }
}
```

To retrieve the string frames of a multipart message, we must check the *ZMQ_RCVMORE* `zmq_getsockopt()` option after each receive to determine whether there are further parts to receive.

```c
char *strings[25];
int rcvmore;
size_t option_len = sizeof (rcvmore);
int index = 0;
do {
    strings[index++] = s_recv_string (socket);
    zmq_getsockopt (socket, ZMQ_RCVMORE, &rcvmore, &option_len);
} while (rcvmore);
```
--- ms.openlocfilehash: 0e8abc17d85f4b3e5ce80d374b876aeee75ad3ae ms.sourcegitcommit: bab1265d669c3e6871daa7cb8a5640a47104947a translationtype: MT --- > [AZURE.SELECTOR] - [Windows 运行时 8.1 世界](../articles/notification-hubs/notification-hubs-windows-store-dotnet-get-started.md) - [Windows Phone Silverlight 8.x](../articles/notification-hubs/notification-hubs-windows-phone-get-started.md) - [iOS](../articles/notification-hubs/notification-hubs-ios-get-started.md) - [Android](../articles/notification-hubs/notification-hubs-android-get-started.md) - [Kindle](../articles/notification-hubs/notification-hubs-kindle-get-started.md) - [Baidu](../articles/notification-hubs/notification-hubs-baidu-get-started.md) - [Xamarin.iOS](../articles/notification-hubs/notification-hubs-ios-get-started.md) - [Xamarin.Android](../articles/notification-hubs/notification-hubs-android-get-started.md) - [Chrome](../articles/notification-hubs/notification-hubs-chrome-get-started.md)
60.5625
111
0.793602
yue_Hant
0.190226
6fe091c3773c29ceb8c334d264e2b313feee125c
821
md
Markdown
clients/client/dotnet/docs/ClientSubmitSelfServiceSettingsFlowWithTotpMethodBody.md
ory/sdk-generator
958314d130922ad6f20f439b5230141a832231a5
[ "Apache-2.0" ]
77
2020-02-14T17:27:36.000Z
2022-03-25T08:44:52.000Z
clients/client/dotnet/docs/ClientSubmitSelfServiceSettingsFlowWithTotpMethodBody.md
ory/sdk-generator
958314d130922ad6f20f439b5230141a832231a5
[ "Apache-2.0" ]
125
2020-02-07T21:45:52.000Z
2022-03-31T12:54:24.000Z
clients/client/dotnet/docs/ClientSubmitSelfServiceSettingsFlowWithTotpMethodBody.md
ory/sdk-generator
958314d130922ad6f20f439b5230141a832231a5
[ "Apache-2.0" ]
44
2020-01-31T22:05:47.000Z
2022-03-09T14:41:22.000Z
# Ory.Client.Model.ClientSubmitSelfServiceSettingsFlowWithTotpMethodBody ## Properties Name | Type | Description | Notes ------------ | ------------- | ------------- | ------------- **CsrfToken** | **string** | CSRFToken is the anti-CSRF token | [optional] **Method** | **string** | Method Should be set to \&quot;totp\&quot; when trying to add, update, or remove a totp pairing. | **TotpCode** | **string** | ValidationTOTP must contain a valid TOTP based on the | [optional] **TotpUnlink** | **bool** | UnlinkTOTP if true will remove the TOTP pairing, effectively removing the credential. This can be used to set up a new TOTP device. | [optional] [[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
58.642857
173
0.662607
eng_Latn
0.408744
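The property table in the record above maps onto a small request body: `Method` is fixed to `"totp"`, while the CSRF token, TOTP code, and unlink flag are optional. A hedged Python sketch of building that body (the snake_case field names are assumed wire-format names, not taken from the .NET docs above, which list only the PascalCase property names):

```python
def totp_settings_body(csrf_token=None, totp_code=None, totp_unlink=False):
    """Build a settings-flow body matching the model's property table.

    `method` is always "totp"; the remaining fields are included only when
    set, mirroring the [optional] notes in the table. Field names are an
    assumption about the JSON wire format.
    """
    body = {"method": "totp"}
    if csrf_token is not None:
        body["csrf_token"] = csrf_token
    if totp_code is not None:
        body["totp_code"] = totp_code
    if totp_unlink:
        body["totp_unlink"] = True
    return body
```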
6fe0bcaceba22fff63eca507d3df4b9e1f55249f
54
md
Markdown
README.md
mubeta06/kodiot-terraform
8e76a994c0d83da38da4326447efe3f90bf97d5f
[ "MIT" ]
null
null
null
README.md
mubeta06/kodiot-terraform
8e76a994c0d83da38da4326447efe3f90bf97d5f
[ "MIT" ]
null
null
null
README.md
mubeta06/kodiot-terraform
8e76a994c0d83da38da4326447efe3f90bf97d5f
[ "MIT" ]
null
null
null
# kodiot-terraform Terraform configuration for Kodiot
18
34
0.851852
eng_Latn
0.335967
6fe125e780202e4557175204945907860b0b0169
691
md
Markdown
README.md
categrace/My-thick-pizza
b2754e4822098e75fc237b5e7c43bc01eabacc66
[ "MIT" ]
null
null
null
README.md
categrace/My-thick-pizza
b2754e4822098e75fc237b5e7c43bc01eabacc66
[ "MIT" ]
null
null
null
README.md
categrace/My-thick-pizza
b2754e4822098e75fc237b5e7c43bc01eabacc66
[ "MIT" ]
null
null
null
# PIZZA ## Description This is a website that allows users to order their choice of pizza and get it delivered to their desired location ## BDD The website is expected to function in the manner below: * Give the user an option to get their desired size of pizza. * The user should be able to select their crust of choice. * The user should select the toppings of their choice available on the website. * If the user wants a delivery, prompt the user to enter location details and the price for delivery. * On checkout, show the user the total amount payable. ## Tools used * CSS * HTML * Javascript ## Contact Information asdfgcvxczx@gmail.com ## LICENSE * MIT LICENSE * Copyright(c)2021 categrace
36.368421
113
0.772793
eng_Latn
0.99291
6fe16bc3ff1e10b952daf90271641d429879f8b0
1,205
md
Markdown
README.md
SUKOHI/Bakery
a8dc805cb9c70e66be5ae7eb3b648f5c5d9d3e19
[ "MIT" ]
null
null
null
README.md
SUKOHI/Bakery
a8dc805cb9c70e66be5ae7eb3b648f5c5d9d3e19
[ "MIT" ]
null
null
null
README.md
SUKOHI/Bakery
a8dc805cb9c70e66be5ae7eb3b648f5c5d9d3e19
[ "MIT" ]
null
null
null
Bakery ===== A PHP package mainly developed for Laravel to generate breadcrumbs using routes. (This is for Laravel 4.2. [For Laravel 5+](https://github.com/SUKOHI/Bakery)) Installation ==== Add this package name in composer.json "require": { "sukohi/bakery": "2.*" } Execute composer command. composer update Register the service provider in app.php 'providers' => [ ...Others..., Sukohi\Bakery\BakeryServiceProvider::class, ] Also alias 'aliases' => [ ...Others..., 'Bakery' => Sukohi\Bakery\Facades\Bakery::class ] Usage ==== $params = [ 'home' => 'Home', 'home.area:vancouver' => 'Vancouver', 'home.food:sushi,popular' => 'Popular sushi restaurants', '*' => 'Samurai' ]; foreach(\Bakery::get($params) as $bakery) { if($bakery->isCurrent) { echo $bakery->title; } else { echo link_to($bakery->url, $bakery->title) .' &gt; '; } } About parameter pattern ==== 1. 'route' => 'title' 2. 'route:parameter' => 'title' 3. 'route:parameter1,parameter2' => 'title', 4. '*' => 'Current Page' License ==== This package is licensed under the MIT License. Copyright 2014 Sukohi Kuhoh
16.283784
82
0.608299
eng_Latn
0.596382
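The "About parameter pattern" list in the Bakery record above defines four key shapes: a bare route, a route with one parameter, a route with comma-separated parameters, and `*` for the current page. A small Python sketch of parsing those keys (function name and return shape are illustrative, not part of the package's PHP API):

```python
def parse_bakery_key(key):
    """Split a Bakery parameter key into (route, params, is_current).

    Mirrors the four documented patterns: 'route', 'route:param',
    'route:p1,p2', and '*' marking the current page.
    """
    if key == "*":
        return (None, [], True)
    route, _, raw_params = key.partition(":")
    params = raw_params.split(",") if raw_params else []
    return (route, params, False)
```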
6fe1ddccb75c098c8def5317bbe7c0f9a1250cd5
693
md
Markdown
wk-starter/README.md
wangyulongln20/yfwpt
742e85dd5b782a8a3adc99d20a0c5160260f356d
[ "Apache-2.0" ]
2
2020-04-18T06:10:46.000Z
2020-05-09T13:52:36.000Z
wk-starter/README.md
Daniel-Radcliffe/NutzWk
b43c7bca7f309c00b61ff50109f775c0a6c78e68
[ "Apache-2.0" ]
9
2020-03-04T23:19:03.000Z
2022-02-16T00:59:37.000Z
wk-starter/README.md
wangyulongln20/yfwpt
742e85dd5b782a8a3adc99d20a0c5160260f356d
[ "Apache-2.0" ]
null
null
null
wk-starter --------------------- 扩展第三方功能的主文件夹 详细Demo代码见 [nutzboot-demo-custom-starter](https://github.com/nutzam/nutzboot/tree/dev/nutzboot-demo/nutzboot-demo-custom/nutzboot-demo-custom-starter) ```java @IocBean public class MainLauncher { public static void main(String[] args) { NbApp app = new NbApp(); // 这里演示2种starter加载方式 // 第一种,io.nutz.demo.custom.starter.MySimpleServerStarter // 它声明在 resources/META-INF/nutz/org.nutz.boot.starter.NbStarter // 不需要在代码中指明 // 第二种, 自行添加, io.nutz.demo.custom.starter2.MyStarter2Add app.addStarterClass(MyStarter2Add.class); app.setPrintProcDoc(true); app.run(); } } ```
30.130435
149
0.659452
yue_Hant
0.251766
6fe1dddb72a8fb9d021f57764f85e82ea7b99ccf
5,053
md
Markdown
ce/developer/use-alternate-key-create-record.md
platkat/dynamics-365-customer-engagement
2e64b92b1522b2da012ee331b238e0aa195f84d8
[ "CC-BY-4.0", "MIT" ]
2
2019-01-30T18:36:48.000Z
2019-01-30T18:36:51.000Z
ce/developer/use-alternate-key-create-record.md
platkat/dynamics-365-customer-engagement
2e64b92b1522b2da012ee331b238e0aa195f84d8
[ "CC-BY-4.0", "MIT" ]
null
null
null
ce/developer/use-alternate-key-create-record.md
platkat/dynamics-365-customer-engagement
2e64b92b1522b2da012ee331b238e0aa195f84d8
[ "CC-BY-4.0", "MIT" ]
null
null
null
--- title: "Use an alternate key to create a record (Developer Guide for Dynamics 365 for Customer Engagement)| MicrosoftDocs" description: "Alternate keys can be used to create instances of Entity and EntityReference classes. This topic discusses the usage patterns and possible exceptions that might be thrown when using alternate keys." ms.custom: ms.date: 10/31/2017 ms.reviewer: ms.service: crm-online ms.suite: ms.tgt_pltfrm: ms.topic: article applies_to: - Dynamics 365 for Customer Engagement (online) ms.assetid: fa8762cd-b714-49df-8756-ba0a70e6fc97 caps.latest.revision: 15 author: JimDaly ms.author: jdaly manager: amyla search.audienceType: - developer search.app: - D365CE --- # Use an alternate key to create a record [!INCLUDE[](../includes/cc_applies_to_update_9_0_0.md)] You can now use alternate keys to create instances of <xref:Microsoft.Xrm.Sdk.Entity> and <xref:Microsoft.Xrm.Sdk.EntityReference> classes. This topic discusses the usage patterns and possible exceptions that might be thrown when using alternate keys. To understand how to define alternate keys for an entity, see [Define alternate keys for an entity](define-alternate-keys-entity.md). <a name="BKMK_entity"></a> ## Using alternate keys to create an entity You can now create an <xref:Microsoft.Xrm.Sdk.Entity> with a primary ID or with a single `KeyAttribute` in a single call using the new constructor. ```csharp public Entity (string logicalName, Guid id) {…} public Entity (string logicalName, string keyName, object keyValue) {…} public Entity (string logicalName, KeyAttributeCollection keyAttributes) {…} ``` A valid <xref:Microsoft.Xrm.Sdk.Entity> used for update operations includes a logical name of the entity and one of the following: - A value for ID (primary key GUID value) (or) - A <xref:Microsoft.Xrm.Sdk.KeyAttributeCollection> with a valid set of attributes matching a defined key for the entity. 
<a name="BKMK_EntityReference"></a> ## Using alternate keys to create an EntityReference You can also create an <xref:Microsoft.Xrm.Sdk.EntityReference> without a primary ID, and with a single `KeyAttribute` in a single call using the new constructor. ```csharp public EntityReference(string logicalName, Guid id) {…} public EntityReference(string logicalName, string keyName, object keyValue) {…} public EntityReference(string logicalName, KeyAttributeCollection keyAttributeCollection) {…} ``` A valid <xref:Microsoft.Xrm.Sdk.EntityReference> includes a logical name of the entity and either: - A value for ID (primary key GUID value) or - A <xref:Microsoft.Xrm.Sdk.KeyAttributeCollection> collection with a valid set of attributes matching a defined key for the entity. <a name="BKMK_input"></a> ## Alternative input to messages When passing entities to <xref:Microsoft.Xrm.Sdk.Messages.CreateRequest> and <xref:Microsoft.Xrm.Sdk.Messages.UpdateRequest>, values provided for Lookup attributes using an <xref:Microsoft.Xrm.Sdk.EntityReference> can now use <xref:Microsoft.Xrm.Sdk.EntityReference> with alternate keys defined in <xref:Microsoft.Xrm.Sdk.EntityReference.KeyAttributes> to specify the related record. These will be resolved to and replaced by primary-ID-based entity references before the messages are processed. <a name="BKMK_Exceptions"></a> ## Exceptions when using alternate keys You have to be aware of the following conditions and possible exceptions when using alternate keys: - The primary ID is used if it is provided. If it is not provided, it will examine the <xref:Microsoft.Xrm.Sdk.KeyAttributeCollection>. If the <xref:Microsoft.Xrm.Sdk.KeyAttributeCollection> is not provided, it will throw an error.
- If the provided <xref:Microsoft.Xrm.Sdk.KeyAttributeCollection> includes one attribute that is the primary key of the entity and the value is valid, it populates the ID property of the <xref:Microsoft.Xrm.Sdk.Entity> or <xref:Microsoft.Xrm.Sdk.EntityReference> with the provided value. - If the key attributes are provided, the system attempts to match the set of attributes provided with the keys defined for the <xref:Microsoft.Xrm.Sdk.Entity>. If it does not find a match, it will throw an error. If it does find a match, it will validate the provided values for those attributes. If valid, it will retrieve the ID of the record that matched the provided key values, and populate the ID value of the <xref:Microsoft.Xrm.Sdk.Entity> or <xref:Microsoft.Xrm.Sdk.EntityReference> with this value. - If you specify an attribute set that is not defined as a unique key, an error will be thrown indicating that use of unique key attributes is required. ### See also [Define alternate keys for an entity](define-alternate-keys-entity.md) [Use change tracking to synchronize data with external systems](use-change-tracking-synchronize-data-external-systems.md) [Use Upsert to insert or update a record](use-upsert-insert-update-record.md)
60.879518
515
0.765486
eng_Latn
0.983344
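The exception rules in the record above describe a resolution order: a provided primary ID wins; otherwise the key-attribute set must match a key defined for the entity, and its values must identify an existing record. A toy Python sketch of that decision flow (data shapes and names are illustrative, not the Dynamics SDK's):

```python
def resolve_entity_id(defined_keys, records, key_attributes, primary_id=None):
    """Resolve an entity ID following the documented exception rules.

    `defined_keys` is a list of attribute-name sets (the entity's alternate
    keys); `records` maps frozensets of (attribute, value) pairs to record
    IDs. Raises when no key definition or no record matches.
    """
    if primary_id is not None:  # the primary ID is used if it is provided
        return primary_id
    if not key_attributes:
        raise ValueError("no primary ID and no key attributes provided")
    if set(key_attributes) not in [set(k) for k in defined_keys]:
        raise ValueError("attribute set does not match a defined key")
    match = frozenset(key_attributes.items())
    if match not in records:
        raise KeyError("no record matches the provided key values")
    return records[match]
```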
6fe21742485dd4047c63ac35418f6307e86c6dac
4,070
md
Markdown
docs/framework/interop/marshaling-a-delegate-as-a-callback-method.md
dhernandezb/docs.es-es
cf1637e989876a55eb3c57002818d3982591baf1
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/framework/interop/marshaling-a-delegate-as-a-callback-method.md
dhernandezb/docs.es-es
cf1637e989876a55eb3c57002818d3982591baf1
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/framework/interop/marshaling-a-delegate-as-a-callback-method.md
dhernandezb/docs.es-es
cf1637e989876a55eb3c57002818d3982591baf1
[ "CC-BY-4.0", "MIT" ]
null
null
null
--- title: Serialización de un delegado como un método de devolución de llamada ms.date: 03/30/2017 dev_langs: - csharp - vb - cpp helpviewer_keywords: - data marshaling, Callback sample - marshaling, Callback sample ms.assetid: 6ddd7866-9804-4571-84de-83f5cc017a5a author: rpetrusha ms.author: ronpet ms.openlocfilehash: 579bc56a538707fd19d6d089c7f3c0c0561ea9eb ms.sourcegitcommit: b22705f1540b237c566721018f974822d5cd8758 ms.translationtype: HT ms.contentlocale: es-ES ms.lasthandoff: 10/19/2018 ms.locfileid: "49454426" --- # <a name="marshaling-a-delegate-as-a-callback-method"></a>Serialización de un delegado como un método de devolución de llamada En este ejemplo se muestra cómo pasar delegados a una función no administrada que espera recibir punteros de función. Un delegado es una clase que puede contener una referencia a un método y equivale a un puntero de función con seguridad de tipos o a una función de devolución de llamada. > [!NOTE] > Cuando se usa un delegado dentro de una llamada, Common Language Runtime evita la eliminación del delegado por el recolector de elementos no utilizados mientras dure esa llamada. Pero si la función no administrada almacena el delegado para usarlo al finalizar la llamada, debe impedir manualmente la recolección de elementos no utilizados hasta que finalice la función no administrada con el delegado. Para más información, vea [HandleRef (ejemplo)](https://msdn.microsoft.com/library/ab23b04e-1d53-4ec7-b27a-e892d9298959(v=vs.100)) y [GCHandle (ejemplo)](https://msdn.microsoft.com/library/6acce798-0385-4ded-a790-77da842c113f(v=vs.100)). En el ejemplo de devolución de llamada se usan las siguientes funciones no administradas, que se muestran con su declaración de función original: - **TestCallBack** exportada desde PinvokeLib.dll. ``` void TestCallBack(FPTR pf, int value); ``` - **TestCallBack2** exportada desde PinvokeLib.dll. 
``` void TestCallBack2(FPTR2 pf2, char* value); ``` [PinvokeLib.dll](https://docs.microsoft.com/previous-versions/dotnet/netframework-4.0/as6wyhwt(v=vs.100)) es una biblioteca personalizada no administrada que contiene una implementación para las funciones enumeradas anteriormente. En este ejemplo, la clase `LibWrap` contiene prototipos administrados para los métodos `TestCallBack` y `TestCallBack2`. Ambos métodos pasan un delegado a una función de devolución de llamada como un parámetro. La firma del delegado debe coincidir con la firma del método al que hace referencia. Por ejemplo, los delegados `FPtr` y `FPtr2` tienen firmas idénticas a las de los métodos `DoSomething` y `DoSomething2`. ## <a name="declaring-prototypes"></a>Declaración de prototipos [!code-cpp[Conceptual.Interop.Marshaling#37](../../../samples/snippets/cpp/VS_Snippets_CLR/conceptual.interop.marshaling/cpp/callback.cpp#37)] [!code-csharp[Conceptual.Interop.Marshaling#37](../../../samples/snippets/csharp/VS_Snippets_CLR/conceptual.interop.marshaling/cs/callback.cs#37)] [!code-vb[Conceptual.Interop.Marshaling#37](../../../samples/snippets/visualbasic/VS_Snippets_CLR/conceptual.interop.marshaling/vb/callback.vb#37)] ## <a name="calling-functions"></a>Llamadas a funciones [!code-cpp[Conceptual.Interop.Marshaling#38](../../../samples/snippets/cpp/VS_Snippets_CLR/conceptual.interop.marshaling/cpp/callback.cpp#38)] [!code-csharp[Conceptual.Interop.Marshaling#38](../../../samples/snippets/csharp/VS_Snippets_CLR/conceptual.interop.marshaling/cs/callback.cs#38)] [!code-vb[Conceptual.Interop.Marshaling#38](../../../samples/snippets/visualbasic/VS_Snippets_CLR/conceptual.interop.marshaling/vb/callback.vb#38)] ## <a name="see-also"></a>Vea también [Diversos ejemplos de serialización](https://msdn.microsoft.com/library/a915c948-54e9-4d0f-a525-95a77fd8ed70(v=vs.100)) [Tipos de datos de invocación de plataforma](https://msdn.microsoft.com/library/16014d9f-d6bd-481e-83f0-df11377c550f(v=vs.100)) [Crear prototipos 
en código administrado](creating-prototypes-in-managed-code.md)
68.983051
644
0.773956
spa_Latn
0.901888
6fe2a70f5d5fa62321561225c8044bfe7422a51f
4,148
md
Markdown
README.md
PacktPublishing/Binary-Analysis-Cookbook
5ccc584a71d3b9f28dfd1a29a853ff543c6dbac0
[ "MIT" ]
26
2019-04-20T02:58:37.000Z
2022-02-25T04:29:33.000Z
README.md
PacktPublishing/Binary-Analysis-Cookbook
5ccc584a71d3b9f28dfd1a29a853ff543c6dbac0
[ "MIT" ]
null
null
null
README.md
PacktPublishing/Binary-Analysis-Cookbook
5ccc584a71d3b9f28dfd1a29a853ff543c6dbac0
[ "MIT" ]
15
2019-05-28T11:55:46.000Z
2022-03-04T23:24:38.000Z
# Binary Analysis Cookbook <a href="https://www.packtpub.com/in/security/binary-analysis-cookbook?utm_source=github&utm_medium=repository&utm_campaign=9781789807608"><img src="https://www.packtpub.com/media/catalog/product/cache/e4d64343b1bc593f1c5348fe05efa4a6/9/7/9781789807608-original.jpeg" alt="Binary Analysis Cookbook " height="256px" align="right"></a> This is the code repository for [Binary Analysis Cookbook ](https://www.packtpub.com/in/security/binary-analysis-cookbook?utm_source=github&utm_medium=repository&utm_campaign=9781789807608), published by Packt. **Actionable recipes for disassembling and analyzing binaries for security risks** ## What is this book about? Binary analysis is the process of examining a binary program to determine information security actions. It is a complex, constantly evolving, and challenging topic that crosses over into several domains of information technology and security. This book covers the following exciting features: * Traverse the IA32, IA64, and ELF specifications * Explore Linux tools to disassemble ELF binaries * Identify vulnerabilities in 32-bit and 64-bit binaries * Discover actionable solutions to overcome the limitations in analyzing ELF binaries * Interpret the output of Linux tools to identify security risks in binaries * Understand how dynamic taint analysis works If you feel this book is for you, get your [copy](https://www.amazon.com/dp/1789807603) today! <a href="https://www.packtpub.com/?utm_source=github&utm_medium=banner&utm_campaign=GitHubBanner"><img src="https://raw.githubusercontent.com/PacktPublishing/GitHub/master/GitHub.png" alt="https://www.packtpub.com/" border="5" /></a> ## Instructions and Navigations All of the code is organized into folders. For example, Chapter02. 
The code will look like the following: ``` ; MUL examples mul edi mul bx mul cl ``` **Following is what you need for this book:** This book is for anyone looking to learn how to dissect ELF binaries using open-source tools available in Linux. If you’re a Linux system administrator or information security professional, you’ll find this guide useful. Basic knowledge of Linux, familiarity with virtualization technologies and the working of network sockets, and experience in basic Python or Bash scripting will assist you with understanding the concepts in this book With the following software and hardware list you can run all code files present in the book (Chapter 1-10). ### Software and Hardware List | Chapter | Software required | Hardware specifications | | -------- | ------------------------------------ | ----------------------------------- | | 1-10 | Windows, Mac, Linux | Laptop or desktop with the following: Intel Processor, 8GB RAM (16GB or more preferred), 250 GB or more HDD/SSD | We also provide a PDF file that has color images of the screenshots/diagrams used in this book. [Click here to download it](https://static.packt-cdn.com/downloads/9781789807608_ColorImages.pdf). ### Related products * Learning Linux Binary Analysis [[Packt]](https://www.packtpub.com/gb/networking-and-servers/learning-linux-binary-analysis?utm_source=github&utm_medium=repository&utm_campaign=9781782167105) [[Amazon]](https://www.amazon.com/dp/1782167102) ## Get to Know the Author **Michael Born** is a senior security consultant for SecureSky, Inc. Michael has earned several industry certifications and has co-taught offensive-focused Python programming classes at OWASP AppSec USA, and AppSec Europe. 
He enjoys coding in Python, IA32, IA64, PowerShell, participating in, and designing, capture the flag (ctf) challenges, teaching and mentoring others looking to embark on a career in information security, and presenting on various information security topics at local chapters of well-known information security groups. Michael has served on the chapter board for his local OWASP chapter, is a lifetime OWASP member, and participates in the local DC402 group. ### Suggestions and Feedback [Click here](https://docs.google.com/forms/d/e/1FAIpQLSdy7dATC6QmEL81FIUuymZ0Wy9vH1jHkvpY57OiMeKGqib_Ow/viewform) if you have any feedback or suggestions.
71.517241
665
0.782305
eng_Latn
0.957571
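The `mul` snippet quoted from the book's Chapter02 performs unsigned widening multiplies: `mul edi`, for example, leaves the double-width product in EDX:EAX. A short Python sketch of that semantics (a model for reasoning about the instruction, not an emulator):

```python
def x86_mul(a, b, bits=32):
    """Unsigned widening multiply as performed by x86 MUL: the double-width
    product is split into (high, low) halves, which MUL places in EDX:EAX
    (32-bit operand), DX:AX (16-bit) or AH:AL (8-bit)."""
    mask = (1 << bits) - 1
    product = (a & mask) * (b & mask)
    return (product >> bits) & mask, product & mask
```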
6fe32eb4bd793e10ab1616835ac8bea5616decbc
802
md
Markdown
README.md
jiang-jackson/Guizhou-tourism-website
76b2847f8cc6a4ead8b3b0067ace9ac74043aef5
[ "MIT" ]
3
2017-05-31T05:41:45.000Z
2021-07-20T22:36:48.000Z
README.md
jiang-jackson/Guizhou-tourism-website
76b2847f8cc6a4ead8b3b0067ace9ac74043aef5
[ "MIT" ]
null
null
null
README.md
jiang-jackson/Guizhou-tourism-website
76b2847f8cc6a4ead8b3b0067ace9ac74043aef5
[ "MIT" ]
2
2019-04-13T23:24:12.000Z
2021-06-18T06:31:16.000Z
# Guizhou-tourism-website Guizhou tourism website is a travel site that brings together scenery, food, hotels, ethnic customs, and fashion. It was developed on the mainstream Windows 8.1 platform as a dynamic website, supported by the XAPP integrated environment, built with the open-source scripting language PHP, and combined with the web front-end languages HTML, CSS, JavaScript, etc. 贵州旅游网站是一个集风景、美食、酒店、民族风情、时尚线路等为一体的旅游网站, 在主流平台Windows 8.1上开发,得到XAPP集成环境的支撑 ,采用开源脚本语言PHP开发,是一个动态网站, 结合网页前端开发语言HTML,CSS,JavaScript等完成前后端页面的制作设计,系统界面友好、功能强大、使用方便。 该旅游网站具有旅游信息浏览和查询功能,后台管理员可以进行添加、删除、线路发布、修改景点。通过这些模块实现旅游咨询共享, 为游客提供详细、准确、及时、高效的信息服务。该网站系统采用Sublime Text作为前后台开发工具, 同时系统采用了ThinkPHP框架,使整个系统的设计思路更加清晰,同时还应用了Bootstrap和jQuery框架提供的一些封装语法,使页面代码更加简单明了,更加易于维护。 同时,为了使页面更加人性化,系统中还应用Ajax 技术实现用户登录后评论与留言等功能。
80.2
374
0.84788
eng_Latn
0.47406
6fe47a8a6eeb86441aa8009532f14cc44e2fc316
11,998
md
Markdown
articles/virtual-desktop/configure-vm-gpu.md
flexray/azure-docs.pl-pl
bfb8e5d5776d43b4623ce1c01dc44c8efc769c78
[ "CC-BY-4.0", "MIT" ]
12
2017-08-28T07:45:55.000Z
2022-03-07T21:35:48.000Z
articles/virtual-desktop/configure-vm-gpu.md
flexray/azure-docs.pl-pl
bfb8e5d5776d43b4623ce1c01dc44c8efc769c78
[ "CC-BY-4.0", "MIT" ]
441
2017-11-08T13:15:56.000Z
2021-06-02T10:39:53.000Z
articles/virtual-desktop/configure-vm-gpu.md
flexray/azure-docs.pl-pl
bfb8e5d5776d43b4623ce1c01dc44c8efc769c78
[ "CC-BY-4.0", "MIT" ]
27
2017-11-13T13:38:31.000Z
2022-02-17T11:57:33.000Z
--- title: Konfigurowanie procesora GPU dla pulpitu wirtualnego systemu Windows — Azure description: Jak włączyć procesor GPU w szybszym wyrenderowaniu i kodowaniu na pulpicie wirtualnym systemu Windows. author: gundarev ms.topic: how-to ms.date: 05/06/2019 ms.author: denisgun ms.openlocfilehash: f95b9c1615cc58d9cc0589bad98c7315e571686e ms.sourcegitcommit: 32e0fedb80b5a5ed0d2336cea18c3ec3b5015ca1 ms.translationtype: MT ms.contentlocale: pl-PL ms.lasthandoff: 03/30/2021 ms.locfileid: "105709467" --- # <a name="configure-graphics-processing-unit-gpu-acceleration-for-windows-virtual-desktop"></a>Konfigurowanie przyspieszania procesora graficznego (GPU) dla usługi Windows Virtual Desktop >[!IMPORTANT] >Ta zawartość dotyczy pulpitu wirtualnego systemu Windows z Azure Resource Manager obiektów pulpitu wirtualnego systemu Windows. Jeśli używasz pulpitu wirtualnego systemu Windows (klasycznego) bez Azure Resource Manager obiektów, zobacz [ten artykuł](./virtual-desktop-fall-2019/configure-vm-gpu-2019.md). Pulpit wirtualny systemu Windows obsługuje renderowanie i kodowanie procesora GPU w celu zwiększenia wydajności i skalowalności aplikacji. Przyspieszenie GPU jest szczególnie istotne dla aplikacji intensywnie korzystających z grafiki. Postępuj zgodnie z instrukcjami w tym artykule, aby utworzyć maszynę wirtualną platformy Azure zoptymalizowaną pod kątem procesora GPU, dodać ją do puli hostów i skonfigurować do używania przyspieszenia procesora GPU na potrzeby renderowania i kodowania. W tym artykule przyjęto założenie, że masz już skonfigurowaną dzierżawę pulpitu wirtualnego systemu Windows. ## <a name="select-an-appropriate-gpu-optimized-azure-virtual-machine-size"></a>Wybierz odpowiedni rozmiar maszyny wirtualnej platformy Azure zoptymalizowany pod kątem procesora GPU Wybierz jedną z rozmiarów maszyn wirtualnych [z serii](../virtual-machines/nv-series.md) [NVv3](../virtual-machines/nvv3-series.md)lub [NVv4](../virtual-machines/nvv4-series.md) . 
Są one dostosowane do wirtualizacji aplikacji i pulpitu oraz umożliwiają przyspieszenie większości aplikacji i interfejsu użytkownika systemu Windows. Wybór właściwy dla puli hostów zależy od wielu czynników, w tym konkretnych obciążeń aplikacji, odpowiedniej jakości środowiska użytkownika i kosztów. Ogólnie rzecz biorąc, większe i wydajniejsze procesory GPU oferują lepsze środowisko użytkownika w danej gęstości użytkownika, podczas gdy mniejsze i ułamkowe rozmiary procesora GPU umożliwiają dokładniejszą kontrolę nad kosztami i jakością. >[!NOTE] >Maszyny wirtualne z serii NC, NCv2, Seria NCV3, ND i NDv2 na platformie Azure zwykle nie są odpowiednie dla hostów sesji usług pulpitu wirtualnego systemu Windows. Te maszyny wirtualne są dostosowane do wyspecjalizowanych, wysoko wydajnych narzędzi obliczeniowych lub uczenia maszynowego, takich jak te utworzone za pomocą technologii NVIDIA CUDA. Nie obsługują przyspieszenia procesora GPU dla większości aplikacji lub interfejsu użytkownika systemu Windows. ## <a name="create-a-host-pool-provision-your-virtual-machine-and-configure-an-app-group"></a>Tworzenie puli hostów, Inicjowanie obsługi administracyjnej maszyny wirtualnej i Konfigurowanie grupy aplikacji Utwórz nową pulę hostów przy użyciu maszyny wirtualnej o wybranym rozmiarze. Aby uzyskać instrukcje, zobacz [Samouczek: Tworzenie puli hostów przy użyciu Azure Portal](./create-host-pools-azure-marketplace.md). Pulpit wirtualny systemu Windows obsługuje renderowanie i kodowanie procesora GPU w następujących systemach operacyjnych: * Windows 10 w wersji 1511 lub nowszej * Windows Server 2016 lub nowszy Należy również skonfigurować grupę aplikacji lub użyć domyślnej grupy aplikacji pulpitu (o nazwie "aplikacja klasyczna"), która jest tworzona automatycznie podczas tworzenia nowej puli hostów. Aby uzyskać instrukcje, zobacz [Samouczek: Zarządzanie grupami aplikacji dla pulpitu wirtualnego systemu Windows](./manage-app-groups.md). 
## <a name="install-supported-graphics-drivers-in-your-virtual-machine"></a>Instaluj obsługiwane sterowniki grafiki na maszynie wirtualnej Aby skorzystać z możliwości procesora GPU maszyn wirtualnych z serii N w systemie Windows, należy zainstalować odpowiednie sterowniki grafiki. Postępuj zgodnie z instrukcjami w obszarze [obsługiwane systemy operacyjne i sterowniki,](../virtual-machines/sizes-gpu.md#supported-operating-systems-and-drivers) aby zainstalować sterowniki. Obsługiwane są tylko sterowniki dystrybuowane przez platformę Azure. * W przypadku maszyn wirtualnych z serii NV lub serii NVv3, tylko sterowników NVIDIA GRID, a nie sterowników NVIDIA CUDA, obsługa przyspieszania GPU dla większości aplikacji i interfejsu użytkownika systemu Windows. W przypadku wybrania opcji ręcznego instalowania sterowników należy zainstalować sterowniki siatki. Jeśli zdecydujesz się zainstalować sterowniki przy użyciu rozszerzenia maszyny wirtualnej platformy Azure, Sterowniki siatki zostaną automatycznie zainstalowane dla tych rozmiarów maszyn wirtualnych. * W przypadku maszyn wirtualnych z serii Azure NVv4 Zainstaluj sterowniki AMD dostarczone przez platformę Azure. Można je zainstalować automatycznie przy użyciu rozszerzenia maszyny wirtualnej platformy Azure lub można zainstalować je ręcznie. Po zainstalowaniu sterownika wymagane jest ponowne uruchomienie maszyny wirtualnej. Wykonaj kroki weryfikacji opisane powyżej, aby potwierdzić, że sterowniki grafiki zostały pomyślnie zainstalowane. ## <a name="configure-gpu-accelerated-app-rendering"></a>Konfigurowanie renderowania aplikacji przyspieszonej przez procesor GPU Domyślnie aplikacje i komputery stacjonarne działające w konfiguracjach wielosesyjnych są renderowane z użyciem procesora CPU i nie wykorzystują dostępnych procesorów GPU do renderowania. Skonfiguruj zasady grupy dla hosta sesji w celu włączenia renderowania przyspieszanego przez procesor GPU: 1. 
Połącz się z pulpitem maszyny wirtualnej przy użyciu konta z uprawnieniami administratora lokalnego. 2. Otwórz menu Start i wpisz "gpedit. msc", aby otworzyć Edytor zasady grupy. 3. Przejdź do węzła **Konfiguracja komputera** > **Szablony administracyjne** > **składniki systemu Windows** > **usługi pulpitu zdalnego** > **pulpit zdalny** > **środowisku sesji zdalnej** hosta sesji. 4. Wybierz pozycję zasady **Użyj sprzętowych kart graficznych dla wszystkich sesji usługi pulpitu zdalnego** i **Ustaw dla tych zasad włączenie renderowania** procesora GPU w sesji zdalnej. ## <a name="configure-gpu-accelerated-frame-encoding"></a>Konfigurowanie kodowania ramek z przyspieszeniem procesora GPU Pulpit zdalny koduje wszystkie grafiki renderowane przez aplikacje i komputery stacjonarne (renderowane z procesorem GPU lub z procesorem CPU) do przesyłania do Pulpit zdalny klientów. Gdy część ekranu jest często aktualizowana, ta część ekranu jest zaszyfrowana przy użyciu kodera-dekoder wideo (H. 264/AVC). Domyślnie Pulpit zdalny nie wykorzystuje dostępnych procesorów GPU dla tego kodowania. Skonfiguruj zasady grupy dla hosta sesji, aby umożliwić kodowanie ramek przez procesor GPU. Kontynuując powyższe kroki: >[!NOTE] >Przyspieszenie procesora GPU nie jest dostępne na maszynach wirtualnych z serii NVv4. 1. Wybierz pozycję zasady **Konfiguruj kodowanie sprzętu H. 264/AVC dla połączeń pulpit zdalny** i ustaw te zasady na **włączone** , aby włączyć kodowanie sprzętu dla AVC/H. 264 w sesji zdalnej. >[!NOTE] >W systemie Windows Server 2016 ustaw opcję **Preferuj kodowanie sprzętu AVC** , aby **zawsze próbować**. 2. Teraz, gdy zasady grupy zostały edytowane, Wymuś aktualizację zasad grupy. Otwórz wiersz polecenia i wpisz: ```cmd gpupdate.exe /force ``` 3. Wyloguj się z sesji Pulpit zdalny. 
## <a name="configure-fullscreen-video-encoding"></a>Konfigurowanie pełnoekranowego kodowania wideo Jeśli często używasz aplikacji, które generują zawartość o wysokiej rozdzielczości, taką jak modelowanie 3W, programy CAD i wideo, możesz włączyć pełnoekranowe kodowanie wideo dla sesji zdalnej. Pełny profil wideo zapewnia wyższą szybkość klatek i lepszy komfort korzystania z takich aplikacji na koszt przepustowości sieci oraz zasobów hosta sesji i klienta. Zaleca się używanie przyspieszania procesora GPU dla kodowania wideo w trybie pełnoekranowym. Skonfiguruj zasady grupy dla hosta sesji, aby włączyć pełnoekranowe kodowanie wideo. Kontynuując powyższe kroki: 1. Wybierz pozycję zasady **ustalania priorytetów tryb grafiki h. 264/avc 444 dla połączeń pulpit zdalny** i ustaw te zasady tak **, aby** wymusić na koderze-dekoder H. 264/AVC 444 w sesji zdalnej. 2. Teraz, gdy zasady grupy zostały edytowane, Wymuś aktualizację zasad grupy. Otwórz wiersz polecenia i wpisz: ```cmd gpupdate.exe /force ``` 3. Wyloguj się z sesji Pulpit zdalny. ## <a name="verify-gpu-accelerated-app-rendering"></a>Weryfikowanie renderowania aplikacji przyspieszonej przez procesor GPU Aby sprawdzić, czy aplikacje używają procesora GPU do renderowania, spróbuj wykonać jedną z następujących czynności: * W przypadku maszyn wirtualnych platformy Azure z procesorem GPU NVIDIA Użyj `nvidia-smi` narzędzia zgodnie z opisem w temacie [Weryfikowanie instalacji sterowników](../virtual-machines/windows/n-series-driver-setup.md#verify-driver-installation) , aby sprawdzić użycie procesora GPU podczas uruchamiania aplikacji. * W przypadku obsługiwanych wersji systemu operacyjnego można użyć Menedżera zadań do sprawdzenia użycia procesora GPU. Wybierz procesor GPU na karcie "Performance" (wydajność), aby sprawdzić, czy aplikacje korzystają z procesora GPU. 
## <a name="verify-gpu-accelerated-frame-encoding"></a>Verify GPU-accelerated frame encoding To verify that Remote Desktop is using GPU-accelerated encoding: 1. Connect to the desktop of the virtual machine using the Windows Virtual Desktop client. 2. Launch Event Viewer and navigate to the following node: **Applications and Services Logs** > **Microsoft** > **Windows** > **RemoteDesktopServices-RdpCoreCDV** > **Operational** 3. To determine whether GPU-accelerated encoding is in use, look for event ID 170. If you see "AVC hardware encoder enabled: 1", GPU encoding is in use. ## <a name="verify-fullscreen-video-encoding"></a>Verify fullscreen video encoding To verify that Remote Desktop is using fullscreen video encoding: 1. Connect to the desktop of the virtual machine using the Windows Virtual Desktop client. 2. Launch Event Viewer and navigate to the following node: **Applications and Services Logs** > **Microsoft** > **Windows** > **RemoteDesktopServices-RdpCoreCDV** > **Operational** 3. To determine whether fullscreen video encoding is in use, look for event ID 162. If you see "AVC available: 1 initial profile: 2048", AVC 444 is in use. ## <a name="next-steps"></a>Next steps These instructions should get you up and running with GPU acceleration on one session host (one virtual machine). Some additional considerations for enabling GPU acceleration across a larger host pool: * Consider using a [virtual machine extension](../virtual-machines/extensions/overview.md) to simplify driver installation and updates across multiple VMs. 
Use the [NVIDIA GPU Driver Extension](../virtual-machines/extensions/hpccompute-gpu-windows.md) for VMs with NVIDIA GPUs, and use the [AMD GPU Driver Extension](../virtual-machines/extensions/hpccompute-amd-gpu-windows.md) for VMs with AMD GPUs. * Consider using Active Directory Group Policy to simplify Group Policy configuration across multiple VMs. For information about deploying Group Policy in an Active Directory domain, see [Working with Group Policy Objects](/previous-versions/windows/it-pro/windows-server-2008-R2-and-2008/cc731212(v=ws.11)).
99.983333
723
0.818136
pol_Latn
0.999851
6fe484b5f35627fbf139bffeac33d624d8b5c26b
1,901
md
Markdown
content/project/my-project-name-1/index.md
alhoori/academic
5cb4eba4723da6db9411d8a812c0374418204ba2
[ "MIT" ]
null
null
null
content/project/my-project-name-1/index.md
alhoori/academic
5cb4eba4723da6db9411d8a812c0374418204ba2
[ "MIT" ]
null
null
null
content/project/my-project-name-1/index.md
alhoori/academic
5cb4eba4723da6db9411d8a812c0374418204ba2
[ "MIT" ]
null
null
null
--- # Documentation: https://sourcethemes.com/academic/docs/managing-content/ title: "My Project Name 2" summary: "A simple app to help me record data on my seedlings." authors: ["admin"] tags: ["iOS", "Swift", "application", "app", "open source", "programming"] categories: ["Programming"] date: 2019-10-05T08:29:29-04:00 draft: false # Optional external URL for project (replaces project detail page). external_link: "" # Featured image # To use, add an image named `featured.jpg/png` to your page's folder. # Focal points: Smart, Center, TopLeft, Top, TopRight, Left, Right, BottomLeft, Bottom, BottomRight. image: caption: "" focal_point: "Center" preview_only: false # Custom links (optional). # Uncomment and edit lines below to show custom links. # links: # - name: Follow # url: https://twitter.com # icon_pack: fab # icon: twitter # Custom links (optional). # Uncomment and edit lines below to show custom links. links: - name: Source url: https://github.com/jhrcook/Germination-Tracker icon_pack: fab icon: github url_code: "" url_pdf: "" url_slides: "" url_video: "" # Slides (optional). # Associate this project with Markdown slides. # Simply enter your slide deck's filename without extension. # E.g. `slides = "example-slides"` references `content/slides/example-slides.md`. # Otherwise, set `slides = ""`. slides: "" --- # Purpose A hobby I have picked up recently is growing succulents and cacti from seed. I have sown *Lithops* and one batch of *Astrophytum*, but still have a bunch more seeds ready for another batch. The goal of this app is to help me track their progress and keep notes on my husbandry. It records how many seeds I sowed and when. Further, it tracks their germination rate over time and has a handy section where I can record my notes on the process. Finally, I can visually record their progress in a library of photos.
31.163934
119
0.725934
eng_Latn
0.965314
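The project record above describes an app that records how many seeds were sown and when, and tracks the germination rate over time. As a rough illustration of the bookkeeping involved (the class and field names are my own invention, not taken from the app's source), a minimal sketch:

```python
from dataclasses import dataclass, field
from datetime import date


@dataclass
class SeedBatch:
    """One sowing of seeds, e.g. a batch of Lithops."""
    species: str
    sown: int                                       # number of seeds sown
    sow_date: date
    germinated: dict = field(default_factory=dict)  # observation date -> cumulative count

    def record_germination(self, on: date, cumulative_count: int) -> None:
        """Record the cumulative number of germinated seeds seen on a given date."""
        self.germinated[on] = cumulative_count

    def germination_rate(self) -> float:
        """Fraction of sown seeds that have germinated so far."""
        if not self.germinated:
            return 0.0
        return max(self.germinated.values()) / self.sown


batch = SeedBatch("Lithops", sown=40, sow_date=date(2019, 10, 5))
batch.record_germination(date(2019, 10, 12), 10)
batch.record_germination(date(2019, 10, 19), 25)
print(batch.germination_rate())  # 0.625
```

Tracking the rate per observation date rather than as a single number is what makes the "germination rate over time" plot possible.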
6fe4a3b33c8f1f8b0e218c38b4e8b96710e1f17b
7,265
md
Markdown
articles/active-directory/develop/active-directory-certificate-credentials.md
Ksantacr/azure-docs.es-es
d3abf102433fd952aafab2c57a55973ea05a9acb
[ "CC-BY-4.0", "MIT" ]
null
null
null
articles/active-directory/develop/active-directory-certificate-credentials.md
Ksantacr/azure-docs.es-es
d3abf102433fd952aafab2c57a55973ea05a9acb
[ "CC-BY-4.0", "MIT" ]
null
null
null
articles/active-directory/develop/active-directory-certificate-credentials.md
Ksantacr/azure-docs.es-es
d3abf102433fd952aafab2c57a55973ea05a9acb
[ "CC-BY-4.0", "MIT" ]
null
null
null
--- title: Certificate credentials in Azure AD | Microsoft Docs description: This article describes registering and using certificate credentials for application authentication services: active-directory documentationcenter: .net author: rwike77 manager: CelesteDG editor: '' ms.assetid: 88f0c64a-25f7-4974-aca2-2acadc9acbd8 ms.service: active-directory ms.subservice: develop ms.workload: identity ms.tgt_pltfrm: na ms.devlang: na ms.topic: article ms.date: 05/21/2019 ms.author: ryanwi ms.reviewer: nacanuma, jmprieur ms.custom: aaddev ms.collection: M365-identity-device-management ms.openlocfilehash: ed4e7559ff6c3b76bbdf49b538ffebf3ad09cc58 ms.sourcegitcommit: 13cba995d4538e099f7e670ddbe1d8b3a64a36fb ms.translationtype: MT ms.contentlocale: es-ES ms.lasthandoff: 05/22/2019 ms.locfileid: "66001231" --- # <a name="certificate-credentials-for-application-authentication"></a>Certificate credentials for application authentication Azure Active Directory (Azure AD) allows an application to use its own credentials for authentication, for example, in the OAuth 2.0 client credentials grant flow ([v1.0](v1-oauth2-client-creds-grant-flow.md) and [v2.0](v2-oauth2-client-creds-grant-flow.md)) and the on-behalf-of flow ([v1.0](v1-oauth2-on-behalf-of-flow.md) and [v2.0](v2-oauth2-on-behalf-of-flow.md)). One credential format an application can use for authentication is a JSON Web Token (JWT) assertion signed with a certificate that the application owns. ## <a name="assertion-format"></a>Assertion format To compute the assertion, you can use one of the many [JSON Web Token](https://jwt.ms/) libraries in the language of your choice. 
The information carried in the token is as follows: ### <a name="header"></a>Header | Parameter | Remark | | --- | --- | | `alg` | Must be **RS256** | | `typ` | Must be **JWT** | | `x5t` | Must be the SHA-1 thumbprint of the X.509 certificate | ### <a name="claims-payload"></a>Claims (payload) | Parameter | Remarks | | --- | --- | | `aud` | Audience: must be **https://login.microsoftonline.com/*tenant_Id*/oauth2/token** | | `exp` | Expiration date: the date when the token expires. The time is represented as the number of seconds from January 1, 1970 (1970-01-01T0:0:0Z) UTC until the time the token's validity expires.| | `iss` | Issuer: must be the client_id (application ID of the client service) | | `jti` | GUID: the JWT ID | | `nbf` | Not before: the date before which the token cannot be used. The time is represented as the number of seconds from January 1, 1970 (1970-01-01T0:0:0Z) UTC until the time the token was issued. | | `sub` | Subject: as for `iss`, must be the client_id (application ID of the client service) | ### <a name="signature"></a>Signature The signature is computed by applying the certificate as described in the [JSON Web Token RFC7519 specification](https://tools.ietf.org/html/rfc7519) ## <a name="example-of-a-decoded-jwt-assertion"></a>Example of a decoded JWT assertion ``` { "alg": "RS256", "typ": "JWT", "x5t": "gx8tGysyjcRqKjFPnd7RFwvwZI0" } . { "aud": "https: //login.microsoftonline.com/contoso.onmicrosoft.com/oauth2/token", "exp": 1484593341, "iss": "97e0a5b7-d745-40b6-94fe-5f77d35c6e05", "jti": "22b3bb26-e046-42df-9c96-65dbd72c1c81", "nbf": 1484592741, "sub": "97e0a5b7-d745-40b6-94fe-5f77d35c6e05" } . 
"Gh95kHCOEGq5E_ArMBbDXhwKR577scxYaoJ1P{a lot of characters here}KKJDEg" ``` ## <a name="example-of-an-encoded-jwt-assertion"></a>Example of an encoded JWT assertion The following string is an example of an encoded assertion. If you look carefully, you will notice three sections separated by dots (.): * The first section encodes the header. * The second section encodes the payload. * The last section is the signature, computed with the certificates from the content of the first two sections. ``` "eyJhbGciOiJSUzI1NiIsIng1dCI6Imd4OHRHeXN5amNScUtqRlBuZDdSRnd2d1pJMCJ9.eyJhdWQiOiJodHRwczpcL1wvbG9naW4ubWljcm9zb2Z0b25saW5lLmNvbVwvam1wcmlldXJob3RtYWlsLm9ubWljcm9zb2Z0LmNvbVwvb2F1dGgyXC90b2tlbiIsImV4cCI6MTQ4NDU5MzM0MSwiaXNzIjoiOTdlMGE1YjctZDc0NS00MGI2LTk0ZmUtNWY3N2QzNWM2ZTA1IiwianRpIjoiMjJiM2JiMjYtZTA0Ni00MmRmLTljOTYtNjVkYmQ3MmMxYzgxIiwibmJmIjoxNDg0NTkyNzQxLCJzdWIiOiI5N2UwYTViNy1kNzQ1LTQwYjYtOTRmZS01Zjc3ZDM1YzZlMDUifQ. Gh95kHCOEGq5E_ArMBbDXhwKR577scxYaoJ1P{a lot of characters here}KKJDEg" ``` ## <a name="register-your-certificate-with-azure-ad"></a>Register your certificate with Azure AD You can associate the certificate credentials with the client application in Azure AD through the Azure portal using either of the following methods: ### <a name="uploading-the-certificate-file"></a>Uploading the certificate file In the Azure app registration for the client application: 1. Select **Certificates & secrets**. 2. Click **Upload certificate** and select the certificate file to upload. 3. Click **Add**. Once the certificate is uploaded, the thumbprint, start date, and expiration values are displayed. 
### <a name="updating-the-application-manifest"></a>Updating the application manifest If you have a certificate, you need to compute: - `$base64Thumbprint`, which is the base64 encoding of the certificate hash - `$base64Value`, which is the base64 encoding of the certificate raw data You also need to provide a GUID to identify the key in the application manifest (`$keyId`). In the Azure app registration for the client application: 1. Select **Manifest** to open the application manifest. 2. Replace the *keyCredentials* property with your new certificate information using the following schema. ``` "keyCredentials": [ { "customKeyIdentifier": "$base64Thumbprint", "keyId": "$keyid", "type": "AsymmetricX509Cert", "usage": "Verify", "value": "$base64Value" } ] ``` 3. Save your edits to the application manifest and upload it to Azure AD. The `keyCredentials` property is multi-valued, so you can upload multiple certificates for richer key management. ## <a name="code-sample"></a>Code sample The code sample [Authenticating to Azure AD in daemon apps with certificates](https://github.com/Azure-Samples/active-directory-dotnet-daemon-certificate-credential) shows how an application uses its own credentials for authentication. It also shows how you can [create a self-signed certificate](https://github.com/Azure-Samples/active-directory-dotnet-daemon-certificate-credential#create-a-self-signed-certificate) using the `New-SelfSignedCertificate` PowerShell command. You can also use the [app creation scripts](https://github.com/Azure-Samples/active-directory-dotnet-daemon-certificate-credential/blob/master/AppCreationScripts/AppCreationScripts.md) to create the certificates, compute the thumbprint, and so on.
53.029197
850
0.770131
spa_Latn
0.937384
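The assertion format described in the record above (an RS256 header with an `x5t` thumbprint, plus `aud`/`exp`/`iss`/`jti`/`nbf`/`sub` claims) can be sketched with the standard library alone. This builds the first two dot-separated sections; the third section would be an RS256 signature over them computed with the certificate's private key, which is stubbed out here rather than implemented (the 600-second validity window is my own choice, not mandated by the doc):

```python
import base64
import json
import time
import uuid


def b64url(data: bytes) -> str:
    """Base64url-encode without padding, as JWT requires."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()


def build_unsigned_assertion(tenant: str, client_id: str, x5t: str) -> str:
    now = int(time.time())
    header = {"alg": "RS256", "typ": "JWT", "x5t": x5t}
    claims = {
        "aud": f"https://login.microsoftonline.com/{tenant}/oauth2/token",
        "exp": now + 600,      # validity window: an illustrative choice
        "iss": client_id,      # issuer = client_id of the app
        "jti": str(uuid.uuid4()),
        "nbf": now,
        "sub": client_id,      # subject = client_id, same as iss
    }
    signing_input = b64url(json.dumps(header).encode()) + "." + b64url(json.dumps(claims).encode())
    # Stub: a real assertion signs signing_input with the certificate's private key.
    signature = "<RS256 signature over signing_input>"
    return signing_input + "." + signature
```

In practice you would hand the header and claims to a JWT library that performs the RS256 signing, as the doc suggests.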
6fe4a9b93c1793d1cf9faa7612afcd649fe8387c
1,646
md
Markdown
README.md
kittyandrew/convert-document
f40023d2bcb5696c284e5f53fb322e539f489621
[ "MIT" ]
29
2019-04-17T03:46:52.000Z
2020-12-08T02:39:20.000Z
README.md
kittyandrew/convert-document
f40023d2bcb5696c284e5f53fb322e539f489621
[ "MIT" ]
11
2019-11-02T11:24:08.000Z
2020-10-27T09:25:53.000Z
README.md
kittyandrew/convert-document
f40023d2bcb5696c284e5f53fb322e539f489621
[ "MIT" ]
23
2019-04-02T10:20:24.000Z
2020-12-10T17:28:13.000Z
# convert-document A docker container environment to bundle the execution of `LibreOffice` to convert documents of various types (such as Word, OpenDocument, etc.) to PDF. An instance of `LibreOffice` will be run in the background, and controlled via a local socket (i.e. the UNO protocol). ## Usage This service is intended for use exclusively as a docker container. While it may be possible to run this application stand-alone, this is not recommended. For normal usage, you should pull the latest stable image off DockerHub and run it like this: ```shell docker pull alephdata/convert-document docker run -p 3000:3000 -ti alephdata/convert-document ``` Once the service has initialised, files can be sent to the `/convert` endpoint, and a PDF version will be returned as a download: ```shell curl -o out.pdf -F format=pdf -F 'file=@mydoc.doc' http://localhost:3000/convert ``` ## Development To build, run: ```shell docker build --rm -t alephdata/convert-document . ``` To get a development shell: ```shell make shell ``` Forced restart ```shell make build && docker-compose -f docker-compose.dev.yml stop convert-document && docker-compose -f docker-compose.dev.yml up -d convert-document ``` ## License MIT, see `LICENSE`. ## Troubleshooting * `LibreOffice` keeps crashing on startup with `Fatal exception: Signal 11` If [AppArmor](https://help.ubuntu.com/community/AppArmor) is running on the host machine, it may be blocking `LibreOffice` from starting up. Try disabling the `AppArmor` profiles related to `LibreOffice` by following these instructions: [https://askubuntu.com/a/1214363](https://askubuntu.com/a/1214363)
30.481481
271
0.755772
eng_Latn
0.92588
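The README above shows a curl call against the `/convert` endpoint. A standard-library Python equivalent that builds the same multipart/form-data request is sketched below; the field names `format` and `file` and the URL come from the curl example, while the multipart plumbing is ordinary boilerplate:

```python
import urllib.request
import uuid


def build_convert_request(file_name: str, file_bytes: bytes,
                          url: str = "http://localhost:3000/convert") -> urllib.request.Request:
    """Build the multipart POST equivalent to:
    curl -F format=pdf -F 'file=@mydoc.doc' http://localhost:3000/convert"""
    boundary = uuid.uuid4().hex
    parts = [
        # form field: format=pdf
        f'--{boundary}\r\nContent-Disposition: form-data; name="format"\r\n\r\npdf\r\n'.encode(),
        # file field with the document bytes
        (f'--{boundary}\r\nContent-Disposition: form-data; name="file"; '
         f'filename="{file_name}"\r\nContent-Type: application/octet-stream\r\n\r\n').encode()
        + file_bytes + b"\r\n",
        f"--{boundary}--\r\n".encode(),
    ]
    body = b"".join(parts)
    return urllib.request.Request(
        url, data=body,
        headers={"Content-Type": f"multipart/form-data; boundary={boundary}"})
```

With the container running, `urllib.request.urlopen(req).read()` returns the converted PDF bytes.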
6fe500b976af1965d2fc6093c80dd00a92720043
4,437
md
Markdown
articles/cdn/cdn-rules-engine-details.md
huiw-git/azure-content-zhtw
f20103dc3d404c9c929c155b36c5a47aee5baed6
[ "CC-BY-3.0" ]
null
null
null
articles/cdn/cdn-rules-engine-details.md
huiw-git/azure-content-zhtw
f20103dc3d404c9c929c155b36c5a47aee5baed6
[ "CC-BY-3.0" ]
null
null
null
articles/cdn/cdn-rules-engine-details.md
huiw-git/azure-content-zhtw
f20103dc3d404c9c929c155b36c5a47aee5baed6
[ "CC-BY-3.0" ]
1
2020-11-04T04:34:56.000Z
2020-11-04T04:34:56.000Z
<properties pageTitle="Azure Content Delivery Network (CDN) rules engine match conditions and features reference" description="This topic lists detailed descriptions of the available match conditions and features for the Azure Content Delivery Network (CDN) rules engine." services="cdn" documentationCenter="" authors="camsoper" manager="erikre" editor=""/> <tags ms.service="cdn" ms.workload="media" ms.tgt_pltfrm="na" ms.devlang="na" ms.topic="article" ms.date="02/25/2016" ms.author="casoper"/> # CDN rules engine match conditions and features reference This topic lists detailed descriptions of the available match conditions and features for the Azure Content Delivery Network (CDN) [rules engine](cdn-rules-engine.md). > [AZURE.NOTE] The rules engine requires the Premium CDN tier. For details on the features of the Standard and Premium CDN tiers, see the [Azure Content Delivery Network overview](cdn-overview.md). ## Match conditions Match conditions identify specific types of requests for which a set of features will be performed. For example, a match condition can be used to filter requests for content at a particular location, requests generated from a particular IP address or country/region, or requests filtered by header information. ### Always The Always match condition is designed to apply a default set of features to all requests. ### Device The Device match condition identifies requests made from mobile devices based on their properties. ### Location These match conditions are designed to identify requests based on the requester's location. Name | Purpose -----|-------- AS Number | Identifies requests that originate from a particular network. Country | Identifies requests that originate from a particular country/region. ### Origin These match conditions are designed to identify requests that point to CDN storage or a customer origin server. Name | Purpose -----|-------- CDN Origin | Identifies requests for content stored on CDN storage. Customer Origin | Identifies requests for content stored on a specific customer origin server. ### Request These match conditions are designed to identify requests based on their properties. Name | Purpose -----|-------- Client IP Address | Identifies requests that originate from a particular IP address. Cookie Parameter | Checks the cookies associated with each request for the specified value. Cookie Parameter Regex | Checks the cookies associated with each request against a regular expression. Edge Cname | Identifies requests that point to a specific edge CNAME. Referring Domain | Identifies requests that were referred from the specified host name. Request Header Literal | Identifies requests that contain the specified header set to the specified value. Request Header Regex | Identifies requests that contain the specified header set to a value that matches the specified regular expression. Request Header Wildcard | Identifies requests that contain the specified header set to a value that matches the specified pattern. Request Method | Identifies requests by their HTTP method. Request Scheme | Identifies requests by their HTTP protocol. ### URL These match conditions are designed to identify requests based on their URLs. Name | Purpose -----|-------- URL Path Directory | Identifies requests by their relative path. URL Path Extension | Identifies requests by their file extension. URL Path Filename | Identifies requests by their file name. URL Path Literal | Compares a request's relative path to the specified value. URL Path Regex | Compares a request's relative path to the specified regular expression. URL Path Wildcard | Compares a request's relative path to the specified pattern. URL Query Literal | Compares a request's query string to the specified value. URL Query Parameter | Identifies requests that contain the specified query string parameter set to a value that matches the specified pattern. URL Query Regex | Identifies requests that contain the specified query string parameter set to a value that matches the specified regular expression. URL Query Wildcard | Compares the specified value against the request's query string. ## Features Features define the type of action that will be applied to the type of requests identified by a set of match conditions. ### Access These features are designed to control access to content. Name | Purpose -----|-------- Deny Access | Determines whether all requests are rejected with a 403 Forbidden response. Token Auth | Determines whether token-based authentication is applied to a request. Token Auth Denial Code | Determines the type of response that will be returned to a user when a request is denied by token-based authentication. Token Auth Ignore URL Case | Determines whether URL comparisons made by token-based authentication are case-sensitive. Token Auth Parameter | Determines whether the token-based authentication query string parameter should be renamed. ### Caching These features are designed to customize when and how content is cached. Name 
| Purpose -----|-------- Bandwidth Parameters | Determines whether bandwidth throttling parameters (e.g., ec_rate and ec_prebuf) will be used. Bandwidth Throttling | Throttles the bandwidth of the responses served by our edge servers. Bypass Cache | Determines whether a request can leverage our caching technology. Cache-Control Header Treatment | Controls the generation of Cache-Control headers by an edge server when the External Max-Age feature is active. Cache-Key Query String | Determines whether the cache key will include or exclude the query string parameters associated with a request. Cache-Key Rewrite | Rewrites the cache key associated with a request. Complete Cache Fill | Determines what happens when a request results in a partial cache miss on an edge server. Compress File Types | Defines the file formats that will be compressed on the server. Default Internal Max-Age | Determines the default max-age interval for edge server to origin server cache revalidation. Expires Header Treatment | Controls the generation of Expires headers by an edge server when the External Max-Age feature is active. External Max-Age | Determines the max-age interval for browser to edge server cache revalidation. Force Internal Max-Age | Determines the max-age interval for edge server to origin server cache revalidation. H.264 Support (HTTP Progressive Download) | Determines the types of H.264 file formats that may be used to stream content. Honor No-Cache Request | Determines whether an HTTP client's no-cache requests will be forwarded to the origin server. Ignore Origin No-Cache | Determines whether our CDN will ignore certain directives served from an origin server. Ignore Unsatisfiable Ranges | Determines the response that will be returned to clients when a request generates a 416 Requested Range Not Satisfiable status code. Internal Max-Stale | Controls how long past its normal expiration a cached asset may be served from an edge server when the edge server is unable to revalidate the cached asset with the origin server. Partial Cache Sharing | Determines whether a request can generate partially cached content. Prevalidate Cached Content | Determines whether cached content will be eligible for early revalidation before its TTL expires. Refresh Zero-Byte Cache Files | Determines how an HTTP client's request for a 0-byte cached asset is handled by our edge servers. Set Cacheable Status Codes | Defines the set of status codes that can result in cached content. Stale Content Delivery on Error | Determines whether expired cached content will be delivered when an error occurs during cache revalidation or when retrieving the requested content from the customer origin server. Stale While Revalidate | Improves performance by allowing our edge servers to serve stale content to the requester while revalidation takes place. Comment | The Comment feature allows a note to be added within a rule. ### Headers These features are designed to add, modify, or delete headers from a request or response. Name | Purpose -----|-------- Age Response Header | Determines whether an Age response header will be included in the response sent to the requester. Debug Cache Response Headers | Determines whether a response may include the X-EC-Debug response header, which provides information on the cache policy for the requested asset. Modify Client Request Header | Overwrites, appends, or deletes a header from a request. Modify Client Response Header | Overwrites, appends, or deletes a header from a response. Set Client IP Custom Header | Allows the IP address of the requesting client to be added to the request as a custom request header. ### Logs These features are designed to customize the data stored in raw log files. Name | Purpose -----|-------- Custom Log Field 1 | Determines the format and content that will be assigned to the custom log field in a raw log file. Log Query String | Determines whether a query string will be stored along with the URL in access logs. ### Optimize These features determine whether a request will undergo the optimizations provided by the Edge Optimizer. Name | Purpose -----|-------- Edge Optimizer | Determines whether the Edge Optimizer can be applied to a request. Edge Optimizer – Instantiate Configuration | Instantiates or activates the Edge Optimizer configuration associated with a site. ### Origin These features are designed to control how the CDN communicates with an origin server. Name | Purpose -----|-------- Maximum Keep-Alive Requests | Determines the maximum number of requests for a Keep-Alive connection before it is closed. Proxy Special Headers | Defines the set of CDN-specific request headers that will be forwarded from an edge server to an origin server. ### Specialty These features provide advanced functionality that should only be used by advanced users. Name | Purpose -----|-------- Cacheable HTTP Methods | Determines the set of additional HTTP methods that can be cached on our network. Cacheable Request Body Size | 
Defines the threshold for determining whether a POST response can be cached. ### URL These features allow a request to be redirected or rewritten to a different URL. Name | Purpose -----|-------- Follow Redirects | Determines whether requests can be redirected to the host name defined in the Location header returned by a customer origin server. URL Redirect | Redirects requests via the Location header. URL Rewrite | Rewrites the request URL. ### Web Application Firewall The Web Application Firewall feature determines whether a request will be screened by the Web Application Firewall. ## See also * [Azure CDN Overview](cdn-overview.md) * [Override default HTTP behavior using the rules engine](cdn-rules-engine.md) <!---HONumber=AcomDC_0302_2016-------->
21.229665
90
0.739238
yue_Hant
0.974835
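The record above defines the rules engine as match conditions that identify requests plus features that act on them. A toy sketch of that evaluation model follows; the rule representation is invented purely for illustration (the real engine is configured in the Azure portal, not in code):

```python
# A rule pairs match conditions (predicates on a request) with features (actions).
def make_rule(conditions, features):
    return {"conditions": conditions, "features": features}


def evaluate(rules, request):
    """Return the features of every rule whose conditions all match the request."""
    applied = []
    for rule in rules:
        if all(cond(request) for cond in rule["conditions"]):
            applied.extend(rule["features"])
    return applied


# Example: deny access (403) to /secret/ paths requested over plain HTTP,
# combining a URL Path Directory-style condition with a Request Scheme-style one.
rules = [make_rule(
    conditions=[lambda r: r["path"].startswith("/secret/"),
                lambda r: r["scheme"] == "http"],
    features=["deny-access-403"],
)]
print(evaluate(rules, {"path": "/secret/a.txt", "scheme": "http"}))  # ['deny-access-403']
```

The point of the sketch is only the AND-of-conditions, list-of-features shape that the reference tables describe.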
6fe552799ad3284b986fa1de83721511deb0a58e
2,848
md
Markdown
src/cs/2020-04/04/06.md
PrJared/sabbath-school-lessons
94a27f5bcba987a11a698e5e0d4279b81a68bc9a
[ "MIT" ]
null
null
null
src/cs/2020-04/04/06.md
PrJared/sabbath-school-lessons
94a27f5bcba987a11a698e5e0d4279b81a68bc9a
[ "MIT" ]
null
null
null
src/cs/2020-04/04/06.md
PrJared/sabbath-school-lessons
94a27f5bcba987a11a698e5e0d4279b81a68bc9a
[ "MIT" ]
null
null
null
--- title: 'How to distinguish good from evil' date: 22/10/2020 --- > <p></p> > 29Jesus answered, "The first is this: 'Hear, O Israel: the Lord our God, the Lord is one; 30love the Lord your God with all your heart, with all your soul, with all your mind, and with all your strength!' 31The second is this: 'Love your neighbor as yourself!' There is no commandment greater than these." (Mark 12:29–31) **Personal study** `In a healthy family, parents consider it their task to pass important life values on to their children and to teach them to recognize what is good and what is wrong. But by what criteria should they do so? Who determines how to distinguish good from evil? Is some natural instinct enough? Or is there a higher authority from whom we can learn it?` Years ago, France debated the question of the death penalty. Should it be abolished? Those advocating its abolition contacted the well-known French writer and philosopher Michel Foucault and asked him to write a treatise on their behalf. He, however, advocated not only the abolition of the death penalty but the abolition of the entire prison system and the release of all prisoners. Why? Because according to Michel Foucault, all moral systems are merely human constructs and opinions, introduced by those in power to control the masses. Therefore, in his view, these moral codes have no real legitimacy. His extreme position, however, is the logical consequence of a problem that is not actually new. Moses already had to deal with it in ancient Israel thousands of years ago: "You shall not do as we are doing here today, everyone doing whatever is right in his own eyes." (Deut 12:8; see also Judg 17:6; Prov 12:15) God clearly stepped in back then to be the supreme Teacher, the moral authority able to tell people clearly what is right and what is not. Without this "revealed knowledge," every attempt to establish moral criteria would be only a subjective human view, binding on no one else. 
**What do these texts teach us about the source of moral criteria for our lives?** `Deut 6:5` `Mark 12:29–31` `Rev 14:12` If we were guided only by what we ourselves consider right, that would be a problem. We ourselves are not righteous, holy, or objective enough to discern what is moral and right. How, then, can we know what to do? The answer is that the Lord who created us also gave us a moral law to live by. We may not consider it right, but God does. If God and the redemption he offers become the center of our Christian worldview, then it will be natural for us to accept his view of good and evil as well, as expressed in the Ten Commandments. Respecting it cannot save us, but it teaches us to look at the world through the eyes of our God, in whose image we were created. **Application** `How can we present God's moral law to others so that it is not perceived as a restriction and a complication of life, but instead inspires and awakens in people a desire to become more like our Lord?`
86.30303
574
0.780548
ces_Latn
1.000009
6fe58fbf3bfa47467ef5703e92501aee6b61bada
1,346
md
Markdown
packages/passport-auth/CHANGELOG.md
mshavliuk/keystone-5
82c7d52d06d00d340f9fb9826018adc28a923668
[ "MIT" ]
1
2019-06-21T06:18:20.000Z
2019-06-21T06:18:20.000Z
packages/passport-auth/CHANGELOG.md
mshavliuk/keystone-5
82c7d52d06d00d340f9fb9826018adc28a923668
[ "MIT" ]
null
null
null
packages/passport-auth/CHANGELOG.md
mshavliuk/keystone-5
82c7d52d06d00d340f9fb9826018adc28a923668
[ "MIT" ]
null
null
null
# @keystone-alpha/passport-auth ## 1.0.1 ### Patch Changes - [19fe6c1b](https://github.com/keystonejs/keystone-5/commit/19fe6c1b): Move frontmatter in docs into comments * Updated dependencies [30c1b1e1](https://github.com/keystonejs/keystone-5/commit/30c1b1e1): - @keystone-alpha/fields@7.0.0 ## 1.0.0 ### Major Changes - [2ef2658f](https://github.com/keystonejs/keystone-5/commit/2ef2658f): - Moved Social Login Strategies into its own package `@keystone-alpha/passport-auth`. - Created base strategy `PassportAuthStrategy`. This enables quick addition of new Social Login Strategy based on PassportJs. - Refactored Twitter and Facebook to extend base `PassportAuthStrategy`. - Added Google and GitHub Auth Strategy by extending base `PassportAuthStrategy`. - Removed `passport` and related dependencies from `@keystone-alpha/keystone`. - `test-projects/facebook-login` project is renamed into `test-projects/social-login` - `social-login` project now support for social login with Twitter, Facebook, Google and GitHub inbuilt strategies from `@keystone-alpha/passport-auth` along with an example of how to implement your own PassportJs strategy for WordPress in `WordPressAuthStrategy.js` * Updated dependencies [9dbed649](https://github.com/keystonejs/keystone-5/commit/9dbed649): - @keystone-alpha/fields@6.0.0
44.866667
268
0.770431
eng_Latn
0.80581
6fe622941c8b6d0e3d8afcdbbbc5181f53d14955
87
md
Markdown
packages/infra/README.md
dougkulak/semantic-release-toolkit
ff8a25b1b0aca48460e1df695aec20530d625c5b
[ "MIT" ]
null
null
null
packages/infra/README.md
dougkulak/semantic-release-toolkit
ff8a25b1b0aca48460e1df695aec20530d625c5b
[ "MIT" ]
null
null
null
packages/infra/README.md
dougkulak/semantic-release-toolkit
ff8a25b1b0aca48460e1df695aec20530d625c5b
[ "MIT" ]
null
null
null
# @dougkulak/semrel-infra Infra package: build tools, configs and other shared assets
21.75
59
0.793103
eng_Latn
0.985473
6fe7e7590ae4a0ade2a8d44c36153d188060d704
64
md
Markdown
README.md
Anthony-R-G/AppStoreTransition
6ade9270de9ebb6e4613d2351acd4a75c251ac8d
[ "MIT" ]
null
null
null
README.md
Anthony-R-G/AppStoreTransition
6ade9270de9ebb6e4613d2351acd4a75c251ac8d
[ "MIT" ]
null
null
null
README.md
Anthony-R-G/AppStoreTransition
6ade9270de9ebb6e4613d2351acd4a75c251ac8d
[ "MIT" ]
null
null
null
# AppStoreTransition Replicating the iOS 11 AppStore Animations
21.333333
42
0.859375
kor_Hang
0.88282
6fe8212d055257d5188ec475d7b20f9c4ccb0d82
188
md
Markdown
_posts/0000-01-02-arast8.md
arast8/github-slideshow
696caf0b0bef885194a88ba4d59f23d497baeecb
[ "MIT" ]
null
null
null
_posts/0000-01-02-arast8.md
arast8/github-slideshow
696caf0b0bef885194a88ba4d59f23d497baeecb
[ "MIT" ]
3
2020-04-01T21:37:17.000Z
2020-04-02T04:42:15.000Z
_posts/0000-01-02-arast8.md
arast8/github-slideshow
696caf0b0bef885194a88ba4d59f23d497baeecb
[ "MIT" ]
null
null
null
--- layout: slide title: "Welcome to our second slide!" --- **"It's dangerous to go alone! Take this."** [sonic the eggdog](https://youtu.be/EWG8Cu7kDnU) Use the left arrow to go back!
17.090909
48
0.680851
eng_Latn
0.978354
6fe8de89233df35150d289ec8b0540c15d3fd4bd
23
md
Markdown
README.md
SownBanana/DNA-Decoder-Simulator
65f632d510e81e514604301431164188da567328
[ "MIT" ]
null
null
null
README.md
SownBanana/DNA-Decoder-Simulator
65f632d510e81e514604301431164188da567328
[ "MIT" ]
null
null
null
README.md
SownBanana/DNA-Decoder-Simulator
65f632d510e81e514604301431164188da567328
[ "MIT" ]
null
null
null
# DNA-Decoder-Simulator
23
23
0.826087
ind_Latn
0.398071
6fe8ee0f38640ec763e174aebbbb9b184f9b5467
1,028
md
Markdown
README.md
Zverik/edits_to_josm
d07aa935b315b2b38da9026bbfaad1957b331e0d
[ "MIT" ]
2
2019-09-30T08:27:31.000Z
2019-11-18T13:24:24.000Z
README.md
Zverik/edits_to_josm
d07aa935b315b2b38da9026bbfaad1957b331e0d
[ "MIT" ]
null
null
null
README.md
Zverik/edits_to_josm
d07aa935b315b2b38da9026bbfaad1957b331e0d
[ "MIT" ]
null
null
null
# Converting edits.xml So you have spent a day or more editing data in the wonderful [MAPS.ME](https://maps.me/en/download/) application, but don't see your changes on the map? Most likely there were some errors uploading. Try opening and closing the app, and then waiting for a minute. If that doesn't help, click a searching icon (usually a magnifier) and type `?edits` with no spaces or quotes. You will see a list of your edits. Scroll it down to see if there are any non-uploaded edits. If there are, you may need this script. Use a file manager to locate `/MapsWithMe/edits.xml` file. Share it to your email or messaging app, so you can access the file at your computer. Then run this script: ./edits_to_josm.py /path/to/edits.xml josm_edits.xml Then open the result (`josm_edits.xml`) in JOSM as a new layer. DO NOT UPLOAD THE FILE! Use it to find non-uploaded objects and to compare tags with the objects currently in OpenStreetMap. ## Author and License Written by Ilya Zverev, published under a MIT license.
46.727273
101
0.765564
eng_Latn
0.997989
6fe951c5f6765dc7cc4d5d521c6b741e5d05e35b
61
md
Markdown
README.md
lizhongit/itvop
d6e4eed0e16a3f42ef7df8f3e7a25212c26cea4e
[ "Unlicense" ]
null
null
null
README.md
lizhongit/itvop
d6e4eed0e16a3f42ef7df8f3e7a25212c26cea4e
[ "Unlicense" ]
2
2020-12-04T19:10:39.000Z
2021-05-08T10:53:43.000Z
README.md
lizhongit/itvop
d6e4eed0e16a3f42ef7df8f3e7a25212c26cea4e
[ "Unlicense" ]
null
null
null
# ITVOP This is my personal website static files repository
15.25
51
0.803279
eng_Latn
0.999363
6fe97656a2e5167014f53bc744635984eec0e837
1,671
md
Markdown
sobremim.md
wallissoncarvalho/wallissoncarvalho.github.io
75e0b0e140a39de78a0ca351850cfe7478dea77c
[ "MIT" ]
null
null
null
sobremim.md
wallissoncarvalho/wallissoncarvalho.github.io
75e0b0e140a39de78a0ca351850cfe7478dea77c
[ "MIT" ]
null
null
null
sobremim.md
wallissoncarvalho/wallissoncarvalho.github.io
75e0b0e140a39de78a0ca351850cfe7478dea77c
[ "MIT" ]
null
null
null
--- layout: page title: About me subtitle: Just the essentials --- <div style="text-align: center"> To see the English version of this page, <a href="/about">click here</a>. </div> I am a motivated professional with teamwork experience, always looking for opportunities to grow. My interests lie in Water Resources, Data Science, and Statistics. ### <span class="fa fa-code about-icon"></span> Projects On the <a href="/projetos">projects</a> page you can see some of the work I have developed and what I am currently working on. Or you can access my <a href="/cv">résumé</a>. ### <span class="fa fa-graduation-cap"></span> Education - M.Sc (2019-Present), Water Resources and Sanitation, <a href="https://ufal.br/" target="_blank">Universidade Federal de Alagoas</a> - B.Sc (2014-2018), Environmental and Sanitary Engineering, <a href="https://ufal.br/" target="_blank">Universidade Federal de Alagoas</a> ## Contact <ul style="list-style: none;"> <li><span class="fa fa-map-pin" aria-hidden="true"></span> Centro de Tecnologia. Universidade Federal de Alagoas. Campus A.C. Simões</li> <li><span class="fa fa-envelope" aria-hidden="true"></span> <a href="mailto:cmwallisson@gmail.com" target="_blank">cmwallisson@gmail.com</a></li> <br> <li>Or you can send me a message on the following social networks:</li> <li><span class="fa fa-user" aria-hidden="true"></span> <a href="https://www.researchgate.net/profile/Wallisson_De_Carvalho" target="_blank">ResearchGate</a></li> <li><span class="fa fa-user"></span> <a href="https://linkedin.com/in/wallissoncarvalho" target="_blank">Linkedin</a></li> </ul>
57.62069
163
0.727708
por_Latn
0.975353
6fea9307639ac494f1e4506b3b4470e57ba4cc10
5,530
md
Markdown
README.md
ape-ming/flutter_sliver_tracker
3a70b6a41e14131645bdf80aea5687bb38279d67
[ "BSD-2-Clause" ]
1
2020-04-08T01:49:31.000Z
2020-04-08T01:49:31.000Z
README.md
ape-ming/flutter_sliver_tracker
3a70b6a41e14131645bdf80aea5687bb38279d67
[ "BSD-2-Clause" ]
null
null
null
README.md
ape-ming/flutter_sliver_tracker
3a70b6a41e14131645bdf80aea5687bb38279d67
[ "BSD-2-Clause" ]
null
null
null
# flutter_sliver_tracker A scroll exposure tracking framework that supports SliverList and SliverGrid ## What is scroll exposure tracking Scroll exposure tracking is used for module exposure in scrolling list widgets, such as Flutter's `SliverList` and `SliverGrid`. When a row (or column) of a `SliverList` moves into the `ViewPort` and its displayed ratio exceeds a certain threshold, we record this event as one scroll exposure. We also have some additional requirements for scroll exposure: - Exposure is only triggered once a certain ratio has been revealed (implemented) - No exposure event is triggered while scrolling fast (implemented) - A module that scrolls out of view is reported again when it scrolls back into view (implemented) - A module moving back and forth within the viewport triggers only one exposure (implemented) ## Running the demo <img src="https://raw.githubusercontent.com/SBDavid/flutter_sliver_tracker/master/demo.gif" width="270" height="480" alt="demo"> - Clone the code locally: git clone git@github.com:SBDavid/flutter_sliver_tracker.git - Change the working directory: cd flutter_sliver_tracker/example/ - Start an emulator - Run: flutter run ## How it works The core difficulty of scroll exposure tracking is computing a widget's exposed ratio. In other words, we need to know the *total height* and the *currently displayed height* of a widget inside the `ListView`. Dividing these two heights gives the ratio. ### Total widget height The total height of a widget can be obtained from its `renderObject`: the `size` property of the `renderObject` contains the widget's width and height. ### Currently displayed height The displayed height can be obtained from `SliverGeometry.paintExtent`. ## Usage ### 1. Install ```yaml dependencies: flutter_sliver_tracker: ^1.0.0 ``` ### 2. Import the plugin ```dart import 'package:xm_sliver_listener/flutter_sliver_tracker.dart'; ``` ### 3. Send scroll exposure events #### 3.1 Capture scroll events with `ScrollViewListener`; `ScrollViewListener` must wrap the `CustomScrollView`. ```dart class _MyHomePageState extends State<MyHomePage> { @override Widget build(BuildContext context) { return Scaffold( // Capture scroll events with ScrollViewListener body: ScrollViewListener( child: CustomScrollView( slivers: <Widget>[ ], ), ), ); } } ``` #### 3.2 Listen for scroll-end events in a `SliverToBoxAdapter` and compute the displayed ratio ```dart class _MyHomePageState extends State<MyHomePage> { @override Widget build(BuildContext context) { return Scaffold( // Capture scroll events with ScrollViewListener body: ScrollViewListener( child: CustomScrollView( slivers: <Widget>[ SliverToBoxAdapter( // Listen for the scroll-end event; call setState yourself if you want to show the ratio on the page child: SliverEndScrollListener( onScrollInit: (SliverConstraints constraints, SliverGeometry geometry) { // displayed height / sliver height Fluttertoast.showToast(msg: "Displayed ratio: ${geometry.paintExtent / geometry.scrollExtent}"); }, onScrollEnd: (ScrollEndNotification notification, SliverConstraints constraints, SliverGeometry geometry) { Fluttertoast.showToast(msg: 
"展示比例:${geometry.paintExtent / geometry.scrollExtent}"); }, child: Container( height: 300, color: Colors.amber, ), ), ), ], ), ), ); } } ``` #### 3.3 在`SliverList`和`SliverGrid`中监听滚动停止事件,并计算显示比例 - itemLength:列表项布局高度 - displayedLength:列表项展示高度 - 如果需要在widget中显示高度,可以自行setState ```dart class _MyHomePageState extends State<MyHomePage> { @override Widget build(BuildContext context) { return Scaffold( // 通过ScrollViewListener捕获滚动事件 body: ScrollViewListener( child: CustomScrollView( slivers: <Widget>[ SliverList( delegate: SliverChildBuilderDelegate( (BuildContext context, int index) { // 监听滚动停止 return SliverMultiBoxScrollEndListener( debounce: 1000, child: Container( height: 300, color: Colors.redAccent, child: Center( child: Text("SliverList Item", style: TextStyle(fontSize: 30, color: Colors.white)) ), ), onScrollInit: (double itemLength, double displayedLength) { Fluttertoast.showToast(msg: "显示高度:${displayedLength}"); }, onScrollEnd: (double itemLength, double displayedLength) { Fluttertoast.showToast(msg: "显示高度:${displayedLength}"); }, ); }, childCount: 1 ), ), ], ), ), ); } } ``` #### 3.4 在`SliverList`和`SliverGrid`中监听滚动更新事件,并计算显示比例 ```dart class _MyHomePageState extends State<MyHomePage> { @override Widget build(BuildContext context) { return Scaffold( // 通过ScrollViewListener捕获滚动事件 body: ScrollViewListener( child: CustomScrollView( slivers: <Widget>[ SliverList( delegate: SliverChildBuilderDelegate( (BuildContext context, int index) { // 监听滚动更新事件 return SliverMultiBoxScrollUpdateListener( onScrollInit: (double percent) { // percent 列表项显示比例 }, onScrollUpdate: (double percent) { // percent 列表项显示比例 }, debounce: 1000, // percent 列表项显示比例 builder: (BuildContext context, double percent) { return Container( height: 200, color: Colors.amber.withAlpha((percent * 255).toInt()), child: Center( child: Text("SliverList Item Percent ${percent.toStringAsFixed(2)}", style: TextStyle(fontSize: 30, color: Colors.white)) ), ); }, ); }, childCount: 6 ), ), ], ), ), ); } } ```
28.214286
147
0.553165
yue_Hant
0.557166
6fec7bf25bc82126f022549dcf322c7a44a648b2
4,551
md
Markdown
_posts/DB/2021-01-09-Relation.md
kong9410/kong9410.github.io
17a966be9fb26bef4fffb1813eedf01fdb8b50cf
[ "MIT" ]
1
2021-03-23T12:01:52.000Z
2021-03-23T12:01:52.000Z
_posts/DB/2021-01-09-Relation.md
kong9410/kong9410.github.io
17a966be9fb26bef4fffb1813eedf01fdb8b50cf
[ "MIT" ]
7
2021-09-12T09:13:33.000Z
2022-03-08T16:00:55.000Z
_posts/DB/2021-01-09-Relation.md
kong9410/kong9410.github.io
17a966be9fb26bef4fffb1813eedf01fdb8b50cf
[ "MIT" ]
null
null
null
---
title: The Relational Model
tags: database
---

# The Relational Model

The relational model is a data model that represents real-world data using the concept of a relation. Many people picture a design artifact such as an ERD, but the relational model is not really about design; it is about how data is represented.

## Relations

Those who think in ERD terms tend to read "relation" as a relationship between tables, but that is a design-centric view and not what the word means. In SQL, the construct corresponding to a relation is the table.

A relation consists of a heading and a body. The heading is a set of zero or more attributes, each with a name and a data type. The body is a set of rows, or tuples, each of which is a set of attribute values. The attribute values in a tuple must match the names and data types declared in the heading; it is a rule violation for a tuple to contain an attribute the heading does not define, or vice versa.

A relation, then, is a set of tuples, and every tuple has the same structure: the same n attribute values.

## Sets

A set is a mathematical concept for expressing a collection of things; each thing is called an element. Elements carry no special constraints, so sets can be used as a general-purpose structure.

Requirements for a set:

1. Whether an element belongs to the set must be decidable without ambiguity.
2. The elements of a set must not be duplicated.
3. The elements of a set cannot be decomposed any further.

## The relational model and NULL

A relation cannot contain NULL. To implement the relational model correctly, NULL must be excluded.

## Operations on relations

If data is represented as relations, the operations on them are queries. The relational model performs queries by applying a variety of operations at the relation level; because it operates on relations, it is called the relational model.

### Restrict

Returns a relation containing the tuples that satisfy a given condition.

### Projection

Returns a relation containing only the specified attributes.

### Extend

The opposite of projection: adds attributes, whose new values are computed from the existing attribute values.

### Rename

Simply changes the names of attributes.

### Union

Returns a relation made up of all tuples contained in the two relations. Duplicate tuples are removed.

### Intersect

Returns a relation of the tuples contained in both relations.

### Difference

Returns a relation of the tuples contained in only one of the two relations.

### Product

Returns a relation combining each tuple of one relation with each tuple of the other.

### Join

Given two relations with common attributes, returns a relation combining the tuples whose common attribute values are equal. Tuples that do not match are excluded from the result. SQL calls this form of join an inner join (INNER JOIN). An outer join can produce NULLs in its result, so it is unsuitable as a relational operation.

## Closure

In the relational model, the result of an operation on relations is itself a relation. An operation on two integers yields an integer, which can in turn take part in further integer operations; this property, where an operation's input and output share the same data structure, is called closure. Likewise, an operation on relations yields a relation. Being able to express complex computations using nothing but relational operations is the true strength of the relational model.

## Data types in the relational model

A data type determines what values each attribute may hold. The relational model is, literally, a model: it defines what can be done with it, while how it should be used is up to the application using the model.

### Variables

A variable is a container to which a value can be assigned. If the value of a variable x is the integer 1, assigning a different value to x changes its contents.

### Domains

The values that can be put into a variable are not unlimited; the usable range is restricted, since there are limits to the data a computer can represent. In other words, a data type is the finite set of values assignable to a variable. In the relational model, a data type is called a domain. A value is one of the elements of that set, and a variable can be interpreted as a selection of one of those elements: the elements of the set do not change, but which element is selected changes from moment to moment. The set as a whole is the domain. In the relational model, a tuple is one of the elements defined by the heading, and a relation is a set formed by selecting particular tuples from the Cartesian product of its attributes' domains.

## Manipulating relations in SQL

Let's look at SQL in correspondence with the relational model.

### SELECT

All query functionality is contained in SELECT. The simplest query looks like this:

```sql
SELECT column-list
FROM table-list
WHERE condition
```

The column list is a projection, the table list a product, and the search condition a restrict: even this simple SELECT contains three relational operations.

One important aspect of relational operations is their order. The query above performs the operations in this order:

1. Table list (product)
2. Search condition (restrict)
3. Column list (projection)

In a real RDBMS, however, the optimizer may rearrange execution, so the actual order can differ; the order here is the logical one.

### INSERT

The relational model has no concept of update, because a relation is a value. Taking C as an example, in `int a = 1 + 2` the value changes, but it never leaves the set of integers. SQL, however, lets you change the elements of the set. This is because a table plays the roles of both a value and a variable.

INSERT corresponds to taking the existing relation, adding the tuples to be inserted, and replacing the relation with the result. In other words, INSERT can be seen as the union of the existing relation and a relation containing only the newly added tuples.

```sql
INSERT INTO table_name (column1, column2, column3) VALUES (value1, value2, value3)
```

### DELETE

If INSERT is a union, DELETE is a difference.

```sql
DELETE FROM table WHERE condition
```

It is equivalent to taking the difference of the whole relation and the set of tuples matching the WHERE condition.

### UPDATE

```sql
UPDATE table_name SET column = new_value, ... WHERE condition
```

This updates the tuple values that match the WHERE condition. But the relational model has no concept of update; expressed precisely, it works like this:

1. Take the difference of the whole relation and the relation made up of the tuples matching the condition.
2. Take the union with the relation obtained by applying the modifications to the matching tuples.
3. Assign the union to the relvar.

## Things SQL has that the relational model does not

### Duplicate elements

A relation is a set of identically structured tuples. A set contains no duplicates, and neither does a relation. SQL, however, is fine with identical rows in a table; there is no error. In other words, an SQL table is not a set. To use SQL in line with the relational model, tables must be used as sets.

### Order between elements

A set has no order between its elements, but SQL does: columns are laid out in the order they were defined, rows can be sorted, and query results come back in a specified order. To follow the relational model, do not write queries that depend on row or column position, for example features like ROWNUM or ORDER BY 1.

### Updating relations

A relation is a value and therefore cannot be updated. A table acts as both a value and a variable; to build on the relational model, the two roles must be kept clearly separate.

### Transactions

Transactions are part of the SQL specification but a concept distinct from the relational model. Transactions are a theory for performing many concurrently executed updates without contradiction; since relations have no updates, they have no connection to it.

> The ACID properties
>
> - Atomicity
> - Consistency
> - Isolation
> - Durability

### Stored procedures

Procedures do not exist in the relational model.

### NULL

In the relational model, an element is either contained in a set or it is not. SQL, however, uses NULL to express the absence of a value. NULL is not a value, and therefore cannot be expressed in a set.

### Relvar

A table that holds intermediate results during operations.
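The correspondence described above between INSERT/DELETE/UPDATE and set operations can be sketched directly with Python sets, treating a relation as a set of same-shaped tuples. This is a minimal illustration, not tied to any particular RDBMS, and the employee data is invented:

```python
# A relation is a set of tuples with the same structure; the "relvar" is
# the variable holding it.
employees = {(1, "kim"), (2, "lee")}

# INSERT = union with a relation holding only the new tuples.
employees = employees | {(3, "park")}

# DELETE ... WHERE id = 1  = difference with the matching tuples.
matching = {t for t in employees if t[0] == 1}
employees = employees - matching

# UPDATE ... SET name = 'choi' WHERE id = 2, expressed without "updating":
# difference out the old tuples, then union in the modified ones.
target = {t for t in employees if t[0] == 2}
employees = (employees - target) | {(2, "choi")}

print(sorted(employees))  # → [(2, 'choi'), (3, 'park')]
```

Note that every intermediate result is again a set of tuples, which is exactly the closure property discussed above.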
23.338462
269
0.697429
kor_Hang
1.00001
6fed9ea64f4c054e615ff66bcc90bf73411be026
4,730
md
Markdown
articles/virtual-machines/virtual-machines-windows-classic-change-drive-letter.md
ggailey777/azure-docs
4520cf82cb3d15f97877ba445b0cfd346c81a034
[ "CC-BY-3.0" ]
null
null
null
articles/virtual-machines/virtual-machines-windows-classic-change-drive-letter.md
ggailey777/azure-docs
4520cf82cb3d15f97877ba445b0cfd346c81a034
[ "CC-BY-3.0" ]
null
null
null
articles/virtual-machines/virtual-machines-windows-classic-change-drive-letter.md
ggailey777/azure-docs
4520cf82cb3d15f97877ba445b0cfd346c81a034
[ "CC-BY-3.0" ]
1
2019-03-31T17:25:38.000Z
2019-03-31T17:25:38.000Z
---
title: 'Make the D: drive of a VM a data disk | Microsoft Docs'
description: 'Describes how to change drive letters for a Windows VM so that you can use the D: drive as a data drive.'
services: virtual-machines-windows
documentationcenter: ''
author: cynthn
manager: timlt
editor: ''
tags: azure-resource-manager,azure-service-management
ms.assetid: 0867a931-0055-4e31-8403-9b38a3eeb904
ms.service: virtual-machines-windows
ms.workload: infrastructure-services
ms.tgt_pltfrm: vm-windows
ms.devlang: na
ms.topic: article
ms.date: 09/27/2016
ms.author: cynthn
---

# Use the D: drive as a data drive on a Windows VM
If your application needs to use the D drive to store data, follow these instructions to use a different drive letter for the temporary disk. Never use the temporary disk to store data that you need to keep.

If you resize or **Stop (Deallocate)** a virtual machine, this may trigger placement of the virtual machine to a new hypervisor. A planned or unplanned maintenance event may also trigger this placement. In this scenario, the temporary disk will be reassigned to the first available drive letter. If you have an application that specifically requires the D: drive, you need to follow these steps to temporarily move the pagefile.sys, attach a new data disk and assign it the letter D, and then move the pagefile.sys back to the temporary drive. Once complete, Azure will not take back the D: drive if the VM moves to a different hypervisor.

For more information about how Azure uses the temporary disk, see [Understanding the temporary drive on Microsoft Azure Virtual Machines](https://blogs.msdn.microsoft.com/mast/2013/12/06/understanding-the-temporary-drive-on-windows-azure-virtual-machines/)

[!INCLUDE [learn-about-deployment-models](../../includes/learn-about-deployment-models-both-include.md)]

## Attach the data disk
First, you'll need to attach the data disk to the virtual machine.

* To use the portal, see [How to attach a data disk in the Azure portal](virtual-machines-windows-attach-disk-portal.md?toc=%2fazure%2fvirtual-machines%2fwindows%2ftoc.json)
* To use the classic portal, see [How to attach a data disk to a Windows virtual machine](virtual-machines-windows-classic-attach-disk.md?toc=%2fazure%2fvirtual-machines%2fwindows%2fclassic%2ftoc.json).

## Temporarily move pagefile.sys to C drive
1. Connect to the virtual machine.
2. Right-click the **Start** menu and select **System**.
3. In the left-hand menu, select **Advanced system settings**.
4. In the **Performance** section, select **Settings**.
5. Select the **Advanced** tab.
6. In the **Virtual memory** section, select **Change**.
7. Select the **C** drive and then click **System managed size** and then click **Set**.
8. Select the **D** drive and then click **No paging file** and then click **Set**.
9. Click **Apply**. You will get a warning that the computer needs to be restarted for the changes to take effect.
10. Restart the virtual machine.

## Change the drive letters
1. Once the VM restarts, log back on to the VM.
2. Click the **Start** menu and type **diskmgmt.msc** and hit Enter. Disk Management will start.
3. Right-click on **D**, the Temporary Storage drive, and select **Change Drive Letter and Paths**.
4. Under Drive letter, select drive **G** and then click **OK**.
5. Right-click on the data disk, and select **Change Drive Letter and Paths**.
6. Under Drive letter, select drive **D** and then click **OK**.
7. Right-click on **G**, the Temporary Storage drive, and select **Change Drive Letter and Paths**.
8. Under Drive letter, select drive **E** and then click **OK**.

> [!NOTE]
> If your VM has other disks or drives, use the same method to reassign the drive letters of the other disks and drives. You want the disk configuration to be:
>
> * C: OS disk
> * D: Data Disk
> * E: Temporary disk

## Move pagefile.sys back to the temporary storage drive
1. Right-click the **Start** menu and select **System**.
2. In the left-hand menu, select **Advanced system settings**.
3. In the **Performance** section, select **Settings**.
4. Select the **Advanced** tab.
5. In the **Virtual memory** section, select **Change**.
6. Select the OS drive **C** and click **No paging file** and then click **Set**.
7. Select the temporary storage drive **E** and then click **System managed size** and then click **Set**.
8. Click **Apply**. You will get a warning that the computer needs to be restarted for the changes to take effect.
9. Restart the virtual machine.

## Next steps
* You can increase the storage available to your virtual machine by [attaching an additional data disk](virtual-machines-windows-attach-disk-portal.md?toc=%2fazure%2fvirtual-machines%2fwindows%2ftoc.json).
58.395062
632
0.748414
eng_Latn
0.986587
6feda95849a200a2bff64cda2eed761c8def07bf
461
md
Markdown
README.md
c-sauerborn-mms/react-redux-todo-ts
23574d6a300e130db57baa6d7d4519f5ffaf4d58
[ "MIT" ]
null
null
null
README.md
c-sauerborn-mms/react-redux-todo-ts
23574d6a300e130db57baa6d7d4519f5ffaf4d58
[ "MIT" ]
null
null
null
README.md
c-sauerborn-mms/react-redux-todo-ts
23574d6a300e130db57baa6d7d4519f5ffaf4d58
[ "MIT" ]
null
null
null
# react-redux-todo-ts (modified)

This is an example app using TypeScript, React and Redux. You can find an article explaining each part of the app here: https://medium.com/@nem121/todo-app-with-typescript-redux-e6a4c2f02079

## Start the application

You need to be at the root of the application folder and do: `npm install && npm start`

## Logger

If you open a debugging console in the browser, you should see every action logged into the console.
28.8125
100
0.75705
eng_Latn
0.997618
6feeb3afa05ac6148a8628c53067d53fc65104e9
4,590
md
Markdown
content/en/docs/Developerguide/delete.md
opengauss-mirror/docs
869e517825cf56059243a7104f5e4fa6fc490e69
[ "MIT" ]
8
2020-07-01T07:14:05.000Z
2022-03-08T09:20:56.000Z
content/en/docs/Developerguide/delete.md
opengauss-mirror/docs
869e517825cf56059243a7104f5e4fa6fc490e69
[ "MIT" ]
null
null
null
content/en/docs/Developerguide/delete.md
opengauss-mirror/docs
869e517825cf56059243a7104f5e4fa6fc490e69
[ "MIT" ]
6
2020-07-01T15:40:22.000Z
2021-05-28T07:43:19.000Z
# DELETE<a name="EN-US_TOPIC_0289900955"></a>

## Function<a name="en-us_topic_0283136795_en-us_topic_0237122131_en-us_topic_0059778379_se9507fb26df547a795ac7940e3a19ecf"></a>

**DELETE** deletes rows that satisfy the **WHERE** clause from the specified table. If the **WHERE** clause is absent, the effect is to delete all rows in the table. The result is a valid, but empty, table.

## Precautions<a name="en-us_topic_0283136795_en-us_topic_0237122131_en-us_topic_0059778379_sfc96c070e8574f4ea9a2726e898fda16"></a>

- You must have the **DELETE** permission on the table to delete from it, as well as the **SELECT** permission for any table in the **USING** clause or whose values are read in the **condition**.
- For column-store tables, the **RETURNING** clause is currently not supported.

## Syntax<a name="en-us_topic_0283136795_en-us_topic_0237122131_en-us_topic_0059778379_s84baecef89484d5f87f57b0545b46203"></a>

```
[ WITH [ RECURSIVE ] with_query [, ...] ]
DELETE [/*+ plan_hint */] FROM [ ONLY ] table_name [ * ] [ [ AS ] alias ]
[ USING using_list ]
[ WHERE condition | WHERE CURRENT OF cursor_name ]
[ RETURNING { * | { output_expr [ [ AS ] output_name ] } [, ...] } ];
```

## Parameter Description<a name="en-us_topic_0283136795_en-us_topic_0237122131_en-us_topic_0059778379_s6df87c0dd87c49e29a034e0ff3385ca6"></a>

- **WITH \[ RECURSIVE \] with\_query \[, ...\]**

  Specifies one or more subqueries that can be referenced by name in the main query, which is equivalent to a temporary table. If **RECURSIVE** is specified, it allows a **SELECT** subquery to reference itself by name.

  Format of **with\_query**:

  ```
  with_query_name [ ( column_name [, ...] ) ] AS [ [ NOT ] MATERIALIZED ] ( {select | values | insert | update | delete} )
  ```

  – **with\_query\_name** specifies the name of the result set generated by a subquery. Such names can be used to access the subquery result set.

  – **column\_name** specifies the column name displayed in the subquery result set.

  – Each subquery can be a **SELECT**, **VALUES**, **INSERT**, **UPDATE** or **DELETE** statement.

- **plan\_hint** clause

  Follows the **DELETE** keyword in the **/\*+ \*/** format. It is used to optimize the plan of a **DELETE** statement block. For details, see [Hint-based Tuning](en-us_topic_0289900289.md). In each statement, only the first **/\*+** _plan\_hint_ **\*/** comment block takes effect as a hint. Multiple hints can be written.

- **ONLY**

  If **ONLY** is specified before the table name, matching rows are deleted from the named table only. If **ONLY** is not specified, matching rows are also deleted from any tables inheriting from the named table.

- **table\_name**

  Specifies the name \(optionally schema-qualified\) of the target table.

  Value range: an existing table name

- **alias**

  Specifies a substitute name for the target table.

  Value range: a string. It must comply with the naming convention.

- **using\_list**

  Specifies the **USING** clause.

- **condition**

  Specifies an expression that returns a Boolean value. Only rows for which this expression returns **true** will be deleted.

- **WHERE CURRENT OF cursor\_name**

  This parameter is reserved.

- **output\_expr**

  Specifies an expression to be computed and returned by the **DELETE** statement after each row is deleted. The expression can use any column names of the table. Write **\*** to return all columns.

- **output\_name**

  Specifies a name to use for a returned column.

  Value range: a string. It must comply with the naming convention.

## Examples<a name="en-us_topic_0283136795_en-us_topic_0237122131_en-us_topic_0059778379_s90a3978214f644269ab932c29df31137"></a>

```
-- Create the tpcds.customer_address_bak table.
openGauss=# CREATE TABLE tpcds.customer_address_bak AS TABLE tpcds.customer_address;

-- Delete employees whose ca_address_sk is smaller than 14888 from the tpcds.customer_address_bak table.
openGauss=# DELETE FROM tpcds.customer_address_bak WHERE ca_address_sk < 14888;

-- Delete all data from the tpcds.customer_address_bak table.
openGauss=# DELETE FROM tpcds.customer_address_bak;

-- Delete the tpcds.customer_address_bak table.
openGauss=# DROP TABLE tpcds.customer_address_bak;
```

## Suggestions<a name="en-us_topic_0283136795_en-us_topic_0237122131_en-us_topic_0059778379_section50155651112741"></a>

- delete

  To delete all records in a table, use the **truncate** syntax.
41.351351
334
0.716558
eng_Latn
0.953998
6fef61547c61a515fb22de3bde83d7ecaccd6d9c
164
md
Markdown
_locations/vassar-taylor.md
japuzzo/hvopen-website
4d6da5e26cfed21c6fecc4f06374465cc5a70cb9
[ "CC-BY-4.0" ]
3
2018-04-15T22:37:39.000Z
2019-02-27T18:11:31.000Z
_locations/vassar-taylor.md
japuzzo/hvopen-website
4d6da5e26cfed21c6fecc4f06374465cc5a70cb9
[ "CC-BY-4.0" ]
31
2018-04-16T13:45:02.000Z
2021-11-15T17:48:30.000Z
_locations/vassar-taylor.md
japuzzo/hvopen-website
4d6da5e26cfed21c6fecc4f06374465cc5a70cb9
[ "CC-BY-4.0" ]
8
2018-03-27T15:49:59.000Z
2020-10-03T00:57:45.000Z
---
title: Taylor 203 - Vassar College
lat: 41.6879
lon: -73.8968
parking: [41.6907, -73.8970]
address: 124 Raymond Ave
city: Poughkeepsie
state: NY
zip: 12604
---
14.909091
34
0.70122
kor_Hang
0.287063
6fef8ad61c5a38e085d0ae60af9827bcb0438710
193
md
Markdown
README.md
floscha/meetup-watcher
14eb56b7ad58bdd082245c19f97df4c110ae8a35
[ "MIT" ]
null
null
null
README.md
floscha/meetup-watcher
14eb56b7ad58bdd082245c19f97df4c110ae8a35
[ "MIT" ]
null
null
null
README.md
floscha/meetup-watcher
14eb56b7ad58bdd082245c19f97df4c110ae8a35
[ "MIT" ]
null
null
null
# Meetup Watcher

Proof-of-concept script to be notified about upcoming meetups before all places are taken.

Possibly to be integrated into https://github.com/floscha/meetup-bot at some point.
38.6
90
0.803109
eng_Latn
0.993703
6fefd62cfdab5c4ee5af561973c7117728be1c6c
2,461
md
Markdown
docs/upgrade.md
ynohat/cli-property-manager
134de4f385f9c42dd194bcce130e92ded3d0caec
[ "Apache-2.0" ]
19
2018-11-05T16:39:28.000Z
2022-03-30T15:20:03.000Z
docs/upgrade.md
ynohat/cli-property-manager
134de4f385f9c42dd194bcce130e92ded3d0caec
[ "Apache-2.0" ]
51
2018-10-11T14:14:14.000Z
2022-03-28T20:41:22.000Z
docs/upgrade.md
ynohat/cli-property-manager
134de4f385f9c42dd194bcce130e92ded3d0caec
[ "Apache-2.0" ]
17
2018-11-19T14:32:12.000Z
2022-02-14T16:13:30.000Z
# Upgrade to the Latest Property Manager CLI

The original Property Manager CLI, [cli-property](https://github.com/akamai/cli-property), has been deprecated. The latest CLI, [cli-property-manager](https://github.com/akamai/cli-property-manager), includes most features from the original. There are differences in command and option names between the two CLI versions.

# How do I install the latest version?

When upgrading, you install the new version as if it were a new installation. Use the instructions starting with [Get Started](https://github.com/akamai/cli-property-manager/blob/master/README.md#get-started) in the README.md file.

# Updated commands

Here's a list of commands that have changed between the two versions:

Original CLI command | New CLI command | Notes
------------ | ------------- | -------------
`activate <property>` | `activate` | `BOTH` argument no longer supported for the `network` option.
`deactivate <property>` | `deactivate` | `BOTH` argument no longer supported for the `network` option.
`create <property>` | `new-property` | These options not currently supported: `--cpcode`, `--edgehostname`, `--file`, `--forward`, `--hostnames`, `--origin`, `--newcpcodename`, `--nocopy`, and `--notes`.
`delete <property>` | `delete` | The `--property` option replaces the `<property>` argument.
`format` | `list-rule-formats` |
`groups` | `list-groups` | `contractId` replaces `contract`.
`list` | `list-properties` | `contractId` replaces `contract` and `groupId` replaces `group`.
`products` | `list-products` | `contractId` replaces `contract`.
`retrieve <property>` | `show-ruletree` | The `list-rule-formats` command replaces `--format`, the `list-property-hostnames` command replaces `--hostnames`, and the `list-property-variables` command replaces `--variables`.
`update <property>` | `update-property` | The `--property` option replaces the `<property>` argument, the `--dry-run` option replaces `--dryrun`, the `--note` option replaces `--notes`, and the `--message` option is an alias of `--note`.

# Updated options

Some options from the original version were also updated in the new version:

Original CLI option | New CLI option
------------ | -------------
`--clone` | `--propertyId`
`--config` | `--edgerc`
`--contract` | `--contractId`
`--debug` | `--verbose`
`--email` | `--emails`
`--group` | `--groupId`
`--notes` | `--message`
`--product` | `--productId`
`--srcver` | `--propver`

Copyright © 2020 Akamai Technologies, Inc.
55.931818
321
0.697278
eng_Latn
0.945967
6ff046da8e92c6942c38859bbef3d1ad5f8289b6
1,375
md
Markdown
README.md
yutakodera/visualize-csv
2e216b60daa77c36ce1845c47ded1f555de89b02
[ "BSD-3-Clause" ]
null
null
null
README.md
yutakodera/visualize-csv
2e216b60daa77c36ce1845c47ded1f555de89b02
[ "BSD-3-Clause" ]
null
null
null
README.md
yutakodera/visualize-csv
2e216b60daa77c36ce1845c47ded1f555de89b02
[ "BSD-3-Clause" ]
null
null
null
# visualize-csv

The code here is a sample of CSV-file visualization with Python under a simple CGI setup. These scripts make it easy to check the contents of a CSV file and also allow a few modifications to the CSV at the same time.

There are two sets of scripts, described below: one for modifying a CSV in a browser, and one for generating an HTML page consisting of a table of the CSV and graphs of its numerical data.

[Set1] This set plays the role of editing a CSV in a browser.

1. form.html
2. style.css
3. csv2table.py
4. edit_table.py

The functionality available in this set is as follows:

1. Delete several columns from the table by listing their headers (the label of each column) in comma-separated format, just like CSV. Note that the list of columns must not contain any spaces.
2. Pick out and show only the listed columns (the converse of 1), specified in the same manner as above.
3. A download link for the current (modified) table.

These scripts were developed and confirmed to work on Firefox in the following environment.

OS: Ubuntu 18.04.3 LTS
Server version: Apache/2.4.29
Python version: 3.6.9
Pandas version: 0.25.3

[Set2] This set plays the role of generating HTML to help with visualization. Note that the script 'generate_html.py' uses matplotlib to plot the contents of a table.

1. generate_html.py
2. okayama_weatherdata.csv (This is a sample data file)
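As an illustration of the column delete/pick behavior described above, here is a standalone sketch. It is not the repository's actual CGI code; the function name and interface are invented, and only the Python standard library is used:

```python
import csv
import io

def filter_columns(csv_text, headers, mode="drop"):
    """Return CSV text with the named columns dropped ("drop") or kept ("pick").

    `headers` is a comma-separated list of column labels with no spaces,
    mirroring the input format the README describes.
    """
    wanted = headers.split(",")
    rows = list(csv.reader(io.StringIO(csv_text)))
    header = rows[0]
    if mode == "drop":
        keep = [i for i, h in enumerate(header) if h not in wanted]
    else:
        keep = [i for i, h in enumerate(header) if h in wanted]
    out = io.StringIO()
    writer = csv.writer(out)
    for row in rows:
        writer.writerow([row[i] for i in keep])
    return out.getvalue()

# Drops the 'rain' column from a tiny weather-style CSV.
print(filter_columns("date,temp,rain\n2020-01-01,8,0\n", "rain"))
```

The same helper covers both operations: `mode="drop"` removes the listed columns, `mode="pick"` keeps only the listed ones.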
47.413793
207
0.781818
eng_Latn
0.99952
6ff1022a783cd5b4fe139bd01935b8922937223e
210
md
Markdown
_portfolio/rock-on.md
dcschreck/dcschreck.github.io
d1942877a666a682ab2d48c9f5598a286c757296
[ "MIT" ]
null
null
null
_portfolio/rock-on.md
dcschreck/dcschreck.github.io
d1942877a666a682ab2d48c9f5598a286c757296
[ "MIT" ]
null
null
null
_portfolio/rock-on.md
dcschreck/dcschreck.github.io
d1942877a666a682ab2d48c9f5598a286c757296
[ "MIT" ]
null
null
null
---
layout: post
title: DiscussIt
link: discuss-it
thumbnail-path: "img/discuss-it.png"
short-description: A discussion forum app. Sign up, add posts, and up-vote your favorites. Built with Ruby on Rails.
---
23.333333
116
0.742857
eng_Latn
0.932537
6ff1126b66006a6a39377e6340f2c3ad818612bf
1,007
md
Markdown
README.md
ameyms/tabulate
8213a9112ba2a69b21c9c32b9431934b3a2dd126
[ "MIT" ]
1
2016-03-01T09:50:12.000Z
2016-03-01T09:50:12.000Z
README.md
ameyms/tabulate
8213a9112ba2a69b21c9c32b9431934b3a2dd126
[ "MIT" ]
null
null
null
README.md
ameyms/tabulate
8213a9112ba2a69b21c9c32b9431934b3a2dd126
[ "MIT" ]
null
null
null
Tabulate [![Build Status](https://travis-ci.org/ameyms/tabulate.png)](https://travis-ci.org/ameyms/tabulate)
========

A jQuery plugin for working with paginated tables, with emphasis on [Bootstrap](http://getbootstrap.com "Twitter Bootstrap")

Final API Goal:

```javascript
$('#mytable').tabulate({
    // Function that returns a jQuery deferred
    source: xhrSource,

    // A Function that accepts row, column and item and returns the td's content as html
    renderer: renderer,

    // String or Function that accepts row, column and item and returns class string
    cellClass: foo,

    // String or Function that accepts row, column and item
    // and returns an object that is set as $.data() for the td
    cellMeta: bar,

    pagination: $('#mypagination') // Bootstrap 'pagination' control
});
```

__NOTE:__ Column resize has not yet been implemented

License
------

This library is available under the [MIT license](https://github.com/ameyms/tabulate/blob/master/LICENSE "License")
26.5
123
0.703078
eng_Latn
0.823258
6ff1780a7798d51ea1c31d44dbbb62fac26286b8
1,725
md
Markdown
README.md
EaBro/aku
6aea57c8af8637ecb3240e14619d687a19e7400a
[ "Unlicense" ]
1
2020-12-05T06:06:11.000Z
2020-12-05T06:06:11.000Z
README.md
EaBro/aku
6aea57c8af8637ecb3240e14619d687a19e7400a
[ "Unlicense" ]
null
null
null
README.md
EaBro/aku
6aea57c8af8637ecb3240e14619d687a19e7400a
[ "Unlicense" ]
null
null
null
<p align="center">
<img src="https://raw.githubusercontent.com/X-PrCx12/BotAku/master/media/img/Kaguya.png" width="128" height="128"/>
</p>
<p align="center">
<a href="#"><img title="Kaguya.png" src="https://img.shields.io/badge/BotAku-green?colorA=%23ff0000&colorB=%23017e40&style=for-the-badge"></a>
</p>
<p align="center">
<a href="https://github.com/X-PrCx12"><img title="Author" src="https://img.shields.io/badge/Author-X-PrCx12-red.svg?style=for-the-badge&logo=github"></a>
</p>
<p align="center">
<a href="https://github.com/X-PrCx12/followers"><img title="Followers" src="https://img.shields.io/github/followers/X-PrCx12?color=blue&style=flat-square"></a>
<a href="https://github.com/X-PrCx12/BotAku/stargazers/"><img title="Stars" src="https://img.shields.io/github/stars/X-PrCx12/BotAku?color=red&style=flat-square"></a>
<a href="https://github.com/X-PrCx12/BotAku/network/members"><img title="Forks" src="https://img.shields.io/github/forks/X-PrCx12/BotAku?color=red&style=flat-square"></a>
<a href="https://github.com/X-PrCx12/BotAku/watchers"><img title="Watching" src="https://img.shields.io/github/watchers/X-PrCx12/BotAku?label=Watchers&color=blue&style=flat-square"></a>
<a href="#"><img title="UNMAINTENED" src="https://img.shields.io/badge/UNMAINTENED-YES-blue.svg"></a>
</p>

## INSTALL

    $ git clone https://github.com/X-PrCx12/BotAku
    $ cd BotAku
    $ npm i -g cwebp
    $ npm i -g ytdl
    $ npm i
    $ node index.js

*The bot is now ready! Scan the QR code with another phone (a friend's, partner's, or parent's) ✨*

## If the Bot Goes Offline / To Start It Again

    $ cd BotAku (cd into the project folder)
    $ node index.js

And done... the bot is active again.

## INFO

* [`Instagram Admin`](https://instagram.com/ini.pfff)
* [`WhatsApp Admin `](https://wa.me/+6281260899819)
35.204082
185
0.706087
yue_Hant
0.567818
6ff2b7b8ce6da9f7a33907978910cc90d0c3ff95
266
md
Markdown
content/gettingstarted/4expoStart.md
alexander-beaver/react-native-game-docs
6e821aeae2461036c399f56d79632953b44135b8
[ "MIT" ]
null
null
null
content/gettingstarted/4expoStart.md
alexander-beaver/react-native-game-docs
6e821aeae2461036c399f56d79632953b44135b8
[ "MIT" ]
null
null
null
content/gettingstarted/4expoStart.md
alexander-beaver/react-native-game-docs
6e821aeae2461036c399f56d79632953b44135b8
[ "MIT" ]
null
null
null
---
title: "Expo Start"
metaTitle: "Expo Start"
metaDescription: "Learn how to start your Expo project"
---

To start the project, just run `expo start` in your repository

<iframe src="https://showterm.io/17b1dc1f0ab412aa42e9e#78" width="100%" height="320"></iframe>
33.25
94
0.736842
eng_Latn
0.61791
6ff2f51f67471ec42365fd8683af0e235ac47148
14,543
md
Markdown
README.md
mkitt/watchn
c47d4ab01ba34ad939750b2c45c1c9c876280cc4
[ "MIT" ]
1
2019-05-14T15:41:15.000Z
2019-05-14T15:41:15.000Z
README.md
mkitt/watchn
c47d4ab01ba34ad939750b2c45c1c9c876280cc4
[ "MIT" ]
1
2017-04-08T16:21:06.000Z
2017-04-08T16:21:06.000Z
README.md
mkitt/watchn
c47d4ab01ba34ad939750b2c45c1c9c876280cc4
[ "MIT" ]
null
null
null
# watchn

Intelligently and continuously auto execute tasks on file/directory changes. Language, framework and library agnostic.

## Get It Going

    npm install watchn -g
    cd workspace/project
    watchn runner .watchn
    watchn .watchn

## More Meat

watchn aims to automate the repetitive tasks developers run throughout the day: running tests, generating documentation, concatenating and minifying files. You know, all those tasks we hammer together inside a `Makefile`, `Rakefile`, `Cakefile`, `Jakefile` or even `Ant` (_cringe_). In fact, hooking into these files is exactly what it's designed for.

watchn is really an elaborate file/directory watcher that directs its notifications into callbacks defined by the user. watchn is built to run in the background, so you can write your code without leaving your current window to run build scripts or tests.

watchn can be as quiet or as loud as you want it to be. It's really up to you to define your preferences and what watchn executes on.

watchn can associate a file change in a single directory with multiple tasks. Say you code your application in [CoffeeScript][coffee] stored in `lib/src`, and you've put a couple of watchers on the `lib` directory for compiling [CoffeeScript][coffee], running your tests, generating documentation, and concatenating and minifying the output. Saving a [CoffeeScript][coffee] file will trigger all of these tasks, and you get immediate feedback on the status of their results.

watchn also does some fancy code reloading, so it knows when you add a file/directory, remove a file/directory, or even update the runner file you've set up to hook into your tasks.

Why use this over the built-in watchers that come with most libraries? Generally, if you are using various libraries together (say, [scss][sass], [coffeescript][coffee], [jasmine][jasmine], etc.) you would most likely have numerous watchers activated, generating various output in multiple windows or background tasks. watchn combines these into one single watcher and is ready to yell at you if you get it wrong, or pat you on the back when your tasks run successfully.

## Installation

    npm install watchn -g

Once watchn is installed it gives you an executable you can access from your CLI. Run `watchn help` from the command line and it will give you some basic help information.

The second part is creating your own runner file on a per-project basis, which is what watchn uses to know which directories/files to watch and how to handle the callbacks when one of these items has changed. watchn can help you with this by creating a stub file:

    watchn runner .watchn

The `.watchn` file can be named anything your little heart desires. Put a `.` in front of it, call it `peepingtom.js`, whatever floats your boat. It doesn't even have to be a `.js` file, but JavaScript is what the runner file is written in, so go absolutely nuts.

The stub file includes a single watchn method based on running `make test`. This can be changed fairly easily, so take a look at "Anatomy of a watchn runner" for more information.

[Check out the .watchn file][.watchn] to see the runner associated with this project and the various tasks it's calling. [Check out the annotated source files][annotated] to look under the covers.

## Anatomy of a watchn runner

Generating the watchn runner stub gives you the following:

```javascript
var tests = './test/'
var libs = './lib/'

module.exports.init = function(watchn) {
  watchn.watch('test', [tests, libs], function(options) {
    watchn.execute('make test', options, 'jasmine', false, true)
  })
}
```

Let's break it down:

```javascript
var tests = './test/'
var libs = './lib/'
```

The `tests` and `libs` variables are just paths to common directories. These look familiar, right? Note, watchn will try to normalize paths and bark at you if it can't.

```javascript
module.exports.init = function(watchn) {
  watchn.watch('test', [tests, libs], function(options) {
    watchn.execute('make test', options, 'jasmine', false, true)
  })
}
```

The `module.exports.init = function(watchn)` method is required, and it should house the various `watchn.watch` callbacks. An instance of `watchn` is passed into this function at initialization.

The `watchn.watch` method takes an `id` as the first argument. This can be just about anything you like as long as there are no duplicate keys; it's used internally as a key to look up callbacks when a file is changed. Since it's used as a key, name it appropriately (avoid spaces, etc.).

The second argument is an array of directories to watch. This has to be an array if you are passing more than one directory or file. If only a single file/directory needs watching, watchn will convert it internally for you.

The third argument is the callback function, which is triggered when a change is detected. The callback can house whatever you want done when the file changes. In this instance the callback calls `watchn.execute` with some parameters. Oh man, it's about to get so much better right now. This is a convenience function for hooking into the built-in `reporters` packaged up in this puppy. Let's break this function call down:

```javascript
watchn.execute('make test', options, 'jasmine', false, true)
```

The first argument, `'make test'`, just calls a task in a Makefile. This could as easily be `'rake test'`, `'cake test'`, `'jake test'`, or a direct shell-out to an external script. This argument is simply passed through node's `child_process.exec()` method.

The second argument, `options`, just passes those back to watchn from the `watchn.watch` callback.

The third argument is the string name of the built-in or custom reporter to use. See the "Built In Reporters" section for more info on the packaged `reporters`.
The fourth and fifth arguments are "show [growl][growl] message on success" and "show [growl][growl] message on failure" respectively. By default the success message is set to `false` and the failure message is set to `true`. Don't worry, if you aren't a fan of [growl][growl] and don't have it installed, watchn won't do something stupid and install it for you. You're on your own with that one.

If you want to handle the callbacks on your very own and not use any of the built-in `reporters`, check out the "Custom Callbacks" and "Custom Reporters" sections.

## Built In Reporters

watchn comes packed with a slew of `reporters` for common libraries. These have been tuned for [growl][growl] notifications (if [growl][growl] messages float your boat) and for outputting the results of a task to the console. Below is a list of `reporters` and the string name used in the `watchn.execute` function call:

### General

- reporter: Basic reporter that lets you know what task ran and whether it passed or failed. Primarily used for simple tasks or when there isn't a reporter for a specific library
- [docco][docco]
- [uglify][uglify]

### Testing

- [expresso][expresso]
- [jasmine][jasmine]
- [jasmine_dom][jasmine_dom]
- [vows][vows]
- [jshint][jshint]

### Languages

- [coffee][coffee]
- [sass][sass]
- [scss][sass]
- [stylus][stylus]
- [haml][haml]
- [jade][jade]

Take a look at the [.watchn][.watchn] file for their usage.

## Custom Reporters

Not finding a reporter for your favorite library? Write your own and toss it somewhere in your project. Check out an example of a [custom reporter][custom] to get an idea of what's required and how it's put together.

There are a couple of prerequisites. The file needs to be postfixed with "`_reporter.js`" (i.e. `myreporter_reporter.js`). This is to hopefully avoid naming collisions.
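As a sketch, a file named `awesomeness_reporter.js` might look like the following. Only the constructor, the `name` property, and the `report` method reflect the minimum contract this section describes; the `(error, stdout, stderr)` signature and the message formatting are assumptions for illustration.

```javascript
// Hypothetical "awesomeness_reporter.js" sketch. A constructor plus a
// `name` and a `report` method are the minimum contract; everything
// else here is illustrative.
var AwesomenessReporter = function() {
  // The string name watchn.execute would look this reporter up by.
  this.name = 'awesomeness'
}

// Turns the raw result of an executed task into a one-line summary
// suitable for the console or a growl notification.
AwesomenessReporter.prototype.report = function(error, stdout, stderr) {
  if (error) {
    return '[awesomeness] FAILED: ' + (stderr || error.message)
  }
  return '[awesomeness] passed'
}

module.exports = AwesomenessReporter
```

Dropping a file like this anywhere under the project's `cwd` and referencing `'awesomeness'` in `watchn.execute` would then route task output through it.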
Then in your `.watchn` file, for the reporter `awesomeness_reporter.js`, you'd call:

```javascript
watchn.execute('make awesomeness', options, 'awesomeness', false, true)
```

Your custom reporter should follow the [example file][custom], but at the very least it needs a `constructor` with a `name` and a `report` method. You can stick this file anywhere in your project's `cwd` and watchn will find it for you.

Sick of creating the same custom reporter for each project? Send us a pull request and we'll add it in for you.

## Custom Callbacks

Hey **mkitt**, your concept of `reporters` sucks! I just want to do my own thing. Yep, we got that. The core of watchn is really just watching files and responding to their changes. If you want to roll your own way of doing things when a file changes, do the following in your `.watchn` file:

```javascript
var child = require('child_process')

watchn.watch('test', [tests, libs], function(options) {
  if (options.curr > options.prev) {
    child.exec('make test', function(error, stdout, stderr) {
      if (error !== null) {
        // better do something cause it failed
      } else {
        // it worked!!
      }
    })
  }
})
```

Ignoring the `watchn.watch` line (we covered that already), the first conditional checks whether the file has actually changed:

```javascript
if (options.curr > options.prev) {
  // block not shown...
}
```

After that, you can run your task (most likely shelling out), grab a beer, smoke a dugan, or do whatever makes you happy. The callback really just tells you a file changed; "now what?" is totally up to you.

## Notifications

watchn likes to tell you stuff. How its day went, who took the drunk guy home from the bar last night, whether [Jennifer Aniston](http://www.google.com/search?q=jennifer+aniston&hl=en&prmd=imvnsuol&tbm=isch&tbo=u&source=univ&sa=X&ei=uEV0ToqqGcThiALF-5WzAg&ved=0CFgQsAQ&biw=1517&bih=943) is pregnant or not. Don't worry though, you don't have to listen, unless you want to.

By default watchn outputs a bunch of stuff to your console. Did the test pass, did it generate the file, did it fail, what was the stack trace? watchn will tell you. It also checks if you have [growl][growl] installed and can optionally send these messages to you if you opt in. You enable [growl][growl] manually from your `.watchn` file.

Remember from above: if a task passes, we don't care and nothing will [growl][growl]; if it fails, watchn will [growl][growl] the message by default. Override that by passing `false` for both parameters in the `watchn.execute` call and [growl][growl] will shut up. Of course, if you don't have [growl][growl] installed, watchn won't care, and it won't do something stupid like install it for you.

If you couldn't care less, either background the process or start watchn in silent mode: `watchn -s .watchn`.

Want to hear everything? Make sure both of those parameters are set to `true` and watchn will [growl][growl] it all for you. Don't have [growl][growl] installed? Try:

    brew install growlnotify

## CLI Options

watchn packs in some helper methods for you. It can generate a default `.watchn` runner file and stub out `watchn.watch` and `watchn.execute` methods for specific `reporters`. Just run `watchn help` to get the list below.
    Usage: watchn [options] <program>

    Program <required>:
      <program>               The runner program to respond to watched items

    Options [optional]:
      -h, [--]help            Output help information
      -v, [--]version         Output the current version
      -s, [--]silent          Quiet watchn except for errors
      -r, [--]runner <name>   Basic stub for a new runner file
      -t, [--]template <name> Generate a watchn.watch method
      -l, [--]list            List available templates for generation

    Examples:
      watchn .watchn          Starts watchn with an existing runner file
      watchn -s .watchn       Starts watchn in quiet mode with a runner file
      watchn -r .watchn       Generates a default runner file named ".watchn"
      watchn -l               Lists available templates for various libraries
      watchn -t coffee        Outputs a watch method for coffeescript to stdout
      watchn -t docco         Outputs a watch method for docco to stdout
      watchn -t expresso      Outputs a watch method for expresso to stdout
      watchn -t generic       Outputs a watch method for generic tasks to stdout
      watchn -t haml          Outputs a watch method for haml to stdout
      watchn -t jade          Outputs a watch method for jade to stdout
      watchn -t jasmine       Outputs a watch method for jasmine-node to stdout
      watchn -t jasmine_dom   Outputs a watch method for jasmine-dom to stdout
      watchn -t jshint        Outputs a watch method for jshint to stdout
      watchn -t sass          Outputs a watch method for sass to stdout
      watchn -t scss          Outputs a watch method for scss to stdout
      watchn -t stylus        Outputs a watch method for stylus to stdout
      watchn -t uglify        Outputs a watch method for uglify to stdout
      watchn -t vows          Outputs a watch method for vows to stdout

## Todo

- Utility method for finding files based on filetype for watchn
- Figure out how to broadcast a message when watchn crashes
- Remove [growl][growl]
- GH Pages site
- Peer review
- Upgrade to new node version
- Add compass to the reporters?
- Test out mocha
- Add travis CI
- Finish notes

## Inspiration

Loosely based on [mynyml's fabulous watchr for ruby](http://mynyml.com/ruby/flexible-continuous-testing)

## Contributing

Please do. watchn is in active development and encourages additions, changes and bug fixes. File an issue or send us a pull request and we'll happily get it in. Thanks in advance!!

## License

[The MIT License][license]

<!-- Links! -->
[.watchn]: https://github.com/mkitt/watchn/blob/master/.watchn
[license]: https://github.com/mkitt/watchn/blob/master/LICENSE.md
[custom]: https://github.com/mkitt/watchn/blob/master/examples/custom-reporter/custom_reporter.js
[annotated]: http://mkitt.github.com/watchn/
[docco]: http://jashkenas.github.com/docco/
[uglify]: https://github.com/mishoo/UglifyJS
[expresso]: http://visionmedia.github.com/expresso/
[jasmine]: https://github.com/mhevery/jasmine-node/
[jasmine_dom]: https://github.com/andrewpmckenzie/node-jasmine-dom
[jshint]: https://github.com/jshint/node-jshint
[vows]: http://vowsjs.org/
[coffee]: http://jashkenas.github.com/coffee-script/
[sass]: http://sass-lang.com/
[stylus]: http://learnboost.github.com/stylus/docs/js.html
[haml]: http://haml-lang.com/
[jade]: http://jade-lang.com/
[growl]: http://growl.info/
57.255906
1,347
0.729561
eng_Latn
0.996702
6ff3221c5de5b1bb2885b9c283cc8a2c2e46cf82
113
md
Markdown
README.md
YuriRevin/Coursera_HSE_Py_Ass-4.6-Exponentiation
ea6213ace3cd52f81662596da4daf503b572b544
[ "MIT" ]
null
null
null
README.md
YuriRevin/Coursera_HSE_Py_Ass-4.6-Exponentiation
ea6213ace3cd52f81662596da4daf503b572b544
[ "MIT" ]
null
null
null
README.md
YuriRevin/Coursera_HSE_Py_Ass-4.6-Exponentiation
ea6213ace3cd52f81662596da4daf503b572b544
[ "MIT" ]
null
null
null
# Coursera_HSE_Py_Ass-4.6-Exponentiation

Coursera_High_School_of_Economics_Python_Assignments 4.6-Exponentiation
37.666667
71
0.911504
eng_Latn
0.235174
6ff3505be0bb338144e389648c40557b622e228b
8,835
md
Markdown
2017/regras/posicoes.md
brazilianleague/brazilianleague.github.io
081665d3ff0d852ea2cb11ac70af61e2034ac5a9
[ "MIT" ]
null
null
null
2017/regras/posicoes.md
brazilianleague/brazilianleague.github.io
081665d3ff0d852ea2cb11ac70af61e2034ac5a9
[ "MIT" ]
null
null
null
2017/regras/posicoes.md
brazilianleague/brazilianleague.github.io
081665d3ff0d852ea2cb11ac70af61e2034ac5a9
[ "MIT" ]
null
null
null
# Positions and Scoring

For the 2017 season, the **Brazilian League** will feature **10** positions:

1. [_Team Quarterback_ (TQB)](#team-quarterback)
1. [_Running Back_ (RB)](#running-back)
1. [_Wide Receiver_ (WR)](#wide-receiver)
1. [_Tight End_ (TE)](#tight-end)
1. [_Flex_ (RB/WR/TE)](#flex)
1. [_Flex_ (RB/WR/TE)](#flex)
1. [_Defensive Player Utility_ (DP)](#defensive-player-utility)
1. [_Team Defense/Special Teams_ (D/ST)](#team-defensespecial-teams)
1. [_Place Kicker_ (K)](#place-kicker)
1. [_Head Coach_ (HC)](#head-coach)

Besides the 10 positions, each team may keep up to **5** players on the [_Bench_ (BE)](#bench) and up to **3** injured players on the [_Injured Reserve_ (IR)](#injured-reserve).

Scoring is divided into **8** categories:

1. [_Passing_](#passing)
1. [_Rushing_](#rushing)
1. [_Receiving_](#receiving)
1. [_Miscellaneous_](#miscellaneous)
1. [_Kicking_](#kicking)
1. [_Defensive Players_](#defensive-players)
1. [_Head Coach_](#head-coach-1)
1. [_Team Defense/Special Teams_](#team-defensespecial-teams-1)

------

# Positions

## _Team Quarterback_

* **Abbreviation**: TQB
* **Description**: Football team position responsible for starting the team's plays and throwing passes to the wide receivers and tight ends.
* **Maximum per Team**: 2
* **Primary Scoring**: [_Passing_](#passing)
* **Extra Scoring**: [_Rushing_](#rushing), [_Miscellaneous_](#miscellaneous)

## _Running Back_

* **Abbreviation**: RB
* **Description**: Football player whose role is to run with the ball, which may be handed off by the quarterback or snapped directly from the center.
* **Maximum per Team**: 3
* **Primary Scoring**: [_Rushing_](#rushing)
* **Extra Scoring**: [_Receiving_](#receiving), [_Miscellaneous_](#miscellaneous)

## _Wide Receiver_

* **Abbreviation**: WR
* **Description**: Football player whose role is to catch passes from the quarterback.
* **Maximum per Team**: 3
* **Primary Scoring**: [_Receiving_](#receiving)
* **Extra Scoring**: [_Rushing_](#rushing), [_Miscellaneous_](#miscellaneous)

## _Tight End_

* **Abbreviation**: TE
* **Description**: Football player whose role is to block for the running back and quarterback, and also to catch passes.
* **Maximum per Team**: 3
* **Primary Scoring**: [_Receiving_](#receiving)
* **Extra Scoring**: [_Rushing_](#rushing), [_Miscellaneous_](#miscellaneous)

## _Flex_

* **Abbreviation**: RB/WR/TE
* **Description**: One of the following football positions: running back, wide receiver or tight end.
* **Maximum per Team**: Up to 3 of each position
* **Scoring**: According to the chosen position

## _Defensive Player Utility_

* **Abbreviation**: DP
* **Description**: One of the following football positions: defensive tackle, defensive end, linebacker, cornerback or safety.
* **Maximum per Team**: Up to 2 of each position
* **Primary Scoring**: [_Defensive Players_](#defensive-players)
* **Extra Scoring**: [_Miscellaneous_](#miscellaneous)

## _Team Defense/Special Teams_

* **Abbreviation**: D/ST
* **Description**: Team-level position covering the actions of the team's defensive unit and special teams unit.
* **Maximum per Team**: 2
* **Primary Scoring**: [_Team Defense/Special Teams_](#team-defensespecial-teams-1)
* **Extra Scoring**: [_Miscellaneous_](#miscellaneous)

## _Place Kicker_

* **Abbreviation**: K
* **Description**: Football player whose role is to kick field goals, extra points and kickoffs.
* **Maximum per Team**: 2
* **Primary Scoring**: [_Kicking_](#kicking)
* **Extra Scoring**: [_Miscellaneous_](#miscellaneous)

## _Head Coach_

* **Abbreviation**: HC
* **Description**: Football team position responsible for making sure the team wins.
* **Maximum per Team**: 1
* **Scoring**: [_Head Coach_](#head-coach-1)

## _Bench_

The _bench_ is a set of roster slots for a team's reserves. There are **5** bench slots available per team. Any position may occupy a bench slot, as long as the maximum per team is respected. The maximum for each position is:

* **Team Quarterback**: 2
* **Running Back**: 3
* **Wide Receiver**: 3
* **Tight End**: 3
* **Defensive Tackle**: 2
* **Defensive End**: 2
* **Linebacker**: 2
* **Cornerback**: 2
* **Safety**: 2
* **Team Defense/Special Teams**: 2
* **Place Kicker**: 2
* **Head Coach**: 1

## _Injured Reserve_

In exceptional cases, up to **3** injured reserve slots may be occupied by injured players.

------

# Scoring

## _Passing_

|Code|Points|Description|
|:-:|:-:|:-:|
|PY25|+5|Every 25 passing yards|
|PC|+2|Completed pass|
|INC|-2|Incomplete pass|
|PTD|+40|Passing TD|
|PTD40|+10|Bonus for a passing TD of 40+ yards|
|PTD50|+20|Bonus for a passing TD of 50+ yards|
|INT|-40|Pass intercepted|
|2PC|+10|2-point conversion pass|
|P300|+20|Game with 300-399 passing yards|
|P400|+40|Game with 400+ passing yards|
|SK|-20|Sack taken|

## _Rushing_

|Code|Points|Description|
|:-:|:-:|:-:|
|RY10|+5|Every 10 rushing yards|
|RTD|+40|Rushing TD|
|RTD40|+10|Bonus for a rushing TD of 40+ yards|
|RTD50|+20|Bonus for a rushing TD of 50+ yards|
|2PR|+10|2-point conversion rush|
|RY100|+20|Game with 100-199 rushing yards|
|RY200|+40|Game with 200+ rushing yards|

## _Receiving_

|Code|Points|Description|
|:-:|:-:|:-:|
|REY10|+5|Every 10 receiving yards|
|REC|+10|Reception made|
|RETD|+40|Receiving TD|
|RETD40|+10|Bonus for a receiving TD of 40+ yards|
|RETD50|+20|Bonus for a receiving TD of 50+ yards|
|2PRE|+10|2-point conversion reception|
|REY100|+20|Game with 100-199 receiving yards|
|REY200|+40|Game with 200+ receiving yards|

## _Miscellaneous_

|Code|Points|Description|
|:-:|:-:|:-:|
|KRTD|+40|Kickoff return TD|
|PRTD|+40|Punt return TD|
|FTD|+40|TD after a recovered fumble|
|FUML|-20|Fumble lost|
|INTTD|+40|Interception return TD|
|FRTD|+40|Fumble return TD|
|BLKKRTD|+40|TD after a blocked punt or FG return|
|2PTRET|+20|2-point conversion on a return|
|1PSF|+20|Safety scored|

## _Kicking_

|Code|Points|Description|
|:-:|:-:|:-:|
|PAT|+10|PAT made|
|PATM|-20|PAT missed|
|FG|+40|FG made|
|FG40|+40|FG made from 40-49 yards|
|FG50|+80|FG made from 50+ yards|
|FGM0|-20|FG missed from 0-39 yards|
|FGM40|-10|FG missed from 40-49 yards|

## _Defensive Players_

|Code|Points|Description|
|:-:|:-:|:-:|
|SK|+50|Sack made|
|TK|+10|Tackle made|
|BLKK|+50|Punt, PAT or FG blocked|
|INT|+50|Interception made|
|FR|+50|Fumble forced and recovered|
|FF|+20|Fumble forced, not recovered|
|SF|+50|Safety made|
|TKA|+5|Tackle made (assisted)|
|TKS|+20|Tackle made (solo)|
|SF|+20|Tackle that ends the opposing drive|
|PD|+20|Passes defended|

## _Head Coach_

|Code|Points|Description|
|:-:|:-:|:-:|
|TW|+350|Team won|
|TL|-100|Team lost|
|TIE|+150|Team tied|

## _Team Defense/Special Teams_

|Code|Points|Description|
|:-:|:-:|:-:|
|KR10|+5|Every 10 kickoff yards|
|PR10|+5|Every 10 punt yards|
|SK|+20|Sack made|
|INTTD|+40|Interception return TD|
|FRTD|+40|Fumble return TD|
|KRTD|+40|Kickoff return TD|
|PRTD|+40|Punt return TD|
|BLKKRTD|+40|TD after a blocked punt or FG return|
|BLKK|+40|Punt, PAT or FG blocked|
|INT|+40|Interception made|
|FR|+40|Fumble forced and recovered|
|FF|+10|Fumble forced, not recovered|
|SF|+20|Safety made|
|SF|+10|Tackle that ends the opposing drive|
|PA0|+150|Game with 0 points allowed|
|PA1|+100|Game with 1-6 points allowed|
|PA7|+50|Game with 7-13 points allowed|
|PA18|-25|Game with 18-21 points allowed|
|PA22|-50|Game with 22-27 points allowed|
|PA28|-75|Game with 28-34 points allowed|
|PA35|-100|Game with 35-45 points allowed|
|PA46|-150|Game with more than 45 points allowed|
|YA100|+150|Fewer than 100 yards allowed|
|YA199|+100|Between 100 and 199 yards allowed|
|YA349|-50|Between 300 and 349 yards allowed|
|YA399|-50|Between 350 and 399 yards allowed|
|YA449|-75|Between 400 and 449 yards allowed|
|YA499|-75|Between 450 and 499 yards allowed|
|YA549|-100|Between 500 and 549 yards allowed|
|YA550|-150|More than 549 yards allowed|
|2PTRET|+40|2-point conversion on a return|
|1PSF|+20|Safety scored|
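To make the scoring tables concrete, here is a small sketch that scores a quarterback's passing line using five core Passing rows (PY25, PC, INC, PTD, INT). The function name and the shape of the stats object are my own; the bonus and milestone rows (PTD40/PTD50, 2PC, P300/P400, SK) are left out for brevity.

```javascript
// Sketch: score a passing stat line from the Passing table.
// Bonus rows omitted; helper name and stats shape are assumptions.
function passingPoints(stats) {
  var points = 0
  points += Math.floor(stats.passYards / 25) * 5   // PY25: +5 per full 25 yards
  points += stats.completions * 2                  // PC:  +2 each
  points -= stats.incompletions * 2                // INC: -2 each
  points += stats.passTDs * 40                     // PTD: +40 each
  points -= stats.interceptions * 40               // INT: -40 each
  return points
}
```

For example, 300 yards, 20 completions, 10 incompletions, 2 TDs and 1 interception comes out to 60 + 40 - 20 + 80 - 40 = 120 points before bonuses.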
34.377432
175
0.720996
por_Latn
0.991212
6ff41398111c0f9fedfbaa184fe7fff433321a6d
109
md
Markdown
release-notes/android/xamarin.android_4/xamarin.android_4.11/level_14_diff/mono.android.export.dll/index.md
xamarin/release-notes-archive
9bac84d3db0a16bcb258c602f71eccfa814ba0a0
[ "CC-BY-4.0", "MIT" ]
5
2020-06-17T17:52:53.000Z
2021-06-29T04:11:41.000Z
release-notes/android/xamarin.android_4/xamarin.android_4.12/level_14_diff/mono.android.export.dll/index.md
xamarin/release-notes-archive
9bac84d3db0a16bcb258c602f71eccfa814ba0a0
[ "CC-BY-4.0", "MIT" ]
null
null
null
release-notes/android/xamarin.android_4/xamarin.android_4.12/level_14_diff/mono.android.export.dll/index.md
xamarin/release-notes-archive
9bac84d3db0a16bcb258c602f71eccfa814ba0a0
[ "CC-BY-4.0", "MIT" ]
12
2019-08-07T14:31:02.000Z
2022-02-14T09:33:24.000Z
---
id: F33C1125-2264-4885-87B3-92B02BF5EAA6
title: "Mono.Android.Export.dll"
---

# Mono.Android.Export.dll
15.571429
40
0.724771
kor_Hang
0.174051
6ff42e14163100a03e240f57b68d2d5b67c7b780
1,792
md
Markdown
_posts/codewars/2020-04-15-Codewars-24.md
sunlike0508/sunlike0508.github.io
3772e5e81ed736bf2e6caf7dee76d3424bebfd0d
[ "MIT" ]
null
null
null
_posts/codewars/2020-04-15-Codewars-24.md
sunlike0508/sunlike0508.github.io
3772e5e81ed736bf2e6caf7dee76d3424bebfd0d
[ "MIT" ]
null
null
null
_posts/codewars/2020-04-15-Codewars-24.md
sunlike0508/sunlike0508.github.io
3772e5e81ed736bf2e6caf7dee76d3424bebfd0d
[ "MIT" ]
null
null
null
---
title: "CodeWars Problem 24"
excerpt: "Double Cola"
classes: wide
categories:
  - CodeWars
tags:
  - CodeWars
  - 5kyu
last_modified_at: 2020-04-15
---

#### [Double Cola](https://www.codewars.com/kata/551dd1f424b7a4cdae0001f0)

```java
public static String WhoIsNext(String[] names, int n) {
    int beforeCircleFirstNamesOrder = 1;
    int afterCircleFirstNamesOrder = 1;
    int beforeCircleEachNameLength = 1;
    int afterCircleEachNameLength = 1;

    for (int circleTime = 1; n >= afterCircleFirstNamesOrder; circleTime++) {
        beforeCircleFirstNamesOrder = afterCircleFirstNamesOrder;
        beforeCircleEachNameLength = afterCircleEachNameLength;
        afterCircleEachNameLength = findCircleEachNameLength(circleTime);
        afterCircleFirstNamesOrder = findCircleFirstNamesOrder(afterCircleEachNameLength, names.length);
    }

    return names[(n - beforeCircleFirstNamesOrder) / beforeCircleEachNameLength];
}

public static int findCircleFirstNamesOrder(int circleEachNameLength, int firstNameLength) {
    return (circleEachNameLength * firstNameLength) - (firstNameLength - 1);
}

public static int findCircleEachNameLength(int circleTime) {
    return (int) Math.pow(2, circleTime);
}
```

*Ha... I thought I had solved this fairly well, but the best-practice code solved it really simply(?). The problem is that even staring at it, I can't see why that formula works.

*When I first tackled the problem, I misunderstood its intent.

*The intent of the problem is as follows:

```
S L P R H
SS LL PP RR HH (not SSS LLL PPP RRR HHH; right below, it jumps straight to 4 per name)
SSSS LLLL PPPP RRRR HHHH
```

*In the end I never understood the best-practice code. Giving up...

*This is why engineering students are right to keep math close, no matter how much of a developer you are. In the end, algorithms are math.

```java
// This is the best-practice code... I'm speechless. I could never come up with this in a lifetime.
public class Line {
    public static String WhoIsNext(String[] names, int n) {
        while (n > names.length) {
            n = (n - (names.length - 1)) / 2;
        }
        return names[n - 1];
    }
}
```
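The halving loop in the best-practice solution can be checked quickly with a direct JavaScript port (function spelling and test values are my own). Each full pass through the line doubles every name's run length, so mapping a position back into the previous pass roughly halves its offset; the one subtlety is that Java's integer division truncates, hence `Math.floor`.

```javascript
// JavaScript port (sketch) of the best-practice Java solution.
function whoIsNext(names, n) {
  // While n lies beyond the first pass, fold it back into the
  // previous (half-length) pass.
  while (n > names.length) {
    n = Math.floor((n - (names.length - 1)) / 2)
  }
  return names[n - 1]
}
```

For the classic lineup, positions 1 and 6 both land on Sheldon, matching the S / SS / SSSS expansion sketched above.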
25.239437
104
0.704799
kor_Hang
0.977855
6ff4a35ae4422cd9db44e6c5df4cff2608c0ad16
43,241
md
Markdown
socrata/86i3-9wpd.md
axibase/open-data-catalog
18210b49b6e2c7ef05d316b6699d2f0778fa565f
[ "Apache-2.0" ]
7
2017-05-02T16:08:17.000Z
2021-05-27T09:59:46.000Z
socrata/86i3-9wpd.md
axibase/open-data-catalog
18210b49b6e2c7ef05d316b6699d2f0778fa565f
[ "Apache-2.0" ]
5
2017-11-27T15:40:39.000Z
2017-12-05T14:34:14.000Z
socrata/86i3-9wpd.md
axibase/open-data-catalog
18210b49b6e2c7ef05d316b6699d2f0778fa565f
[ "Apache-2.0" ]
3
2017-03-03T14:48:48.000Z
2019-05-23T12:57:42.000Z
# 2014 Transparency Provider Level Data

## Dataset

| Name | Value |
| :--- | :---- |
| Catalog | [Link](https://catalog.data.gov/dataset/2014-transparency-provider-level-data) |
| Metadata | [Link](https://data.illinois.gov/api/views/86i3-9wpd) |
| Data: JSON | [100 Rows](https://data.illinois.gov/api/views/86i3-9wpd/rows.json?max_rows=100) |
| Data: CSV | [100 Rows](https://data.illinois.gov/api/views/86i3-9wpd/rows.csv?max_rows=100) |
| Host | data.illinois.gov |
| Id | 86i3-9wpd |
| Name | 2014 Transparency Provider Level Data |
| Category | Health-Medicaid |
| Tags | hfs, medicaid, health |
| Created | 2016-07-01T18:58:01Z |
| Publication Date | 2016-07-05T13:49:31Z |

## Columns

```ls
| Included | Schema Type | Field Name | Name | Data Type | Render Type |
| ======== | =========== | ========== | ==== | ========= | =========== |
| Yes | series tag | providerkeyid | #ProviderKeyID | text | number |
| Yes | numeric metric | npi | NPI | number | number |
| Yes | numeric metric | providertypecd | ProviderTypeCd | number | number |
| Yes | series tag | providertypedesc | ProviderTypeDesc | text | text |
| Yes | series tag | providername | ProviderName | text | text |
| Yes | series tag | provzipcd | ProvZipCd | text | number |
| Yes | numeric metric | officecountycd | OfficeCountyCd | number | number |
| Yes | series tag | officecountydesc | OfficeCountyDesc | text | text |
| Yes | numeric metric | reimbursementtypecd | ReimbursementTypeCd | number | number |
| Yes | series tag | reimbursementtypedesc | ReimbursementTypeDesc | text | text |
| Yes | numeric metric | criticalaccessind | CriticalAccessInd | number | number |
| Yes | numeric metric | pcpind | PCPInd | number | number |
| Yes | series tag | primspeccddesc | PrimSpecCdDesc | text | text |
| Yes | numeric metric | carecoordrins | CareCoordRINS | number | number |
| Yes | numeric metric | casemgmt_insurancerins | CaseMgmt_InsuranceRINS | number | number |
| Yes | numeric metric | clinicservicesrins | ClinicServicesRINS | number | number |
| Yes | numeric metric | dentalservicesrins | DentalServicesRINS | number | number |
| Yes | numeric metric | epsdtrins | EPSDTRINS | number | number |
| Yes | numeric metric | errins | ERRINS | number | number |
| Yes | numeric metric | hcbsrins | HCBSRINS | number | number |
| Yes | numeric metric | homehealthrins | HomeHealthRINS | number | number |
| Yes | numeric metric | hospicerins | HospiceRINS | number | number |
| Yes | numeric metric | icfmrrins | ICFMRRINS | number | number |
| Yes | numeric metric | inpatientcarerins | InpatientCareRINS | number | number |
| Yes | numeric metric | labradiologyrins | LabRadiologyRINS | number | number |
| Yes | numeric metric | nursingfacilityrins | NursingFacilityRINS | number | number |
| Yes | numeric metric | otherservicesrins | OtherServicesRINS | number | number |
| Yes | numeric metric | outpatientrins | OutPatientRINS | number | number |
| Yes | numeric metric | pddrins | PDDRINS | number | number |
| Yes | numeric metric | prescdrugsrins | PrescDRUGsRINS | number | number |
| Yes | numeric metric | rehabrins | RehabRINS | number | number |
| Yes | numeric metric | schoolbasedrins | SchoolBasedRINS | number | number |
| Yes | numeric metric | therapyrins | TherapyRINS | number | number |
| Yes | numeric metric | casemgmt_insuranceevents | CaseMgmt_InsuranceEvents | number | number |
| Yes | numeric metric | clinicservicesevents | ClinicServicesEvents | number | number |
| Yes | numeric metric | dentalservicesevents | DentalServicesEvents | number | number |
| Yes | numeric metric | epsdtevents | EPSDTEvents | number | number |
| Yes | numeric metric | erevents | EREvents | number | number |
| Yes | numeric metric | hcbsevents | HCBSEvents | number | number |
| Yes | numeric metric | homehealthevents | HomeHealthEvents | number | number |
| Yes | numeric metric | hospiceevents | HospiceEvents | number | number |
| Yes | numeric metric | icfmrevents | ICFMREvents | number | number |
| Yes | numeric metric | inpatientcareevents | InpatientCareEvents | number | number |
| Yes | numeric metric | labradiologyevents | LabRadiologyEvents | number | number |
| Yes | numeric metric | nursingfacilityevents | NursingFacilityEvents | number | number |
| Yes | numeric metric | otherservicesevents | OtherServicesEvents | number | number |
| Yes | numeric metric | outpatientevents | OutPatientEvents | number | number |
| Yes | numeric metric | pddevents | PDDEvents | number | number |
| Yes | numeric metric | prescdrugsevents | PrescDRUGsEvents | number | number |
| Yes | numeric metric | rehabevents | RehabEvents | number | number |
| Yes | numeric metric | schoolbasedevents | SchoolBasedEvents | number | number |
| Yes | numeric metric | therapyevents | TherapyEvents | number | number |
| Yes | numeric metric | casemgmt_insuranceuos | CaseMgmt_InsuranceUOS | number | number |
| Yes | numeric metric | clinicservicesuos | ClinicServicesUOS | number | number |
| Yes | numeric metric | dentalservicesuos | DentalServicesUOS | number | number |
| Yes | numeric metric | epsdtuos | EPSDTUOS | number | number |
| Yes | numeric metric | eruos | ERUOS | number | number |
| Yes | numeric metric | hcbsuos | HCBSUOS | number | number |
| Yes | numeric metric | homehealthuos | HomeHealthUOS | number | number |
| Yes | numeric metric | hospiceuos | HospiceUOS | number | number |
| Yes | numeric metric | icfmruos | ICFMRUOS | number | number |
| Yes | numeric metric | inpatientcareuos | InpatientCareUOS | number | number |
| Yes | numeric metric | labradiologyuos | LabRadiologyUOS | number | number |
| Yes | numeric metric | nursingfacilityuos | NursingFacilityUOS | number | number |
| Yes | numeric metric | otherservicesuos | OtherServicesUOS | number | number |
| Yes | numeric metric | outpatientuos | OutPatientUOS | number | number |
| Yes | numeric metric | pdduos | PDDUOS | number | number |
| Yes | numeric metric | prescdrugsuos | PrescDRUGsUOS | number | number |
| Yes | numeric metric | rehabuos | RehabUOS | number | number |
| Yes | numeric metric | schoolbaseduos | SchoolBasedUOS | number | number |
| Yes | numeric metric | therapyuos | TherapyUOS | number | number |
| Yes | numeric metric | casemgmt_insurancecost | CaseMgmt_InsuranceCost | number | number |
| Yes | numeric metric | clinicservicescost | ClinicServicesCost | number | number |
| Yes | numeric metric | dentalservicescost | DentalServicesCost | number | number |
| Yes | numeric metric | epsdtcost | EPSDTCost | number | number |
| Yes | numeric metric | ercost | ERCost | number | number |
| Yes | numeric metric | hcbscost | HCBSCost | number | number |
| Yes | numeric metric | homehealthcost | HomeHealthCost | number | number |
| Yes | numeric metric | hospicecost | HospiceCost | number | number |
| Yes | numeric metric | icfmrcost | ICFMRCost | number | number |
| Yes | numeric metric | inpatientcarecost | InpatientCareCost | number | number |
| Yes | numeric metric | labradiologycost | LabRadiologyCost | number | number |
| Yes | numeric metric | nursingfacilitycost | NursingFacilityCost | number | number |
| Yes | numeric metric | otherservicescost | OtherServicesCost | number | number |
| Yes | numeric metric | outpatientcost | OutPatientCost | number | number |
| Yes | numeric metric | pddcost | PDDCost | number | number |
| Yes | numeric metric | prescdrugscost | PrescDRUGsCost | number | number |
| Yes | numeric metric | rehabcost | RehabCost | number | number |
| Yes | numeric metric | schoolbasedcost | SchoolBasedCost | number | number |
| Yes | numeric metric | therapycost | TherapyCost | number | number |
| Yes | numeric metric | totalcost | TotalCost | number | number |
| Yes | numeric metric | hospstaticpayment | HospStaticPayment | number | number |
| Yes | numeric metric | capitationpayments | CapitationPayments | number | number |
| Yes | numeric metric | hospencounteraddonpayment | HospEncounterAddOnPayment | number | number |
```

## Time Field

```ls
Value = 2014
Format & Zone = yyyy
```

## Data Commands

```ls
series e:86i3-9wpd d:2014-01-01T00:00:00.000Z t:provzipcd=626759467 t:officecountydesc=Menard t:providername="KNOWSKI PEGGY L" t:providerkeyid=1000000004 t:providertypedesc="Nurse Practitioners" m:schoolbaseduos=0 m:otherservicesrins=48 m:criticalaccessind=0 m:otherservicesuos=58 m:clinicservicescost=0 m:npi=1497093264 m:rehabcost=0 m:homehealthevents=0 m:hcbsevents=0 m:inpatientcareuos=0 m:schoolbasedevents=0 m:inpatientcarecost=0 m:errins=0 m:epsdtevents=0 m:casemgmt_insurancecost=0 m:outpatientcost=0 m:prescdrugsuos=0 m:outpatientevents=0 m:labradiologyuos=0 m:hospicerins=0 m:dentalservicesevents=0 m:erevents=0 m:dentalservicesrins=0 m:homehealthcost=0 m:outpatientrins=0 m:prescdrugscost=0 m:pddrins=0 m:providertypecd=16 m:pdduos=0 m:labradiologyrins=0 m:labradiologycost=0 m:otherservicesevents=58 m:officecountycd=73 m:schoolbasedrins=0 m:therapyrins=0 m:pcpind=0 m:hcbsuos=0 m:homehealthrins=0 m:clinicservicesevents=0 m:icfmruos=0 m:pddcost=0 m:epsdtrins=0 m:hospstaticpayment=0 m:epsdtcost=0 m:casemgmt_insuranceevents=0 m:icfmrrins=0 m:pddevents=0 m:prescdrugsevents=0 m:labradiologyevents=0 m:clinicservicesuos=0 m:hospencounteraddonpayment=0 m:icfmrcost=0 m:totalcost=586.74 m:rehabuos=0 m:hcbsrins=0 m:carecoordrins=0 m:clinicservicesrins=0 m:eruos=0 m:rehabrins=0 m:hospiceuos=0 m:nursingfacilityrins=0 m:nursingfacilityuos=0 m:inpatientcarerins=0 m:nursingfacilitycost=0 m:otherservicescost=586.74 m:casemgmt_insuranceuos=0 m:dentalservicescost=0 m:hcbscost=0 m:hospicecost=0 m:schoolbasedcost=0 m:epsdtuos=0 m:ercost=0 m:therapycost=0 m:rehabevents=0 m:therapyevents=0 m:nursingfacilityevents=0 m:dentalservicesuos=0 m:casemgmt_insurancerins=0 m:inpatientcareevents=0 m:therapyuos=0 m:icfmrevents=0 m:hospiceevents=0 m:outpatientuos=0 m:prescdrugsrins=0 m:homehealthuos=0 m:capitationpayments=0

series e:86i3-9wpd d:2014-01-01T00:00:00.000Z t:officecountydesc=Warren t:providerkeyid=1000000009 t:providertypedesc="Waiver service provider--Disability (DHS/DRS)" m:schoolbaseduos=0 m:otherservicesrins=0 m:criticalaccessind=0 m:otherservicesuos=0 m:clinicservicescost=0 m:rehabcost=0 m:homehealthevents=0 m:hcbsevents=7 m:inpatientcareuos=0 m:errins=0 m:schoolbasedevents=0 m:inpatientcarecost=0 m:epsdtevents=0 m:casemgmt_insurancecost=0 m:outpatientcost=0 m:prescdrugsuos=0 m:outpatientevents=0 m:labradiologyuos=0 m:hospicerins=0 m:dentalservicesevents=0 m:erevents=0 m:dentalservicesrins=0 m:homehealthcost=0 m:outpatientrins=0 m:prescdrugscost=0 m:pddrins=0 m:providertypecd=92 m:pdduos=0 m:labradiologyrins=0 m:labradiologycost=0 m:otherservicesevents=0 m:officecountycd=102 m:schoolbasedrins=0 m:therapyrins=0 m:pcpind=0 m:hcbsuos=7 m:homehealthrins=0 m:clinicservicesevents=0 m:icfmruos=0 m:pddcost=0 m:epsdtrins=0 m:hospstaticpayment=0 m:epsdtcost=0 m:casemgmt_insuranceevents=0 m:icfmrrins=0 m:pddevents=0 m:prescdrugsevents=0 m:labradiologyevents=0 m:clinicservicesuos=0 m:hospencounteraddonpayment=0 m:icfmrcost=0 m:totalcost=1144.74 m:rehabuos=0 m:hcbsrins=1 m:carecoordrins=0 m:clinicservicesrins=0 m:eruos=0 m:rehabrins=0 m:hospiceuos=0 m:nursingfacilityrins=0 m:nursingfacilityuos=0 m:inpatientcarerins=0 m:nursingfacilitycost=0 m:otherservicescost=0 m:casemgmt_insuranceuos=0 m:dentalservicescost=0 m:hcbscost=1144.74 m:hospicecost=0 m:schoolbasedcost=0 m:epsdtuos=0 m:ercost=0 m:therapycost=0 m:rehabevents=0 m:therapyevents=0 m:nursingfacilityevents=0 m:dentalservicesuos=0 m:casemgmt_insurancerins=0 m:inpatientcareevents=0 m:therapyuos=0 m:icfmrevents=0 m:hospiceevents=0 m:outpatientuos=0 m:prescdrugsrins=0 m:homehealthuos=0 m:capitationpayments=0

series e:86i3-9wpd d:2014-01-01T00:00:00.000Z t:officecountydesc=Woodford t:providerkeyid=1000000270 t:providertypedesc="Waiver service provider--Adults (DHS/DDD)" m:schoolbaseduos=0 m:otherservicesrins=0 m:criticalaccessind=0 m:otherservicesuos=0 m:clinicservicescost=0
m:rehabcost=0 m:homehealthevents=0 m:hcbsevents=316 m:inpatientcareuos=0 m:errins=0 m:schoolbasedevents=0 m:inpatientcarecost=0 m:epsdtevents=0 m:casemgmt_insurancecost=0 m:outpatientcost=0 m:prescdrugsuos=0 m:outpatientevents=0 m:labradiologyuos=0 m:hospicerins=0 m:dentalservicesevents=0 m:erevents=0 m:dentalservicesrins=0 m:homehealthcost=0 m:outpatientrins=0 m:prescdrugscost=0 m:pddrins=0 m:providertypecd=91 m:pdduos=0 m:labradiologyrins=0 m:labradiologycost=0 m:otherservicesevents=0 m:officecountycd=110 m:schoolbasedrins=0 m:therapyrins=0 m:pcpind=0 m:hcbsuos=316 m:homehealthrins=0 m:clinicservicesevents=0 m:icfmruos=0 m:pddcost=0 m:epsdtrins=0 m:hospstaticpayment=0 m:epsdtcost=0 m:casemgmt_insuranceevents=0 m:icfmrrins=0 m:pddevents=0 m:prescdrugsevents=0 m:labradiologyevents=0 m:clinicservicesuos=0 m:hospencounteraddonpayment=0 m:icfmrcost=0 m:totalcost=15082.85 m:rehabuos=0 m:hcbsrins=1 m:carecoordrins=0 m:clinicservicesrins=0 m:eruos=0 m:rehabrins=0 m:hospiceuos=0 m:nursingfacilityrins=0 m:nursingfacilityuos=0 m:inpatientcarerins=0 m:nursingfacilitycost=0 m:otherservicescost=0 m:casemgmt_insuranceuos=0 m:dentalservicescost=0 m:hcbscost=15082.85 m:hospicecost=0 m:schoolbasedcost=0 m:epsdtuos=0 m:ercost=0 m:therapycost=0 m:rehabevents=0 m:therapyevents=0 m:nursingfacilityevents=0 m:dentalservicesuos=0 m:casemgmt_insurancerins=0 m:inpatientcareevents=0 m:therapyuos=0 m:icfmrevents=0 m:hospiceevents=0 m:outpatientuos=0 m:prescdrugsrins=0 m:homehealthuos=0 m:capitationpayments=0 ``` ## Meta Commands ```ls metric m:npi p:long l:NPI t:dataTypeName=number metric m:providertypecd p:integer l:ProviderTypeCd t:dataTypeName=number metric m:officecountycd p:integer l:OfficeCountyCd t:dataTypeName=number metric m:reimbursementtypecd p:long l:ReimbursementTypeCd t:dataTypeName=number metric m:criticalaccessind p:integer l:CriticalAccessInd t:dataTypeName=number metric m:pcpind p:integer l:PCPInd t:dataTypeName=number metric m:carecoordrins p:integer l:CareCoordRINS 
t:dataTypeName=number metric m:casemgmt_insurancerins p:integer l:CaseMgmt_InsuranceRINS t:dataTypeName=number metric m:clinicservicesrins p:integer l:ClinicServicesRINS t:dataTypeName=number metric m:dentalservicesrins p:integer l:DentalServicesRINS t:dataTypeName=number metric m:epsdtrins p:integer l:EPSDTRINS t:dataTypeName=number metric m:errins p:integer l:ERRINS t:dataTypeName=number metric m:hcbsrins p:integer l:HCBSRINS t:dataTypeName=number metric m:homehealthrins p:integer l:HomeHealthRINS t:dataTypeName=number metric m:hospicerins p:integer l:HospiceRINS t:dataTypeName=number metric m:icfmrrins p:integer l:ICFMRRINS t:dataTypeName=number metric m:inpatientcarerins p:integer l:InpatientCareRINS t:dataTypeName=number metric m:labradiologyrins p:integer l:LabRadiologyRINS t:dataTypeName=number metric m:nursingfacilityrins p:integer l:NursingFacilityRINS t:dataTypeName=number metric m:otherservicesrins p:integer l:OtherServicesRINS t:dataTypeName=number metric m:outpatientrins p:integer l:OutPatientRINS t:dataTypeName=number metric m:pddrins p:integer l:PDDRINS t:dataTypeName=number metric m:prescdrugsrins p:integer l:PrescDRUGsRINS t:dataTypeName=number metric m:rehabrins p:integer l:RehabRINS t:dataTypeName=number metric m:schoolbasedrins p:integer l:SchoolBasedRINS t:dataTypeName=number metric m:therapyrins p:integer l:TherapyRINS t:dataTypeName=number metric m:casemgmt_insuranceevents p:integer l:CaseMgmt_InsuranceEvents t:dataTypeName=number metric m:clinicservicesevents p:integer l:ClinicServicesEvents t:dataTypeName=number metric m:dentalservicesevents p:integer l:DentalServicesEvents t:dataTypeName=number metric m:epsdtevents p:integer l:EPSDTEvents t:dataTypeName=number metric m:erevents p:integer l:EREvents t:dataTypeName=number metric m:hcbsevents p:integer l:HCBSEvents t:dataTypeName=number metric m:homehealthevents p:integer l:HomeHealthEvents t:dataTypeName=number metric m:hospiceevents p:integer l:HospiceEvents t:dataTypeName=number metric 
m:icfmrevents p:integer l:ICFMREvents t:dataTypeName=number metric m:inpatientcareevents p:integer l:InpatientCareEvents t:dataTypeName=number metric m:labradiologyevents p:integer l:LabRadiologyEvents t:dataTypeName=number metric m:nursingfacilityevents p:integer l:NursingFacilityEvents t:dataTypeName=number metric m:otherservicesevents p:integer l:OtherServicesEvents t:dataTypeName=number metric m:outpatientevents p:integer l:OutPatientEvents t:dataTypeName=number metric m:pddevents p:integer l:PDDEvents t:dataTypeName=number metric m:prescdrugsevents p:integer l:PrescDRUGsEvents t:dataTypeName=number metric m:rehabevents p:integer l:RehabEvents t:dataTypeName=number metric m:schoolbasedevents p:integer l:SchoolBasedEvents t:dataTypeName=number metric m:therapyevents p:integer l:TherapyEvents t:dataTypeName=number metric m:casemgmt_insuranceuos p:integer l:CaseMgmt_InsuranceUOS t:dataTypeName=number metric m:clinicservicesuos p:integer l:ClinicServicesUOS t:dataTypeName=number metric m:dentalservicesuos p:integer l:DentalServicesUOS t:dataTypeName=number metric m:epsdtuos p:integer l:EPSDTUOS t:dataTypeName=number metric m:eruos p:integer l:ERUOS t:dataTypeName=number metric m:hcbsuos p:integer l:HCBSUOS t:dataTypeName=number metric m:homehealthuos p:integer l:HomeHealthUOS t:dataTypeName=number metric m:hospiceuos p:integer l:HospiceUOS t:dataTypeName=number metric m:icfmruos p:integer l:ICFMRUOS t:dataTypeName=number metric m:inpatientcareuos p:integer l:InpatientCareUOS t:dataTypeName=number metric m:labradiologyuos p:integer l:LabRadiologyUOS t:dataTypeName=number metric m:nursingfacilityuos p:integer l:NursingFacilityUOS t:dataTypeName=number metric m:otherservicesuos p:integer l:OtherServicesUOS t:dataTypeName=number metric m:outpatientuos p:integer l:OutPatientUOS t:dataTypeName=number metric m:pdduos p:integer l:PDDUOS t:dataTypeName=number metric m:prescdrugsuos p:integer l:PrescDRUGsUOS t:dataTypeName=number metric m:rehabuos p:integer l:RehabUOS 
t:dataTypeName=number metric m:schoolbaseduos p:integer l:SchoolBasedUOS t:dataTypeName=number metric m:therapyuos p:integer l:TherapyUOS t:dataTypeName=number metric m:casemgmt_insurancecost p:double l:CaseMgmt_InsuranceCost t:dataTypeName=number metric m:clinicservicescost p:double l:ClinicServicesCost t:dataTypeName=number metric m:dentalservicescost p:double l:DentalServicesCost t:dataTypeName=number metric m:epsdtcost p:double l:EPSDTCost t:dataTypeName=number metric m:ercost p:double l:ERCost t:dataTypeName=number metric m:hcbscost p:double l:HCBSCost t:dataTypeName=number metric m:homehealthcost p:float l:HomeHealthCost t:dataTypeName=number metric m:hospicecost p:double l:HospiceCost t:dataTypeName=number metric m:icfmrcost p:double l:ICFMRCost t:dataTypeName=number metric m:inpatientcarecost p:double l:InpatientCareCost t:dataTypeName=number metric m:labradiologycost p:double l:LabRadiologyCost t:dataTypeName=number metric m:nursingfacilitycost p:double l:NursingFacilityCost t:dataTypeName=number metric m:otherservicescost p:double l:OtherServicesCost t:dataTypeName=number metric m:outpatientcost p:double l:OutPatientCost t:dataTypeName=number metric m:pddcost p:double l:PDDCost t:dataTypeName=number metric m:prescdrugscost p:double l:PrescDRUGsCost t:dataTypeName=number metric m:rehabcost p:double l:RehabCost t:dataTypeName=number metric m:schoolbasedcost p:double l:SchoolBasedCost t:dataTypeName=number metric m:therapycost p:double l:TherapyCost t:dataTypeName=number metric m:totalcost p:double l:TotalCost t:dataTypeName=number metric m:hospstaticpayment p:double l:HospStaticPayment t:dataTypeName=number metric m:capitationpayments p:double l:CapitationPayments t:dataTypeName=number metric m:hospencounteraddonpayment p:double l:HospEncounterAddOnPayment t:dataTypeName=number entity e:86i3-9wpd l:"2014 Transparency Provider Level Data" t:url=https://data.illinois.gov/api/views/86i3-9wpd property e:86i3-9wpd t:meta.view v:id=86i3-9wpd 
v:category=Health-Medicaid v:averageRating=0 v:name="2014 Transparency Provider Level Data" property e:86i3-9wpd t:meta.view.owner v:id=xh33-g7e8 v:screenName="HFS - Administrator" v:displayName="HFS - Administrator" property e:86i3-9wpd t:meta.view.tableauthor v:id=xh33-g7e8 v:screenName="HFS - Administrator" v:roleName=publisher v:displayName="HFS - Administrator" ``` ## Top Records ```ls | providerkeyid | npi | providertypecd | providertypedesc | providername | provzipcd | officecountycd | officecountydesc | reimbursementtypecd | reimbursementtypedesc | criticalaccessind | pcpind | primspeccddesc | carecoordrins | casemgmt_insurancerins | clinicservicesrins | dentalservicesrins | epsdtrins | errins | hcbsrins | homehealthrins | hospicerins | icfmrrins | inpatientcarerins | labradiologyrins | nursingfacilityrins | otherservicesrins | outpatientrins | pddrins | prescdrugsrins | rehabrins | schoolbasedrins | therapyrins | casemgmt_insuranceevents | clinicservicesevents | dentalservicesevents | epsdtevents | erevents | hcbsevents | homehealthevents | hospiceevents | icfmrevents | inpatientcareevents | labradiologyevents | nursingfacilityevents | otherservicesevents | outpatientevents | pddevents | prescdrugsevents | rehabevents | schoolbasedevents | therapyevents | casemgmt_insuranceuos | clinicservicesuos | dentalservicesuos | epsdtuos | eruos | hcbsuos | homehealthuos | hospiceuos | icfmruos | inpatientcareuos | labradiologyuos | nursingfacilityuos | otherservicesuos | outpatientuos | pdduos | prescdrugsuos | rehabuos | schoolbaseduos | therapyuos | casemgmt_insurancecost | clinicservicescost | dentalservicescost | epsdtcost | ercost | hcbscost | homehealthcost | hospicecost | icfmrcost | inpatientcarecost | labradiologycost | nursingfacilitycost | otherservicescost | outpatientcost | pddcost | prescdrugscost | rehabcost | schoolbasedcost | therapycost | totalcost | hospstaticpayment | capitationpayments | hospencounteraddonpayment | | ============= | ========== | 
============== | ============================================= | =============== | ========= | ============== | ================ | =================== | ===================== | ================= | ====== | ============== | ============= | ====================== | ================== | ================== | ========= | ====== | ======== | ============== | =========== | ========= | ================= | ================ | =================== | ================= | ============== | ======= | ============== | ========= | =============== | =========== | ======================== | ==================== | ==================== | =========== | ======== | ========== | ================ | ============= | =========== | =================== | ================== | ===================== | =================== | ================ | ========= | ================ | =========== | ================= | ============= | ===================== | ================= | ================= | ======== | ===== | ======= | ============= | ========== | ======== | ================ | =============== | ================== | ================ | ============= | ====== | ============= | ======== | ============== | ========== | ====================== | ================== | ================== | ========= | ====== | ======== | ============== | =========== | ========= | ================= | ================ | =================== | ================= | ============== | ======= | ============== | ========= | =============== | =========== | ========= | ================= | ================== | ========================= | | 1000000004 | 1497093264 | 16 | Nurse Practitioners | KNOWSKI PEGGY L | 626759467 | 73 | Menard | | | 0 | 0 | | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 48 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 58 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 58 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 586.74 | 0 | 0 | 0 | 
0 | 0 | 0 | 586.74 | 0 | 0 | 0 | | 1000000009 | | 92 | Waiver service provider--Disability (DHS/DRS) | | | 102 | Warren | | | 0 | 0 | | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1144.74 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1144.74 | 0 | 0 | 0 | | 1000000270 | | 91 | Waiver service provider--Adults (DHS/DDD) | | | 110 | Woodford | | | 0 | 0 | | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 316 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 316 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 15082.85 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 15082.85 | 0 | 0 | 0 | | 1000000481 | | 91 | Waiver service provider--Adults (DHS/DDD) | | | 96 | St Clair | | | 0 | 0 | | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 364 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 364 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 23768.4 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 23768.4 | 0 | 0 | 0 | | 1000000606 | | 92 | Waiver service provider--Disability (DHS/DRS) | | | 200 | Cook | | | 0 | 0 | | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 14 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 14 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 12414.32 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 12414.32 | 0 | 0 | 0 | | 1000001364 | | 24 | Speech Therapists | | | 200 | Cook | | | 0 | 0 | | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 
| 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | | 1000001540 | | 91 | Waiver service provider--Adults (DHS/DDD) | | | 96 | St Clair | | | 0 | 0 | | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | | 1000001621 | | 92 | Waiver service provider--Disability (DHS/DRS) | | | 200 | Cook | | | 0 | 0 | | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | | 1000001922 | | 23 | Occupational Therapists | | | 200 | Cook | | | 0 | 0 | | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | | 1000003398 | | 92 | Waiver service provider--Disability (DHS/DRS) | | | 42 | Hancock | | | 0 | 0 | | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ```
127.931953
1,819
0.434911
eng_Latn
0.156083
6ff4b233e950cd56a64c7b7fd22189fe121e1114
628
md
Markdown
README.md
otus-kuber-2020-11/devgrav_platform
9224970c610f32aabe0180bc686032a6a82d6fd6
[ "MIT" ]
null
null
null
README.md
otus-kuber-2020-11/devgrav_platform
9224970c610f32aabe0180bc686032a6a82d6fd6
[ "MIT" ]
null
null
null
README.md
otus-kuber-2020-11/devgrav_platform
9224970c610f32aabe0180bc686032a6a82d6fd6
[ "MIT" ]
null
null
null
# devgrav_platform devgrav Platform repository Homework #1 Completed: * Installed minikube, kubectl, dashboard, and k9s * Built a Docker image that runs nginx in a container and published it to Docker Hub * Wrote a manifest that launches the previously developed pod in the minikube cluster * Added an init container * Built and ran an image with the Hipster Shop frontend application * Fixed a bug in the auto-generated manifest for launching the frontend application; added environment variables with the addresses of the other applications To run: `kubectl apply -f web-pod.yaml` `kubectl port-forward --address localhost pod/web 8000:8000`
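For context, the `web-pod.yaml` referenced in the run commands might look roughly like the following sketch (hypothetical — the image names, labels, and init-container details are assumptions, not taken from the repository):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web          # matches pod/web in the port-forward command
  labels:
    app: web
spec:
  # Hypothetical init container that prepares static content before nginx starts
  initContainers:
    - name: init-web
      image: busybox:1.34
      command: ["sh", "-c", "echo 'hello' > /app/index.html"]
      volumeMounts:
        - name: app
          mountPath: /app
  containers:
    - name: web
      image: yourdockerhubuser/nginx-custom:latest   # assumed Docker Hub image name
      ports:
        - containerPort: 8000
      volumeMounts:
        - name: app
          mountPath: /app
  volumes:
    - name: app
      emptyDir: {}
```

Applied with `kubectl apply -f web-pod.yaml`, a manifest of this shape matches the `pod/web` name and port 8000 used by the `kubectl port-forward` command above.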
34.888889
154
0.815287
rus_Cyrl
0.777766
6ff4b8076eb38cb9c117e1b74966d271cf1d520f
511
md
Markdown
protobuf-visualizer/README.md
lcskrishna/nn-tools
3e63e0bc24b6870c8811e2b6f3dad2305b180c84
[ "MIT" ]
null
null
null
protobuf-visualizer/README.md
lcskrishna/nn-tools
3e63e0bc24b6870c8811e2b6f3dad2305b180c84
[ "MIT" ]
null
null
null
protobuf-visualizer/README.md
lcskrishna/nn-tools
3e63e0bc24b6870c8811e2b6f3dad2305b180c84
[ "MIT" ]
null
null
null
# protobuf-reader This is a simple testing tool that takes a caffemodel and converts the binary proto file into a readable protobuf format showing the exact network being saved. It is mostly used for debugging and testing caffemodels. ## Usage: ``` % g++ --std=c++11 bin2proto.cpp caffe.pb.cc `pkg-config --cflags --libs protobuf` -o bin2proto % bin2proto <net.caffemodel> ``` After running this tool, the readable protobuf for a binary caffemodel file is written to "net.prototxt".
31.9375
105
0.759295
eng_Latn
0.988382
6ff53e9fbcf47144b827dde2541c2fa477ce89e9
1,378
md
Markdown
_posts/2017-08-23-Karishma-Creations-Prom-Dressess-Style-15042.md
hyperdressyou/hyperdressyou.github.io
9849173bcaac57aa413f9b885975103e554300f1
[ "MIT" ]
null
null
null
_posts/2017-08-23-Karishma-Creations-Prom-Dressess-Style-15042.md
hyperdressyou/hyperdressyou.github.io
9849173bcaac57aa413f9b885975103e554300f1
[ "MIT" ]
null
null
null
_posts/2017-08-23-Karishma-Creations-Prom-Dressess-Style-15042.md
hyperdressyou/hyperdressyou.github.io
9849173bcaac57aa413f9b885975103e554300f1
[ "MIT" ]
null
null
null
--- layout: post date: 2017-08-23 title: "Karishma Creations Prom Dressess Style 15042" category: Karishma Creations tags: [Karishma Creations] --- ### Karishma Creations Prom Dressess Style 15042 Just **$539.99** ### <table><tr><td>BRANDS</td><td>Karishma Creations</td></tr></table> <a href="https://www.readybrides.com/en/karishma-creations/87537-karishma-creations-prom-dressess-style-15042.html"><img src="//img.readybrides.com/227683/karishma-creations-prom-dressess-style-15042.jpg" alt="Karishma Creations Prom Dressess Style 15042" style="width:100%;" /></a> <!-- break --><a href="https://www.readybrides.com/en/karishma-creations/87537-karishma-creations-prom-dressess-style-15042.html"><img src="//img.readybrides.com/227684/karishma-creations-prom-dressess-style-15042.jpg" alt="Karishma Creations Prom Dressess Style 15042" style="width:100%;" /></a> <a href="https://www.readybrides.com/en/karishma-creations/87537-karishma-creations-prom-dressess-style-15042.html"><img src="//img.readybrides.com/227682/karishma-creations-prom-dressess-style-15042.jpg" alt="Karishma Creations Prom Dressess Style 15042" style="width:100%;" /></a> Buy it: [https://www.readybrides.com/en/karishma-creations/87537-karishma-creations-prom-dressess-style-15042.html](https://www.readybrides.com/en/karishma-creations/87537-karishma-creations-prom-dressess-style-15042.html)
81.058824
296
0.766328
yue_Hant
0.246985
6ff551d4cc2d87e4d4124f3e7333750ca7414f2d
1,303
md
Markdown
README.md
budnyjj/bsuir_templates
a97bb7372774fb85e415921fc2132b3b4993eff9
[ "MIT" ]
3
2018-12-11T15:20:39.000Z
2021-09-18T14:16:35.000Z
README.md
budnyjj/templates
a97bb7372774fb85e415921fc2132b3b4993eff9
[ "MIT" ]
null
null
null
README.md
budnyjj/templates
a97bb7372774fb85e415921fc2132b3b4993eff9
[ "MIT" ]
3
2018-12-11T15:20:41.000Z
2021-09-12T17:37:35.000Z
Templates ======= Templates for writing lab and course reports in LaTeX in accordance with СТП-01–2010. The /lab folder contains the report template for a lab assignment. Usage ------------- To use these templates you must, of course, have LaTeX installed; the template relies on quite a few external packages (see sys/packages.tex). To install them, use [texlive](http://www.tug.org/texlive/). The template is compiled as follows: ```bash $ make && make compile ``` The compiled template is placed in the root of the template directory as a PDF: lab.pdf. Extras ------------- + All available make commands are listed in the corresponding Makefile. + To have generated reports use a nice Cyrillic Times New Roman, install the LaTeX package [pscyr](http://donik.org/wiki/index.php/%D0%A3%D1%81%D1%82%D0%B0%D0%BD%D0%BE%D0%B2%D0%BA%D0%B0_%D0%BF%D0%B0%D0%BA%D0%B5%D1%82%D0%B0_PSCyr_%D0%B2_LaTeX) and uncomment the following line in tex/packages.tex ```latex \usepackage{pscyr} ``` Bugs & Reports -------------- Comments and suggestions are welcome! Send them here: [budnyjj@gmail.com](mailto:budnyjj@gmail.com); [anashkevichp@gmail.com](mailto:anashkevichp@gmail.com).
30.302326
178
0.753645
rus_Cyrl
0.874863
6ff5bc8b0b31d0b622f06f2c4b8601e986ec03c9
6,569
md
Markdown
README.md
dariustehrani/advanced-analytics-apps-on-azure-appservice
88e86dd8102db29cd666d27f3b5d131202986955
[ "MIT" ]
null
null
null
README.md
dariustehrani/advanced-analytics-apps-on-azure-appservice
88e86dd8102db29cd666d27f3b5d131202986955
[ "MIT" ]
null
null
null
README.md
dariustehrani/advanced-analytics-apps-on-azure-appservice
88e86dd8102db29cd666d27f3b5d131202986955
[ "MIT" ]
1
2019-05-06T19:21:12.000Z
2019-05-06T19:21:12.000Z
# advanced-analytics-apps-on-azure-appservice Instructions on how to deploy containerized Data Analytics on Azure App Service. # Overview ![analytics-docker-appservice-overview](/images/overview.png) # Required Tools Please make sure you have the following tools installed on your machine. * Code Editor: Visual Studio Code https://code.visualstudio.com/ * git source-code management https://git-scm.com/ * azure command line interface 'az' https://docs.microsoft.com/en-us/cli/azure/install-azure-cli?view=azure-cli-latest (Azure CLI) * Docker for Windows https://docs.docker.com/docker-for-windows/install/ # Create an Azure DevOps project We will use Azure DevOps to host the source code and secondly utilize Azure Pipelines to automate build and deployment tasks. * Log in at https://dev.azure.com using your corporate credentials. * Please refer to https://docs.microsoft.com/en-us/azure/devops/organizations/projects/create-project?view=azure-devops for further guidance. # Clone the repository In your DevOps project please proceed to "Repos". Clone the repository to your local computer. # Download the example project for Bokeh Please proceed to the following URL. **Please just perform the ZIP download and extract the content in the previously cloned git repository** https://github.com/dariustehrani/bokeh-on-docker # Test the Dockerfile * Open PowerShell (in Visual Studio Code use STRG+ö on a German keyboard, type powershell) * CD into the project folder * run ````docker build -t bokeh-on-docker:latest .```` * test the image by running ````docker run -p 8080:8080 -it bokeh-on-docker:latest```` * Open a browser and point it to http://localhost:8080 You should see a simple bokeh demo app. **Did this work for you?** Excellent. Within your project directory perform: ````git add . ```` ````git commit . -m "your commit message"```` ````git push```` Visit the repository view in Azure DevOps to check if your code has arrived there.
# Azure Container Registry (azurecr.io) Typically we would not want to build and persist Docker images locally but have Azure DevOps manage this for us. Let's create a Container Registry. You can do this in the portal or using the following Azure CLI commands (in PowerShell): ````az account show```` If you are not logged in, run: ````az login```` Create a resource group: ````az group create -l westeurope -n bokehondockerYOURNAME```` Create your Azure container registry: ````az acr create -n bokehYOURNAME -g bokehondockerYOURNAME --sku Standard --admin-enabled```` Log in to the Azure container registry with Docker: ````az acr login --name bokehYOURNAME```` (optional) show the current username and password ````az acr credential show -n bokehyourname```` # Azure Pipeline Setup ### Create a new service principal (SP) The SP will be used by Azure DevOps to connect to your Azure subscription and manage resources on your behalf. You need to replace the values with your subscription ID and Resource Group Name. ````az account show```` ````az ad sp create-for-rbac -n "bokehondocker" --role contributor --scopes /subscriptions/{SubID}/resourceGroups/{ResourceGroup1}```` # Setup Azure DevOps Service Connections ## AzureRM Service connection Insert the credentials you receive here: * Proceed to your Azure DevOps project. * Click "Project settings" -> "Service connections" -> "New service connection" -> Select "Azure Resource Manager". * Define a meaningful name. Select "Scope:Subscription" "Subscription: YOURSUBSCRIPTION" "Resource Group: NAMEOFRGYOUCREATED" * Click "use the automated version of the service connection dialog." * Insert the AppID from the shell output into "Service principal client id". * Insert the key into "Service principal key". * Click "Verify connection". * Make sure you tick the "Allow all pipelines to use this connection" box. ## Azure Container Registry Service connection: * Click "New service connection" and select "Docker Registry".
* Select "Azure Container Registry" and fill out the remaining information. * Take note of your service connection names. # Bring up the Azure DevOps Pipeline * Click on "Pipelines" -> "Builds" * Click "New Pipeline" * Select "Azure Repos Git" and click "Continue". * Choose "Hosted Ubuntu 1604" as Agent pool. * Select YAML and specify the path to your existing azure-pipelines.yml file. * Save the build pipeline * Proceed to the "Pipelines" overview and choose to edit the newly created pipeline. * Replace the containerRegistry with your service connection name. * Click "Run" # Azure Web App * Create a new Web App on https://portal.azure.com * Choose your subscription and resource group and select a compact but meaningful name. * Choose "Docker Image", "Linux" and the location "West Europe" * Create a new App Service Plan with the SKU "P1v2" * Leave the rest as is and create the App Service ![create-web-app-view](/images/createwebapp.PNG) * Once the service has been created, proceed to your App Service and click on the Container settings * Choose your Azure Container Registry with the image listed, Tag "latest" and Continuous deploy: OFF * Scroll down to "Diagnostic logs" in the Monitoring section. Enable Application Logging (filesystem), 100 MB quota, 30 days retention. Click save. * Head over to "Deployment Slots". Create a DEV slot. Make sure to copy the settings from the other slot. * Proceed to Azure DevOps, uncomment the settings in your azure-pipelines.yml (second section) by removing the # signs. Modify the code to correspond with your Subscription, Resource Group, WebApp, Slot Name as well as the service connection you have created. * Save your settings and run the pipeline. * Find the URL of your Web App where you should see your bokeh demo app! # Shiny R on Microsoft R Open (ADVANCED USERS ONLY) As building the R packages takes a lot of time, the following example separates the creation of the runtime container from the actual Shiny app container.
This will save 18 minutes on average per deployment. Secondly, you might want to set up a scheduled trigger, e.g. on a weekly basis, so that your base images always include the latest updates and patches. ### Microsoft R Open Shiny BASE Image Build a CI/CD pipeline for the base image. https://github.com/dariustehrani/mro-shiny-base ### Microsoft R Open Shiny APP Image The following example will build the Shiny app container. You will need to modify the FROM entry to reference your Azure container registry. ````FROM YOUR.azurecr.io/mro-shiny-base:latest```` https://github.com/dariustehrani/mro-on-docker
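The build stage described above can be sketched in a minimal `azure-pipelines.yml` like the following (the service connection name and repository value are assumptions — substitute the names you chose earlier; the actual file in the repository may differ):

```yaml
trigger:
  - master

pool:
  vmImage: 'ubuntu-16.04'

steps:
  # Build the image from the Dockerfile and push it to the Azure Container Registry.
  # 'my-acr-connection' is an assumed Docker Registry service connection name.
  - task: Docker@2
    inputs:
      containerRegistry: 'my-acr-connection'
      repository: 'bokeh-on-docker'
      command: 'buildAndPush'
      Dockerfile: '**/Dockerfile'
      tags: 'latest'
```

The commented-out deployment section mentioned in the Web App steps would typically follow as an additional task targeting your Web App and DEV slot.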
44.385135
353
0.765109
eng_Latn
0.9489
6ff61f59df0b26e5d9a060bb4f19199816b764ac
1,334
md
Markdown
src/DesignerPlugin/readme.md
Watch-Later/SARibbon
01be23def19672dfac0196592fe49caed34398ce
[ "MIT" ]
1
2021-11-04T10:28:10.000Z
2021-11-04T10:28:10.000Z
src/DesignerPlugin/readme.md
Watch-Later/SARibbon
01be23def19672dfac0196592fe49caed34398ce
[ "MIT" ]
null
null
null
src/DesignerPlugin/readme.md
Watch-Later/SARibbon
01be23def19672dfac0196592fe49caed34398ce
[ "MIT" ]
null
null
null
# Introduction This document describes how the SARibbon Qt Designer plugin is implemented. The plugin is fairly complex: its `bool isContainer()` function returns `true`, which tells Qt Designer that it can receive drag-and-drop widget events. The Qt help documentation contains a fairly detailed example, the Container Extension Example, covering how to write this kind of plugin. When a plugin's `isContainer` returns `true`, besides inheriting `QDesignerCustomWidgetInterface`, the plugin also has to deal with several important classes: - `QExtensionManager` - `QExtensionFactory` - `QDesignerContainerExtension` - `QDesignerFormEditorInterface` - `QDesignerFormWindowInterface` - `QDesignerPropertySheetExtension` # Troubleshooting If the plugin fails to load, you can inspect the error message in Qt Designer under "Help" -> "About Plugins" ![](https://cdn.jsdelivr.net/gh/czyt1988/SARibbon/src/DesignerPlugin/doc/pic/01-aboutplugin.png) # Notes on writing the plugin Note that the plugin class must export its metadata via the `Q_PLUGIN_METADATA` macro; without this macro, the plugin will not show up in Designer and an error is reported: ![](https://cdn.jsdelivr.net/gh/czyt1988/SARibbon/src/DesignerPlugin/doc/pic/02-fault-info.png) In that case, simply add the `Q_PLUGIN_METADATA` macro: ```cpp class SARibbonMainWindowDesignerPlugin : public QObject, public QDesignerCustomWidgetInterface { Q_OBJECT Q_PLUGIN_METADATA(IID "SA.SARibbon.SARibbonMainWindow") Q_INTERFACES(QDesignerCustomWidgetInterface) ``` This is needed because, as a library, a custom plugin has no exported descriptors, so Qt must be told explicitly which classes to export. Of course, if you define a `QDesignerCustomWidgetCollectionInterface` plugin collection, you only need to declare `Q_PLUGIN_METADATA` once, in the class that inherits `QDesignerCustomWidgetCollectionInterface`; its customWidgets list tells Qt Designer which plugins to export, so each individual plugin class no longer needs its own `Q_PLUGIN_METADATA`. If you export multiple widgets, this approach is recommended.
31.761905
234
0.826837
yue_Hant
0.96241
6ff7190679ef1d65586a603eb41b3e505bca1686
6,609
md
Markdown
_drafts/2017-11-01-quase-um-ano.md
pbamotra/pankesh
f4fa64a02ce4094969a41d374faafe5e9e2c2ef4
[ "MIT" ]
null
null
null
_drafts/2017-11-01-quase-um-ano.md
pbamotra/pankesh
f4fa64a02ce4094969a41d374faafe5e9e2c2ef4
[ "MIT" ]
null
null
null
_drafts/2017-11-01-quase-um-ano.md
pbamotra/pankesh
f4fa64a02ce4094969a41d374faafe5e9e2c2ef4
[ "MIT" ]
null
null
null
--- title: 'Almost a year...' date: 2017-11-01 tags: life description: > It has been almost a full year since I moved to Barcelona, and I have to admit, it was nothing like I expected. --- It has been almost a full year living in Barcelona, and I have to admit, it was nothing like I expected. In one year I had a job, a few **trips**, **Latin songs** that get stuck in your head worse than sertanejo, **sun**burns, **a terrorist attack**, an **attempted independence** and a **brief introduction** to Spanish/Catalan culture. *Sounds good, doesn't it?* If anyone had asked me a few years ago how I imagined myself living in Spain, the answer would probably be something completely different from what goes through my head today. I explained it in a [previous post about my move to Barcelona](/2017/hora-de-seguir-em-frente-ola-barcelona/), and since I had always wanted to work outside Brazil, I had to make some decisions and weigh pros and cons, even though at that moment having nothing to lose and nothing pending helped a lot. When we move, we need to **adapt to a new place**, make **new friends**, etc... and in my case, also try to understand the local job market and tech community. I think **the first 6 months were the worst**, mainly because of being alone in a place that is not only unknown but also has a different culture and language. **Arriving during winter** did not make socializing easy at first, but after a few months of working/studying and traveling, things started to flow better until I adapted. The friends I made are largely from Brazil, then Latinos/Spaniards, plus people from several countries who ended up becoming friends in the meantime.
Instead of talking about finding housing, the weather, and worries about language/culture, I have to say that the beginning was a bit murky and **a constant learning process**, because socializing was one of the fundamental points for me, and it was something that helped me adapt. ## The bad side 1. One of the saddest things is no longer being part of people's daily lives, and therefore missing weddings and birthdays of friends and family members; after all, you are **living abroad**, and it is normal to miss home. But that does not mean you stop existing for them, since family and good friends are forever. 2. Occasional misunderstandings due to cultural differences, which will sometimes give some people the wrong idea about you; even sharing the same language does not make things the same. But once we learn and respect things, everything is fine; just be humble. 3. You are in **a strange place**, and you will always be **the immigrant**, with a different culture and a different language. That does not mean you will suffer some kind of racism/prejudice/bullying, but it can have an impact when you look for a job and are compared with a native with the same qualifications, so use whatever you can in your favor and have something that sets you apart. I have met some Brazilians here working any job there is, whether as waiters or bar attendants (who have to clean bathrooms), kitchen helpers (who wash dishes), construction workers, maids/cleaners, etc. 4. Money is something only those who are here will understand, since you can no longer compare values through the exchange rate, but through what the money allows you to have. Here the rule of **"those who convert don't have fun"** applies all the time, especially when getting to a new destination costs less than 50€.
Even with the differences, a slight loss of reference while absorbing a new culture, and a restricted social circle at the beginning being something negative, we can always adapt, and if things are not as we want, there is always a new destination. I met a Brazilian who has lived in Barcelona for 3 years and works as a waiter in a restaurant, and he told me, in short: > Between a bad situation in Brazil and here, I prefer to stay here. :punch: I think it is something to think about. Nothing more to say. ## The good side 1. **Living abroad** will give you the opportunity to achieve your own independence and will also produce a unique experience, even with its bad parts. Unlike just traveling somewhere, the feeling of **being far from home** is no longer so strong over time, once you adapt to the place in question. 2. You will have the opportunity to learn new things; besides a new culture, a language, or even new cooking skills, you will gain **a new way of living and seeing the world**. 3. If your focus is your résumé, **there are plenty of courses** out there, often **free**, both for the country's local language and others offered as scholarships by universities. 4. **The world seems to shrink**, and even though it is an immense and untouchable place, over time it will feel like something out of our imagination, not only because of the tourist spots but especially because of the little-known ones that are still full of history and culture. Adopting the mindset of a traveler or backpacker makes you an eternal tourist, and in just one year I traveled to almost 20 different destinations (7 of them in Spain) across 8 different countries, where besides visiting places with fascinating culture and history, I also met many new people and even made some friends, tried the local cuisine, and saw extraordinary places. Next destinations: Scandinavia, Greece and Asia. With a few other destinations that may come up or simply repeat.
> Living in Europe is being brilliant! ## And the rest? I don't want to drag this post out; I don't think I need to mention things like **public transport**, **little traffic** and a working **public health system** as strong points, since that is a standard that exists all over Europe. In the future I will say more about the **cost of living in Barcelona**, although you can find the answer to that question by asking Google. #### Note Yes, the attack really shocked me, but these things **can happen anywhere**, and even though it is sad, let's not forget that we have several attacks in our own country **every day**. About the independence, I can't say what will happen, but I think nothing will come of it for a good while; I even compare it with the **Revolução Farroupilha**: as much as I understand the motivations, it is not that simple. **Political opinion is something I keep to myself**. :ghost: See you next time.
88.12
618
0.777425
por_Latn
1.000006
6ff79114e32f3931343137879ba1da39683b8ea4
10,108
md
Markdown
docs/reference/library.md
toonn/haskell.nix
2e7a9925f5922ec785b0782e6b1457166dcb127c
[ "Apache-2.0" ]
null
null
null
docs/reference/library.md
toonn/haskell.nix
2e7a9925f5922ec785b0782e6b1457166dcb127c
[ "Apache-2.0" ]
null
null
null
docs/reference/library.md
toonn/haskell.nix
2e7a9925f5922ec785b0782e6b1457166dcb127c
[ "Apache-2.0" ]
null
null
null
[Haskell.nix][] contains a library of functions for creating buildable package sets from their Nix expression descriptions. The library is what you get when importing [Haskell.nix][]. It might be helpful to load the library in the [Nix REPL](../user-guide.md#using-nix-repl) to test things. * [Types](#types) — the kinds of data that you will encounter working with [Haskell.nix][]. * [Top-level attributes](#top-level-attributes) — Functions and derivations defined in the Haskell.nix attrset. * [Package-set functions](#package-set-functions) — Helper functions defined on the `hsPkgs` package set. # Types ## Package Set The result of `mkPkgSet`. This is an application of the NixOS module system. ``` { options = { ... }; config = { hsPkgs = { ... }; packages = { ... }; compiler = { version = "X.Y.Z"; nix-name = "ghcXYZ"; packages = { ... }; }; }; } ``` | Attribute | Type | Description | |----------------|------|-----------------------------------------------------| | `options` | Module options | The combination of all options set through the `modules` argument passed to `mkPkgSet`. | | `config` | | The result of evaluating and applying the `options` with [Haskell.nix][] | | `.hsPkgs` | Attrset of [Haskell Packages](#haskell-package) | Buildable packages, created from `packages` | | `.packages` | Attrset of [Haskell Package descriptions](#haskell-package-descriptions) | Configuration for each package in `hsPkgs` | | `.compiler` | Attrset | | ## Haskell Package description The _Haskell package descriptions_ are values of the `pkgSet.config.packages` attrset. These are not derivations, but just the configuration for building an individual package. The configuration options are described under `packages.<name>` in [Module options](./modules.md). ## Component description The _component descriptions_ are values of the `pkgSet.config.packages.<package>.components` attrset. These are not derivations, but just the configuration for building an individual component. 
The configuration options are described under `packages.<name>.components.*` in [Module options](./modules.md). ## Haskell Package In [Haskell.nix][], a _Haskell package_ is a derivation which has a `components` attribute. This derivation is actually just for the package `Setup.hs` script, and isn't very interesting. To actually use the package, look within the components structure. ``` components = { library = COMPONENT; exes = { NAME = COMPONENT; }; tests = { NAME = COMPONENT; }; benchmarks = { NAME = COMPONENT; }; all = COMPONENT; } ``` ## Component In [Haskell.nix][], a _component_ is a derivation corresponding to a [Cabal component](https://www.haskell.org/cabal/users-guide/developing-packages.html) of a package. [Haskell.nix][] also defines a special `all` component, which is the union of all components in the package. ## Identifier A package identifier is an attrset pair of `name` and `version`. ## Extras Extras allow adding more packages to the package set. These will be functions taking a single parameter `hackage`. They should return an attrset of package descriptions. ## Modules Modules are the primary method of configuring building of the package set. They are either: 1. an attrset containing [option declarations](./options.md), or 2. a function that returns an attrset containing option declarations. If using the function form of a module, the following named parameters will be passed to it: | Argument | Type | Description | |------------------|------|---------------------| | `haskellLib` | attrset | The [haskellLib](#haskelllib) utility functions. | | `pkgs` | | The Nixpkgs collection. | | `pkgconfPkgs` | | A mapping of cabal build-depends names to Nixpkgs packages. (TODO: more information about this) | | `buildModules` | | | | `config` | | | | `options` | | | # Top-level attributes ## mkStackPkgSet Creates a [package set](#package-set) based on the `pkgs.nix` output of `stack-to-nix`. ```nix mkStackPkgSet = { stack-pkgs, pkg-def-extras ? [], modules ? 
[]}: ... ``` | Argument | Type | Description | |------------------|------|---------------------| | `stack-pkgs` | | `import ./pkgs.nix` — The imported file generated by `stack‑to‑nix`. | | `pkg‑def‑extras` | List of [Extras](#extras) | For overriding the package set. | | `modules` | List of [Modules](#modules) | For overriding the package set. | **Return value**: a [`pkgSet`](#package-set) ## mkCabalProjectPkgSet Creates a [package set](#package-set) based on the `pkgs.nix` output of `plan-to-nix`. ```nix mkCabalProjectPkgSet = { plan-pkgs, pkg-def-extras ? [], modules ? []}: ... ``` | Argument | Type | Description | |------------------|------|---------------------| | `plan-pkgs` | | `import ./pkgs.nix` — The imported file generated by `plan‑to‑nix`. | | `pkg‑def‑extras` | List of [Extras](#extras) | For overriding the package set. | | `modules` | List of [Modules](#modules) | For overriding the package set. | **Return value**: a [`pkgSet`](#package-set) ## mkPkgSet This is the base function used by both `mkStackPkgSet` and `mkCabalProjectPkgSet`. **Return value**: a [`pkgSet`](#package-set) ## snapshots This is an attrset of `hsPkgs` packages from Stackage. ## haskellPackages A `hsPkgs` package set, which is one of the recent LTS Haskell releases from [`snapshots`](#snapshots). The chosen LTS is updated occasionally in [Haskell.nix][], through a manual process. ## nix-tools A derivation containing the `nix-tools` [command-line tools](commands.md). ## callStackToNix Runs `stack-to-nix` and produces the output needed for `importAndFilterProject`. **Example**: ```nix pkgSet = mkStackPkgSet { stack-pkgs = (importAndFilterProject (callStackToNix { src = ./.; })).pkgs; pkg-def-extras = []; modules = []; }; ``` ## callCabalProjectToNix Runs `cabal new-configure` and `plan-to-nix` and produces the output needed for `importAndFilterProject`. 
**Example**: ```nix pkgSet = mkCabalProjectPkgSet { plan-pkgs = (importAndFilterProject (callCabalProjectToNix { index-state = "2019-04-30T00:00:00Z"; src = ./.; })).pkgs; ``` ## importAndFilterProject Imports from a derivation created by `callStackToNix` or `callCabalProjectToNix`. The result is an attrset with the following values: | Attribute | Type | Description | |----------------|------|-----------------------------------------------------| | `pkgs` | attrset | that can be passed to `mkStackPkgSet` (as `stack-pkgs`) or `mkCabalProjectPkgSet` (as `plan-pkgs`). | | `nix` | | this can be built and cached so that the amount built in the evaluation phase is not too great (helps to avoid timeouts on Hydra). | ## hackage ## stackage ## fetchExternal ## cleanSourceHaskell ## haskellLib Assorted functions for operating on [Haskell.nix][] data. This is distinct from `pkgs.haskell.lib` in the current Nixpkgs Haskell Infrastructure. ### collectComponents Extracts a selection of components from a Haskell [package set](#package-set). This can be used to filter out all test suites or benchmarks of your project, so that they can be built in Hydra. ``` collectComponents = group: packageSel: haskellPackages: ... ``` | Argument | Type | Description | |-------------------|--------|---------------------| | `group` | String | A [sub-component type](#subComponentTypes). | | `packageSel` | A function `Package -> Bool` | A predicate to filter packages with. | | `haskellPackages` | [Package set](#package-set) | All packages in the build. | **Return value**: a recursive attrset mapping package names → component names → components. **Example**: ```nix tests = collectComponents "tests" (package: package.identifier.name == "mypackage") hsPkgs; ``` Will result in moving derivations from `hsPkgs.mypackage.components.tests.unit-tests` to `tests.mypackage.unit-tests`. 
#### subComponentTypes Sub-component types identify [components](#component) and are one of: - `sublibs` - `foreignlibs` - `exes` - `tests` - `benchmarks` # Package-set functions These functions exist within the `hsPkgs` package set. ## shellFor Create a `nix-shell` [development environment](../user-guide/development.md) for developing one or more packages with `ghci` or `cabal v2-build` (but not Stack). ``` shellFor = { packages, withHoogle ? true, exactDeps ? false, ...}: ... ``` | Argument | Type | Description | |----------------|------|---------------------| | `packages` | Function | Package selection function. It takes a list of [Haskell packages](#haskell-package) and returns a subset of these packages. | | `withHoogle` | Boolean | Whether to build a Hoogle documentation index and provide the `hoogle` command. | | `exactDeps` | Boolean | Prevents the Cabal solver from choosing any package dependency other than what are in the package set. | | `{ ... }` | Attrset | All the other arguments are passed to [`mkDerivation`](https://nixos.org/nixpkgs/manual/#sec-using-stdenv). | **Return value**: a derivation !!! warning `exactDeps = true` will set the `CABAL_CONFIG` environment variable to disable remote package servers. This is a [known limitation](../dev/removing-with-package-wrapper.md) which we would like to solve. Use `exactDeps = false` if this is a problem. ## ghcWithPackages Creates a `nix-shell` [development environment](../user-guide/development.md) including the given packages selected from this package set. **Parameter**: a package selection function. **Return value**: a derivation **Example**: ``` haskell.haskellPackages.ghcWithPackages (ps: with ps; [ lens conduit ]) ``` ## ghcWithHoogle The same as `ghcWithPackages`, except, a `hoogle` command with a Hoogle documentation index of the packages will be included in the shell. [haskell.nix]: https://github.com/input-output-hk/haskell.nix
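To make the `shellFor` arguments above concrete, here is a minimal sketch; the package name `mypackage` and the extra `cabal-install` tool are assumptions, and `pkgs`/`hsPkgs` refer to the Nixpkgs and Haskell.nix package sets described earlier:

```nix
# shell.nix — a development shell for a single local package.
hsPkgs.shellFor {
  # Select only our own package(s) from the package set.
  packages = ps: [ ps.mypackage ];
  # Build a Hoogle index covering the dependencies.
  withHoogle = true;
  # Remaining arguments fall through to mkDerivation:
  buildInputs = [ pkgs.cabal-install ];
}
```

Entering the shell with `nix-shell` then gives a GHC package database containing exactly the dependencies of `mypackage`, so `cabal v2-build` only has to build the local package itself.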
30.354354
155
0.668381
eng_Latn
0.945534
6ff7dc7cd8953f9248301bca987f5371c6ad4a51
2,679
md
Markdown
sdk-api-src/content/wslapi/ne-wslapi-wsl_distribution_flags.md
amorilio/sdk-api
54ef418912715bd7df39c2561fbc3d1dcef37d7e
[ "CC-BY-4.0", "MIT" ]
null
null
null
sdk-api-src/content/wslapi/ne-wslapi-wsl_distribution_flags.md
amorilio/sdk-api
54ef418912715bd7df39c2561fbc3d1dcef37d7e
[ "CC-BY-4.0", "MIT" ]
null
null
null
sdk-api-src/content/wslapi/ne-wslapi-wsl_distribution_flags.md
amorilio/sdk-api
54ef418912715bd7df39c2561fbc3d1dcef37d7e
[ "CC-BY-4.0", "MIT" ]
null
null
null
--- UID: NE:wslapi.__unnamed_enum_0 title: WSL_DISTRIBUTION_FLAGS (wslapi.h) description: The WSL_DISTRIBUTION_FLAGS enumeration specifies the behavior of a distribution in the Windows Subsystem for Linux (WSL). helpviewer_keywords: ["WSL_DISTRIBUTION_FLAGS","WSL_DISTRIBUTION_FLAGS enumeration","WSL_DISTRIBUTION_FLAGS_APPEND_NT_PATH","WSL_DISTRIBUTION_FLAGS_ENABLE_DRIVE_MOUNTING","WSL_DISTRIBUTION_FLAGS_ENABLE_INTEROP","WSL_DISTRIBUTION_FLAGS_NONE","wsl.wsl_distribution_flags","wslapi/WSL_DISTRIBUTION_FLAGS","wslapi/WSL_DISTRIBUTION_FLAGS_APPEND_NT_PATH","wslapi/WSL_DISTRIBUTION_FLAGS_ENABLE_DRIVE_MOUNTING","wslapi/WSL_DISTRIBUTION_FLAGS_ENABLE_INTEROP","wslapi/WSL_DISTRIBUTION_FLAGS_NONE"] old-location: wsl\wsl_distribution_flags.htm tech.root: wsl ms.assetid: C0E67521-2C18-4464-B0BC-BBBC4C1FCAF0 ms.date: 12/05/2018 ms.keywords: WSL_DISTRIBUTION_FLAGS, WSL_DISTRIBUTION_FLAGS enumeration, WSL_DISTRIBUTION_FLAGS_APPEND_NT_PATH, WSL_DISTRIBUTION_FLAGS_ENABLE_DRIVE_MOUNTING, WSL_DISTRIBUTION_FLAGS_ENABLE_INTEROP, WSL_DISTRIBUTION_FLAGS_NONE, wsl.wsl_distribution_flags, wslapi/WSL_DISTRIBUTION_FLAGS, wslapi/WSL_DISTRIBUTION_FLAGS_APPEND_NT_PATH, wslapi/WSL_DISTRIBUTION_FLAGS_ENABLE_DRIVE_MOUNTING, wslapi/WSL_DISTRIBUTION_FLAGS_ENABLE_INTEROP, wslapi/WSL_DISTRIBUTION_FLAGS_NONE req.header: wslapi.h req.include-header: req.target-type: Windows req.target-min-winverclnt: req.target-min-winversvr: req.kmdf-ver: req.umdf-ver: req.ddi-compliance: req.unicode-ansi: req.idl: req.max-support: req.namespace: req.assembly: req.type-library: req.lib: req.dll: req.irql: targetos: Windows req.typenames: WSL_DISTRIBUTION_FLAGS req.redist: ms.custom: 19H1 f1_keywords: - WSL_DISTRIBUTION_FLAGS - wslapi/WSL_DISTRIBUTION_FLAGS dev_langs: - c++ topic_type: - APIRef - kbSyntax api_type: - HeaderDef api_location: - wslapi.h api_name: - WSL_DISTRIBUTION_FLAGS --- # WSL_DISTRIBUTION_FLAGS enumeration ## -description The <b>WSL_DISTRIBUTION_FLAGS</b> enumeration specifies the behavior of a 
distribution in the Windows Subsystem for Linux (WSL). ## -enum-fields ### -field WSL_DISTRIBUTION_FLAGS_NONE:0x0 No flags are being supplied. ### -field WSL_DISTRIBUTION_FLAGS_ENABLE_INTEROP:0x1 Allow the distribution to interoperate with Windows processes (for example, the user can invoke "cmd.exe" or "notepad.exe" from within a WSL session). ### -field WSL_DISTRIBUTION_FLAGS_APPEND_NT_PATH:0x2 Add the Windows %PATH% environment variable values to WSL sessions. ### -field WSL_DISTRIBUTION_FLAGS_ENABLE_DRIVE_MOUNTING:0x4 Automatically mount Windows drives inside of WSL sessions (for example, "C:\" will be available under "/mnt/c").
36.69863
487
0.829414
yue_Hant
0.832221
6ff83bf06d02ddc8d04bcd2516f1255254b2afb9
5,668
md
Markdown
articles/virtual-network/tutorial-routing-preference-virtual-machine-portal.md
jayv-ops/azure-docs.de-de
6be2304cfbe5fd0bf0d4ed0fbdf4a6a4d11ac6e0
[ "CC-BY-4.0", "MIT" ]
null
null
null
articles/virtual-network/tutorial-routing-preference-virtual-machine-portal.md
jayv-ops/azure-docs.de-de
6be2304cfbe5fd0bf0d4ed0fbdf4a6a4d11ac6e0
[ "CC-BY-4.0", "MIT" ]
null
null
null
articles/virtual-network/tutorial-routing-preference-virtual-machine-portal.md
jayv-ops/azure-docs.de-de
6be2304cfbe5fd0bf0d4ed0fbdf4a6a4d11ac6e0
[ "CC-BY-4.0", "MIT" ]
null
null
null
--- title: 'Configure routing preference for a virtual machine: Azure portal' description: Learn how to use the Azure portal to create a virtual machine with a public IP address and routing preference. services: virtual-network documentationcenter: na author: KumudD manager: mtillman ms.service: virtual-network ms.devlang: na ms.topic: how-to ms.tgt_pltfrm: na ms.workload: infrastructure-services ms.date: 02/01/2021 ms.author: mnayak ms.openlocfilehash: 0559d02ec603d12578fa46d9790d0711fde5e38b ms.sourcegitcommit: b4647f06c0953435af3cb24baaf6d15a5a761a9c ms.translationtype: HT ms.contentlocale: de-DE ms.lasthandoff: 03/02/2021 ms.locfileid: "101670893" --- # <a name="configure-routing-preference-for-a-vm-using-the-azure-portal"></a>Configure routing preference for a VM using the Azure portal This article shows you how to configure routing preference for a virtual machine. Internet traffic from the virtual machine is routed over the ISP network when you select **Internet** as the routing preference option; the default routing is over Microsoft's global network. The article walks through creating a virtual machine with a public IP address that is configured to route traffic over the public internet, using the Azure portal. ## <a name="sign-in-to-azure"></a>Sign in to Azure Sign in to the [Azure portal](https://portal.azure.com/). ## <a name="create-a-virtual-machine"></a>Create a virtual machine 1. In the upper-left corner of the Azure portal, select **+ Create a resource**. 2. Select **Compute**, and then select **Windows Server 2016 VM** or another operating system of your choice. 3.
Enter or select the following information, accept the defaults for the remaining settings, and select **OK**: |Setting|Value| |---|---| |Name|myVM| |Username| Enter a username of your choice.| |Password| Enter a password of your choice. The password must be at least twelve characters long and meet the [defined complexity requirements](../virtual-machines/windows/faq.md?toc=%2fazure%2fvirtual-network%2ftoc.json#what-are-the-password-requirements-when-creating-a-vm).| |Subscription| Select your subscription.| |Resource group| Select **Use existing**, and then select **myResourceGroup**.| |Location| Select **East US**.| 4. Select a size for the virtual machine, and then choose **Select**. 5. On the **Networking** tab, select **Create new** for **Public IP address**. 6. Enter *myPublicIpAddress*, select **Standard** as the SKU and **Internet** as the routing preference, and then select **OK**, as shown in the following image: ![Select the static option](./media/tutorial-routing-preference-virtual-machine-portal/routing-preference-internet-new.png) 7. Under **Select public inbound ports**, select a port or no ports. Port 3389 is selected to enable remote access to the Windows Server virtual machine over the internet. Opening port 3389 to the internet is not recommended for production workloads. ![Select a port](./media/tutorial-routing-preference-virtual-machine-portal/pip-ports-new.png) 8. Accept the remaining default settings and select **OK**. 9. On the **Summary** page, select **Create**. Deploying the virtual machine takes a few minutes. 10.
After the virtual machine is deployed, enter *myPublicIpAddress* in the search box at the top of the portal. When **myPublicIpAddress** appears in the search results, select it. 11. The public IP address that was assigned, and the address assigned to the **myVM** virtual machine, are displayed, as shown in the following image: ![Screenshot of the NIC public IP address for the mynic network interface.](./media/tutorial-routing-preference-virtual-machine-portal/pip-properties-new.png) 12. Select **Networking**, select the **mynic** NIC, and then select the public IP address to confirm that the **Internet** routing preference is assigned. ![Screenshot of the public IP address and routing setting for a public IP address.](./media/tutorial-routing-preference-virtual-machine-portal/pip-routing-internet-new.png) ## <a name="clean-up-resources"></a>Clean up resources When the resource group and all of its resources are no longer needed, delete them: 1. Enter **myResourceGroup** in the *Search* box at the top of the portal. When **myResourceGroup** appears in the search results, select it. 2. Select **Delete resource group**. 3. For **TYPE THE RESOURCE GROUP NAME:**, enter *myResourceGroup*, and then select **Delete**. ## <a name="next-steps"></a>Next steps - Learn more about [public IP addresses with routing preference](routing-preference-overview.md). - Read more about [public IP addresses](./public-ip-addresses.md#public-ip-addresses) in Azure. - Learn more about all [public IP address settings](virtual-network-public-ip-address.md#create-a-public-ip-address).
71.746835
329
0.788109
deu_Latn
0.992903
6ff83fc1340990fd7a9d3154d1d6d44369720920
828
md
Markdown
_posts/2016-08-16-MarnuGarcia-2015-Wedding-dresses-Style-MG0636-2015.md
queenosestyle/queenosestyle.github.io
7b095a591cefe4e42cdeb7de71cfa87293a95b5c
[ "MIT" ]
null
null
null
_posts/2016-08-16-MarnuGarcia-2015-Wedding-dresses-Style-MG0636-2015.md
queenosestyle/queenosestyle.github.io
7b095a591cefe4e42cdeb7de71cfa87293a95b5c
[ "MIT" ]
null
null
null
_posts/2016-08-16-MarnuGarcia-2015-Wedding-dresses-Style-MG0636-2015.md
queenosestyle/queenosestyle.github.io
7b095a591cefe4e42cdeb7de71cfa87293a95b5c
[ "MIT" ]
null
null
null
--- layout: post date: 2016-08-16 title: "MarnuGarcia 2015 Wedding dresses Style MG0636 2015" category: MarnuGarcia tags: [MarnuGarcia,2015] --- ### MarnuGarcia 2015 Wedding dresses Style MG0636 Just **$339.99** ### 2015 <table><tr><td>BRANDS</td><td>MarnuGarcia</td></tr><tr><td>Years</td><td>2015</td></tr></table> <a href="https://www.readybrides.com/en/marnugarcia/73705-marnugarcia-2015-wedding-dresses-style-mg0636.html"><img src="//img.readybrides.com/173403/marnugarcia-2015-wedding-dresses-style-mg0636.jpg" alt="MarnuGarcia 2015 Wedding dresses Style MG0636" style="width:100%;" /></a> <!-- break --> Buy it: [https://www.readybrides.com/en/marnugarcia/73705-marnugarcia-2015-wedding-dresses-style-mg0636.html](https://www.readybrides.com/en/marnugarcia/73705-marnugarcia-2015-wedding-dresses-style-mg0636.html)
51.75
278
0.746377
yue_Hant
0.476289
6ff8d32696cb0df348801b02fbd9fb7617337659
183
md
Markdown
README.md
AddiTormento/amspracticevms
eb3bf38c4cc818bc75603c7d4a40f4514e1dffa8
[ "Unlicense" ]
null
null
null
README.md
AddiTormento/amspracticevms
eb3bf38c4cc818bc75603c7d4a40f4514e1dffa8
[ "Unlicense" ]
null
null
null
README.md
AddiTormento/amspracticevms
eb3bf38c4cc818bc75603c7d4a40f4514e1dffa8
[ "Unlicense" ]
null
null
null
# amspracticevms This will be a public repository for creating an online VM database so that students can learn coding, Linux, server management and processes, and general Linux training.
61
165
0.819672
eng_Latn
0.995282
6ff933c53bf85b0529df47b81fc0f632e2b6116a
18,628
md
Markdown
src/Beta/Users.ActivityFeed/Users.ActivityFeed/docs/Update-MgUserActivityHistoryItem.md
marcoscheel/msgraph-sdk-powershell
6c4b62c33627b372fcbacbc9ab9146453777b6ff
[ "MIT" ]
null
null
null
src/Beta/Users.ActivityFeed/Users.ActivityFeed/docs/Update-MgUserActivityHistoryItem.md
marcoscheel/msgraph-sdk-powershell
6c4b62c33627b372fcbacbc9ab9146453777b6ff
[ "MIT" ]
1
2020-04-22T20:48:58.000Z
2020-04-22T20:48:58.000Z
src/Beta/Users.ActivityFeed/Users.ActivityFeed/docs/Update-MgUserActivityHistoryItem.md
marcoscheel/msgraph-sdk-powershell
6c4b62c33627b372fcbacbc9ab9146453777b6ff
[ "MIT" ]
null
null
null
--- external help file: Module Name: Microsoft.Graph.Users.ActivityFeed online version: https://docs.microsoft.com/en-us/powershell/module/microsoft.graph.users.activityfeed/update-mguseractivityhistoryitem schema: 2.0.0 --- # Update-MgUserActivityHistoryItem ## SYNOPSIS Update the navigation property historyItems in users ## SYNTAX ### UpdateExpanded (Default) ``` Update-MgUserActivityHistoryItem -ActivityHistoryItemId <String> -UserActivityId <String> -UserId <String> [-ActiveDurationSeconds <Int32>] [-Activity <IMicrosoftGraphUserActivity>] [-CreatedDateTime <DateTime>] [-ExpirationDateTime <DateTime>] [-Id <String>] [-LastActiveDateTime <DateTime>] [-LastModifiedDateTime <DateTime>] [-StartedDateTime <DateTime>] [-Status <String>] [-UserTimezone <String>] [-PassThru] [-Confirm] [-WhatIf] [<CommonParameters>] ``` ### Update ``` Update-MgUserActivityHistoryItem -ActivityHistoryItemId <String> -UserActivityId <String> -UserId <String> -BodyParameter <IMicrosoftGraphActivityHistoryItem> [-PassThru] [-Confirm] [-WhatIf] [<CommonParameters>] ``` ### UpdateViaIdentity ``` Update-MgUserActivityHistoryItem -InputObject <IUsersActivityFeedIdentity> -BodyParameter <IMicrosoftGraphActivityHistoryItem> [-PassThru] [-Confirm] [-WhatIf] [<CommonParameters>] ``` ### UpdateViaIdentityExpanded ``` Update-MgUserActivityHistoryItem -InputObject <IUsersActivityFeedIdentity> [-ActiveDurationSeconds <Int32>] [-Activity <IMicrosoftGraphUserActivity>] [-CreatedDateTime <DateTime>] [-ExpirationDateTime <DateTime>] [-Id <String>] [-LastActiveDateTime <DateTime>] [-LastModifiedDateTime <DateTime>] [-StartedDateTime <DateTime>] [-Status <String>] [-UserTimezone <String>] [-PassThru] [-Confirm] [-WhatIf] [<CommonParameters>] ``` ## DESCRIPTION Update the navigation property historyItems in users ## EXAMPLES ### Example 1: {{ Add title here }} ```powershell PS C:\> {{ Add code here }} {{ Add output here }} ``` {{ Add description here }} ### Example 2: {{ Add title here }} ```powershell PS 
C:\> {{ Add code here }} {{ Add output here }} ``` {{ Add description here }} ## PARAMETERS ### -ActiveDurationSeconds Optional. The duration of active user engagement. If not supplied, this is calculated from the startedDateTime and lastActiveDateTime. ```yaml Type: System.Int32 Parameter Sets: UpdateExpanded, UpdateViaIdentityExpanded Aliases: Required: False Position: Named Default value: None Accept pipeline input: False Accept wildcard characters: False Dynamic: False ``` ### -Activity userActivity To construct, see NOTES section for ACTIVITY properties and create a hash table. ```yaml Type: Microsoft.Graph.PowerShell.Models.IMicrosoftGraphUserActivity Parameter Sets: UpdateExpanded, UpdateViaIdentityExpanded Aliases: Required: False Position: Named Default value: None Accept pipeline input: False Accept wildcard characters: False Dynamic: False ``` ### -ActivityHistoryItemId key: activityHistoryItem-id of activityHistoryItem ```yaml Type: System.String Parameter Sets: Update, UpdateExpanded Aliases: Required: True Position: Named Default value: None Accept pipeline input: False Accept wildcard characters: False Dynamic: False ``` ### -BodyParameter activityHistoryItem To construct, see NOTES section for BODYPARAMETER properties and create a hash table. ```yaml Type: Microsoft.Graph.PowerShell.Models.IMicrosoftGraphActivityHistoryItem Parameter Sets: Update, UpdateViaIdentity Aliases: Required: True Position: Named Default value: None Accept pipeline input: True (ByValue) Accept wildcard characters: False Dynamic: False ``` ### -CreatedDateTime Set by the server. DateTime in UTC when the object was created on the server. ```yaml Type: System.DateTime Parameter Sets: UpdateExpanded, UpdateViaIdentityExpanded Aliases: Required: False Position: Named Default value: None Accept pipeline input: False Accept wildcard characters: False Dynamic: False ``` ### -ExpirationDateTime Optional. UTC DateTime when the historyItem will undergo hard-delete. 
Can be set by the client. ```yaml Type: System.DateTime Parameter Sets: UpdateExpanded, UpdateViaIdentityExpanded Aliases: Required: False Position: Named Default value: None Accept pipeline input: False Accept wildcard characters: False Dynamic: False ``` ### -Id Read-only. ```yaml Type: System.String Parameter Sets: UpdateExpanded, UpdateViaIdentityExpanded Aliases: Required: False Position: Named Default value: None Accept pipeline input: False Accept wildcard characters: False Dynamic: False ``` ### -InputObject Identity Parameter To construct, see NOTES section for INPUTOBJECT properties and create a hash table. ```yaml Type: Microsoft.Graph.PowerShell.Models.IUsersActivityFeedIdentity Parameter Sets: UpdateViaIdentity, UpdateViaIdentityExpanded Aliases: Required: True Position: Named Default value: None Accept pipeline input: True (ByValue) Accept wildcard characters: False Dynamic: False ``` ### -LastActiveDateTime Optional. UTC DateTime when the historyItem (activity session) was last understood as active or finished - if null, historyItem status should be Ongoing. ```yaml Type: System.DateTime Parameter Sets: UpdateExpanded, UpdateViaIdentityExpanded Aliases: Required: False Position: Named Default value: None Accept pipeline input: False Accept wildcard characters: False Dynamic: False ``` ### -LastModifiedDateTime Set by the server. DateTime in UTC when the object was modified on the server. ```yaml Type: System.DateTime Parameter Sets: UpdateExpanded, UpdateViaIdentityExpanded Aliases: Required: False Position: Named Default value: None Accept pipeline input: False Accept wildcard characters: False Dynamic: False ``` ### -PassThru Returns true when the command succeeds ```yaml Type: System.Management.Automation.SwitchParameter Parameter Sets: (All) Aliases: Required: False Position: Named Default value: None Accept pipeline input: False Accept wildcard characters: False Dynamic: False ``` ### -StartedDateTime Required. 
UTC DateTime when the historyItem (activity session) was started. Required for timeline history. ```yaml Type: System.DateTime Parameter Sets: UpdateExpanded, UpdateViaIdentityExpanded Aliases: Required: False Position: Named Default value: None Accept pipeline input: False Accept wildcard characters: False Dynamic: False ``` ### -Status status ```yaml Type: System.String Parameter Sets: UpdateExpanded, UpdateViaIdentityExpanded Aliases: Required: False Position: Named Default value: None Accept pipeline input: False Accept wildcard characters: False Dynamic: False ``` ### -UserActivityId key: userActivity-id of userActivity ```yaml Type: System.String Parameter Sets: Update, UpdateExpanded Aliases: Required: True Position: Named Default value: None Accept pipeline input: False Accept wildcard characters: False Dynamic: False ``` ### -UserId key: user-id of user ```yaml Type: System.String Parameter Sets: Update, UpdateExpanded Aliases: Required: True Position: Named Default value: None Accept pipeline input: False Accept wildcard characters: False Dynamic: False ``` ### -UserTimezone Optional. The timezone in which the user's device used to generate the activity was located at activity creation time. Values supplied as Olson IDs in order to support cross-platform representation. ```yaml Type: System.String Parameter Sets: UpdateExpanded, UpdateViaIdentityExpanded Aliases: Required: False Position: Named Default value: None Accept pipeline input: False Accept wildcard characters: False Dynamic: False ``` ### -Confirm Prompts you for confirmation before running the cmdlet. ```yaml Type: System.Management.Automation.SwitchParameter Parameter Sets: (All) Aliases: cf Required: False Position: Named Default value: None Accept pipeline input: False Accept wildcard characters: False Dynamic: False ``` ### -WhatIf Shows what would happen if the cmdlet runs. The cmdlet is not run. 
```yaml Type: System.Management.Automation.SwitchParameter Parameter Sets: (All) Aliases: wi Required: False Position: Named Default value: None Accept pipeline input: False Accept wildcard characters: False Dynamic: False ``` ### CommonParameters This cmdlet supports the common parameters: -Debug, -ErrorAction, -ErrorVariable, -InformationAction, -InformationVariable, -OutVariable, -OutBuffer, -PipelineVariable, -Verbose, -WarningAction, and -WarningVariable. For more information, see [about_CommonParameters](http://go.microsoft.com/fwlink/?LinkID=113216). ## INPUTS ### Microsoft.Graph.PowerShell.Models.IMicrosoftGraphActivityHistoryItem ### Microsoft.Graph.PowerShell.Models.IUsersActivityFeedIdentity ## OUTPUTS ### System.Boolean ## ALIASES ## NOTES ### COMPLEX PARAMETER PROPERTIES To create the parameters described below, construct a hash table containing the appropriate properties. For information on hash tables, run Get-Help about_Hash_Tables. #### ACTIVITY <IMicrosoftGraphUserActivity>: userActivity - `[Id <String>]`: Read-only. - `[ActivationUrl <String>]`: Required. URL used to launch the activity in the best native experience represented by the appId. Might launch a web-based app if no native app exists. - `[ActivitySourceHost <String>]`: Required. URL for the domain representing the cross-platform identity mapping for the app. Mapping is stored either as a JSON file hosted on the domain or configurable via Windows Dev Center. The JSON file is named cross-platform-app-identifiers and is hosted at root of your HTTPS domain, either at the top level domain or include a sub domain. For example: https://contoso.com or https://myapp.contoso.com but NOT https://myapp.contoso.com/somepath. You must have a unique file and domain (or sub domain) per cross-platform app identity. For example, a separate file and domain is needed for Word vs. PowerPoint. - `[AppActivityId <String>]`: Required. 
The unique activity ID in the context of the app - supplied by caller and immutable thereafter. - `[AppDisplayName <String>]`: Optional. Short text description of the app used to generate the activity for use in cases when the app is not installed on the user’s local device. - `[AttributionAddImageQuery <Boolean?>]`: Optional; parameter used to indicate the server is able to render image dynamically in response to parameterization. For example – a high contrast image - `[AttributionAlternateText <String>]`: Optional; alt-text accessible content for the image - `[AttributionAlternativeText <String>]`: - `[AttributionIconUrl <String>]`: Optional; URI that points to an icon which represents the application used to generate the activity - `[ContentInfo <IMicrosoftGraphJson>]`: Json - `[ContentUrl <String>]`: Optional. Used in the event the content can be rendered outside of a native or web-based app experience (for example, a pointer to an item in an RSS feed). - `[CreatedDateTime <DateTime?>]`: Set by the server. DateTime in UTC when the object was created on the server. - `[ExpirationDateTime <DateTime?>]`: Set by the server. DateTime in UTC when the object expired on the server. - `[FallbackUrl <String>]`: Optional. URL used to launch the activity in a web-based app, if available. - `[HistoryItems <IMicrosoftGraphActivityHistoryItem[]>]`: Optional. NavigationProperty/Containment; navigation property to the activity's historyItems. - `[Id <String>]`: Read-only. - `[ActiveDurationSeconds <Int32?>]`: Optional. The duration of active user engagement. if not supplied, this is calculated from the startedDateTime and lastActiveDateTime. - `[Activity <IMicrosoftGraphUserActivity>]`: userActivity - `[CreatedDateTime <DateTime?>]`: Set by the server. DateTime in UTC when the object was created on the server. - `[ExpirationDateTime <DateTime?>]`: Optional. UTC DateTime when the historyItem will undergo hard-delete. Can be set by the client. 
- `[LastActiveDateTime <DateTime?>]`: Optional. UTC DateTime when the historyItem (activity session) was last understood as active or finished - if null, historyItem status should be Ongoing. - `[LastModifiedDateTime <DateTime?>]`: Set by the server. DateTime in UTC when the object was modified on the server. - `[StartedDateTime <DateTime?>]`: Required. UTC DateTime when the historyItem (activity session) was started. Required for timeline history. - `[Status <String>]`: status - `[UserTimezone <String>]`: Optional. The timezone in which the user's device used to generate the activity was located at activity creation time. Values supplied as Olson IDs in order to support cross-platform representation. - `[LastModifiedDateTime <DateTime?>]`: Set by the server. DateTime in UTC when the object was modified on the server. - `[Status <String>]`: status - `[UserTimezone <String>]`: Optional. The timezone in which the user's device used to generate the activity was located at activity creation time; values supplied as Olson IDs in order to support cross-platform representation. - `[VisualElementBackgroundColor <String>]`: Optional. Background color used to render the activity in the UI - brand color for the application source of the activity. Must be a valid hex color - `[VisualElementContent <IMicrosoftGraphJson>]`: Json - `[VisualElementDescription <String>]`: Optional. Longer text description of the user's unique activity (example: document name, first sentence, and/or metadata) - `[VisualElementDisplayText <String>]`: Required. Short text description of the user's unique activity (for example, document name in cases where an activity refers to document creation) #### BODYPARAMETER <IMicrosoftGraphActivityHistoryItem>: activityHistoryItem - `[Id <String>]`: Read-only. - `[ActiveDurationSeconds <Int32?>]`: Optional. The duration of active user engagement. if not supplied, this is calculated from the startedDateTime and lastActiveDateTime. 
- `[Activity <IMicrosoftGraphUserActivity>]`: userActivity - `[Id <String>]`: Read-only. - `[ActivationUrl <String>]`: Required. URL used to launch the activity in the best native experience represented by the appId. Might launch a web-based app if no native app exists. - `[ActivitySourceHost <String>]`: Required. URL for the domain representing the cross-platform identity mapping for the app. Mapping is stored either as a JSON file hosted on the domain or configurable via Windows Dev Center. The JSON file is named cross-platform-app-identifiers and is hosted at root of your HTTPS domain, either at the top level domain or include a sub domain. For example: https://contoso.com or https://myapp.contoso.com but NOT https://myapp.contoso.com/somepath. You must have a unique file and domain (or sub domain) per cross-platform app identity. For example, a separate file and domain is needed for Word vs. PowerPoint. - `[AppActivityId <String>]`: Required. The unique activity ID in the context of the app - supplied by caller and immutable thereafter. - `[AppDisplayName <String>]`: Optional. Short text description of the app used to generate the activity for use in cases when the app is not installed on the user’s local device. - `[AttributionAddImageQuery <Boolean?>]`: Optional; parameter used to indicate the server is able to render image dynamically in response to parameterization. For example – a high contrast image - `[AttributionAlternateText <String>]`: Optional; alt-text accessible content for the image - `[AttributionAlternativeText <String>]`: - `[AttributionIconUrl <String>]`: Optional; URI that points to an icon which represents the application used to generate the activity - `[ContentInfo <IMicrosoftGraphJson>]`: Json - `[ContentUrl <String>]`: Optional. Used in the event the content can be rendered outside of a native or web-based app experience (for example, a pointer to an item in an RSS feed). - `[CreatedDateTime <DateTime?>]`: Set by the server. 
DateTime in UTC when the object was created on the server. - `[ExpirationDateTime <DateTime?>]`: Set by the server. DateTime in UTC when the object expired on the server. - `[FallbackUrl <String>]`: Optional. URL used to launch the activity in a web-based app, if available. - `[HistoryItems <IMicrosoftGraphActivityHistoryItem[]>]`: Optional. NavigationProperty/Containment; navigation property to the activity's historyItems. - `[LastModifiedDateTime <DateTime?>]`: Set by the server. DateTime in UTC when the object was modified on the server. - `[Status <String>]`: status - `[UserTimezone <String>]`: Optional. The timezone in which the user's device used to generate the activity was located at activity creation time; values supplied as Olson IDs in order to support cross-platform representation. - `[VisualElementBackgroundColor <String>]`: Optional. Background color used to render the activity in the UI - brand color for the application source of the activity. Must be a valid hex color - `[VisualElementContent <IMicrosoftGraphJson>]`: Json - `[VisualElementDescription <String>]`: Optional. Longer text description of the user's unique activity (example: document name, first sentence, and/or metadata) - `[VisualElementDisplayText <String>]`: Required. Short text description of the user's unique activity (for example, document name in cases where an activity refers to document creation) - `[CreatedDateTime <DateTime?>]`: Set by the server. DateTime in UTC when the object was created on the server. - `[ExpirationDateTime <DateTime?>]`: Optional. UTC DateTime when the historyItem will undergo hard-delete. Can be set by the client. - `[LastActiveDateTime <DateTime?>]`: Optional. UTC DateTime when the historyItem (activity session) was last understood as active or finished - if null, historyItem status should be Ongoing. - `[LastModifiedDateTime <DateTime?>]`: Set by the server. DateTime in UTC when the object was modified on the server. 
- `[StartedDateTime <DateTime?>]`: Required. UTC DateTime when the historyItem (activity session) was started. Required for timeline history. - `[Status <String>]`: status - `[UserTimezone <String>]`: Optional. The timezone in which the user's device used to generate the activity was located at activity creation time. Values supplied as Olson IDs in order to support cross-platform representation. #### INPUTOBJECT <IUsersActivityFeedIdentity>: Identity Parameter - `[ActivityHistoryItemId <String>]`: key: activityHistoryItem-id of activityHistoryItem - `[UserActivityId <String>]`: key: userActivity-id of userActivity - `[UserId <String>]`: key: user-id of user ## RELATED LINKS
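
The hash-table construction described under COMPLEX PARAMETER PROPERTIES can be sketched as follows. This is a hypothetical fragment, not from the module's documentation: the `$userId`, `$userActivityId`, and `$historyItemId` variables are placeholders for existing Graph identifiers, and the property names are taken from the BODYPARAMETER list above.

```powershell
# Hypothetical sketch: update a history item via -BodyParameter.
# Build a hash table with only the properties to change.
$body = @{
    ActiveDurationSeconds = 120
    LastActiveDateTime    = (Get-Date).ToUniversalTime()
    Status                = "updated"
}

# $userId, $userActivityId, and $historyItemId are assumed to hold
# identifiers of an existing user, activity, and history item.
Update-MgUserActivityHistoryItem -UserId $userId -UserActivityId $userActivityId `
    -ActivityHistoryItemId $historyItemId -BodyParameter $body
```

The same hash table could instead be passed through the `UpdateViaIdentity` parameter set by supplying an `-InputObject` identity hash table (`@{ UserId = ...; UserActivityId = ...; ActivityHistoryItemId = ... }`).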
# algodat

Some algorithms and data structures
---
title: "Anchore Enterprise Release Notes"
linkTitle: "Enterprise"
weight: 8
---

* [Anchore Enterprise Version 2.4.1]({{< ref "/docs/releasenotes/enterprise/241.md" >}})
* [Anchore Enterprise Version 2.4.0]({{< ref "/docs/releasenotes/enterprise/240.md" >}})
* [Anchore Enterprise Version 2.3.2]({{< ref "/docs/releasenotes/enterprise/232.md" >}})
* [Anchore Enterprise Version 2.3.1]({{< ref "/docs/releasenotes/enterprise/231.md" >}})
* [Anchore Enterprise Version 2.3.0]({{< ref "/docs/releasenotes/enterprise/2.3.0" >}})
* [Anchore Enterprise Version 2.2.0]({{< ref "/docs/releasenotes/enterprise/220.md" >}})
* [Anchore Enterprise Version 2.1.0]({{< ref "/docs/releasenotes/enterprise/210.md" >}})
* [Anchore Enterprise Version 2.0.0]({{< ref "/docs/releasenotes/enterprise/200.md" >}})
# CMS

### Curriculum design of Web system and technology.
# Introduction to Hybrid App Development

## Homework sheet

### Required Installs

[Xcode and Xcode Command Tools](http://cordova.apache.org/docs/en/3.3.0/guide_platforms_ios_index.md.html#iOS%20Platform%20Guide)

[NodeJS and NPM](https://nodejs.org/)

[Android SDK (iOS included in Xcode)](http://cordova.apache.org/docs/en/3.3.0/guide_platforms_android_index.md.html#Android%20Platform%20Guide)

#### Packages

Cordova, Ionic, Gulp and Bower

    npm install -g cordova ionic gulp bower

---

### Required Homework

Complete this tutorial: http://ionicframework.com/getting-started/

Basic Angular: http://campus.codeschool.com/courses/shaping-up-with-angular-js/intro
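
After installing the packages above, a quick sanity check can confirm each CLI is on your PATH. This is a hypothetical helper, not part of the course material; it assumes a POSIX shell:

```shell
# Report whether each required command-line tool is installed and on PATH.
for tool in node npm cordova ionic gulp bower; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: ok"
  else
    echo "$tool: missing"
  fi
done
```

Any tool reported as `missing` should be reinstalled before starting the tutorial.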
24.107143
143
0.761481
kor_Hang
0.293775