Dataset columns:
url: string (11 – 2.25k characters)
text: string (88 – 50k characters)
ts: timestamp[s] (2026-01-13 08:47:33 – 2026-01-13 09:30:40)
https://www.linkedin.com/products/hclsoftware-unica/?trk=products_details_guest_similar_products_section_similar_products_section_product_link_result-card_image-click#main-content
HCL Unica | LinkedIn
HCL Unica - Marketing Automation Software by HCLSoftware

About
HCL Software Unica Suite is the enterprise marketing platform to deliver Precision Marketing at Scale. Designed from the ground up for marketers and powered by AI, it enables marketing automation across a wide variety of outbound and inbound channels: direct mail, email, website, call center, IVR, mobile, ATM, kiosk, POS, and more. Unica is a premises-agnostic, cloud-native platform that can be deployed through a Docker framework or natively. Unica is an integrated platform that delivers total flexibility, scalable power, and seamless integration with third-party martech solutions.

This product is intended for: Marketing Manager, Sales Marketing Manager, Sales and Marketing Specialist, Product Marketing Manager, Inside Sales Manager, Sales Marketing Executive, Digital Marketing Manager, Senior Campaign Manager, Campaign Specialist, Online Campaign Manager

Media
Why Unica is a Great Fit for Healthcare: Customer experience has become a strategic mandate for healthcare, but most organizations struggle to deliver on their clients' expectations. The path to always-on patient and member engagement begins with #Unica. Find out why.
Unica - Machine Learning with Unica Interact: This video is about the machine learning capabilities of Unica Interact, through its built-in self-learning module.
Similar products: Marketing Cloud (Marketing Automation Software), Zoho Campaigns (Marketing Automation Software), Freshmarketer (Marketing Automation Software), Brevo, formerly Sendinblue (Marketing Automation Software), RD Station Marketing (Marketing Automation Software), Adobe Marketo (Marketing Automation Software)

HCLSoftware products: HCL BigFix (Endpoint Detection & Response (EDR) Software), HCL Commerce+ (E-Commerce Platforms), HCL Connections (Enterprise Social Networking Software), HCL Digital Experience (Digital Experience Platforms (DXP)), HCL Domino (Business Process Management Software), HCL Mainframe Solutions (Mainframe Management Software), HCL Sametime (Employee Communications Software), HCL Software DevOps (DevOps Software), HCL Volt MX (Low-Code Development Platforms)

LinkedIn © 2026
2026-01-13T09:29:18
https://www.linkedin.com/products/zohocorp-zoho-desk/?trk=products_details_guest_similar_products_section_similar_products_section_product_link_result-card_full-click
Zoho Desk | LinkedIn
Zoho Desk - Help Desk Software by Zoho

About
Zoho Desk is context-aware help desk software that helps businesses of all sizes provide proactive and well-informed customer support. Combine effective ticket management with easy collaboration tools, granular metric reports, powerful automation, robust self-service, sentiment analysis, and much more to boost agent productivity and deliver positive customer experiences. Over 40,000 businesses globally trust Zoho Desk to deliver exceptional service and create loyal customers.

This product is intended for: Customer Service Representative, Customer Service Manager, Technical Support Specialist, Customer Support Specialist, Technical Support Manager, Director of Customer Service, Director of Customer Support, Senior Support Manager, Customer Experience Manager, Director of Customer Experience

Media
The Streaming Networks provides multi-brand support using Zoho Desk: Here's how The Streaming Networks uses Zoho Desk to provide fast and contextual customer service, proactively respond to escalations, and deliver exceptional customer experiences.
The Zoho Desk Story: Zoho Desk is award-winning context-aware help desk software that provides agents with the right tools to deliver excellent customer support. Intuitive features like Work Modes, Blueprint, Zia, and ASAP ensure that your support team can close tickets and create happy customers in no time.
A Product Walkthrough of Zoho Desk: Learn how Zoho Desk's automation features can help your team remove manual, repeatable chores from their plate and instead focus on what matters: keeping customers happy.
Reply Assistant in Zoho Desk: Zoho Desk puts the right information at the agent's disposal with Reply Assistant, which pulls relevant articles from your knowledge base and helps your agents share them with customers instantly. Your support team is then equipped to respond faster and smarter and to create customer happiness.
Get a Bird's-Eye View of Your Customer Support in Real Time: Zoho Desk helps your customer service managers and CX leaders keep a finger on the pulse of everything happening in your help desk. Stay updated on ticket traffic, lengthy ticket conversations, agent status, and customer happiness ratings, so you can step in with proactive interventions and keep things sailing smoothly for your support team.

Similar products: Service Cloud (Help Desk Software), Freshdesk (Help Desk Software), Freshdesk Omni (Help Desk Software), Zendesk Suite (Help Desk Software), Zendesk for Employee Experience (Help Desk Software), Hiver (Help Desk Software)

Zoho products: Bigin by Zoho CRM (Customer Relationship Management (CRM) Software), Zoho Analytics (Business Intelligence (BI) Software), Zoho Commerce (E-Commerce Platforms), Zoho DataPrep (Data Preparation Tools), Zoho Forms (Online Form Builder Software), Zoho FSM (Field Service Management (FSM) Software), Zoho One (Business Management Software), Zoho People (Human Resources Management Systems (HRMS)), Zoho RPA (Robotic Process Automation (RPA) Software), Zoho Webinar (Webinar Software)
2026-01-13T09:29:18
https://doc.rust-lang.org/cargo/reference/credential-provider-protocol.html#json-messages
Credential Provider Protocol - The Cargo Book

Credential Provider Protocol

This document describes information for building a Cargo credential provider. For information on setting up or using a credential provider, see Registry Authentication. When using an external credential provider, Cargo communicates with the credential provider using stdin/stdout messages passed as single lines of JSON. Cargo will always execute the credential provider with the --cargo-plugin argument. This enables a credential provider executable to have additional functionality beyond what Cargo needs. Additional arguments are included in the JSON via the args field.

JSON messages

The JSON messages in this document have newlines added for readability. Actual messages must not contain newlines.

Credential hello

Sent by: credential provider
Purpose: used to identify the supported protocols on process startup

{
    "v": [1]
}

Requests sent by Cargo will include a v field set to one of the versions listed here. If Cargo does not support any of the versions offered by the credential provider, it will issue an error and shut down the credential process.

Registry information

Sent by: Cargo

Not a message by itself. Included in all messages sent by Cargo as the registry field.
{
    // Index URL of the registry
    "index-url": "https://github.com/rust-lang/crates.io-index",
    // Name of the registry in configuration (optional)
    "name": "crates-io",
    // HTTP headers received from attempting to access an authenticated registry (optional)
    "headers": ["WWW-Authenticate: cargo"]
}

Login request

Sent by: Cargo
Purpose: collect and store credentials

{
    // Protocol version
    "v": 1,
    // Action to perform: login
    "kind": "login",
    // Registry information (see Registry information)
    "registry": {"index-url": "sparse+https://registry-url/index/", "name": "my-registry"},
    // User-specified token from stdin or command line (optional)
    "token": "<the token value>",
    // URL that the user could visit to get a token (optional)
    "login-url": "http://registry-url/login",
    // Additional command-line args (optional)
    "args": []
}

If the token field is set, then the credential provider should use the token provided. If the token is not set, then the credential provider should prompt the user for a token. In addition to the arguments that may be passed to the credential provider in configuration, cargo login also supports passing additional command line args via cargo login -- <additional args>. These additional arguments will be included in the args field after any args from Cargo configuration.
Read request

Sent by: Cargo
Purpose: Get the credential for reading crate information

{
    // Protocol version
    "v": 1,
    // Request kind: get credentials
    "kind": "get",
    // Action to perform: read crate information
    "operation": "read",
    // Registry information (see Registry information)
    "registry": {"index-url": "sparse+https://registry-url/index/", "name": "my-registry"},
    // Additional command-line args (optional)
    "args": []
}

Publish request

Sent by: Cargo
Purpose: Get the credential for publishing a crate

{
    // Protocol version
    "v": 1,
    // Request kind: get credentials
    "kind": "get",
    // Action to perform: publish crate
    "operation": "publish",
    // Crate name
    "name": "sample",
    // Crate version
    "vers": "0.1.0",
    // Crate checksum
    "cksum": "...",
    // Registry information (see Registry information)
    "registry": {"index-url": "sparse+https://registry-url/index/", "name": "my-registry"},
    // Additional command-line args (optional)
    "args": []
}

Get success response

Sent by: credential provider
Purpose: Gives the credential to Cargo

{"Ok": {
    // Response kind: this was a get request
    "kind": "get",
    // Token to send to the registry
    "token": "...",
    // Cache control. Can be one of the following:
    // * "never": do not cache
    // * "session": cache for the current cargo session
    // * "expires": cache for the current cargo session until expiration
    "cache": "expires",
    // Unix timestamp (only for "cache": "expires")
    "expiration": 1693942857,
    // Is the token operation independent?
    "operation_independent": true
}}

The token will be sent to the registry as the value of the Authorization HTTP header. operation_independent indicates whether the token can be cached across different operations (such as publishing or fetching). In general, this should be true unless the provider wants to generate tokens that are scoped to specific operations.
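For the "expires" cache mode above, the expiration field is a Unix timestamp after which Cargo discards the cached token. A rough sketch of how a provider might build such a response line (the helper name and the one-hour lifetime are assumptions for illustration, not part of the protocol):

```python
import json
import time

def get_success(token: str, lifetime_secs: int = 3600) -> str:
    """Build a single-line get-success response with an expiring cache entry."""
    message = {
        "Ok": {
            "kind": "get",
            "token": token,
            "cache": "expires",
            # Unix timestamp after which Cargo must re-request the credential.
            "expiration": int(time.time()) + lifetime_secs,
            "operation_independent": True,
        }
    }
    # json.dumps emits no newlines, satisfying the one-line-per-message rule.
    return json.dumps(message)
```

Note that the serialized message contains no newlines, matching the requirement stated under "JSON messages".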
Login success response

Sent by: credential provider
Purpose: Indicates the login was successful

{"Ok": {
    // Response kind: this was a login request
    "kind": "login"
}}

Logout success response

Sent by: credential provider
Purpose: Indicates the logout was successful

{"Ok": {
    // Response kind: this was a logout request
    "kind": "logout"
}}

Failure response (URL not supported)

Sent by: credential provider
Purpose: Gives error information to Cargo

{"Err": {
    "kind": "url-not-supported"
}}

Sent if the credential provider is designed to only handle specific registry URLs and the given URL is not supported. Cargo will attempt another provider if available.

Failure response (not found)

Sent by: credential provider
Purpose: Gives error information to Cargo

{"Err": {
    // Error: The credential could not be found in the provider.
    "kind": "not-found"
}}

Sent if the credential could not be found. This is expected for get requests where the credential is not available, or logout requests where there is nothing found to erase.

Failure response (operation not supported)

Sent by: credential provider
Purpose: Gives error information to Cargo

{"Err": {
    // Error: The credential provider does not support the requested operation.
    "kind": "operation-not-supported"
}}

Sent if the credential provider does not support the requested operation. If a provider only supports get and a login is requested, the provider should respond with this error.

Failure response (other)

Sent by: credential provider
Purpose: Gives error information to Cargo

{"Err": {
    // Error: something else has failed
    "kind": "other",
    // Error message string to be displayed
    "message": "free form string error message",
    // Detailed cause chain for the error (optional)
    "caused-by": ["cause 1", "cause 2"]
}}

Example communication to request a token for reading:

Cargo spawns the credential process, capturing stdin and stdout.
Credential process sends the Hello message to Cargo:

{ "v": [1] }

Cargo sends the CredentialRequest message to the credential process (newlines added for readability):

{
    "v": 1,
    "kind": "get",
    "operation": "read",
    "registry": {"index-url": "sparse+https://registry-url/index/"}
}

Credential process sends the CredentialResponse to Cargo (newlines added for readability):

{
    "token": "...",
    "cache": "session",
    "operation_independent": true
}

Cargo closes the stdin pipe to the credential provider and it exits. Cargo uses the token for the remainder of the session (until Cargo exits) when interacting with this registry.
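The exchange above can be sketched as a minimal provider skeleton. This is a hedged illustration under stated assumptions, not a real provider: the handle and serve names are invented here, the token is a placeholder, and a production provider would also dispatch login/logout requests and validate the v field.

```python
import json
import sys

def handle(request: dict) -> dict:
    """Map one Cargo request line to a response, following the message shapes above."""
    if request.get("kind") == "get":
        # Success responses are wrapped in "Ok", as in the spec sections above.
        return {"Ok": {"kind": "get", "token": "...",
                       "cache": "session", "operation_independent": True}}
    # A get-only provider rejects login/logout with operation-not-supported.
    return {"Err": {"kind": "operation-not-supported"}}

def serve() -> None:
    """Emit the hello message, then answer one JSON request per stdin line."""
    sys.stdout.write(json.dumps({"v": [1]}) + "\n")
    sys.stdout.flush()
    for line in sys.stdin:
        sys.stdout.write(json.dumps(handle(json.loads(line))) + "\n")
        sys.stdout.flush()
```

Each message stays on a single line and stdout is flushed after every write, since Cargo reads line-delimited JSON from the pipe.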
2026-01-13T09:29:18
https://git-scm.com/book/ko/v2/%ec%8b%9c%ec%9e%91%ed%95%98%ea%b8%b0-Git-%ea%b8%b0%ec%b4%88
Git - Git Basics

The source of this book is hosted on GitHub. Patches, suggestions and comments are welcome.

1.3 Getting Started - Git Basics (2nd Edition)

Git Basics

So, what is Git in a nutshell? This question is essential to understanding Git: if you understand what Git is and how it works, you can use it effectively with ease. As you learn Git, try to set aside anything you may know about other VCSs such as Subversion or Perforce. Git is subtly different, and concepts carried over from other VCSs can be confusing. Although the user interface looks quite similar, Git stores and thinks about information differently; understanding those differences makes Git much easier to use.

Snapshots, Not Differences

The major difference between Git and other VCSs, Subversion and its kin included, is the way they handle data. Conceptually, most other systems store information as a list of files: CVS, Subversion, Perforce, Bazaar, and so on manage a set of files and the changes made to each file over time (commonly called delta-based version control). Figure 4. Storing data as changes to each file over time. Git neither stores nor treats data this way. Instead, Git thinks of its data as a series of snapshots of a filesystem, and they are quite small. Every time you commit, or save the state of your project, Git records what all your files look like at that moment. To be efficient, if a file has not changed, Git doesn't store the file again; it stores only a link to the previous identical file. Git treats its data like a stream of snapshots. Figure 5. Storing snapshots of the project over time. This is what separates Git from nearly every other VCS: it makes Git reconsider almost every aspect of version control that most other systems inherited from the previous generation, and it approaches many things from a new perspective. Git is more like a small filesystem with powerful tools built on top of it than a simple VCS. The Git Branching chapter explains the benefits you gain from using Git branches.

Nearly Every Operation Is Local

Most operations in Git need only local files and resources, so no other computer on your network is required. If you're used to a CVCS, where most operations suffer network latency, Git will be a pleasant surprise: its speed in this regard is remarkable. Because the entire history of the project is on your local disk, most operations seem almost instantaneous. For example, to browse the project's history, Git doesn't need a server; it reads the history straight from your local database, so you see it in the blink of an eye. To compare a file's current version with its state a month ago, Git simply looks up the month-old file and the current file locally; it doesn't have to contact a remote server and fetch the old version first. This means you can work without interruption when you're offline or can't connect to your VPN. On a plane or a train, you can work and commit even without a network connection (remember, the repository is local). In other VCSs this is impossible: with Perforce, for example, there's not much you can do when you can't connect to the server; with Subversion or CVS, you can edit files, but you can't commit changes because your database is offline. This may seem minor, but once you actually run into the situation, the difference is very large.

Git Has Integrity

Git checksums all data before storing it and refers to that data by its checksum. This means you can't change the contents of any file or directory without Git knowing about it. Checksums are the most fundamental (atomic) data unit in Git and part of its basic philosophy: you can't lose a file's state, or even lose data, without Git being able to detect it. Git builds its checksums with SHA-1 hashes: 40-character hexadecimal strings computed from the contents of a file or the structure of a directory. A SHA-1 hash looks like this:

24b9da6552252987aa493b52f8696cd6d3b00373

You will see these hash values all over the place because Git identifies everything by hash. In fact, Git stores files in its database not by name but by the hash of their contents.

Git Generally Only Adds Data

Whatever you do with Git, data is only added to the Git database; there is little you can do that erases data or can't be undone. As with any VCS, you can lose changes you haven't committed yet, but once you commit a snapshot, it is very difficult to lose. This makes Git a joy to use, because you can experiment freely without fear of severely breaking your project. The section on undoing things shows how Git stores data and how to recover data that seems lost.

The Three States

Pay attention here: this part is important, and you must understand it to study Git. Git manages files in three states: committed, modified, and staged. Committed means the data is safely stored in your local database. Modified means you have changed a file but have not yet committed it to your local database. Staged means you have marked a modified file, in its current version, to go into your next commit. These three states correspond to the three main sections of a Git project: the Git directory, the working tree, and the staging area. Figure 6. Working tree, staging area, and Git directory. The Git directory is where Git stores the metadata and object database for your project. This is the core of Git, and it is what is created when you clone a repository from another computer. The working tree is a checkout of one version of the project: Git pulls the files out of the compressed database in the Git directory and places them on disk for you to use or modify. The staging area lives in the Git directory. It is a simple file that stores information about what will go into your next commit. Its technical name in Git parlance is the "index", but "staging area" works just as well. The basic Git workflow is: modify files in your working tree; stage the files into the staging area to build the snapshot you want to commit (you can add all files or select specific ones); commit the staged files, which stores the snapshot permanently in your Git directory. If a particular version of a file is in the Git directory, it is committed. If it has been modified and added to the staging area, it is staged. If it has been changed since checkout but not staged, it is modified. The Git Basics chapter covers these states in more detail, including how to take advantage of the staging area or skip it entirely. About this site: Patches, suggestions, and comments are welcome.
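The checksum scheme described above can be reproduced directly: Git's object ID for a file (a "blob") is the SHA-1 of a small header followed by the file contents. A minimal sketch (the function name is ours, not Git's):

```python
import hashlib

def git_blob_id(content: bytes) -> str:
    """Compute the object ID Git assigns to a blob.

    Git prepends a header of the form "blob <size>" plus a NUL byte
    to the file contents and takes the SHA-1 of the result, producing
    the 40-character hexadecimal checksum described above.
    """
    header = b"blob %d\x00" % len(content)
    return hashlib.sha1(header + content).hexdigest()

# Matches what `git hash-object --stdin` reports for the same bytes.
print(git_blob_id(b"test content\n"))  # d670460b4b4aece5915caf5c68d12f560a9fe3e4
```

This is why Git can detect any change to stored data: a single flipped byte in the content produces a completely different 40-character ID.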
2026-01-13T09:29:18
https://doc.rust-lang.org/cargo/reference/credential-provider-protocol.html#login-success-response
2026-01-13T09:29:18
https://sre.google/sre-book/evolving-sre-engagement-model/#id-lM2uPS8FgT0Tdukn
Google SRE - Production Readiness Review: Engagement Insight

Chapter 32 - The Evolving SRE Engagement Model
Written by Acacio Cruz and Ashish Bhambhani
Edited by Betsy Beyer and Tim Harvey

SRE Engagement: What, How, and Why

We've discussed in most of the rest of this book what happens when SRE is already in charge of a service.
Few services begin their lifecycle enjoying SRE support, so there needs to be a process for evaluating a service, making sure that it merits SRE support, negotiating how to improve any deficits that bar SRE support, and actually instituting SRE support. We call this process onboarding . If you are in an environment where you are surrounded by a lot of existing services in varying states of perfection, your SRE team will probably be running through a prioritized queue of onboardings for quite a while until the team has finished taking on the highest-value targets. Although this is very common, and a completely reasonable way of dealing with a fait accompli environment, there are actually at least two better ways of bringing the wisdom of production, and SRE support, to services old and new alike. In the first case, just as in software engineering—where the earlier the bug is found, the cheaper it is to fix—the earlier an SRE team consultation happens, the better the service will be and the quicker it will feel the benefit. When SRE is engaged during the earliest stages of design , the time to onboard is lowered and the service is more reliable "out of the gate," usually because we don't have to spend the time unwinding suboptimal design or implementation. Another way, perhaps the best, is to short-circuit the process by which specially created systems with lots of individual variations end up "arriving" at SRE's door. Provide product development with a platform of SRE-validated infrastructure, upon which they can build their systems. This platform will have the double benefit of being both reliable and scalable. This avoids certain classes of cognitive load problems entirely, and by addressing common infrastructure practices, allows product development teams to focus on innovation at the application layer, where it mostly belongs. 
In the following sections, we'll spend some time looking at each of these models in turn, beginning with the "classic" one, the PRR-driven model.

The PRR Model

The most typical initial step of SRE engagement is the Production Readiness Review (PRR), a process that identifies the reliability needs of a service based on its specific details. Through a PRR, SREs seek to apply what they've learned and experienced to ensure the reliability of a service operating in production. A PRR is considered a prerequisite for an SRE team to accept responsibility for managing the production aspects of a service.

Figure 32-1 illustrates the lifecycle of a typical service. The Production Readiness Review can be started at any point of the service lifecycle, but the stages at which SRE engagement is applied have expanded over time. This chapter describes the Simple PRR Model, then discusses how its evolution into the Early Engagement Model and the Frameworks and SRE Platform structure allowed SRE to scale the engagement process and its impact.

[Figure 32-1. A typical service lifecycle]

The SRE Engagement Model

SRE seeks production responsibility for important services for which it can make concrete contributions to reliability. SRE is concerned with several aspects of a service, which are collectively referred to as production. These aspects include the following:

- System architecture and interservice dependencies
- Instrumentation, metrics, and monitoring
- Emergency response
- Capacity planning
- Change management
- Performance: availability, latency, and efficiency

When SREs engage with a service, we aim to improve it along all of these axes, which makes managing production for the service easier.

Alternative Support

Not all Google services receive close SRE engagement. A couple of factors are at play here: many services don't need high reliability and availability, so support can be provided by other means.
By design, the number of development teams that request SRE support also exceeds the available bandwidth of SRE teams (see Introduction). When SRE can't provide full-fledged support, it provides other options for making improvements to production, such as documentation and consultation.

Documentation

Development guides are available for internal technologies and clients of widely used systems. Google's Production Guide documents production best practices for services, as determined by the experiences of SRE and development teams alike. Developers can implement the solutions and recommendations in such documentation to improve their services.

Consultation

Developers may also seek SRE consulting to discuss specific services or problem areas. The Launch Coordination Engineering (LCE) team (see Reliable Product Launches at Scale) spends a majority of its time consulting with development teams. SRE teams that aren't specifically dedicated to launch consultations also engage in consultation with development teams.

When a new service or a new feature has been implemented, developers usually consult with SRE for advice about preparing for the Launch phase. Launch consultation usually involves one or two SREs spending a few hours studying the design and implementation at a high level. The SRE consultants then meet with the development team to provide advice on risky areas that need attention and to discuss well-known patterns or solutions that can be incorporated to improve the service in production. Some of this advice may come from the Production Guide mentioned earlier.

Consultation sessions are necessarily broad in scope because it's not possible to gain a deep understanding of a given system in the limited time available. For some development teams, consultation is not sufficient:

- Services that have grown by orders of magnitude since they launched, which now require more time to understand than is feasible through documentation and consultation.
- Services upon which many other services have subsequently come to rely, and which now host significantly more traffic from many different clients.

These types of services may have grown to the point at which they begin to encounter significant difficulties in production while simultaneously becoming important to users. In such cases, long-term SRE engagement becomes necessary to ensure that they are properly maintained in production as they grow.

Production Readiness Reviews: Simple PRR Model

When a development team requests that SRE take over production management of a service, SRE gauges both the importance of the service and the availability of SRE teams. If the service merits SRE support, and the SRE team and development organization agree on staffing levels to facilitate this support, SRE initiates a Production Readiness Review with the development team.

The objectives of the Production Readiness Review are as follows:

- Verify that a service meets accepted standards of production setup and operational readiness, and that service owners are prepared to work with SRE and take advantage of SRE expertise.
- Improve the reliability of the service in production, and minimize the number and severity of incidents that might be expected.

A PRR targets all aspects of production that SRE cares about. After sufficient improvements are made and the service is deemed ready for SRE support, an SRE team assumes its production responsibilities.

This brings us to the Production Readiness Review process itself. There are three different but related engagement models (Simple PRR Model, Early Engagement Model, and Frameworks and SRE Platform), which will be discussed in turn. We will first describe the Simple PRR Model, which is usually targeted at a service that is already launched and will be taken over by an SRE team. A PRR follows several phases, much like a development lifecycle, although it may proceed independently in parallel with the development lifecycle.
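Much of what a PRR verifies is quantitative: once an SLO is agreed upon, the availability target directly fixes the error budget the service must live within. As an illustrative sketch (the 99.9% target and 30-day window below are hypothetical, not values prescribed by this chapter):

```python
def error_budget_seconds(availability_target: float, window_days: int = 30) -> float:
    """Translate an availability SLO into allowed downtime over the window."""
    window_seconds = window_days * 24 * 60 * 60
    return (1.0 - availability_target) * window_seconds

# A hypothetical 99.9% target over 30 days leaves about 2,592 seconds
# (roughly 43 minutes) of downtime before the SLO is violated.
monthly_budget = error_budget_seconds(0.999, window_days=30)
```

Arithmetic like this is what makes PRR findings concrete: a gap in monitoring or alerting is judged against how quickly the budget above can be consumed.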
Engagement

SRE leadership first decides which SRE team is a good fit for taking over the service. Usually one to three SREs are selected or self-nominated to conduct the PRR process. This small group then initiates discussion with the development team. The discussion covers matters such as:

- Establishing an SLO/SLA for the service
- Planning for potentially disruptive design changes required to improve reliability
- Planning and training schedules

The goal is to arrive at a common agreement about the process, end goals, and outcomes that are necessary for the SRE team to engage with the development team and their service.

Analysis

Analysis is the first large segment of work. During this phase, the SRE reviewers learn about the service and begin analyzing it for production shortcomings. They aim to gauge the maturity of the service along the various axes of concern to SRE. They also examine the service's design and implementation to check if it follows production best practices.

Usually, the SRE team establishes and maintains a PRR checklist explicitly for the Analysis phase. The checklist is specific to the service and is generally based on domain expertise, experience with related or similar systems, and best practices from the Production Guide. The SRE team may also consult other teams that have more experience with certain components or dependencies of the service. A few examples of checklist items include:

- Do updates to the service impact an unreasonably large percentage of the system at once?
- Does the service connect to the appropriate serving instance of its dependencies? For example, end-user requests to a service should not depend on a system that is designed for a batch-processing use case.
- Does the service request a sufficiently high network quality-of-service when talking to a critical remote service?
- Does the service report errors to central logging systems for analysis?
- Does it report all exceptional conditions that result in degraded responses or failures to the end users?
- Are all user-visible request failures well instrumented and monitored, with suitable alerting configured?

The checklist may also include operational standards and best practices followed by a specific SRE team. For example, a perfectly functional service configuration that doesn't follow an SRE team's "gold standard" might be refactored to work better with SRE tools for scalably managing configurations. SREs also look at recent incidents and postmortems for the service, as well as follow-up tasks for the incidents. This evaluation gauges the demands of emergency response for the service and the availability of well-established operational controls.

Improvements and Refactoring

The Analysis phase leads to the identification of recommended improvements for the service. This next phase proceeds as follows:

- Improvements are prioritized based upon importance for service reliability.
- The priorities are discussed and negotiated with the development team, and a plan of execution is agreed upon.
- Both SRE and product development teams participate and assist each other in refactoring parts of the service or implementing additional features.

This phase typically varies the most in duration and amount of effort. How much time and effort this phase will involve depends upon the availability of engineering time for refactoring, the maturity and complexity of the service at the start of the review, and myriad other factors.

Training

Responsibility for managing a service in production is generally assumed by an entire SRE team. To ensure that the team is prepared, the SRE reviewers who led the PRR take ownership of training the team, which includes the documentation necessary to support the service. Typically with the help and participation of the development team, these engineers organize a series of training sessions and exercises.
Instruction can include:

- Design overviews
- Deep dives on various request flows in the system
- A description of the production setup
- Hands-on exercises for various aspects of system operations

When the training is concluded, the SRE team should be prepared to manage the service.

Onboarding

The Training phase unblocks onboarding of the service by the SRE team. It involves a progressive transfer of responsibilities and ownership of various production aspects of the service, including parts of operations, the change management process, access rights, and so forth. The SRE team continues to focus on the various areas of production mentioned earlier. To complete the transition, the development team must be available to back up and advise the SRE team for a period of time as it settles in to managing production for the service. This relationship becomes the basis for the ongoing work between the teams.

Continuous Improvement

Active services continuously change in response to new demands and conditions, including user requests for new features, evolving system dependencies, and technology upgrades, in addition to other factors. The SRE team must maintain service reliability standards in the face of these changes by driving continuous improvement. The responsible SRE team naturally learns more about the service in the course of operating the service, reviewing new changes, responding to incidents, and especially when conducting postmortems and root-cause analyses. This expertise is shared with the development team as suggestions and proposals for changes to the service whenever new features, components, and dependencies may be added to the service. Lessons from managing the service are also contributed to best practices, which are documented in the Production Guide and elsewhere.

Engaging with Shakespeare

Initially, the developers of the Shakespeare service were responsible for the product, including carrying the pager for emergency response.
However, with growing use of the service and the growth of the revenue coming from the service, SRE support became desirable. The product had already been launched, so SRE conducted a Production Readiness Review. One of the things they found was that the dashboards did not completely cover some of the metrics defined in the SLO, so that needed to be fixed. After all the issues that had been filed were fixed, SRE took over the pager for the service, though two developers remained in the on-call rotation as well. The developers participate in the weekly on-call meeting, discussing last week's problems and how to handle upcoming large-scale maintenance or cluster turndowns. Future plans for the service are also now discussed with the SREs to make sure that new launches will go flawlessly (though Murphy's law is always looking for opportunities to spoil that).

Evolving the Simple PRR Model: Early Engagement

Thus far, we've discussed the Production Readiness Review as it's used in the Simple PRR Model, which is limited to services that have already entered the Launch phase. There are several limitations and costs associated with this model. For example:

- Additional communication between teams can increase process overhead for the development team, and cognitive burden for the SRE reviewers.
- The right SRE reviewers must be available, and capable of managing their time and priorities with regard to their existing engagements.
- Work done by SREs must be highly visible and sufficiently reviewed by the development team to ensure effective knowledge sharing. SREs should essentially work as a part of the development team, rather than as an external unit.

However, the main limitations of the PRR Model stem from the fact that the service is already launched and serving at scale, and the SRE engagement starts very late in the development lifecycle.
If the PRR occurred earlier in the service lifecycle, SRE's opportunity to remedy potential issues in the service would be markedly increased, and the success of the SRE engagement and the future success of the service itself would likely improve. Starting so late, by contrast, poses a significant challenge to both.

Candidates for Early Engagement

The Early Engagement Model introduces SRE earlier in the development lifecycle in order to achieve significant additional advantages. Applying the Early Engagement Model requires identifying the importance and/or business value of a service early in the development lifecycle, and determining whether the service will have sufficient scale or complexity to benefit from SRE expertise. Applicable services often have the following characteristics:

- The service implements significant new functionality and will be part of an existing system already managed by SRE.
- The service is a significant rewrite or alternative to an existing system, targeting the same use cases.
- The development team sought SRE advice or approached SRE for takeover upon launch.

The Early Engagement Model essentially immerses SREs in the development process. SRE's focus remains the same, though the means to achieve a better production service are different. SRE participates in the Design and later phases, eventually taking over the service at any time during or after the Build phase. This model is based on active collaboration between the development and SRE teams.

Benefits of the Early Engagement Model

While the Early Engagement Model does entail certain risks and challenges discussed previously, additional SRE expertise and collaboration during the entire lifecycle of the product creates significant benefits compared to an engagement initiated later in the service lifecycle.
Design phase

SRE collaboration during the Design phase can prevent a variety of problems or incidents from occurring later in production. While design decisions can be reversed or rectified later in the development lifecycle, such changes come at a high cost in terms of effort and complexity. The best production incidents are those that never happen! Occasionally, difficult trade-offs lead to the selection of a less-than-ideal design. Participation in the Design phase means that SREs are aware up front of the trade-offs and are part of the decision to pick a less-than-ideal option. Early SRE involvement aims to minimize future disputes over design choices once the service is in production.

Build and implementation

The Build phase addresses production aspects such as instrumentation and metrics, operational and emergency controls, resource usage, and efficiency. During this phase, SRE can influence and improve the implementation by recommending specific existing libraries and components, or helping build certain controls into the system. SRE participation at this stage helps enable ease of operations in the future and allows SRE to gain operational experience in advance of the launch.

Launch

SRE can also help implement widely used launch patterns and controls. For example, SRE might help implement a "dark launch" setup, in which part of the traffic from existing users is sent to the new service in addition to being sent to the live production service. The responses from the new service are "dark" because they are thrown away and not actually shown to users. Practices such as dark launches allow the team to gain operational insight, resolve issues without impacting existing users, and reduce the risk of encountering issues after launch. A smooth launch is immensely helpful in keeping the operational burden low and maintaining the development momentum after the launch.
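The dark-launch setup described above can be sketched as a frontend that serves every user from the live backend while mirroring a fraction of requests to the new candidate and discarding its responses. This is a minimal illustrative sketch, not Google's actual implementation; the backend callables and the mirror fraction are assumptions:

```python
import random

class DarkLaunchFrontend:
    """Serve from the live backend; mirror a sample of traffic to the candidate."""

    def __init__(self, live, candidate, mirror_fraction=0.05):
        self.live = live                      # callable: request -> response
        self.candidate = candidate            # new service under dark launch
        self.mirror_fraction = mirror_fraction
        self.mirrored = 0

    def handle(self, request):
        response = self.live(request)         # only this answer reaches the user
        if random.random() < self.mirror_fraction:
            self.mirrored += 1
            try:
                self.candidate(request)       # response is "dark": thrown away
            except Exception:
                pass                          # candidate failures never reach users
        return response
```

In practice the mirrored call would be made asynchronously so the candidate's latency and errors cannot affect the live serving path; the sketch keeps it inline only for brevity.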
Disruptions around launch can easily result in emergency changes to source code and production, and can disrupt the development team's work on future features.

Post-launch

Having a stable system at launch time generally leads to fewer conflicting priorities for the development team when choosing between improving service reliability and adding new features. In later phases of the service, the lessons from earlier phases can better inform refactoring or redesign. With extended involvement, the SRE team can be ready to take over the new service much sooner than is possible with the Simple PRR Model. The longer and closer engagement between the SRE and development teams also creates a collaborative relationship that can be sustained long term. A positive cross-team relationship fosters a mutual feeling of solidarity, and helps SRE establish ownership of the production responsibility.

Disengaging from a service

Sometimes a service doesn't warrant full-fledged SRE team management—this determination might be made post-launch, or SRE might engage with a service but never officially take it over. This is a positive outcome, because the service has been engineered to be reliable and low maintenance, and can therefore remain with the development team. It is also possible that SRE engages early with a service that fails to meet the levels of usage projected. In such cases, the SRE effort spent is simply part of the overall business risk that comes with new projects, and a small cost relative to the success of projects that meet expected scale. The SRE team can be reassigned, and lessons learned can be incorporated into the engagement process.

Evolving Services Development: Frameworks and SRE Platform

The Early Engagement Model made strides in evolving SRE engagement beyond the Simple PRR Model, which applied only to services that had already launched. However, there was still progress to be made in scaling SRE engagement to the next level by designing for reliability.
Lessons Learned

Over time, the SRE engagement model described thus far produced several distinct patterns:

- Onboarding each service required two or three SREs and typically lasted two or three quarters. The lead times for a PRR were relatively high (quarters away). The effort level required was proportional to the number of services under review, and was constrained by the insufficient number of SREs available to conduct PRRs. These conditions led to serialization of service takeovers and strict service prioritization.
- Due to differing software practices across services, each production feature was implemented differently. To meet PRR-driven standards, features usually had to be reimplemented specifically for each service or, at best, once for each small subset of services sharing code. These reimplementations were a waste of engineering effort. One canonical example is the repeated implementation of functionally similar logging frameworks in the same language, because different services didn't share the same code structure.
- A review of common service issues and outages revealed certain patterns, but there was no way to easily replicate fixes and improvements across services. Typical examples included service overload situations and data hot-spotting.
- SRE software engineering contributions were often local to the service, which made building generic, reusable solutions difficult. As a consequence, there was no easy way to apply the new lessons individual SRE teams learned, or emerging best practices, across services that had already been onboarded.

External Factors Affecting SRE

External factors have traditionally pressured the SRE organization and its resources in several ways. Google is increasingly following the industry trend of moving toward microservices.[151] As a result, both the number of requests for SRE support and the cardinality of services to support have increased.
Because each service has a base fixed operational cost, even simple services demand more staffing. Microservices also imply an expectation of lower lead time for deployment, which was not possible with the previous PRR model (which had a lead time of months).

Hiring experienced, qualified SREs is difficult and costly. Despite enormous effort from the recruiting organization, there are never enough SREs to support all the services that need their expertise. Once SREs are hired, their training is also a lengthier process than is typical for development engineers.

Finally, the SRE organization is responsible for serving the needs of the large and growing number of development teams that do not already enjoy direct SRE support. This mandate calls for extending the SRE support model far beyond the original concept and engagement model.

Toward a Structural Solution: Frameworks

To effectively respond to these conditions, it became necessary to develop a model that allowed for the following principles:

- Codified best practices: The ability to commit what works well in production to code, so services can simply use this code and become "production ready" by design.
- Reusable solutions: Common and easily shareable implementations of techniques used to mitigate scalability and reliability issues.
- A common production platform with a common control surface: Uniform sets of interfaces to production facilities, uniform sets of operational controls, and uniform monitoring, logging, and configuration for all services.
- Easier automation and smarter systems: A common control surface that enables automation and smart systems at a level not possible before. For example, SREs can readily receive a single view of relevant information for an outage, rather than hand-collecting and analyzing mostly raw data from disparate sources (logs, monitoring data, and so on).
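As a hypothetical sketch of the "reusable solutions" and "common control surface" principles, consider a load-shedding module that every framework-built service could share: one declarative config plus one overload measure that all control systems agree on. The field names and thresholds below are invented for illustration, not taken from Google's frameworks:

```python
# Illustrative "standard" load-shedding config a framework might accept.
LOAD_SHED_CONFIG = {
    "max_inflight_requests": 100,  # capacity of a single server
    "overload_threshold": 0.8,     # utilization above which shedding begins
    "shed_priority_below": 2,      # drop only requests below this priority
}

def utilization(inflight: int, config: dict) -> float:
    """The one overload measure every control system agrees on."""
    return inflight / config["max_inflight_requests"]

def should_shed(inflight: int, priority: int, config: dict) -> bool:
    """Shed low-priority work once the server is past its overload threshold."""
    overloaded = utilization(inflight, config) >= config["overload_threshold"]
    return overloaded and priority < config["shed_priority_below"]
```

Because every service would compute overload the same way, platform-level automation could reason about any service's load state without service-specific glue code.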
Based upon these principles, a set of SRE-supported platform and service frameworks was created, one for each environment we support (Java, C++, Go). Services built using these frameworks share implementations that are designed to work with the SRE-supported platform, and are maintained by both SRE and development teams. The main shift brought about by frameworks was to enable product development teams to design applications using the framework solution that was built and blessed by SRE, as opposed to either retrofitting the application to SRE specifications after the fact, or retrofitting more SREs to support a service that was markedly different from other Google services.

An application typically comprises some business logic, which in turn depends on various infrastructure components. SRE production concerns are largely focused on the infrastructure-related parts of a service. The service frameworks implement infrastructure code in a standardized fashion and address various production concerns. Each concern is encapsulated in one or more framework modules, each of which provides a cohesive solution for a problem domain or infrastructure dependency. Framework modules address the various SRE concerns enumerated earlier, such as:

- Instrumentation and metrics
- Request logging
- Control systems involving traffic and load management

SRE builds framework modules to implement canonical solutions for the concerned production area. As a result, development teams can focus on the business logic, because the framework already takes care of correct infrastructure use. A framework is essentially a prescriptive implementation for using a set of software components and a canonical way of combining these components. The framework can also expose features that control various components in a cohesive manner.
For example, a framework might provide the following:

- Business logic organized as well-defined semantic components that can be referenced using standard terms
- Standard dimensions for monitoring instrumentation
- A standard format for request debugging logs
- A standard configuration format for managing load shedding
- A capacity measure for a single server and a determination of "overload" that both feed a semantically consistent signal to various control systems

Frameworks provide multiple upfront gains in consistency and efficiency. They free developers from having to glue together and configure individual components in an ad hoc, service-specific manner, in ever-so-slightly incompatible ways that then have to be manually reviewed by SREs. They drive a single reusable solution for production concerns across services, which means that framework users end up with the same common implementation and minimal configuration differences.

Google supports several major languages for application development, and frameworks are implemented across all of these languages. While different implementations of the framework (say, in C++ versus Java) can't share code, the goal is to expose the same API, behavior, configuration, and controls for identical functionality. Therefore, development teams can choose the language platform that fits their needs and experience, while SREs can still expect the same familiar behavior in production and standard tools to manage the service.

New Service and Management Benefits

The structural approach, founded on service frameworks and a common production platform and control surface, provided a host of new benefits.

Significantly lower operational overhead

A production platform built on top of frameworks with stronger conventions significantly reduced operational overhead, for the following reasons:

- It supports strong conformance tests for coding structure, dependencies, tests, coding style guides, and so on.
This functionality also improves user data privacy, testing, and security conformance.

- It features built-in service deployment, monitoring, and automation for all services.
- It facilitates easier management of large numbers of services, especially microservices, which are growing in number.
- It enables much faster deployment: an idea can graduate to fully deployed, SRE-level production quality in a matter of days!

Universal support by design

The constant growth in the number of services at Google means that most of these services can neither warrant SRE engagement nor be maintained by SREs. Regardless, services that don't receive full SRE support can be built to use production features that are developed and maintained by SREs. This practice effectively breaks the SRE staffing barrier. Enabling SRE-supported production standards and tools for all teams improves the overall service quality across Google. Furthermore, all services that are implemented with frameworks automatically benefit from improvements made over time to framework modules.

Faster, lower overhead engagements

The frameworks approach results in faster PRR execution because we can rely upon:

- Built-in service features as part of the framework implementation
- Faster service onboarding (usually accomplished by a single SRE during one quarter)
- Less cognitive burden for the SRE teams managing services built using frameworks

These properties allow SRE teams to lower the assessment and qualification effort for service onboarding, while maintaining a high bar on service production quality.

A new engagement model based on shared responsibility

The original SRE engagement model presented only two options: either full SRE support, or approximately no SRE engagement.[152]
A production platform with a common service structure, conventions, and software infrastructure made it possible for an SRE team to provide support for the "platform" infrastructure, while the development teams provide on-call support for functional issues with the service—that is, for bugs in the application code. Under this model, SREs assume responsibility for the development and maintenance of large parts of service software infrastructure, particularly control systems such as load shedding, overload handling, automation, traffic management, logging, and monitoring.

This model represents a significant departure from the way service management was originally conceived, in two major ways: it entails a new relationship model for the interaction between SRE and development teams, and a new staffing model for SRE-supported service management.[153]

Conclusion

Service reliability can be improved through SRE engagement, in a process that includes systematic review and improvement of a service's production aspects. Google SRE's initial such systematic approach, the Simple Production Readiness Review, made strides in standardizing the SRE engagement model, but was only applicable to services that had already entered the Launch phase.

Over time, SRE extended and improved this model. The Early Engagement Model involved SRE earlier in the development lifecycle in order to "design for reliability." As demand for SRE expertise continued to grow, the need for a more scalable engagement model became increasingly apparent. Frameworks for production services were developed to meet this demand: code patterns based on production best practices were standardized and encapsulated in frameworks, so that use of frameworks became a recommended, consistent, and relatively simple way of building production-ready services.

All three of the engagement models described are still practiced within Google.
However, the adoption of frameworks is becoming a prominent influence on building production-ready services at Google, profoundly expanding the SRE contribution, lowering service management overhead, and improving baseline service quality across the organization.

[151] See the Wikipedia page on microservices at https://en.wikipedia.org/wiki/Microservices.

[152] Occasionally, there were consulting engagements by SRE teams with some non-onboarded services, but consultations were a best-effort approach and limited in number and scope.

[153] The new model of service management changes the SRE staffing model in two ways: (1) because a lot of service technology is common, it reduces the number of required SREs per service; (2) it enables the creation of production platforms with separation of concerns between production platform support (done by SREs) and service-specific business-logic support, which remains with the development team. These platform teams are staffed based upon the need to maintain the platform rather than upon service count, and can be shared across products.

Copyright © 2017 Google, Inc. Published by O'Reilly Media, Inc. Licensed under CC BY-NC-ND 4.0
2026-01-13T09:29:18
https://www.infoblox.com/products/advanced-dns-protection/?trk=products_details_guest_secondary_call_to_action
Infoblox DNS Infrastructure Protection | Stop DNS Attacks

Home » Products » Infoblox DNS Infrastructure Protection

INFOBLOX DNS INFRASTRUCTURE PROTECTION
STOP DNS AND DDOS ATTACKS. KEEP YOUR BUSINESS ONLINE.
Protect critical infrastructure from DDoS and other DNS-based attacks. Keep websites and applications online—even while DNS infrastructure is under attack.
Keep critical web apps up and running even during attacks
DNS is the hidden backbone of digital operations. Every website, app and user relies on it to stay connected. If external or internal DNS goes down, it can cost millions in lost revenue and lost customer trust. Infoblox DNS Infrastructure Protection (formerly Advanced DNS Protection) stops DDoS and other attacks targeting DNS servers. This NIOS software add-on keeps web apps up and running—even when DNS infrastructure is under heavy attack.

BENEFITS: Why DNS Infrastructure Protection?
- Always available: Prevent hours to days of downtime from DNS attacks
- Broad protection against DDoS, hijacking, cache poisoning and other DNS attacks
- Adaptive defense uses Infoblox Threat Adapt to detect evolving DNS threats

FEATURES: Keep mission-critical web apps online
DNS Infrastructure Protection shields websites, email and other apps from the worst kinds of attacks: DDoS, hijacking, cache poisoning and other DNS threats that can knock them offline for days.

Detect and stop DDoS attacks and other DNS threats while continuing to process legitimate traffic
DNS Infrastructure Protection continually detects and blocks a wide range of threats targeting critical DNS services and prevents attackers from manipulating DNS integrity. It blocks both volumetric attacks, like DDoS and NXDOMAIN, and non-volumetric exploits, like DNS hijacking, cache poisoning and other threats targeting external and internal DNS servers. Unlike one-size-fits-all approaches that can block legitimate traffic, DNS Infrastructure Protection uses continually updated intelligence to identify genuine threats.
Meanwhile, it keeps real traffic flowing so your business stays up and running, even while DNS infrastructure is under heavy attack.

Continually update defenses, avoid hours or days of expensive downtime
Threats like DNS hijacking and cache poisoning can be among the worst types of attacks. Because DNS data is distributed across global internet servers, if an attacker alters your business's DNS records, it can take hours—even days—to correct all that bad data. During that time, your websites, email and other online apps may be totally unreachable. DNS Infrastructure Protection uses Infoblox Threat Adapt intelligence to block DNS attacks before they disrupt your business. By combining independent research into evolving cyberattack techniques with insights from anonymized DNS data across Infoblox's global customer base, it continually updates your defenses against the latest DNS threats.

Get detailed views of attack patterns across your distributed network
DNS Infrastructure Protection provides pervasive, centralized visibility into DNS attack patterns, both in real time and historically. NetOps and SecOps teams can track threat sources and view points of attack across the environment to accelerate mitigation and continually improve defenses. DNS Infrastructure Protection blocks evolving internet threats to keep websites and apps online—even while DNS infrastructure is under attack.

CRITICAL INFRASTRUCTURE PROTECTION
Protect critical infrastructure against DNS attacks
DNS Infrastructure Protection shields the business from DDoS and other attacks while keeping legitimate traffic flowing. Using Threat Adapt intelligence, it continually updates defenses against new and evolving internet threats. Over 13,000 customers worldwide. Trusted by 75% of the Fortune 500.

RESOURCES
DNS Infrastructure Protection resources
Explore our resources to learn more about how you can protect critical DNS infrastructure and keep your business online.
Datasheets
Explore comprehensive datasheets to learn how DNS Infrastructure Protection stops DDoS and other attacks targeting DNS servers.
- Infoblox DNS Infrastructure Protection: Minimize business disruptions caused by attacks on critical DNS services.

Product Videos
Learn how DNS Infrastructure Protection blocks DDoS and other threats attacking external DNS servers, while continuing to process legitimate traffic to keep the business online.
- DNS Infrastructure Protection: Learn how DNS data can help SecOps fill visibility gaps left by other solutions.
2026-01-13T09:29:18
https://www.linkedin.com/products/cardano-foundation-cardano/?trk=products_details_guest_other_products_by_org_section_product_link_result-card_image-click
Cardano | LinkedIn

Cardano
Blockchain Platforms by Cardano Foundation

About
Cardano is a public, permissionless proof-of-stake blockchain infrastructure. The first founded on peer-reviewed research, it has robust engineering and provides maximum security, scalability, and decentralization. Cardano is maintained by a global network of experts, developers, and organizations.

Media
Building the Future with Cardano: Built for real-world impact, Cardano empowers developers, enterprises, and institutions to reimagine what is possible. From decentralized finance to infrastructure modernization, and far beyond, you can build it with Cardano.
The Key Advantages of Cardano Blockchain

Similar products (Blockchain Platforms): Quartz, Ethereum, Blockchain as a Service, Stellar, Polkadot, Immutable

Cardano Foundation products: Cardano Academy (Online Course Platforms), Reeve (Enterprise Accounting Software), Veridian (Identity & Access Management (IAM) Software)
2026-01-13T09:29:18
https://il.linkedin.com/company/collidervc
Collider | LinkedIn

Collider
Venture Capital and Private Equity Principals
Shaping the Future of the Decentralized Internet.

About
Shaping the Future of the Decentralized Internet. We work alongside startups that are focused on building disruptive technologies, services and products, globally.
Website: https://www.collider.vc/
Industry: Venture Capital and Private Equity Principals
Company size: 11-50 employees
Headquarters: Tel Aviv
Type: Partnership
Founded: 2018
Location: HaArba'a St 28, Tel Aviv, 6473925, IL

Employees at Collider: Eyal Bar-Zvi, Yoav Tzuker, Adam Benayoun, Ben Kong

Updates

Collider (2,951 followers), 5 months ago:
Happy to share that we are taking part in the Blockchain Experts Conference organized by CryptoJungle. The conference takes place on September 15, 2025 at Avenue and will host over 1,200 senior figures from the blockchain and crypto industry, the financial and banking sectors, regulation and government, law and accounting, and more! 🎟 We have a coupon code for 15% off tickets: COLLI15. See you there!

Collider, 5 months ago:
Thanks to everyone who came to celebrate 10 years of Ethereum with us! Founders, friends and partners from across the ecosystem. Great people, good vibes and amazing bourekas! Special thanks to the Ethereum Foundation, SSV Labs, Starknet Foundation and Fireblocks for partnering on this event. See you at the 20th anniversary.

Collider, 5 months ago:
This coming Wednesday, July 30, we are celebrating 10 years since Ethereum's first block (happy birthday!), the second-largest currency in the crypto industry and the world's largest decentralized finance infrastructure. We start at 17:30 at the Collider offices. At 18:00 we join the live broadcast with Ethereum communities around the world, and from there we continue with drinks and refreshments on our rooftop, overlooking a late-summer Tel Aviv sunset. The event is open to the community and is held in partnership with the Ethereum Foundation, ssv.network and Starknet Foundation. Register here: https://lu.ma/4qvn1wv9 (Ethereum 10 Year Anniversary Tel Aviv · Luma)

Collider, 5 months ago:
We are proud to publish the Israel Crypto Industry Report for 2025.
📍 What happened in 2024 that propelled the crypto industry to center stage?
📍 Why did the financial giants ignore, dismiss and laugh, and why do they lead adoption today?
📍 And where do Israeli entrepreneurship and academia sit on the map?
2024 was the year everything changed:
✅ Approval of Bitcoin and Ethereum ETFs: a stamp of legitimacy for the industry
✅ The institutions are coming: BlackRock, Fidelity, Franklin Templeton, JPMorgan
✅ The decentralized economy soared to a value of $140 billion
✅ Over $200 billion in dollar-pegged stablecoins
✅ Tens of millions of "everyday" users on decentralized applications such as Polymarket
✅ The election of Trump and a US administration that supports and promotes the industry
The bottom line? Crypto is moving from an isolated tech industry to the financial infrastructure of the world.
📘 What is in the report?
DeFi: a decentralized financial industry that is open, efficient and cheap
RWA: bonds, stocks and real estate on the blockchain, available, transparent and active 24/7
ZK & FHE: maximum protection of user privacy
DePIN: decentralized physical infrastructure for communications and storage
AI: autonomous agents on-chain, with decentralization, control and privacy
🇮🇱 And what about Israel?
- 145 startups
- $3 billion raised
- Fireblocks, Kaspa, StarkWare, eToro: unicorns that are all blue-and-white
- FHE, ZK, MPC, wallet security: Israeli academic researchers lead the world in encryption, security and privacy
Enjoy the read. #Crypto #Israel #Collider #Blockchain

Collider reposted Avishay Ovadia, 5 months ago:
After the drama of the past few days in the House of Representatives, something big has happened in the US: Trump's policy of promoting and regulating crypto is winning. And meanwhile in Israel? Asleep at the wheel. Just before the summer recess, the Trump administration is keeping its campaign promise and passing three laws (one enacted, two on their way to the Senate), paving the way for the US crypto industry to take off and establishing the country as the financial superpower of the new world:
📌 The GENIUS Act. Purpose: regulating dollar-pegged stablecoins. Achievement: paves the way for trillions of dollars on the blockchain. Politics: passed 308 in favor to 122 against, with 102 Democrats supporting.
📌 The CLARITY Act. Purpose: regulating digital currencies and defining which regulator oversees each area. Achievement: paves the way for regulatory clarity and encourages entrepreneurship in the US. Politics: passed 294 in favor to 134 against, with 78 Democrats supporting.
📌 The Anti-CBDC Act. Purpose: preventing the central bank from issuing a dollar-pegged digital currency. Achievement: takes off the table a massive financial instrument that could dramatically harm citizens' privacy. Politics: passed narrowly, 219 in favor to 210 against, with only two Democrats supporting.
It is remarkable to see the world's strongest power, technologically and financially, build a political coalition around the understanding that crypto is a force that must not be ceded to innovation from another country. Trump understands this; so do the Democrats, despite the politics. In Israel, one can only fantasize about a government crypto strategy, let alone one executed quickly and efficiently.

Collider reposted Avishay Ovadia, 6 months ago (edited):
History in the Knesset. For the first time, a committee on a crypto strategy for Israel, at the initiative of our crypto fund, Collider. Israel cannot afford to miss this enormous growth engine for the country. Collider is proud to be part of the initiative of the crypto companies forum led by Nir Hirschmann Rub, together with the Israeli industry leaders Fireblocks, StarkWare, Aleph, BitsOfGold, SSVNetwork, Masterkey, DcentraLab and BitcoinIL.

Collider, 10 months ago:
Tal S. will be in ETHDenver from February 26 to March 1—if you're around, reach out and say hi!

Collider, 11 months ago:
Our partner Tal S. shares his thoughts on how open source can shape the future of insurance, using Nexus Mutual as a key example. Nexus is positioning itself as the Lloyd's of London for the onchain economy and, as we predict, will become a key player in the insurance industry for years to come.

Collider reposted Building Blocks (199 followers), 11 months ago:
Building Blocks 25: Early bird tickets go live tomorrow, bringing you closer to MENA's top web3 dev conference 🐤
Ticket Drop Details: 📅 Date: Tomorrow 🕑 Time: 14:00 GMT 💲 Genesis Price: $119 (limited availability)
What Awaits at BB25: A combination of local and global crypto and tech talent, 800+ carefully curated attendees:
- Leading cryptographers and protocol researchers from all over the world
- Web2 tech professionals and experienced developers exploring web3
- Everyone in the local crypto community
- Leading investors and industry veterans
🛠️ Technical Deep Dives: Immerse yourself in the cutting edge of crypto development:
- Zero-Knowledge Proofs and advanced cryptography
- MEV and DeFi infrastructure
- Fully Homomorphic Encryption (FHE) and the latest in privacy tech
- Multi-Party Computation (MPC) and the future of digital asset custody
- Re-staking protocols and mechanisms
🪜 Practical Building Tracks:
- Hands-on workshops for implementing decentralized applications
- Security considerations and best practices
- Token economics and mechanism design
- Latest frameworks and development tools
✳️ Startup Focus:
- Live pitch competition for emerging projects
- Marketing strategies for web3 ventures
- Fundraising insights and investor connections
- Real-world case studies from successful founders

Collider, 11 months ago:
This week, Tal S. from our team participated in the Sui event at Ben-Gurion University of the Negev. Tal explored the Israeli crypto ecosystem, discussed what builders are currently working on, and showcased how blockchains can drive the next generation of social networks. Thanks for the invitation! It's always great to be part of the growing community at BGU! Blockchain B7, Ben-Gurion University of the Negev, MASTERKEY Venture Capital, Itay Alon (伊泰), Eyar Doron, Amit Shilo
2026-01-13T09:29:18
https://www.linkedin.com/products/categories/web-hosting
Best Web Hosting | Products | LinkedIn

Used by: Web Developer (51), Web Designer (41), Owner (26), Information Technology Specialist (19), Information Technology Manager (13)

Find top products in the Web Hosting category
Software used to make websites accessible on the Internet through the rental of server space.
- Use remote storage for shared, reseller, dedicated, virtual private server, colocation, or cloud hosting
- Make sites available per storage limits, bandwidth allowance, and with minimum downtime for server maintenance
- Integrate with file and content management systems, server-side code, databases, and e-commerce platforms
- Keep sites secure with backups, firewalls, and malware detection
Can include domain name registration, email accounts, and website builders.
901 results

IPv4 Address Leasing Service
Web Hosting by LARUS Limited
LARUS Limited offers leasing of IPv4 addresses, which are unique numerical identifiers assigned to devices connected to the internet. Leasing IPv4 addresses allows businesses to obtain the IP addresses they need for their online operations without having to purchase them outright. This can be a cost-effective solution for companies that need a temporary or flexible supply of IP addresses. LARUS Limited offers a range of leasing options to suit different needs, with competitive pricing and reliable technical support. Lease IPv4 addresses directly from LARUS's pools and use them like your own, without limitation regarding geography and usage.
Pantheon Platform
Web Hosting by Pantheon
Pantheon is the only WebOps platform built to unify web infrastructure, workflows, and governance for Drupal, WordPress, and Next.js. By eliminating outdated tech and tedious sysadmin tasks, Pantheon frees your team to focus on delivering exceptional results and creating extraordinary digital experiences. Trusted by innovative brands like Tableau, Uber, Okta, Home Depot, and Doctors Without Borders, Pantheon powers over 700,000 websites and drives 17 billion monthly pageviews. With a lightning-fast content delivery network, always-on security, and effortless scalability, we empower web teams to build, iterate, and launch with confidence.

Elastic Metal
Web Hosting by Scaleway
Elastic Metal combines powerful dedicated servers with flexible pricing and native integration into Scaleway's complete cloud ecosystem. With this range of bare metal, you get full control of the resources and applications installed on the server, for maximum performance and security. Elastic Metal servers are available on two different billing plans: hourly or monthly.

Web Hosting
Web Hosting by SiteGround
At SiteGround, we offer fast and secure website hosting for personal and business websites with an agenda to succeed.
- Easy website start or transfer
- Daily backups, out-of-the-box caching, free SSL & CDN
- Optimized for popular CMS applications
- Advanced solutions for ecommerce
- Expert hosting support available 24/7
- Trusted by the owners of over 2,000,000 domains

WooCommerce Hosting
Web Hosting by SiteGround
SiteGround's Managed WooCommerce Hosting is designed to help clients sell more online. It is suitable both for new WooCommerce stores and for established Woo sites.
With SiteGround you get:
- WooCommerce pre-installed
- Managed WP & Woo updates
- Fast performance thanks to free CDN and smart caching
- Top security with our free SSL, AI anti-bot system, and custom WAF
- High availability with our daily backups and restore functionality
- 24/7 expert hosting support, and much more

Cloudways Autonomous
Web Hosting by Cloudways
Experience fully managed WordPress hosting that auto-scales with your growing business. Launch your app in under a minute on a self-scaling, powerful infrastructure that is pre-configured to scale up and down according to traffic demand. For the tech-savvy: this infrastructure is built on Kubernetes, with integrated Cloudflare Enterprise and Object Cache Pro to help ensure top-notch performance and security. Enjoy free 24/7/365 technical support, so you're never left alone when you need assistance. Cloudways Autonomous makes high-performance WordPress hosting simple and accessible, combining advanced infrastructure with user-friendly management. Join the 100,000+ SMBs who scale with Cloudways!

Sites
Web Hosting by Appwrite
Sites: the open-source Vercel alternative.
✅ Deploy your site: Monitor your deployment logs to ensure a stable connection with the Appwrite Network.
✅ Integrate with your backend: Easily integrate Appwrite's backend products with your site. Zero configuration needed.
✅ Scale safely and deliver widely: Make your product available anywhere in the world through our global CDN.

Front-end Hosting
Web Hosting by Contentstack
Accelerate digital delivery with automated, MACH-compliant hosting. Publish updates in minutes, auto-redeploy across regions, and ensure consistently fast experiences through global cache refresh.
Leaseweb Colocation
Web Hosting by Leaseweb
Our colocation solutions allow you to interconnect with cloud providers and within the Leaseweb Network to create hybrid architectures.

one.com Web Hosting
Web Hosting by one.com
Web hosting with one.com is affordable, fast and secure to keep your website up and running. A free SSL certificate and 24/7 support are included in all plans.
2026-01-13T09:29:18
https://web3.okx.com
OKX Wallet: One Crypto Wallet To Web3, DeFi & Multi-Chains | OKX Wallet

One crypto wallet, 130+ native chains

Top features
Experience the power of decentralized applications where you are in complete control of your crypto and your data.
- Store: Your own self-managed crypto wallet is your passport to DApps, known to be the most secure crypto wallet
- Analyze: Simple access to real-time onchain data, including customizable market signals and alerts
- Swap: Hop over from one crypto to another seamlessly with the best price matching between millions of supported tokens
- Trade: Trade like a pro using our advanced trading tools and access to hundreds of decentralized liquidity pools for best price execution
- Earn: Discover the best staking offers and earn crypto rewards on exclusive onchain offers
- Connect: Your wallet is compatible with all major networks, along with features like WalletConnect

Store
Store any crypto on OKX Wallet, including BTC, ETH, USDT, or meme tokens and NFTs. Our wallet allows you to create up to 1,000 sub-accounts at once, and you can fully manage your keys.
Connect wallet Swap Swap or trade on your terms using one of the three modes: Easy, Advanced or Meme Mode. We make it easy for beginners, and advanced enough for the pros and natives. Start trading Stake Put your crypto to work by discovering networks that give you back rewards for staking your crypto. You can now explore hundreds of rewards in one place using your OKX Self Managed Wallet. Discover staking Analyze Make better decisions with a real-time data terminal; monitoring price, liquidity, volume and trades on every major DEX and blockchain network. Track smart money and setup unlimited custom alerts. Learn more SECURITY Advanced self-custody technology and world-class security standards Self-custody True crypto ownership. Your assets are secured by seed phrase, private key, biometric authentication and encryption technology. Proactive protection Day-to-day proactive protection from threats such as malicious domains and smart contracts. Verified by security audits Our technology undergoes frequent and rigorous review by the world’s leading cybersecurity and blockchain experts. Your gateway to Web3 networks Explore FAQs How do I get started? You can sign up on web or download our OKX Wallet app from the app stores. We also offer an OKX Wallet Extension for web, and seamless Wallet Connect compatibility. Is OKX Wallet app right for me? There are a lot of crypto wallets you can choose from. OKX Wallet app is the most interoperable wallet in the industry. That means you can use one-wallet for all of your crypto needs. All from storing, to analyzing, trading or creating digital art. Is OKX Wallet app a custodial wallet? No. OKX Wallet app is a self-managed crypto wallet. This means you are the only one in control of your assets. This means you can recover your crypto from any other wallet that supports your crypto using your private keys. What are the top use cases of OKX Wallet app? 
Our wallet app is mostly used by crypto natives to store crypto in a secure wallet with top grade security controls. People use the OKX Wallet to also access its native DApps, among which, the DEX router is the most popular DApp. Think of this DApp like an expedia. It finds the best prices of your favorite crypto, and executes the best price using more than 100+ liquidity pools and top DEXs. You can also use the wallet for creating and storing digital assets like NFTs. OKX Wallet Store, swap, and manage your entire crypto portfolio in one wallet Connect wallet Explore DApps ©2017 - 2026 WEB3.OKX.COM English 简体中文 繁體中文 Tiếng Việt Русский Español (Latinoamérica) Bahasa Indonesia Français Deutsch Italiano Polski Čeština Română Português (Portugal) Português (Brasil) Українська Español (España) Nederlands العربية 日本語 Norsk (bokmål) Suomi Svenska Türkçe More about OKX Wallet Download Learn About us Careers Contact us Terms of Service Privacy Notice X (formerly Twitter) Product Tokens Swap Marketplace Earn Discover Build Explorer Security Support Support center Channel verification Announcements DEX fee schedule Connect with OKX Bitcoin wallet Ethereum wallet Solana wallet
2026-01-13T09:29:18
https://sre.google/sre-book/evolving-sre-engagement-model/#id-8ZQuEFQt2TVTyu7K
Google SRE - Production Readiness Review: Engagement Insight

Chapter 32 - The Evolving SRE Engagement Model

Written by Acacio Cruz and Ashish Bhambhani
Edited by Betsy Beyer and Tim Harvey

SRE Engagement: What, How, and Why

We've discussed in most of the rest of this book what happens when SRE is already in charge of a service.
Few services begin their lifecycle enjoying SRE support, so there needs to be a process for evaluating a service, making sure that it merits SRE support, negotiating how to improve any deficits that bar SRE support, and actually instituting SRE support. We call this process onboarding. If you are in an environment where you are surrounded by a lot of existing services in varying states of perfection, your SRE team will probably be running through a prioritized queue of onboardings for quite a while until the team has finished taking on the highest-value targets. Although this is very common, and a completely reasonable way of dealing with a fait accompli environment, there are actually at least two better ways of bringing the wisdom of production, and SRE support, to services old and new alike. In the first case, just as in software engineering—where the earlier the bug is found, the cheaper it is to fix—the earlier an SRE team consultation happens, the better the service will be and the quicker it will feel the benefit. When SRE is engaged during the earliest stages of design, the time to onboard is lowered and the service is more reliable "out of the gate," usually because we don't have to spend the time unwinding suboptimal design or implementation. Another way, perhaps the best, is to short-circuit the process by which specially created systems with lots of individual variations end up "arriving" at SRE's door. Provide product development with a platform of SRE-validated infrastructure, upon which they can build their systems. This platform will have the double benefit of being both reliable and scalable. This avoids certain classes of cognitive load problems entirely, and by addressing common infrastructure practices, allows product development teams to focus on innovation at the application layer, where it mostly belongs.
In the following sections, we'll spend some time looking at each of these models in turn, beginning with the "classic" one, the PRR-driven model.

The PRR Model

The most typical initial step of SRE engagement is the Production Readiness Review (PRR), a process that identifies the reliability needs of a service based on its specific details. Through a PRR, SREs seek to apply what they've learned and experienced to ensure the reliability of a service operating in production. A PRR is considered a prerequisite for an SRE team to accept responsibility for managing the production aspects of a service. Figure 32-1 illustrates the lifecycle of a typical service. The Production Readiness Review can be started at any point of the service lifecycle, but the stages at which SRE engagement is applied have expanded over time. This chapter describes the Simple PRR Model, then discusses how its modification into the Extended Engagement Model and the Frameworks and SRE Platform structure allowed SRE to scale their engagement process and impact.

Figure 32-1. A typical service lifecycle

The SRE Engagement Model

SRE seeks production responsibility for important services for which it can make concrete contributions to reliability. SRE is concerned with several aspects of a service, which are collectively referred to as production. These aspects include the following:

System architecture and interservice dependencies
Instrumentation, metrics, and monitoring
Emergency response
Capacity planning
Change management
Performance: availability, latency, and efficiency

When SREs engage with a service, we aim to improve it along all of these axes, which makes managing production for the service easier.

Alternative Support

Not all Google services receive close SRE engagement. A couple of factors are at play here: Many services don't need high reliability and availability, so support can be provided by other means.
By design, the number of development teams that request SRE support exceeds the available bandwidth of SRE teams (see Introduction). When SRE can't provide full-fledged support, it provides other options for making improvements to production, such as documentation and consultation.

Documentation

Development guides are available for internal technologies and clients of widely used systems. Google's Production Guide documents production best practices for services, as determined by the experiences of SRE and development teams alike. Developers can implement the solutions and recommendations in such documentation to improve their services.

Consultation

Developers may also seek SRE consulting to discuss specific services or problem areas. The Launch Coordination Engineering (LCE) team (see Reliable Product Launches at Scale) spends a majority of its time consulting with development teams. SRE teams that aren't specifically dedicated to launch consultations also engage in consultation with development teams. When a new service or a new feature has been implemented, developers usually consult with SRE for advice about preparing for the Launch phase. Launch consultation usually involves one or two SREs spending a few hours studying the design and implementation at a high level. The SRE consultants then meet with the development team to provide advice on risky areas that need attention and to discuss well-known patterns or solutions that can be incorporated to improve the service in production. Some of this advice may come from the Production Guide mentioned earlier. Consultation sessions are necessarily broad in scope because it's not possible to gain a deep understanding of a given system in the limited time available. For some development teams, consultation is not sufficient:

Services that have grown by orders of magnitude since they launched, which now require more time to understand than is feasible through documentation and consultation.
Services upon which many other services have subsequently come to rely, which now host significantly more traffic from many different clients.

These types of services may have grown to the point at which they begin to encounter significant difficulties in production while simultaneously becoming important to users. In such cases, long-term SRE engagement becomes necessary to ensure that they are properly maintained in production as they grow.

Production Readiness Reviews: Simple PRR Model

When a development team requests that SRE take over production management of a service, SRE gauges both the importance of the service and the availability of SRE teams. If the service merits SRE support, and the SRE team and development organization agree on staffing levels to facilitate this support, SRE initiates a Production Readiness Review with the development team. The objectives of the Production Readiness Review are as follows:

Verify that a service meets accepted standards of production setup and operational readiness, and that service owners are prepared to work with SRE and take advantage of SRE expertise.
Improve the reliability of the service in production, and minimize the number and severity of incidents that might be expected.

A PRR targets all aspects of production that SRE cares about. After sufficient improvements are made and the service is deemed ready for SRE support, an SRE team assumes its production responsibilities. This brings us to the Production Readiness Review process itself. There are three different but related engagement models (Simple PRR Model, Early Engagement Model, and Frameworks and SRE Platform), which will be discussed in turn. We will first describe the Simple PRR Model, which is usually targeted at a service that is already launched and will be taken over by an SRE team. A PRR follows several phases, much like a development lifecycle, although it may proceed independently in parallel with the development lifecycle.
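One of the first artifacts negotiated in a PRR engagement is an SLO/SLA for the service. As a back-of-the-envelope illustration only (the targets and the 30-day window below are invented examples, not recommendations from this chapter), an availability SLO translates directly into an error budget:

```python
# Minimal sketch: turn an availability SLO into an error budget.
# The 99.9% / 99.99% targets and the 30-day window are illustrative.

def error_budget_minutes(slo: float, window_days: int = 30) -> float:
    """Minutes of allowed unavailability over the window for a given SLO."""
    total_minutes = window_days * 24 * 60
    return total_minutes * (1.0 - slo)

# A "three nines" service may be unavailable for roughly 43 minutes
# per 30 days; "four nines" shrinks that to about 4 minutes.
print(round(error_budget_minutes(0.999), 1))   # 43.2
print(round(error_budget_minutes(0.9999), 2))  # 4.32
```

Framing the target this way makes the later PRR discussions concrete: every incident and every risky launch spends from the same budget.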
Engagement

SRE leadership first decides which SRE team is a good fit for taking over the service. Usually one to three SREs are selected or self-nominated to conduct the PRR process. This small group then initiates discussion with the development team. The discussion covers matters such as:

Establishing an SLO/SLA for the service
Planning for potentially disruptive design changes required to improve reliability
Planning and training schedules

The goal is to arrive at a common agreement about the process, end goals, and outcomes that are necessary for the SRE team to engage with the development team and their service.

Analysis

Analysis is the first large segment of work. During this phase, the SRE reviewers learn about the service and begin analyzing it for production shortcomings. They aim to gauge the maturity of the service along the various axes of concern to SRE. They also examine the service's design and implementation to check if it follows production best practices. Usually, the SRE team establishes and maintains a PRR checklist explicitly for the Analysis phase. The checklist is specific to the service and is generally based on domain expertise, experience with related or similar systems, and best practices from the Production Guide. The SRE team may also consult other teams that have more experience with certain components or dependencies of the service. A few examples of checklist items include:

Do updates to the service impact an unreasonably large percentage of the system at once?
Does the service connect to the appropriate serving instance of its dependencies? For example, end-user requests to a service should not depend on a system that is designed for a batch-processing use case.
Does the service request a sufficiently high network quality-of-service when talking to a critical remote service?
Does the service report errors to central logging systems for analysis?
Does it report all exceptional conditions that result in degraded responses or failures to the end users?
Are all user-visible request failures well instrumented and monitored, with suitable alerting configured?

The checklist may also include operational standards and best practices followed by a specific SRE team. For example, a perfectly functional service configuration that doesn't follow an SRE team's "gold standard" might be refactored to work better with SRE tools for scalably managing configurations. SREs also look at recent incidents and postmortems for the service, as well as follow-up tasks for the incidents. This evaluation gauges the demands of emergency response for the service and the availability of well-established operational controls.

Improvements and Refactoring

The Analysis phase leads to the identification of recommended improvements for the service. This next phase proceeds as follows:

Improvements are prioritized based upon importance for service reliability.
The priorities are discussed and negotiated with the development team, and a plan of execution is agreed upon.
Both SRE and product development teams participate and assist each other in refactoring parts of the service or implementing additional features.

This phase typically varies the most in duration and amount of effort. How much time and effort this phase will involve depends upon the availability of engineering time for refactoring, the maturity and complexity of the service at the start of the review, and myriad other factors.

Training

Responsibility for managing a service in production is generally assumed by an entire SRE team. To ensure that the team is prepared, the SRE reviewers who led the PRR take ownership of training the team, which includes the documentation necessary to support the service. Typically with the help and participation of the development team, these engineers organize a series of training sessions and exercises.
Instruction can include:

Design overviews
Deep dives on various request flows in the system
A description of the production setup
Hands-on exercises for various aspects of system operations

When the training is concluded, the SRE team should be prepared to manage the service.

Onboarding

The Training phase unblocks onboarding of the service by the SRE team. It involves a progressive transfer of responsibilities and ownership of various production aspects of the service, including parts of operations, the change management process, access rights, and so forth. The SRE team continues to focus on the various areas of production mentioned earlier. To complete the transition, the development team must be available to back up and advise the SRE team for a period of time as it settles into managing production for the service. This relationship becomes the basis for the ongoing work between the teams.

Continuous Improvement

Active services continuously change in response to new demands and conditions, including user requests for new features, evolving system dependencies, and technology upgrades, in addition to other factors. The SRE team must maintain service reliability standards in the face of these changes by driving continuous improvement. The responsible SRE team naturally learns more about the service in the course of operating the service, reviewing new changes, responding to incidents, and especially when conducting postmortems/root cause analyses. This expertise is shared with the development team as suggestions and proposals for changes to the service whenever new features, components, and dependencies may be added to the service. Lessons from managing the service are also contributed to best practices, which are documented in the Production Guide and elsewhere.

Engaging with Shakespeare

Initially, the developers of the Shakespeare service were responsible for the product, including carrying the pager for emergency response.
However, with growing use of the service and the revenue it generated, SRE support became desirable. The product had already been launched, so SRE conducted a Production Readiness Review. One finding was that the dashboards did not completely cover some of the metrics defined in the SLO, so that gap needed to be closed. After all the issues that had been filed were fixed, SRE took over the pager for the service, though two developers remained in the on-call rotation as well. The developers participate in the weekly on-call meeting, discussing last week's problems and how to handle upcoming large-scale maintenance or cluster turndowns. Future plans for the service are also discussed with the SREs to make sure that new launches will go flawlessly (though Murphy's law is always looking for opportunities to spoil that).

Evolving the Simple PRR Model: Early Engagement

Thus far, we've discussed the Production Readiness Review as it's used in the Simple PRR Model, which is limited to services that have already entered the Launch phase. There are several limitations and costs associated with this model. For example:

Additional communication between teams can increase some process overhead for the development team, and cognitive burden for the SRE reviewers.
The right SRE reviewers must be available, and capable of managing their time and priorities with regard to their existing engagements.
Work done by SREs must be highly visible and sufficiently reviewed by the development team to ensure effective knowledge sharing. SREs should essentially work as a part of the development team, rather than an external unit.

However, the main limitations of the PRR Model stem from the fact that the service is launched and serving at scale, and the SRE engagement starts very late in the development lifecycle.
If the PRR occurred earlier in the service lifecycle, SRE's opportunity to remedy potential issues in the service would be markedly increased. When engagement instead begins after launch, the resulting drawbacks can pose a significant challenge to the success of the SRE engagement and the future success of the service itself.

Candidates for Early Engagement

The Early Engagement Model introduces SRE earlier in the development lifecycle in order to achieve significant additional advantages. Applying the Early Engagement Model requires identifying the importance and/or business value of a service early in the development lifecycle, and determining if the service will have sufficient scale or complexity to benefit from SRE expertise. Applicable services often have the following characteristics:

The service implements significant new functionality and will be part of an existing system already managed by SRE.
The service is a significant rewrite or alternative to an existing system, targeting the same use cases.
The development team sought SRE advice or approached SRE for takeover upon launch.

The Early Engagement Model essentially immerses SREs in the development process. SRE's focus remains the same, though the means to achieve a better production service are different. SRE participates in Design and later phases, eventually taking over the service any time during or after the Build phase. This model is based on active collaboration between the development and SRE teams.

Benefits of the Early Engagement Model

While the Early Engagement Model does entail certain risks and challenges discussed previously, additional SRE expertise and collaboration during the entire lifecycle of the product create significant benefits compared to an engagement initiated later in the service lifecycle.
Design phase

SRE collaboration during the Design phase can prevent a variety of problems or incidents from occurring later in production. While design decisions can be reversed or rectified later in the development lifecycle, such changes come at a high cost in terms of effort and complexity. The best production incidents are those that never happen! Occasionally, difficult trade-offs lead to the selection of a less-than-ideal design. Participation in the Design phase means that SREs are aware up front of the trade-offs and are part of the decision to pick a less-than-ideal option. Early SRE involvement aims to minimize future disputes over design choices once the service is in production.

Build and implementation

The Build phase addresses production aspects such as instrumentation and metrics, operational and emergency controls, resource usage, and efficiency. During this phase, SRE can influence and improve the implementation by recommending specific existing libraries and components, or helping build certain controls into the system. SRE participation at this stage helps enable ease of operations in the future and allows SRE to gain operational experience in advance of the launch.

Launch

SRE can also help implement widely used launch patterns and controls. For example, SRE might help implement a "dark launch" setup, in which part of the traffic from existing users is sent to the new service in addition to being sent to the live production service. The responses from the new service are "dark" since they are thrown away and not actually shown to users. Practices such as dark launches allow the team to gain operational insight, resolve issues without impacting existing users, and reduce the risk of encountering issues after launch. A smooth launch is immensely helpful in keeping the operational burden low and maintaining the development momentum after the launch.
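The dark launch pattern can be sketched in a few lines of Python. This is an illustrative sketch only: the handler names are invented, and a real deployment would typically mirror traffic at the load balancer and make the mirrored call asynchronously and under rate limits, not inline in the serving path:

```python
shadow_log = []  # records whether the candidate agreed with production

def live_service(request):
    # Hypothetical current production handler.
    return {"result": request.upper()}

def new_service(request):
    # Hypothetical candidate handler being dark launched.
    return {"result": request.upper()}

def handle(request):
    """Serve users from the live service; mirror the request to the new
    service and throw its response away after recording a comparison."""
    live_response = live_service(request)
    try:
        dark_response = new_service(request)  # the mirrored ("dark") call
        shadow_log.append(dark_response == live_response)
    except Exception:
        shadow_log.append(False)  # candidate failures never reach users
    return live_response          # only the live response is returned

print(handle("hello"))  # {'result': 'HELLO'}
```

Reviewing shadow_log (or, in practice, the equivalent monitoring metrics) tells the team how the new service behaves under real traffic before any user depends on it.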
Disruptions around launch can easily result in emergency changes to source code and production, and disrupt the development team's work on future features.

Post-launch

Having a stable system at launch time generally leads to fewer conflicting priorities for the development team in terms of choosing between improving service reliability versus adding new features. In later phases of the service, the lessons from earlier phases can better inform refactoring or redesign. With extended involvement, the SRE team can be ready to take over the new service much sooner than is possible with the Simple PRR Model. The longer and closer engagement between the SRE and development teams also creates a collaborative relationship that can be sustained long term. A positive cross-team relationship fosters a mutual feeling of solidarity, and helps SRE establish ownership of the production responsibility.

Disengaging from a service

Sometimes a service doesn't warrant full-fledged SRE team management—this determination might be made post-launch, or SRE might engage with a service but never officially take it over. This is a positive outcome, because the service has been engineered to be reliable and low maintenance, and can therefore remain with the development team. It is also possible that SRE engages early with a service that fails to meet the levels of usage projected. In such cases, the SRE effort spent is simply part of the overall business risk that comes with new projects, and a small cost relative to the success of projects that meet expected scale. The SRE team can be reassigned, and lessons learned can be incorporated into the engagement process.

Evolving Services Development: Frameworks and SRE Platform

The Early Engagement Model made strides in evolving SRE engagement beyond the Simple PRR Model, which applied only to services that had already launched. However, there was still progress to be made in scaling SRE engagement to the next level by designing for reliability.
Lessons Learned

Over time, the SRE engagement model described thus far produced several distinct patterns:

Onboarding each service required two or three SREs and typically lasted two or three quarters. The lead times for a PRR were relatively high (quarters away). The effort level required was proportional to the number of services under review, and was constrained by the insufficient number of SREs available to conduct PRRs. These conditions led to serialization of service takeovers and strict service prioritization.
Due to differing software practices across services, each production feature was implemented differently. To meet PRR-driven standards, features usually had to be reimplemented specifically for each service or, at best, once for each small subset of services sharing code. These reimplementations were a waste of engineering effort. One canonical example is the implementation of functionally similar logging frameworks repeatedly in the same language because different services didn't implement the same coding structure.
A review of common service issues and outages revealed certain patterns, but there was no way to easily replicate fixes and improvements across services. Typical examples included service overload situations and data hot-spotting.
SRE software engineering contributions were often local to the service. Thus, building generic solutions to be reused was difficult. As a consequence, there was no easy way to propagate the new lessons and best practices that individual SRE teams learned across services that had already been onboarded.

External Factors Affecting SRE

External factors have traditionally pressured the SRE organization and its resources in several ways. Google is increasingly following the industry trend of moving toward microservices. As a result, both the number of requests for SRE support and the cardinality of services to support have increased.
Because each service has a base fixed operational cost, even simple services demand more staffing. Microservices also imply an expectation of lower lead time for deployment, which was not possible with the previous PRR model (which had a lead time of months). Hiring experienced, qualified SREs is difficult and costly. Despite enormous effort from the recruiting organization, there are never enough SREs to support all the services that need their expertise. Once SREs are hired, their training is also a lengthier process than is typical for development engineers. Finally, the SRE organization is responsible for serving the needs of the large and growing number of development teams that do not already enjoy direct SRE support. This mandate calls for extending the SRE support model far beyond the original concept and engagement model.

Toward a Structural Solution: Frameworks

To effectively respond to these conditions, it became necessary to develop a model that allowed for the following principles:

Codified best practices
The ability to commit what works well in production to code, so services can simply use this code and become "production ready" by design.

Reusable solutions
Common and easily shareable implementations of techniques used to mitigate scalability and reliability issues.

A common production platform with a common control surface
Uniform sets of interfaces to production facilities, uniform sets of operational controls, and uniform monitoring, logging, and configuration for all services.

Easier automation and smarter systems
A common control surface that enables automation and smart systems at a level not possible before. For example, SREs can readily receive a single view of relevant information for an outage, rather than hand collecting and analyzing mostly raw data from disparate sources (logs, monitoring data, and so on).
Based upon these principles, a set of SRE-supported platform and service frameworks was created, one for each environment we support (Java, C++, and Go). Services built using these frameworks share implementations that are designed to work with the SRE-supported platform, and are maintained by both SRE and development teams. The main shift brought about by frameworks was to enable product development teams to design applications using the framework solution that was built and blessed by SRE, as opposed to either retrofitting the application to SRE specifications after the fact, or retrofitting more SREs to support a service that was markedly different from other Google services. An application typically comprises some business logic, which in turn depends on various infrastructure components. SRE production concerns are largely focused on the infrastructure-related parts of a service. The service frameworks implement infrastructure code in a standardized fashion and address various production concerns. Each concern is encapsulated in one or more framework modules, each of which provides a cohesive solution for a problem domain or infrastructure dependency. Framework modules address the various SRE concerns enumerated earlier, such as:

Instrumentation and metrics
Request logging
Control systems involving traffic and load management

SRE builds framework modules to implement canonical solutions for the concerned production area. As a result, development teams can focus on the business logic, because the framework already takes care of correct infrastructure use. A framework is essentially a prescriptive implementation for using a set of software components and a canonical way of combining these components. The framework can also expose features that control various components in a cohesive manner.
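To make the idea of framework-owned production concerns concrete, here is a deliberately simplified sketch (not Google's actual framework API; all names are invented) in which instrumentation and a load-shedding control live in the framework, while the service supplies only business logic:

```python
class ServerMetrics:
    """Framework-owned instrumentation: every service built on the
    framework exports the same standard dimensions."""
    def __init__(self):
        self.requests = 0
        self.errors = 0
        self.shed = 0

class FrameworkServer:
    """Hypothetical framework sketch: business logic plugs in as a
    handler; metrics and load shedding come from the framework."""
    def __init__(self, handler, max_inflight=100):
        self.handler = handler
        self.max_inflight = max_inflight  # uniform overload measure
        self.inflight = 0
        self.metrics = ServerMetrics()

    def serve(self, request):
        self.metrics.requests += 1
        if self.inflight >= self.max_inflight:
            self.metrics.shed += 1
            return {"status": 503}  # load shedding, identical everywhere
        self.inflight += 1
        try:
            return {"status": 200, "body": self.handler(request)}
        except Exception:
            self.metrics.errors += 1
            return {"status": 500}
        finally:
            self.inflight -= 1

# The handler is the only service-specific code.
server = FrameworkServer(handler=lambda req: req[::-1], max_inflight=2)
print(server.serve("abc"))  # {'status': 200, 'body': 'cba'}
```

Because every service built this way exports the same metrics and the same overload signal, monitoring, alerting, and automation can be written once against the platform rather than once per service.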
For example, a framework might provide the following:

Business logic organized as well-defined semantic components that can be referenced using standard terms
Standard dimensions for monitoring instrumentation
A standard format for request debugging logs
A standard configuration format for managing load shedding
Capacity of a single server and determination of "overload" that can both use a semantically consistent measure for feedback to various control systems

Frameworks provide multiple upfront gains in consistency and efficiency. They free developers from having to glue together and configure individual components in an ad hoc, service-specific manner, in ever-so-slightly incompatible ways that then have to be manually reviewed by SREs. They drive a single reusable solution for production concerns across services, which means that framework users end up with the same common implementation and minimal configuration differences. Google supports several major languages for application development, and frameworks are implemented across all of these languages. While different implementations of the framework (say, in C++ versus Java) can't share code, the goal is to expose the same API, behavior, configuration, and controls for identical functionality. Therefore, development teams can choose the language platform that fits their needs and experience, while SREs can still expect the same familiar behavior in production and standard tools to manage the service.

New Service and Management Benefits

The structural approach, founded on service frameworks and a common production platform and control surface, provided a host of new benefits.

Significantly lower operational overhead

A production platform built on top of frameworks with stronger conventions significantly reduced operational overhead, for the following reasons:

It supports strong conformance tests for coding structure, dependencies, tests, coding style guides, and so on.
This functionality also improves user data privacy, testing, and security conformance.
- It features built-in service deployment, monitoring, and automation for all services.
- It facilitates easier management of large numbers of services, especially microservices,[151] which are growing in number.
- It enables much faster deployment: an idea can graduate to fully deployed, SRE-level production quality in a matter of days!

Universal support by design

The constant growth in the number of services at Google means that most of these services can neither warrant SRE engagement nor be maintained by SREs. Regardless, services that don't receive full SRE support can be built to use production features that are developed and maintained by SREs. This practice effectively breaks the SRE staffing barrier. Enabling SRE-supported production standards and tools for all teams improves the overall service quality across Google. Furthermore, all services that are implemented with frameworks automatically benefit from improvements made over time to framework modules.

Faster, lower overhead engagements

The frameworks approach results in faster PRR execution because we can rely upon:

- Built-in service features as part of the framework implementation
- Faster service onboarding (usually accomplished by a single SRE during one quarter)
- Less cognitive burden for the SRE teams managing services built using frameworks

These properties allow SRE teams to lower the assessment and qualification effort for service onboarding, while maintaining a high bar on service production quality.

A new engagement model based on shared responsibility

The original SRE engagement model presented only two options: either full SRE support, or approximately no SRE engagement.[152]

A production platform with a common service structure, conventions, and software infrastructure made it possible for an SRE team to provide support for the "platform" infrastructure, while the development teams provide on-call support for functional issues with the service, that is, for bugs in the application code. Under this model, SREs assume responsibility for the development and maintenance of large parts of service software infrastructure, particularly control systems such as load shedding, overload, automation, traffic management, logging, and monitoring. This model represents a significant departure from the way service management was originally conceived in two major ways: it entails a new relationship model for the interaction between SRE and development teams, and a new staffing model for SRE-supported service management.[153]

Conclusion

Service reliability can be improved through SRE engagement, in a process that includes systematic review and improvement of a service's production aspects. Google SRE's initial systematic approach, the Simple Production Readiness Review, made strides in standardizing the SRE engagement model, but was only applicable to services that had already entered the Launch phase. Over time, SRE extended and improved this model. The Early Engagement Model involved SRE earlier in the development lifecycle in order to "design for reliability." As demand for SRE expertise continued to grow, the need for a more scalable engagement model became increasingly apparent. Frameworks for production services were developed to meet this demand: code patterns based on production best practices were standardized and encapsulated in frameworks, so that use of frameworks became a recommended, consistent, and relatively simple way of building production-ready services. All three of the engagement models described are still practiced within Google.
However, the adoption of frameworks is becoming a prominent influence on how production-ready services are built at Google, profoundly expanding the SRE contribution, lowering service management overhead, and improving baseline service quality across the organization.

[151] See the Wikipedia page on microservices at https://en.wikipedia.org/wiki/Microservices.
[152] Occasionally, there were consulting engagements by SRE teams with some non-onboarded services, but consultations were a best-effort approach and limited in number and scope.
[153] The new model of service management changes the SRE staffing model in two ways: (1) because a lot of service technology is common, it reduces the number of required SREs per service; (2) it enables the creation of production platforms with separation of concerns between production platform support (done by SREs) and service-specific business-logic support, which remains with the development team. These platform teams are staffed based upon the need to maintain the platform rather than upon service count, and can be shared across products.

Copyright © 2017 Google, Inc. Published by O'Reilly Media, Inc. Licensed under CC BY-NC-ND 4.0
https://visualstudio.microsoft.com/de/vs/
Visual Studio IDE: AI for coding, debugging, and testing

A powerful IDE, made smarter with AI

Agent mode, now available in GitHub Copilot Free: experience AI-powered coding assistance that analyzes problems, coordinates the next steps, applies changes, and iterates on errors.
Download Visual Studio (Community, Professional, Enterprise, Insiders) and start for free.

Boost your productivity with AI

Real AI solutions with agent mode: plan, build, test, and fix, all from a single prompt. Describe what you need in natural language and let Copilot do the rest. Run linters, tests, and commands without leaving Visual Studio.

Always-current AI models: access the latest AI models for the most accurate, context-aware suggestions in chat or inline, and pick the model that fits the task at hand.

Free and fully integrated into Visual Studio: AI-driven coding assistance built into Visual Studio, with 2,000 code completions and 50 free chat messages per month under Copilot Free.

See GitHub Copilot in action: never write a commit message again thanks to GitHub Copilot; multi-file editing with GitHub Copilot in Visual Studio; getting started with GitHub Copilot Free in Visual Studio.

Copilot: your pair programmer. Accelerate development with context-aware code completion suggestions for a single line or an entire snippet. Use Copilot Edits for multi-file editing with code review, in-file preview, and rollback. Use Copilot Chat for code suggestions and real-time help with your coding questions.
The conversational chat interface offers instant, context-aware suggestions, helps with debugging, and generates unit tests, all inside your development environment.

Debugging made easy with Copilot: speed up debugging with AI-powered insight into your code. Copilot provides detailed analyses and explanations to deliver targeted fixes for exceptions, deadlocks, and other debugging scenarios.

Improve testing efficiency with Copilot: reduce the time spent on manual test creation and maintaining code quality. Copilot's automation helps improve reliability by catching bugs early with unit and integration tests based on your existing code.

Use Copilot to streamline version control: keep a clean, organized codebase with AI-generated commit messages based on your code changes. GitHub Copilot simplifies version control by bringing clarity to workflows, branch management, and merge conflicts.

More than screen sharing: Live Share sessions for real-time collaboration speed up your team's edit and debug cycles, regardless of language or platform. Personalized sessions with access controls and custom editor settings ensure code consistency.

Azure in Visual Studio: build, deploy, and scale with Azure in Visual Studio. Create cloud-first applications efficiently and securely using integrated tools and AI capabilities, all inside your IDE.
Whether you are building web apps, APIs, or AI-driven solutions, Visual Studio offers seamless Azure integration for a streamlined coding experience. Explore tutorials, sharpen your programming skills, and automate deployments with Azure DevOps and GitHub Actions. Connect to Azure services, deploy applications, and debug in real time with Visual Studio's powerful tools.

Design engaging cross-platform apps: Visual Studio is the best IDE for building modern, cross-platform applications for Windows, Mac, Linux, iOS, and Android. Build powerful client apps with WinForms, WPF, WinUI, and MAUI, using visual designers that simplify complex layouts and bring your UI to life.

Build fast, responsive apps with the built-in profiling tools: use Visual Studio's profiling tools to create fast, responsive .NET and C++ applications for web and client. With profilers for CPU, memory, XAML, databases, events, and more, you have a full toolbox for identifying and fixing performance problems.

Tools that make SQL a breeze: work with SQL projects without leaving the IDE. Build queries with rich editing features and browse and explore servers and databases. Visualize database schemas and queries inside Visual Studio.

Make it your own: configure and design exactly the way you would expect from any modern development tool.
Whether you choose one of the preinstalled default themes or explore new themes from the Visual Studio Marketplace, you have the freedom to truly make your editor or IDE your own.

Extend the possibilities with extensions: extensions are add-ons that customize and enhance Visual Studio, adding settings, features, or new uses for existing tools. With thousands of extensions in the Marketplace, you have plenty of options to boost your productivity and tailor your workflow. For example:
- GitHub Actions: a window showing the GitHub Actions for the current repository of the open solution in Visual Studio. Provided by @timheuer.
- Markdown Editor v2: a full-featured Markdown editor with live preview and syntax highlighting. Supports GitHub Flavored Markdown.
- Visual Studio Theme Pack: a collection of popular themes, now available for Visual Studio.
- SQLite and SQL Server Compact Toolbox: adds several features to support embedded database development.

Editions:
- Visual Studio Community: a free, full-featured integrated development environment (IDE) for students, open-source contributors, and individual developers.
- Visual Studio Professional: powerful developer tools, services, and subscription benefits for efficient collaboration in small teams.
- Visual Studio Enterprise: advanced debugging, enterprise security, AI-assisted development, plus Azure credits and dev/test pricing for the cloud.
© Microsoft 2026
https://scholar.google.com/citations?user=sgiysmUAAAAJ&hl=ko&oe=ASCII
Yingshu Li - Google Scholar

Yingshu Li, Professor, Georgia State University (verified email at gsu.edu). Homepage.
Research interests: Privacy-aware Computing, IoT, Wireless Networks, Social Networks.

Citation metrics (All / Since 2021): Citations 11801 / 3410; h-index 50 / 28; i10-index 141 / 78

Citations per year, 2005 through 2026: 43, 76, 145, 221, 304, 400, 463, 627, 646, 563, 636, 674, 769, 953, 869, 843, 925, 712, 617, 618, 532, 5

Public access (based on funding mandates): 63 articles available, 39 not available

Co-authors:
- Ding-Zhu Du, Professor of Computer Science, University of Texas at Dallas (verified email at utdallas.edu)
- Weili (Lily) Wu, Professor of Computer Science, IEEE Fellow, University of Texas at Dallas (verified email at utdallas.edu)
- Raheem Beyah, Dean and Southern Company Chair, Georgia Tech College of Engineering (verified email at ece.gatech.edu)
- Ai Chunyu, University of South Carolina Upstate (verified email at uscupstate.edu)
- Zaobo He, Jinan University (暨南大学) (verified email at jnu.edu.cn)
- Meng Han, Intelligence Fusion Research Center (IFRC) (verified email at zju.edu.cn)
- Mingyuan Yan, Assistant Professor, University of North Georgia (verified email at ung.edu)
- Xiuzhen Cheng, School of Computer Science and Technology, Shandong University (verified email at sdu.edu.cn)
- Feng Wang, Arizona State University (verified email at asu.edu)
- Xiaohua Jia, IEEE Fellow (Computer Society), City University of Hong Kong (verified email at gapps.cityu.edu.hk)
- Hongwei Du, Professor of Computer Science and Technology, Harbin Institute of Technology, Shenzhen (verified email at hit.edu.cn)
- Mihaela Cardei, Florida Atlantic University (verified email at fau.edu)
- Tuo Shi, Aalto University (verified email at aalto.fi)
- Selcuk Uluagac, Florida International University (verified email at fiu.edu)
- Pengjun Wan, Illinois Institute of Technology (verified email at cs.iit.edu)
- Zhuojun Duan, James Madison University (verified email at jmu.edu)
- Sushil K Prasad, Professor of Computer Science and ACM Distinguished Scientist, University of Texas at San Antonio (verified email at utsa.edu)
- Bo Yu (于博), Alibaba Inc. (verified email at alibaba-inc.com)
- Alexander Zelikovsky, Distinguished University Professor at Georgia State University (verified email at gsu.edu)
- Lu Ruan, Associate Professor of Computer Science, Iowa State University (verified email at iastate.edu)

Articles 1-20, sorted by citations (counts marked with * are merged citations and may include citations to merged versions of the article):
- Energy-efficient target coverage in wireless sensor networks. M Cardei, MT Thai, Y Li, W Wu. Proceedings IEEE 24th Annual Joint Conference of the IEEE Computer and…, 2005. Cited by 1429.
- Collective data-sanitization for preventing sensitive information inference attacks in social networks. Z Cai, Z He, X Guan, Y Li. IEEE Transactions on Dependable and Secure Computing 15 (4), 577-590, 2016. Cited by 666.
- Distributed data aggregation scheduling in wireless sensor networks. B Yu, J Li, Y Li. IEEE INFOCOM 2009, 2159-2167, 2009. Cited by 453*.
- Minimum connected dominating sets and maximal independent sets in unit disk graphs. W Wu, H Du, X Jia, Y Li, SCH Huang. Theoretical Computer Science 352 (1-3), 1-7, 2006. Cited by 419*.
- Nearly constant approximation for data aggregation scheduling in wireless sensor networks. SCH Huang, PJ Wan, CT Vu, Y Li, F Yao. IEEE INFOCOM 2007, 26th IEEE International Conference on Computer…, 2007. Cited by 402*.
- On greedy construction of connected dominating sets in wireless networks. Y Li, MT Thai, F Wang, CW Yi, PJ Wan, DZ Du. Wireless Communications and Mobile Computing 5 (8), 927-932, 2005. Cited by 343*.
- Cost-efficient strategies for restraining rumor spreading in mobile social networks. Z He, Z Cai, J Yu, X Wang, Y Sun, Y Li. IEEE Transactions on Vehicular Technology 66 (3), 2789-2800, 2016. Cited by 327.
- Constructing minimum connected dominating sets with bounded diameters in wireless networks. D Kim, Y Wu, Y Li, F Zou, DZ Du. IEEE Transactions on Parallel and Distributed Systems 20 (2), 147-157, 2008. Cited by 288*.
- A greedy approximation for minimum connected dominating sets. L Ruan, H Du, X Jia, W Wu, Y Li, KI Ko. Theoretical Computer Science 329 (1-3), 325-330, 2004. Cited by 284.
- Deep learning based inference of private information using embedded sensors in smart devices. Y Liang, Z Cai, J Yu, Q Han, Y Li. IEEE Network 32 (4), 8-14, 2018. Cited by 282.
- Wireless sensor networks and applications. Y Li, MT Thai. Springer Science & Business Media, 2008. Cited by 282.
- Sparse target counting and localization in sensor networks based on compressive sensing. B Zhang, X Cheng, N Zhang, Y Cui, Y Li, Q Liang. 2011 Proceedings IEEE INFOCOM, 2255-2263, 2011. Cited by 247.
- Data linkage in smart internet of things systems: a consideration from a privacy perspective. X Zheng, Z Cai, Y Li. IEEE Communications Magazine 56 (9), 55-61, 2018. Cited by 240.
- zkCrowd: a hybrid blockchain-based crowdsourcing platform. S Zhu, Z Cai, H Hu, Y Li, W Li. IEEE Transactions on Industrial Informatics 16 (6), 4196-4205, 2019. Cited by 229.
- Construction algorithms for k-connected m-dominating sets in wireless sensor networks. Y Wu, Y Li. Proceedings of the 9th ACM international symposium on Mobile ad hoc…, 2008. Cited by 229*.
- Using crowdsourced data in location-based social networks to explore influence maximization. JLZ Cai, M Yan, Y Li. IEEE INFOCOM 2016, The 35th Annual IEEE International Conference on Computer…, 2016. Cited by 205.
- Comparative analysis of quality of service and memory usage for adaptive failure detectors in healthcare systems. N Xiong, AV Vasilakos, LT Yang, L Song, Y Pan, R Kannan, Y Li. IEEE Journal on Selected Areas in Communications 27 (4), 495-509, 2009. Cited by 202.
- Transforming complete coverage algorithms to partial coverage algorithms for wireless sensor networks. Y Li, C Vu, C Ai, G Chen, Y Zhao. IEEE Transactions on Parallel and Distributed Systems 22 (4), 695-703, 2010. Cited by 181*.
- Composite event detection in wireless sensor networks. CT Vu, RA Beyah, Y Li. 2007 IEEE International Performance, Computing, and Communications…, 2007. Cited by 173*.
- Follow but no track: Privacy preserved profile publishing in cyber-physical social systems. X Zheng, Z Cai, J Yu, C Wang, Y Li. IEEE Internet of Things Journal 4 (6), 1868-1878, 2017. Cited by 164.
https://www.linkedin.com/products/categories/web-hosting?trk=products_details_guest_similar_products_section_similar_products_section_product_link_result-card_subtitle-click
Best Web Hosting | Products | LinkedIn

Used by: Web Developer (51), Web Designer (41), Owner (26), Information Technology Specialist (19), Information Technology Manager (13)

Find top products in the Web Hosting category

Software used to make websites accessible on the Internet through the rental of server space.
- Use remote storage for shared, reseller, dedicated, virtual private server, colocation, or cloud hosting
- Make sites available per storage limits, bandwidth allowance, and with minimum downtime for server maintenance
- Integrate with file and content management systems, server-side code, databases, and e-commerce platforms
- Keep site secure with backups, firewalls, and malware detection
Can include domain name registration, email accounts, and website builders.

901 results

IPv4 Address Leasing Service, Web Hosting by LARUS Limited
LARUS Limited offers leasing of IPv4 addresses, which are unique numerical identifiers assigned to devices connected to the internet. Leasing IPv4 addresses allows businesses to obtain the necessary IP addresses they need for their online operations without having to purchase them outright. This can be a cost-effective solution for companies that need a temporary or flexible supply of IP addresses. LARUS Limited offers a range of leasing options to suit different needs, with competitive pricing and reliable technical support. Lease IPv4 addresses directly from LARUS's pools and use them like your own without limitation regarding geography and usage.
Pantheon Platform, Web Hosting by Pantheon
Pantheon is the only WebOps platform built to unify web infrastructure, workflows, and governance for Drupal, WordPress, and Next.js. By eliminating outdated tech and tedious sysadmin tasks, Pantheon frees your team to focus on delivering exceptional results and creating extraordinary digital experiences. Trusted by innovative brands like Tableau, Uber, Okta, Home Depot, and Doctors Without Borders, Pantheon powers over 700,000 websites and drives 17 billion monthly pageviews. With a lightning-fast content delivery network, always-on security, and effortless scalability, we empower web teams to build, iterate, and launch with confidence.

Elastic Metal, Web Hosting by Scaleway
Elastic Metal combines powerful dedicated servers with flexible pricing and native integration into Scaleway's complete cloud ecosystem. With this range of Bare Metal, you get full control of the resources and applications installed on the server, for maximum performance and security. Elastic Metal servers are available on two different billing plans: hourly or monthly.

Web Hosting, Web Hosting by SiteGround
At SiteGround, we offer fast and secure Website Hosting for personal and business websites with an agenda to succeed.
- Easy Website start or transfer
- Daily Backups, Out-of-the-box Caching, Free SSL & CDN
- Optimized for popular CMS applications
- Advanced solutions for Ecommerce
- Expert Hosting Support available 24/7
- Trusted by the owners of over 2,000,000 domains

WooCommerce Hosting, Web Hosting by SiteGround
SiteGround's Managed WooCommerce Hosting is designed to help clients sell more online. It is suitable for both new WooCommerce stores, as well as established Woo sites.
With SiteGround you get:
- WooCommerce pre-installed
- Managed WP & Woo Updates
- Fast performance thanks to free CDN and smart caching
- Top security with our free SSL, AI anti-bot system, custom WAF
- High availability with our Daily Backups and restore functionality
- 24/7 Expert Hosting Support and much more.

Cloudways Autonomous, Web Hosting by Cloudways
Experience fully managed WordPress hosting that auto-scales with your growing business. Launch your app in under a minute on a self-scaling, powerful infrastructure that is pre-configured to scale up and down according to traffic demand. For the tech-savvy, this infrastructure is built on Kubernetes with integrated Cloudflare Enterprise and Object Cache Pro to help ensure top-notch performance and security. Enjoy free 24/7/365 technical support, so you're never left alone when you need assistance. Cloudways Autonomous makes high-performance WordPress hosting simple and accessible, combining advanced infrastructure with user-friendly management. Join the 100,000+ SMBs who scale with Cloudways!

Sites, Web Hosting by Appwrite
Sites: The open-source Vercel alternative.
- Deploy your site: monitor your deployment logs to ensure a stable connection with the Appwrite Network.
- Integrate with your backend: easily integrate Appwrite's backend products with your site. Zero configuration needed.
- Scale safely and deliver widely: make your product available anywhere in the world through our Global CDN.

Front-end hosting, Web Hosting by Contentstack
Accelerate digital delivery with automated, MACH-compliant hosting. Publish updates in minutes, auto-redeploy across regions, and ensure consistently fast experiences through global cache refresh.
Leaseweb Colocation, Web Hosting by Leaseweb
Our colocation solutions allow you to interconnect with cloud providers and within the Leaseweb Network to create hybrid architectures.

one.com Web Hosting, Web Hosting by one.com
Web hosting with one.com is affordable, fast, and secure to keep your website up and running. Free SSL certificate and 24/7 support are included in all plans.

LinkedIn © 2026
https://doc.rust-lang.org/cargo/reference/credential-provider-protocol.html#failure-response-operation-not-supported
The Cargo Book

Credential Provider Protocol

This document describes information for building a Cargo credential provider. For information on setting up or using a credential provider, see Registry Authentication.

When using an external credential provider, Cargo communicates with the credential provider using stdin/stdout messages passed as single lines of JSON. Cargo will always execute the credential provider with the --cargo-plugin argument. This enables a credential provider executable to have additional functionality beyond what Cargo needs. Additional arguments are included in the JSON via the args field.

JSON messages

The JSON messages in this document have newlines added for readability. Actual messages must not contain newlines.

Credential hello

Sent by: credential provider
Purpose: used to identify the supported protocols on process startup

{
  "v": [1]
}

Requests sent by Cargo will include a v field set to one of the versions listed here. If Cargo does not support any of the versions offered by the credential provider, it will issue an error and shut down the credential process.

Registry information

Sent by: Cargo
Not a message by itself. Included in all messages sent by Cargo as the registry field.
{
  // Index URL of the registry
  "index-url": "https://github.com/rust-lang/crates.io-index",
  // Name of the registry in configuration (optional)
  "name": "crates-io",
  // HTTP headers received from attempting to access an authenticated registry (optional)
  "headers": ["WWW-Authenticate: cargo"]
}

Login request

Sent by: Cargo
Purpose: collect and store credentials

{
  // Protocol version
  "v": 1,
  // Action to perform: login
  "kind": "login",
  // Registry information (see Registry information)
  "registry": {"index-url": "sparse+https://registry-url/index/", "name": "my-registry"},
  // User-specified token from stdin or command line (optional)
  "token": "<the token value>",
  // URL that the user could visit to get a token (optional)
  "login-url": "http://registry-url/login",
  // Additional command-line args (optional)
  "args": []
}

If the token field is set, then the credential provider should use the token provided. If the token is not set, then the credential provider should prompt the user for a token. In addition to the arguments that may be passed to the credential provider in configuration, cargo login also supports passing additional command line args via cargo login -- <additional args>. These additional arguments will be included in the args field after any args from Cargo configuration.
Read request

Sent by: Cargo
Purpose: get the credential for reading crate information

    {
        // Protocol version
        "v": 1,
        // Request kind: get credentials
        "kind": "get",
        // Action to perform: read crate information
        "operation": "read",
        // Registry information (see Registry information)
        "registry": { "index-url": "sparse+https://registry-url/index/", "name": "my-registry" },
        // Additional command-line args (optional)
        "args": []
    }

Publish request

Sent by: Cargo
Purpose: get the credential for publishing a crate

    {
        // Protocol version
        "v": 1,
        // Request kind: get credentials
        "kind": "get",
        // Action to perform: publish crate
        "operation": "publish",
        // Crate name
        "name": "sample",
        // Crate version
        "vers": "0.1.0",
        // Crate checksum
        "cksum": "...",
        // Registry information (see Registry information)
        "registry": { "index-url": "sparse+https://registry-url/index/", "name": "my-registry" },
        // Additional command-line args (optional)
        "args": []
    }

Get success response

Sent by: credential provider
Purpose: gives the credential to Cargo

    {
        "Ok": {
            // Response kind: this was a get request
            "kind": "get",
            // Token to send to the registry
            "token": "...",
            // Cache control. Can be one of the following:
            // * "never": do not cache
            // * "session": cache for the current cargo session
            // * "expires": cache for the current cargo session until expiration
            "cache": "expires",
            // Unix timestamp (only for "cache": "expires")
            "expiration": 1693942857,
            // Is the token operation independent?
            "operation_independent": true
        }
    }

The token will be sent to the registry as the value of the Authorization HTTP header. operation_independent indicates whether the token can be cached across different operations (such as publishing or fetching). In general, this should be true unless the provider wants to generate tokens that are scoped to specific operations.
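The cache-control fields can be illustrated with a short sketch. This is a hypothetical helper, not part of any Cargo API; get_response and its ttl_seconds parameter are assumptions for illustration:

```python
import json
import time

def get_response(token, ttl_seconds=None):
    """Build a 'get' success response line; the cache policy depends on
    whether this hypothetical token carries an expiration."""
    body = {"kind": "get", "token": token, "operation_independent": True}
    if ttl_seconds is None:
        body["cache"] = "session"  # Cargo may reuse it until this session ends
    else:
        body["cache"] = "expires"  # Cargo may reuse it until the timestamp below
        body["expiration"] = int(time.time()) + ttl_seconds
    return json.dumps({"Ok": body})

line = get_response("example-token", ttl_seconds=3600)
assert "\n" not in line  # the protocol forbids newlines inside a message
print(line)
```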
Login success response

Sent by: credential provider
Purpose: indicates the login was successful

    {
        "Ok": {
            // Response kind: this was a login request
            "kind": "login"
        }
    }

Logout success response

Sent by: credential provider
Purpose: indicates the logout was successful

    {
        "Ok": {
            // Response kind: this was a logout request
            "kind": "logout"
        }
    }

Failure response (URL not supported)

Sent by: credential provider
Purpose: gives error information to Cargo

    {
        "Err": {
            "kind": "url-not-supported"
        }
    }

Sent if the credential provider is designed to handle only specific registry URLs, and the given URL is not supported. Cargo will attempt another provider if available.

Failure response (not found)

Sent by: credential provider
Purpose: gives error information to Cargo

    {
        "Err": {
            // Error: the credential could not be found in the provider
            "kind": "not-found"
        }
    }

Sent if the credential could not be found. This is expected for get requests where the credential is not available, or logout requests where there is nothing to erase.

Failure response (operation not supported)

Sent by: credential provider
Purpose: gives error information to Cargo

    {
        "Err": {
            // Error: the requested operation is not supported
            "kind": "operation-not-supported"
        }
    }

Sent if the credential provider does not support the requested operation. For example, if a provider only supports get and a login is requested, the provider should respond with this error.

Failure response (other)

Sent by: credential provider
Purpose: gives error information to Cargo

    {
        "Err": {
            // Error: something else has failed
            "kind": "other",
            // Error message string to be displayed
            "message": "free form string error message",
            // Detailed cause chain for the error (optional)
            "caused-by": ["cause 1", "cause 2"]
        }
    }

Example communication to request a token for reading:

Cargo spawns the credential process, capturing stdin and stdout.
Credential process sends the Hello message to Cargo:

    { "v": [1] }

Cargo sends the CredentialRequest message to the credential process (newlines added for readability):

    {
        "v": 1,
        "kind": "get",
        "operation": "read",
        "registry": { "index-url": "sparse+https://registry-url/index/" }
    }

Credential process sends the CredentialResponse to Cargo (newlines added for readability; note it uses the same Ok wrapper as the Get success response above):

    {
        "Ok": {
            "kind": "get",
            "token": "...",
            "cache": "session",
            "operation_independent": true
        }
    }

Cargo closes the stdin pipe to the credential provider, and it exits. Cargo uses the token for the remainder of the session (until Cargo exits) when interacting with this registry.
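The whole exchange can be sketched as a provider main loop. This is an illustrative Python sketch, not the real cargo-credential crate: respond() dispatches one parsed request, answering get with a hard-coded example token and declining operations it does not implement:

```python
import json
import sys

def respond(request):
    """Dispatch one parsed request to a response dict (sketch only)."""
    if request.get("v") != 1:
        return {"Err": {"kind": "other", "message": "unsupported protocol version"}}
    kind = request.get("kind")
    if kind == "get":
        # A real provider would look up a stored token for request["registry"].
        return {"Ok": {"kind": "get", "token": "example-token",
                       "cache": "session", "operation_independent": True}}
    if kind in ("login", "logout"):
        return {"Err": {"kind": "operation-not-supported"}}
    return {"Err": {"kind": "other", "message": "unknown kind: %s" % kind}}

def main():
    # Credential hello: advertise protocol version 1, then answer one
    # request per line until Cargo closes our stdin.
    print(json.dumps({"v": [1]}), flush=True)
    for line in sys.stdin:
        print(json.dumps(respond(json.loads(line))), flush=True)

# A real provider executable would call main(); here we just exercise respond().
print(json.dumps(respond({"v": 1, "kind": "get", "operation": "read"})))
```

Each reply is emitted as a single line, matching the protocol's no-newlines rule.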
2026-01-13T09:29:18
https://www.linkedin.com/products/cardano-foundation-veridian?similarProducts=true&trk=products_details_guest_similar_products_section_sign_in
Veridian | LinkedIn

Veridian
Identity & Access Management (IAM) Software by Cardano Foundation

About

Veridian is a decentralized identity platform designed to meet the demands of a connected, interoperable digital world. Developed with cutting-edge standards and powered by open infrastructure, Veridian provides a secure and scalable framework for digital identity across individuals, organizations, and machines.

Key Benefits:
• Self-sovereign identity through decentralized identifiers (DIDs)
• Post-quantum cryptographic security
• Industry-compliant verifiable credentials
• Cross-platform interoperability and delegated data authority
• Full transparency via public, permissionless protocols
• Optimized for enterprise, IoT, and low-bandwidth environments

Veridian Wallet: Manage identifiers, credentials, and secure connections from a single mobile interface. Veridian's wallet leverages biometric authentication, local key signing, and seamless integration with the Cardano blockchain.

Veridian: The Identity Platform Built for Tomorrow
Discover Veridian, the Cardano Foundation's open-source digital identity platform. The next-generation solution empowers individuals and organizations with unprecedented control over their digital presence. Launching with the Veridian Wallet mobile app, the platform delivers verifiable credentials and robust data security, setting a new standard for trust in the digital age.
https://www.linkedin.com/products/netscout-arbor-cloud-ddos-protection/?trk=products_seo_search
Arbor Cloud DDoS Protection | LinkedIn

Arbor Cloud DDoS Protection
DDoS Protection Software by NETSCOUT

About

Arbor Cloud combines on-premises DDoS defense with cloud-based traffic scrubbing services that are tightly integrated via an automated cloud signal. This multi-layered, adaptive DDoS protection approach is a proven industry best practice and is the only way to mitigate today's full spectrum of DDoS threats for both Service Providers and Enterprises, all from a single cloud DDoS protection provider.

This product is intended for: Cyber Security Engineer, Chief Executive Officer, Chief Information Officer, Network Operations Center, Head of Information Technology, Network Security Engineer, Security Engineer, Director of Security, Information Technology Specialist, Director of Information Technology.
https://git-scm.com/book/it/v2/Git-Basics-Getting-a-Git-Repository
Git - Getting a Git Repository

2.1 Git Basics - Getting a Git Repository

If you can read only one chapter to get going with Git, this is it. This chapter covers every basic command you need to do most of the things you will eventually spend your time doing with Git. By the end of the chapter, you should be able to configure and initialize a repository, begin and stop tracking files, and stage and commit changes. We will also show you how to set up Git to ignore certain files and file patterns, how to undo a mistake quickly and easily, how to browse the history of your project and view changes between commits, and how to push and pull from remote repositories.
Getting a Git Repository

You can get a Git project using two main approaches. The first takes an existing project or directory and imports it into Git. The second clones an existing Git repository from another server.

Initializing a Repository in an Existing Directory

If you're starting to track an existing project in Git, you need to go to the project's directory and type:

    $ git init

This creates a new subdirectory named .git that contains all of your necessary repository files: a Git repository skeleton. At this point, nothing in your project is tracked yet. (See [ch10-git-internals] for more information about exactly what files are contained in the .git directory you just created.)

If you want to start version-controlling existing files (as opposed to an empty directory), you should probably begin tracking those files and do an initial commit. You can accomplish that with a few git add commands that specify the files you want to track, followed by a git commit:

    $ git add *.c
    $ git add LICENSE
    $ git commit -m 'initial project version'

We'll go over what these commands do in just a minute. At this point, you have a Git repository with tracked files and an initial commit.

Cloning an Existing Repository

If you want to get a copy of an existing Git repository (for example, a project you'd like to contribute to) the command you need is git clone. If you're familiar with other VCS systems such as Subversion, you'll notice that the command is "clone" and not "checkout". This is an important distinction: instead of getting just a working copy, Git receives a full copy of nearly all data that the server has. Every version of every file for the history of the project is pulled down by default when you run git clone.
In fact, if your server disk gets corrupted, you can often use nearly any of the clones on any client to set the server back to the state it was in when it was cloned (you may lose some server-side hooks and such, but all the versioned data would be there; see Getting Git on a Server for more details).

You clone a repository with git clone [url]. For example, if you want to clone the Git linkable library called libgit2, you can do so like this:

    $ git clone https://github.com/libgit2/libgit2

That creates a directory named "libgit2", initializes a .git directory inside it, pulls down all the data for that repository, and checks out a working copy of the latest version. If you go into the new libgit2 directory, you'll see the project files in there, ready to be worked on or used.

If you want to clone the repository into a directory named something other than "libgit2", you can specify the new directory name as an additional command-line argument:

    $ git clone https://github.com/libgit2/libgit2 mylibgit

That command does the same thing as the previous one, but the target directory is called mylibgit.

Git has a number of different transfer protocols you can use. The previous example uses the https:// protocol, but you may also see git:// or user@server:path/to/repo.git, which uses the SSH transfer protocol. Getting Git on a Server will introduce all of the available options the server can set up to access your Git repository, along with the pros and cons of each.
2026-01-13T09:29:18
https://doc.rust-lang.org/cargo/reference/credential-provider-protocol.html#failure-response-other
Credential Provider Protocol - The Cargo Book Keyboard shortcuts Press ← or → to navigate between chapters Press S or / to search in the book Press ? to show this help Press Esc to hide this help Auto Light Rust Coal Navy Ayu The Cargo Book Credential Provider Protocol This document describes information for building a Cargo credential provider. For information on setting up or using a credential provider, see Registry Authentication . When using an external credential provider, Cargo communicates with the credential provider using stdin/stdout messages passed as single lines of JSON. Cargo will always execute the credential provider with the --cargo-plugin argument. This enables a credential provider executable to have additional functionality beyond what Cargo needs. Additional arguments are included in the JSON via the args field. JSON messages The JSON messages in this document have newlines added for readability. Actual messages must not contain newlines. Credential hello Sent by: credential provider Purpose: used to identify the supported protocols on process startup { "v":[1] } Requests sent by Cargo will include a v field set to one of the versions listed here. If Cargo does not support any of the versions offered by the credential provider, it will issue an error and shut down the credential process. Registry information Sent by: Cargo Not a message by itself. Included in all messages sent by Cargo as the registry field. 
{ // Index URL of the registry "index-url":"https://github.com/rust-lang/crates.io-index", // Name of the registry in configuration (optional) "name": "crates-io", // HTTP headers received from attempting to access an authenticated registry (optional) "headers": ["WWW-Authenticate: cargo"] } Login request Sent by: Cargo Purpose: collect and store credentials { // Protocol version "v":1, // Action to perform: login "kind":"login", // Registry information (see Registry information) "registry":{"index-url":"sparse+https://registry-url/index/", "name": "my-registry"}, // User-specified token from stdin or command line (optional) "token": "<the token value>", // URL that the user could visit to get a token (optional) "login-url": "http://registry-url/login", // Additional command-line args (optional) "args":[] } If the token field is set, then the credential provider should use the token provided. If the token is not set, then the credential provider should prompt the user for a token. In addition to the arguments that may be passed to the credential provider in configuration, cargo login also supports passing additional command line args via cargo login -- <additional args> . These additional arguments will be included in the args field after any args from Cargo configuration. 
Read request Sent by: Cargo Purpose: Get the credential for reading crate information { // Protocol version "v":1, // Request kind: get credentials "kind":"get", // Action to perform: read crate information "operation":"read", // Registry information (see Registry information) "registry":{"index-url":"sparse+https://registry-url/index/", "name": "my-registry"}, // Additional command-line args (optional) "args":[] } Publish request Sent by: Cargo Purpose: Get the credential for publishing a crate { // Protocol version "v":1, // Request kind: get credentials "kind":"get", // Action to perform: publish crate "operation":"publish", // Crate name "name":"sample", // Crate version "vers":"0.1.0", // Crate checksum "cksum":"...", // Registry information (see Registry information) "registry":{"index-url":"sparse+https://registry-url/index/", "name": "my-registry"}, // Additional command-line args (optional) "args":[] } Get success response Sent by: credential provider Purpose: Gives the credential to Cargo {"Ok":{ // Response kind: this was a get request "kind":"get", // Token to send to the registry "token":"...", // Cache control. Can be one of the following: // * "never": do not cache // * "session": cache for the current cargo session // * "expires": cache for the current cargo session until expiration "cache":"expires", // Unix timestamp (only for "cache": "expires") "expiration":1693942857, // Is the token operation independent? "operation_independent":true }} The token will be sent to the registry as the value of the Authorization HTTP header. operation_independent indicates whether the token can be cached across different operations (such as publishing or fetching). In general, this should be true unless the provider wants to generate tokens that are scoped to specific operations. 
Login success response Sent by: credential provider Purpose: Indicates the login was successful {"Ok":{ // Response kind: this was a login request "kind":"login" }} Logout success response Sent by: credential provider Purpose: Indicates the logout was successful {"Ok":{ // Response kind: this was a logout request "kind":"logout" }} Failure response (URL not supported) Sent by: credential provider Purpose: Gives error information to Cargo {"Err":{ "kind":"url-not-supported" }} Sent if the credential provider is designed to only handle specific registry URLs and the given URL is not supported. Cargo will attempt another provider if available. Failure response (not found) Sent by: credential provider Purpose: Gives error information to Cargo {"Err":{ // Error: The credential could not be found in the provider. "kind":"not-found" }} Sent if the credential could not be found. This is expected for get requests where the credential is not available, or logout requests where there is nothing found to erase. Failure response (operation not supported) Sent by: credential provider Purpose: Gives error information to Cargo {"Err":{ // Error: The credential could not be found in the provider. "kind":"operation-not-supported" }} Sent if the credential provider does not support the requested operation. If a provider only supports get and a login is requested, the provider should respond with this error. Failure response (other) Sent by: credential provider Purpose: Gives error information to Cargo {"Err":{ // Error: something else has failed "kind":"other", // Error message string to be displayed "message": "free form string error message", // Detailed cause chain for the error (optional) "caused-by": ["cause 1", "cause 2"] }} Example communication to request a token for reading: Cargo spawns the credential process, capturing stdin and stdout. 
Credential process sends the Hello message to Cargo { "v": [1] } Cargo sends the CredentialRequest message to the credential process (newlines added for readability). { "v": 1, "kind": "get", "operation": "read", "registry":{"index-url":"sparse+https://registry-url/index/"} } Credential process sends the CredentialResponse to Cargo (newlines added for readability). { "token": "...", "cache": "session", "operation_independent": true } Cargo closes the stdin pipe to the credential provider and it exits. Cargo uses the token for the remainder of the session (until Cargo exits) when interacting with this registry.
2026-01-13T09:29:18
https://doc.rust-lang.org/cargo/reference/credential-provider-protocol.html#failure-response-not-found
Credential Provider Protocol - The Cargo Book Keyboard shortcuts Press ← or → to navigate between chapters Press S or / to search in the book Press ? to show this help Press Esc to hide this help Auto Light Rust Coal Navy Ayu The Cargo Book Credential Provider Protocol This document describes information for building a Cargo credential provider. For information on setting up or using a credential provider, see Registry Authentication . When using an external credential provider, Cargo communicates with the credential provider using stdin/stdout messages passed as single lines of JSON. Cargo will always execute the credential provider with the --cargo-plugin argument. This enables a credential provider executable to have additional functionality beyond what Cargo needs. Additional arguments are included in the JSON via the args field. JSON messages The JSON messages in this document have newlines added for readability. Actual messages must not contain newlines. Credential hello Sent by: credential provider Purpose: used to identify the supported protocols on process startup { "v":[1] } Requests sent by Cargo will include a v field set to one of the versions listed here. If Cargo does not support any of the versions offered by the credential provider, it will issue an error and shut down the credential process. Registry information Sent by: Cargo Not a message by itself. Included in all messages sent by Cargo as the registry field. 
{
  // Index URL of the registry
  "index-url": "https://github.com/rust-lang/crates.io-index",
  // Name of the registry in configuration (optional)
  "name": "crates-io",
  // HTTP headers received from attempting to access an authenticated registry (optional)
  "headers": ["WWW-Authenticate: cargo"]
}

Login request

Sent by: Cargo
Purpose: collect and store credentials

{
  // Protocol version
  "v": 1,
  // Action to perform: login
  "kind": "login",
  // Registry information (see Registry information)
  "registry": {"index-url": "sparse+https://registry-url/index/", "name": "my-registry"},
  // User-specified token from stdin or command line (optional)
  "token": "<the token value>",
  // URL that the user could visit to get a token (optional)
  "login-url": "http://registry-url/login",
  // Additional command-line args (optional)
  "args": []
}

If the token field is set, then the credential provider should use the token provided. If the token is not set, then the credential provider should prompt the user for a token. In addition to the arguments that may be passed to the credential provider in configuration, cargo login also supports passing additional command line args via cargo login -- <additional args>. These additional arguments will be included in the args field after any args from Cargo configuration.
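As a concrete illustration of the login request/response shapes, a provider's login handling might be sketched in Python as follows. The handler name and the in-memory token store are assumptions made for this sketch, not part of Cargo or any real provider; a real provider would persist the token in the OS keychain or an encrypted file.

```python
# Hypothetical login handling for a Cargo credential provider.
# TOKEN_STORE is an in-memory stand-in for real credential storage.
TOKEN_STORE: dict[str, str] = {}

def handle_login(req: dict) -> dict:
    """Handle a "login" request object and return the response object."""
    token = req.get("token")
    if token is None:
        # A real provider would prompt the user here, possibly pointing
        # them at req.get("login-url") to obtain a token.
        return {"Err": {"kind": "other",
                        "message": "no token supplied and prompting is not implemented"}}
    # Key the stored credential by the registry's index URL.
    TOKEN_STORE[req["registry"]["index-url"]] = token
    # Login success response, as defined by the protocol.
    return {"Ok": {"kind": "login"}}
```

In a running provider, the request would arrive as one JSON line on stdin and the returned object would be serialized back to Cargo as one JSON line on stdout.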
Read request

Sent by: Cargo
Purpose: Get the credential for reading crate information

{
  // Protocol version
  "v": 1,
  // Request kind: get credentials
  "kind": "get",
  // Action to perform: read crate information
  "operation": "read",
  // Registry information (see Registry information)
  "registry": {"index-url": "sparse+https://registry-url/index/", "name": "my-registry"},
  // Additional command-line args (optional)
  "args": []
}

Publish request

Sent by: Cargo
Purpose: Get the credential for publishing a crate

{
  // Protocol version
  "v": 1,
  // Request kind: get credentials
  "kind": "get",
  // Action to perform: publish crate
  "operation": "publish",
  // Crate name
  "name": "sample",
  // Crate version
  "vers": "0.1.0",
  // Crate checksum
  "cksum": "...",
  // Registry information (see Registry information)
  "registry": {"index-url": "sparse+https://registry-url/index/", "name": "my-registry"},
  // Additional command-line args (optional)
  "args": []
}

Get success response

Sent by: credential provider
Purpose: Gives the credential to Cargo

{"Ok": {
  // Response kind: this was a get request
  "kind": "get",
  // Token to send to the registry
  "token": "...",
  // Cache control. Can be one of the following:
  // * "never": do not cache
  // * "session": cache for the current cargo session
  // * "expires": cache for the current cargo session until expiration
  "cache": "expires",
  // Unix timestamp (only for "cache": "expires")
  "expiration": 1693942857,
  // Is the token operation independent?
  "operation_independent": true
}}

The token will be sent to the registry as the value of the Authorization HTTP header. operation_independent indicates whether the token can be cached across different operations (such as publishing or fetching). In general, this should be true unless the provider wants to generate tokens that are scoped to specific operations.
Login success response

Sent by: credential provider
Purpose: Indicates the login was successful

{"Ok": {
  // Response kind: this was a login request
  "kind": "login"
}}

Logout success response

Sent by: credential provider
Purpose: Indicates the logout was successful

{"Ok": {
  // Response kind: this was a logout request
  "kind": "logout"
}}

Failure response (URL not supported)

Sent by: credential provider
Purpose: Gives error information to Cargo

{"Err": {
  "kind": "url-not-supported"
}}

Sent if the credential provider is designed to only handle specific registry URLs and the given URL is not supported. Cargo will attempt another provider if available.

Failure response (not found)

Sent by: credential provider
Purpose: Gives error information to Cargo

{"Err": {
  // Error: The credential could not be found in the provider.
  "kind": "not-found"
}}

Sent if the credential could not be found. This is expected for get requests where the credential is not available, or logout requests where there is nothing found to erase.

Failure response (operation not supported)

Sent by: credential provider
Purpose: Gives error information to Cargo

{"Err": {
  // Error: The requested operation is not supported by the provider.
  "kind": "operation-not-supported"
}}

Sent if the credential provider does not support the requested operation. If a provider only supports get and a login is requested, the provider should respond with this error.

Failure response (other)

Sent by: credential provider
Purpose: Gives error information to Cargo

{"Err": {
  // Error: something else has failed
  "kind": "other",
  // Error message string to be displayed
  "message": "free form string error message",
  // Detailed cause chain for the error (optional)
  "caused-by": ["cause 1", "cause 2"]
}}

Example communication to request a token for reading:

Cargo spawns the credential process, capturing stdin and stdout.
Credential process sends the Hello message to Cargo:

{ "v": [1] }

Cargo sends the CredentialRequest message to the credential process (newlines added for readability):

{
  "v": 1,
  "kind": "get",
  "operation": "read",
  "registry": {"index-url": "sparse+https://registry-url/index/"}
}

Credential process sends the CredentialResponse to Cargo (newlines added for readability):

{"Ok": {
  "kind": "get",
  "token": "...",
  "cache": "session",
  "operation_independent": true
}}

Cargo closes the stdin pipe to the credential provider and it exits. Cargo uses the token for the remainder of the session (until Cargo exits) when interacting with this registry.
2026-01-13T09:29:18
https://giou.stanford.edu/
Generalized Intersection over Union: A Metric and A Loss for Bounding Box Regression

Object Detection and $IoU$

Intersection over Union (IoU), also known as the Jaccard index, is the most popular evaluation metric for tasks such as segmentation, object detection and tracking. Object detection consists of two sub-tasks: localization, which is determining the location of an object in an image, and classification, which is assigning a class to that object. The goal of localization in object detection is to draw a 2D bounding box around the objects in the scene; to simplify the discussion, we focus on a single bounding box. A ground-truth bounding box is one that a person has manually annotated (the examples on the original page are drawn from the COCO dataset), while a prediction bounding box is usually the output of a neural network, either during training or at inference time. Given the intersection area $I$ and the union area $U$ of the two boxes, Intersection over Union ($IoU$) is computed as follows: $$IoU = \frac{|A\cap B|}{|A\cup B|} = \frac{|I|}{|U|}$$ where $A$ and $B$ are the prediction and ground truth bounding boxes. $IoU$ has the appealing property of scale invariance: the measure depends on the relative overlap of the two shapes, not on their absolute size.

Common Cost Functions

Object detection neural networks commonly use the $\ell_1$-norm or $\ell_2$-norm for their cost function (a.k.a. loss function). Our work shows that there is not a strong correlation between minimizing these commonly used losses and improving the $IoU$ value.
To understand why this is the case, recall that a rectangle can be represented parametrically in a variety of ways. For example, bounding boxes can be represented by their top-left corner $(x_1, y_1)$ and their bottom-right corner $(x_2, y_2)$, written as $(x_1, y_1, x_2, y_2)$. Alternatively, the coordinates $(x_c, y_c)$ of the center of the bounding box can be used in conjunction with the bounding box's width $w$ and height $h$, giving $(x_c, y_c, w, h)$. If we calculate the $\ell_2$-norm distance $\|\cdot\|_2$ or the $\ell_1$-norm distance $\|\cdot\|_1$ between pairs of bounding boxes in these representations, we can construct cases where the $\ell_n$-norm values are exactly the same but the $IoU$ and $GIoU$ values are very different. It is common practice to train a network by optimizing a loss function such as the $\ell_1$-norm or $\ell_2$-norm, but then to evaluate performance with a different function, such as $IoU$. Moreover, $\ell_n$-norm based losses are not scale invariant: bounding boxes with the same level of overlap but different scales will give different values. State-of-the-art object detection networks deal with this problem by introducing ideas such as anchor boxes and non-linear representations, but even with these engineered tweaks, there is still a gap between the $\ell_n$-norm cost function and the $IoU$ metric.

$IoU$ vs. $GIoU$ as a Metric

In object detection, $IoU$ is used as a metric to evaluate how close the prediction bounding box is to the ground truth. When the prediction and ground truth bounding boxes overlap, the value of $IoU$ is non-zero. Let's look at an example where $IoU$ falls short: a bad prediction whose bounding box has no overlap with the ground truth.
In this case, and in any other case where there is no overlap between the ground truth and prediction bounding boxes, the intersection is 0, so $IoU$ will be 0 as well. Now suppose we make a better prediction that is closer to the ground truth but still does not overlap it: $IoU$ is still 0 for both. It would be nice if the metric indicated that the new, better prediction was closer to the ground truth than the first prediction, even in cases of no intersection. Our work proposes a solution to this, $GIoU$, which is formulated as follows: $$GIoU = \frac{|A\cap B|}{|A\cup B|} - \frac{|C\backslash(A\cup B)|}{|C|} = IoU - \frac{|C\backslash(A\cup B)|}{|C|}$$ where $A$ and $B$ are the prediction and ground truth bounding boxes, and $C$ is the smallest convex shape that encloses both $A$ and $B$ (for axis-aligned bounding boxes, the smallest enclosing box). The area of $C$ is smaller for the better prediction while all other terms remain constant; since $IoU$ is 0 in both cases, a smaller penalty is subtracted and the value of $GIoU$ increases as the prediction moves toward the ground truth.

$GIoU$ as a loss

Recall that in a neural network, any given loss function must be differentiable to allow for backpropagation. In cases where there is no intersection, $IoU$ is identically 0 and therefore provides no gradient; $GIoU$, however, is always differentiable. Sampling cases where the prediction bounding box overlaps (i.e., intersects) the ground truth as well as cases with no intersection, and plotting $GIoU$ against $IoU$, shows that $GIoU$ ranges from -1 to 1. Negative values occur when the penalty term $\frac{|C\backslash(A\cup B)|}{|C|}$ exceeds $IoU$. As the $IoU$ component increases, the value of $GIoU$ converges to $IoU$.
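The $IoU$ and $GIoU$ computations described above can be sketched in Python for axis-aligned boxes given as $(x_1, y_1, x_2, y_2)$. This is a minimal sketch following the formulation above (the function name is our own; it assumes the boxes have positive area):

```python
def giou(box_p, box_g):
    """Return (IoU, GIoU) for two axis-aligned boxes (x1, y1, x2, y2)."""
    # Normalize the predicted box so that x2 >= x1 and y2 >= y1.
    xp1, xp2 = sorted((box_p[0], box_p[2]))
    yp1, yp2 = sorted((box_p[1], box_p[3]))
    xg1, yg1, xg2, yg2 = box_g

    area_p = (xp2 - xp1) * (yp2 - yp1)
    area_g = (xg2 - xg1) * (yg2 - yg1)

    # Intersection I (zero if the boxes do not overlap).
    xi1, xi2 = max(xp1, xg1), min(xp2, xg2)
    yi1, yi2 = max(yp1, yg1), min(yp2, yg2)
    inter = (xi2 - xi1) * (yi2 - yi1) if xi2 > xi1 and yi2 > yi1 else 0.0

    # Smallest enclosing box C.
    xc1, xc2 = min(xp1, xg1), max(xp2, xg2)
    yc1, yc2 = min(yp1, yg1), max(yp2, yg2)
    area_c = (xc2 - xc1) * (yc2 - yc1)

    union = area_p + area_g - inter
    iou = inter / union
    # GIoU = IoU - (area of C not covered by the union) / area of C.
    return iou, iou - (area_c - union) / area_c
```

For the non-overlapping pair `giou((0, 0, 1, 1), (2, 2, 3, 3))`, $IoU$ is 0 while $GIoU$ is $-7/9$, so the loss $\mathcal{L}_{GIoU} = 1 - GIoU$ still distinguishes between bad and worse predictions where $\mathcal{L}_{IoU} = 1 - IoU$ cannot.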
$GIoU$ Algorithm Pseudocode

Algorithm: $IoU$ and $GIoU$ as bounding box losses

Input: predicted box $B^p = (x^p_1, y^p_1, x^p_2, y^p_2)$ and ground truth box $B^g = (x^g_1, y^g_1, x^g_2, y^g_2)$
Output: $\mathcal{L}_{IoU}$, $\mathcal{L}_{GIoU}$

1. For the predicted box $B^p$, ensure $x^p_2 > x^p_1$ and $y^p_2 > y^p_1$: $\hat{x}^p_1 = \min(x^p_1, x^p_2)$, $\hat{x}^p_2 = \max(x^p_1, x^p_2)$, $\hat{y}^p_1 = \min(y^p_1, y^p_2)$, $\hat{y}^p_2 = \max(y^p_1, y^p_2)$.
2. Calculate the area of $B^g$: $A^g = (x^g_2 - x^g_1) \times (y^g_2 - y^g_1)$.
3. Calculate the area of $B^p$: $A^p = (\hat{x}^p_2 - \hat{x}^p_1) \times (\hat{y}^p_2 - \hat{y}^p_1)$.
4. Calculate the intersection $\mathcal{I}$ between $B^p$ and $B^g$: $x^{\mathcal{I}}_1 = \max(\hat{x}^p_1, x^g_1)$, $x^{\mathcal{I}}_2 = \min(\hat{x}^p_2, x^g_2)$, $y^{\mathcal{I}}_1 = \max(\hat{y}^p_1, y^g_1)$, $y^{\mathcal{I}}_2 = \min(\hat{y}^p_2, y^g_2)$, $$\mathcal{I} = \begin{cases} (x^{\mathcal{I}}_2 - x^{\mathcal{I}}_1) \times (y^{\mathcal{I}}_2 - y^{\mathcal{I}}_1) & \text{if} \quad x^{\mathcal{I}}_2 > x^{\mathcal{I}}_1, y^{\mathcal{I}}_2 > y^{\mathcal{I}}_1, \\ 0 & \text{otherwise} \end{cases}$$
5. Find the coordinates of the smallest enclosing box $B^c$: $x^{c}_1 = \min(\hat{x}^p_1, x^g_1)$, $x^{c}_2 = \max(\hat{x}^p_2, x^g_2)$, $y^{c}_1 = \min(\hat{y}^p_1, y^g_1)$, $y^{c}_2 = \max(\hat{y}^p_2, y^g_2)$.
6. Calculate the area of $B^c$: $A^c = (x^c_2 - x^c_1) \times (y^c_2 - y^c_1)$.
7. $\displaystyle IoU = \frac{\mathcal{I}}{\mathcal{U}}$, where $\mathcal{U} = A^p + A^g - \mathcal{I}$.
8. $\displaystyle GIoU = IoU - \frac{A^c - \mathcal{U}}{A^c}$.
9. $\mathcal{L}_{IoU} = 1 - IoU$, $\mathcal{L}_{GIoU} = 1 - GIoU$.

Team: Hamid Rezatofighi, Nathan Tsoi, JunYoung Gwak, Amir Sadeghian, Ian Reid, Silvio Savarese.

If you found this work helpful in your research, please consider citing:

@article{Rezatofighi_2018_CVPR,
  author    = {Rezatofighi, Hamid and Tsoi, Nathan and Gwak, JunYoung and Sadeghian, Amir and Reid, Ian and Savarese, Silvio},
  title     = {Generalized Intersection over Union},
  booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
  month     = {June},
  year      = {2019},
}
2026-01-13T09:29:18
https://sre.google/sre-book/evolving-sre-engagement-model/#id-8ZQuEFQt2TVTyu7K-marker
Google SRE - Production Readiness Review: Engagement Insight

Chapter 32 - The Evolving SRE Engagement Model

Written by Acacio Cruz and Ashish Bhambhani
Edited by Betsy Beyer and Tim Harvey

SRE Engagement: What, How, and Why

We've discussed in most of the rest of this book what happens when SRE is already in charge of a service.
Few services begin their lifecycle enjoying SRE support, so there needs to be a process for evaluating a service, making sure that it merits SRE support, negotiating how to improve any deficits that bar SRE support, and actually instituting SRE support. We call this process onboarding . If you are in an environment where you are surrounded by a lot of existing services in varying states of perfection, your SRE team will probably be running through a prioritized queue of onboardings for quite a while until the team has finished taking on the highest-value targets. Although this is very common, and a completely reasonable way of dealing with a fait accompli environment, there are actually at least two better ways of bringing the wisdom of production, and SRE support, to services old and new alike. In the first case, just as in software engineering—where the earlier the bug is found, the cheaper it is to fix—the earlier an SRE team consultation happens, the better the service will be and the quicker it will feel the benefit. When SRE is engaged during the earliest stages of design , the time to onboard is lowered and the service is more reliable "out of the gate," usually because we don't have to spend the time unwinding suboptimal design or implementation. Another way, perhaps the best, is to short-circuit the process by which specially created systems with lots of individual variations end up "arriving" at SRE's door. Provide product development with a platform of SRE-validated infrastructure, upon which they can build their systems. This platform will have the double benefit of being both reliable and scalable. This avoids certain classes of cognitive load problems entirely, and by addressing common infrastructure practices, allows product development teams to focus on innovation at the application layer, where it mostly belongs. 
In the following sections, we'll spend some time looking at each of these models in turn, beginning with the "classic" one, the PRR-driven model. The PRR Model The most typical initial step of SRE engagement is the Production Readiness Review (PRR), a process that identifies the reliability needs of a service based on its specific details. Through a PRR, SREs seek to apply what they've learned and experienced to ensure the reliability of a service operating in production. A PRR is considered a prerequisite for an SRE team to accept responsibility for managing the production aspects of a service. Figure 32-1 illustrates the lifecycle of a typical service. The Production Readiness Review can be started at any point of the service lifecycle, but the stages at which SRE engagement is applied have expanded over time. This chapter describes the Simple PRR Model, then discusses how its modification into the Extended Engagement Model and the Frameworks and SRE Platform structure allowed SRE to scale their engagement process and impact. Figure 32-1. A typical service lifecycle The SRE Engagement Model SRE seeks production responsibility for important services for which it can make concrete contributions to reliability. SRE is concerned with several aspects of a service, which are collectively referred to as production . These aspects include the following : System architecture and interservice dependencies Instrumentation, metrics, and monitoring Emergency response Capacity planning Change management Performance: availability, latency, and efficiency When SREs engage with a service, we aim to improve it along all of these axes, which makes managing production for the service easier. Alternative Support Not all Google services receive close SRE engagement. A couple of factors are at play here: Many services don't need high reliability and availability, so support can be provided by other means. 
By design, the number of development teams that request SRE support exceeds the available bandwidth of SRE teams (see Introduction ). When SRE can't provide full-fledged support, it provides other options for making improvements to production, such as documentation and consultation. Documentation Development guides are available for internal technologies and clients of widely used systems. Google's Production Guide documents production best practices for services , as determined by the experiences of SRE and development teams alike. Developers can implement the solutions and recommendations in such documentation to improve their services. Consultation Developers may also seek SRE consulting to discuss specific services or problem areas. The Launch Coordination Engineering (LCE) team (see Reliable Product Launches at Scale ) spends a majority of its time consulting with development teams. SRE teams that aren't specifically dedicated to launch consultations also engage in consultation with development teams. When a new service or a new feature has been implemented, developers usually consult with SRE for advice about preparing for the Launch phase. Launch consultation usually involves one or two SREs spending a few hours studying the design and implementation at a high level. The SRE consultants then meet with the development team to provide advice on risky areas that need attention and to discuss well-known patterns or solutions that can be incorporated to improve the service in production. Some of this advice may come from the Production Guide mentioned earlier. Consultation sessions are necessarily broad in scope because it's not possible to gain a deep understanding of a given system in the limited time available. For some development teams, consultation is not sufficient: Services that have grown by orders of magnitude since they launched, which now require more time to understand than is feasible through documentation and consultation. 
Services upon which many other services have subsequently come to rely, and which now host significantly more traffic from many different clients. These types of services may have grown to the point at which they begin to encounter significant difficulties in production while simultaneously becoming important to users. In such cases, long-term SRE engagement becomes necessary to ensure that they are properly maintained in production as they grow. Production Readiness Reviews: Simple PRR Model When a development team requests that SRE take over production management of a service, SRE gauges both the importance of the service and the availability of SRE teams. If the service merits SRE support, and the SRE team and development organization agree on staffing levels to facilitate this support, SRE initiates a Production Readiness Review with the development team. The objectives of the Production Readiness Review are as follows: Verify that a service meets accepted standards of production setup and operational readiness, and that service owners are prepared to work with SRE and take advantage of SRE expertise. Improve the reliability of the service in production, and minimize the number and severity of incidents that might be expected. A PRR targets all aspects of production that SRE cares about. After sufficient improvements are made and the service is deemed ready for SRE support, an SRE team assumes its production responsibilities. This brings us to the Production Readiness Review process itself. There are three different but related engagement models (Simple PRR Model, Early Engagement Model, and Frameworks and SRE Platform), which will be discussed in turn. We will first describe the Simple PRR Model, which is usually targeted at a service that is already launched and will be taken over by an SRE team. A PRR follows several phases, much like a development lifecycle, although it may proceed independently in parallel with the development lifecycle.
Engagement SRE leadership first decides which SRE team is a good fit for taking over the service. Usually one to three SREs are selected or self-nominated to conduct the PRR process. This small group then initiates discussion with the development team. The discussion covers matters such as: Establishing an SLO/SLA for the service Planning for potentially disruptive design changes required to improve reliability Planning and training schedules The goal is to arrive at a common agreement about the process, end goals, and outcomes that are necessary for the SRE team to engage with the development team and their service. Analysis Analysis is the first large segment of work. During this phase, the SRE reviewers learn about the service and begin analyzing it for production shortcomings. They aim to gauge the maturity of the service along the various axes of concern to SRE. They also examine the service's design and implementation to check if it follows production best practices. Usually, the SRE team establishes and maintains a PRR checklist explicitly for the Analysis phase. The checklist is specific to the service and is generally based on domain expertise, experience with related or similar systems, and best practices from the Production Guide. The SRE team may also consult other teams that have more experience with certain components or dependencies of the service. A few examples of checklist items include: Do updates to the service impact an unreasonably large percentage of the system at once? Does the service connect to the appropriate serving instance of its dependencies? For example, end-user requests to a service should not depend on a system that is designed for a batch-processing use case. Does the service request a sufficiently high network quality-of-service when talking to a critical remote service? Does the service report errors to central logging systems for analysis? 
Does it report all exceptional conditions that result in degraded responses or failures to the end users? Are all user-visible request failures well instrumented and monitored, with suitable alerting configured? The checklist may also include operational standards and best practices followed by a specific SRE team. For example, a perfectly functional service configuration that doesn't follow an SRE team's "gold standard" might be refactored to work better with SRE tools for scalably managing configurations. SREs also look at recent incidents and postmortems for the service, as well as follow-up tasks for the incidents. This evaluation gauges the demands of emergency response for the service and the availability of well-established operational controls. Improvements and Refactoring The Analysis phase leads to the identification of recommended improvements for the service. This next phase proceeds as follows: Improvements are prioritized based upon importance for service reliability. The priorities are discussed and negotiated with the development team, and a plan of execution is agreed upon. Both SRE and product development teams participate and assist each other in refactoring parts of the service or implementing additional features. This phase typically varies the most in duration and amount of effort. How much time and effort this phase will involve depends upon the availability of engineering time for refactoring, the maturity and complexity of the service at the start of the review, and myriad other factors. Training Responsibility for managing a service in production is generally assumed by an entire SRE team. To ensure that the team is prepared, the SRE reviewers who led the PRR take ownership of training the team, which includes the documentation necessary to support the service. Typically with the help and participation of the development team, these engineers organize a series of training sessions and exercises. 
Instruction can include:

- Design overviews
- Deep dives on various request flows in the system
- A description of the production setup
- Hands-on exercises for various aspects of system operations

When the training is concluded, the SRE team should be prepared to manage the service.

Onboarding

The Training phase unblocks onboarding of the service by the SRE team. It involves a progressive transfer of responsibilities and ownership of various production aspects of the service, including parts of operations, the change management process, access rights, and so forth. The SRE team continues to focus on the various areas of production mentioned earlier. To complete the transition, the development team must be available to back up and advise the SRE team for a period of time as it settles in managing production for the service. This relationship becomes the basis for the ongoing work between the teams.

Continuous Improvement

Active services continuously change in response to new demands and conditions, including user requests for new features, evolving system dependencies, and technology upgrades, in addition to other factors. The SRE team must maintain service reliability standards in the face of these changes by driving continuous improvement. The responsible SRE team naturally learns more about the service in the course of operating the service, reviewing new changes, responding to incidents, and especially when conducting postmortems and root cause analyses. This expertise is shared with the development team as suggestions and proposals for changes to the service whenever new features, components, and dependencies may be added to the service. Lessons from managing the service are also contributed to best practices, which are documented in the Production Guide and elsewhere.

Engaging with Shakespeare

Initially, the developers of the Shakespeare service were responsible for the product, including carrying the pager for emergency response.
However, with growing use of the service and the growth of the revenue coming from it, SRE support became desirable. Because the product had already been launched, SRE conducted a Production Readiness Review. One of the findings was that the dashboards did not completely cover some of the metrics defined in the SLO, so that needed to be fixed. After all the issues that had been filed were fixed, SRE took over the pager for the service, though two developers remained in the on-call rotation as well. The developers participate in the weekly on-call meeting, discussing last week's problems and how to handle upcoming large-scale maintenance or cluster turndowns. Future plans for the service are now also discussed with the SREs to make sure that new launches will go flawlessly (though Murphy's Law is always looking for opportunities to spoil that).

Evolving the Simple PRR Model: Early Engagement

Thus far, we've discussed the Production Readiness Review as it's used in the Simple PRR Model, which is limited to services that have already entered the Launch phase. There are several limitations and costs associated with this model. For example:

- Additional communication between teams can increase process overhead for the development team, and cognitive burden for the SRE reviewers.
- The right SRE reviewers must be available, and capable of managing their time and priorities with regard to their existing engagements.
- Work done by SREs must be highly visible and sufficiently reviewed by the development team to ensure effective knowledge sharing. SREs should essentially work as a part of the development team, rather than an external unit.

However, the main limitations of the Simple PRR Model stem from the fact that the service is already launched and serving at scale, and the SRE engagement starts very late in the development lifecycle.
If the PRR occurred earlier in the service lifecycle, SRE's opportunity to remedy potential issues in the service would be markedly increased, and the success of the SRE engagement, and of the service itself, would likely improve. Conversely, a late start can pose a significant challenge to both.

Candidates for Early Engagement

The Early Engagement Model introduces SRE earlier in the development lifecycle in order to achieve significant additional advantages. Applying the Early Engagement Model requires identifying the importance and/or business value of a service early in the development lifecycle, and determining whether the service will have sufficient scale or complexity to benefit from SRE expertise. Applicable services often have the following characteristics:

- The service implements significant new functionality and will be part of an existing system already managed by SRE.
- The service is a significant rewrite or alternative to an existing system, targeting the same use cases.
- The development team sought SRE advice or approached SRE for takeover upon launch.

The Early Engagement Model essentially immerses SREs in the development process. SRE's focus remains the same, though the means to achieve a better production service are different. SRE participates in Design and later phases, eventually taking over the service any time during or after the Build phase. This model is based on active collaboration between the development and SRE teams.

Benefits of the Early Engagement Model

While the Early Engagement Model does entail certain risks and challenges discussed previously, additional SRE expertise and collaboration during the entire lifecycle of the product creates significant benefits compared to an engagement initiated later in the service lifecycle.
Design phase

SRE collaboration during the Design phase can prevent a variety of problems or incidents from occurring later in production. While design decisions can be reversed or rectified later in the development lifecycle, such changes come at a high cost in terms of effort and complexity. The best production incidents are those that never happen! Occasionally, difficult trade-offs lead to the selection of a less-than-ideal design. Participation in the Design phase means that SREs are aware up front of the trade-offs and are part of the decision to pick a less-than-ideal option. Early SRE involvement aims to minimize future disputes over design choices once the service is in production.

Build and implementation

The Build phase addresses production aspects such as instrumentation and metrics, operational and emergency controls, resource usage, and efficiency. During this phase, SRE can influence and improve the implementation by recommending specific existing libraries and components, or helping build certain controls into the system. SRE participation at this stage helps enable ease of operations in the future and allows SRE to gain operational experience in advance of the launch.

Launch

SRE can also help implement widely used launch patterns and controls. For example, SRE might help implement a "dark launch" setup, in which part of the traffic from existing users is sent to the new service in addition to being sent to the live production service. The responses from the new service are "dark": they are thrown away and not actually shown to users. Practices such as dark launches allow the team to gain operational insight, resolve issues without impacting existing users, and reduce the risk of encountering issues after launch. A smooth launch is immensely helpful in keeping the operational burden low and maintaining development momentum after the launch.
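The dark launch setup described above can be sketched as a front end that serves every request from the live backend while teeing a sample of requests to the new backend, logging disagreements and discarding the dark responses. This is a hypothetical minimal model, not a real traffic-splitting system; the handler signature and the 1-in-N sampling are assumptions made for the example.

```python
# Illustrative dark-launch front end: users only ever see the live
# backend's response; the dark backend's response is compared and
# recorded, and its failures can never reach users.
class DarkLaunchFrontend:
    def __init__(self, live, dark, tee_every=10):
        self.live = live                # handler: request -> response
        self.dark = dark                # candidate handler under test
        self.tee_every = tee_every      # tee 1 in N requests
        self.mismatches = []            # (request, live, dark) records
        self._count = 0

    def handle(self, request):
        response = self.live(request)   # the authoritative answer
        self._count += 1
        if self._count % self.tee_every == 0:
            try:
                dark_response = self.dark(request)
                if dark_response != response:
                    self.mismatches.append((request, response, dark_response))
            except Exception as e:
                # A crashing dark backend is data, not an outage.
                self.mismatches.append((request, response, repr(e)))
        return response                 # dark result is never returned
```

The recorded mismatches are exactly the "operational insight" the text mentions: the team can fix divergences in the new service before any user depends on it.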
Disruptions around launch can easily result in emergency changes to source code and production, and disrupt the development team's work on future features.

Post-launch

Having a stable system at launch time generally leads to fewer conflicting priorities for the development team in terms of choosing between improving service reliability and adding new features. In later phases of the service, the lessons from earlier phases can better inform refactoring or redesign. With extended involvement, the SRE team can be ready to take over the new service much sooner than is possible with the Simple PRR Model. The longer and closer engagement between the SRE and development teams also creates a collaborative relationship that can be sustained long term. A positive cross-team relationship fosters a mutual feeling of solidarity, and helps SRE establish ownership of the production responsibility.

Disengaging from a service

Sometimes a service doesn't warrant full-fledged SRE team management; this determination might be made post-launch, or SRE might engage with a service but never officially take it over. This is a positive outcome, because the service has been engineered to be reliable and low maintenance, and can therefore remain with the development team. It is also possible that SRE engages early with a service that fails to meet the levels of usage projected. In such cases, the SRE effort spent is simply part of the overall business risk that comes with new projects, and a small cost relative to the success of projects that meet expected scale. The SRE team can be reassigned, and lessons learned can be incorporated into the engagement process.

Evolving Services Development: Frameworks and SRE Platform

The Early Engagement Model made strides in evolving SRE engagement beyond the Simple PRR Model, which applied only to services that had already launched. However, there was still progress to be made in scaling SRE engagement to the next level by designing for reliability.
Lessons Learned

Over time, the SRE engagement model described thus far produced several distinct patterns:

- Onboarding each service required two or three SREs and typically lasted two or three quarters. The lead times for a PRR were relatively high (quarters away). The effort level required was proportional to the number of services under review, and was constrained by the insufficient number of SREs available to conduct PRRs. These conditions led to serialization of service takeovers and strict service prioritization.
- Due to differing software practices across services, each production feature was implemented differently. To meet PRR-driven standards, features usually had to be reimplemented specifically for each service or, at best, once for each small subset of services sharing code. These reimplementations were a waste of engineering effort. One canonical example is the repeated implementation of functionally similar logging frameworks in the same language, because different services didn't follow the same coding structure.
- A review of common service issues and outages revealed certain patterns, but there was no way to easily replicate fixes and improvements across services. Typical examples included service overload situations and data hot-spotting.
- SRE software engineering contributions were often local to the service, making it difficult to build generic solutions for reuse. As a consequence, there was no easy way to apply the lessons individual SRE teams learned, or the best practices they developed, across services that had already been onboarded.

External Factors Affecting SRE

External factors have traditionally pressured the SRE organization and its resources in several ways. Google is increasingly following the industry trend of moving toward microservices.[151] As a result, both the number of requests for SRE support and the cardinality of services to support have increased.
Because each service has a base fixed operational cost, even simple services demand more staffing. Microservices also imply an expectation of lower lead time for deployment, which was not possible with the previous PRR model (which had a lead time of months). Hiring experienced, qualified SREs is difficult and costly. Despite enormous effort from the recruiting organization, there are never enough SREs to support all the services that need their expertise. Once SREs are hired, their training is also a lengthier process than is typical for development engineers. Finally, the SRE organization is responsible for serving the needs of the large and growing number of development teams that do not already enjoy direct SRE support. This mandate calls for extending the SRE support model far beyond the original concept and engagement model.

Toward a Structural Solution: Frameworks

To effectively respond to these conditions, it became necessary to develop a model that allowed for the following principles:

Codified best practices: The ability to commit what works well in production to code, so services can simply use this code and become "production ready" by design.

Reusable solutions: Common and easily shareable implementations of techniques used to mitigate scalability and reliability issues.

A common production platform with a common control surface: Uniform sets of interfaces to production facilities, uniform sets of operational controls, and uniform monitoring, logging, and configuration for all services.

Easier automation and smarter systems: A common control surface that enables automation and smart systems at a level not possible before. For example, SREs can readily receive a single view of relevant information for an outage, rather than hand-collecting and analyzing mostly raw data from disparate sources (logs, monitoring data, and so on).
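The principles above, codified best practices and reusable solutions behind a common control surface, can be illustrated with a toy framework in which each production concern is a reusable module and the development team supplies only business logic. All class and method names here are invented for illustration; this is not Google's actual framework API.

```python
# Hedged sketch: production concerns as pluggable framework modules
# sharing one uniform interface (the "common control surface").

class Module:
    """One production concern with a uniform request/response hook."""
    def on_request(self, request): pass
    def on_response(self, request, response): pass

class RequestCounter(Module):
    """Instrumentation module: standard request and error counters."""
    def __init__(self):
        self.requests = 0
        self.errors = 0
    def on_request(self, request):
        self.requests += 1
    def on_response(self, request, response):
        if response.startswith("ERROR"):   # assumed error convention
            self.errors += 1

class LoadShedder(Module):
    """Traffic-management module: reject work beyond a capacity limit."""
    def __init__(self, max_inflight=100):
        self.max_inflight = max_inflight
        self.inflight = 0
    def on_request(self, request):
        if self.inflight >= self.max_inflight:
            raise RuntimeError("shed: over capacity")
        self.inflight += 1
    def on_response(self, request, response):
        self.inflight -= 1

class FrameworkService:
    """Developers supply only the business-logic callable; the
    framework runs every module around each request."""
    def __init__(self, logic, modules):
        self.logic = logic
        self.modules = modules
    def handle(self, request):
        for m in self.modules:
            m.on_request(request)
        response = self.logic(request)
        for m in self.modules:
            m.on_response(request, response)
        return response
```

Because every service composes the same module implementations, a fix or improvement to `LoadShedder` propagates to all framework users at once, which is exactly the leverage the reusable-solutions principle describes.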
Based upon these principles, a set of SRE-supported platform and service frameworks was created, one for each environment we support (Java, C++, Go). Services built using these frameworks share implementations that are designed to work with the SRE-supported platform, and are maintained by both SRE and development teams. The main shift brought about by frameworks was to enable product development teams to design applications using the framework solution that was built and blessed by SRE, as opposed to either retrofitting the application to SRE specifications after the fact, or retrofitting more SREs to support a service that was markedly different from other Google services. An application typically comprises some business logic, which in turn depends on various infrastructure components. SRE production concerns are largely focused on the infrastructure-related parts of a service. The service frameworks implement infrastructure code in a standardized fashion and address various production concerns. Each concern is encapsulated in one or more framework modules, each of which provides a cohesive solution for a problem domain or infrastructure dependency. Framework modules address the various SRE concerns enumerated earlier, such as:

- Instrumentation and metrics
- Request logging
- Control systems involving traffic and load management

SRE builds framework modules to implement canonical solutions for the corresponding production area. As a result, development teams can focus on the business logic, because the framework already takes care of correct infrastructure use. A framework is essentially a prescriptive implementation for using a set of software components and a canonical way of combining those components. The framework can also expose features that control various components in a cohesive manner.
For example, a framework might provide the following:

- Business logic organized as well-defined semantic components that can be referenced using standard terms
- Standard dimensions for monitoring instrumentation
- A standard format for request debugging logs
- A standard configuration format for managing load shedding
- A capacity measure for a single server and a determination of "overload" that can both use a semantically consistent measure for feedback to various control systems

Frameworks provide multiple upfront gains in consistency and efficiency. They free developers from having to glue together and configure individual components in an ad hoc, service-specific manner, in ever-so-slightly incompatible ways that then have to be manually reviewed by SREs. They drive a single reusable solution for production concerns across services, which means that framework users end up with the same common implementation and minimal configuration differences. Google supports several major languages for application development, and frameworks are implemented across all of these languages. While different implementations of the framework (say, in C++ versus Java) can't share code, the goal is to expose the same API, behavior, configuration, and controls for identical functionality. Therefore, development teams can choose the language platform that fits their needs and experience, while SREs can still expect the same familiar behavior in production and standard tools to manage the service.

New Service and Management Benefits

The structural approach, founded on service frameworks and a common production platform and control surface, provided a host of new benefits.

Significantly lower operational overhead

A production platform built on top of frameworks with stronger conventions significantly reduced operational overhead, for the following reasons:

- It supports strong conformance tests for coding structure, dependencies, tests, coding style guides, and so on.
  This functionality also improves user data privacy, testing, and security conformance.
- It features built-in service deployment, monitoring, and automation for all services.
- It facilitates easier management of large numbers of services, especially microservices, which are growing in number.
- It enables much faster deployment: an idea can graduate to fully deployed, SRE-level production quality in a matter of days!

Universal support by design

The constant growth in the number of services at Google means that most of these services can neither warrant SRE engagement nor be maintained by SREs. Regardless, services that don't receive full SRE support can be built to use production features that are developed and maintained by SREs. This practice effectively breaks the SRE staffing barrier. Enabling SRE-supported production standards and tools for all teams improves the overall service quality across Google. Furthermore, all services implemented with frameworks automatically benefit from improvements made over time to framework modules.

Faster, lower overhead engagements

The frameworks approach results in faster PRR execution because we can rely upon:

- Built-in service features as part of the framework implementation
- Faster service onboarding (usually accomplished by a single SRE during one quarter)
- Less cognitive burden for the SRE teams managing services built using frameworks

These properties allow SRE teams to lower the assessment and qualification effort for service onboarding, while maintaining a high bar on service production quality.

A new engagement model based on shared responsibility

The original SRE engagement model presented only two options: either full SRE support, or approximately no SRE engagement.
[152] A production platform with a common service structure, conventions, and software infrastructure made it possible for an SRE team to provide support for the "platform" infrastructure, while the development teams provide on-call support for functional issues with the service; that is, for bugs in the application code. Under this model, SREs assume responsibility for the development and maintenance of large parts of service software infrastructure, particularly control systems such as load shedding, overload, automation, traffic management, logging, and monitoring. This model represents a significant departure from the way service management was originally conceived, in two major ways: it entails a new relationship model for the interaction between SRE and development teams, and a new staffing model for SRE-supported service management.[153]

Conclusion

Service reliability can be improved through SRE engagement, in a process that includes systematic review and improvement of its production aspects. Google SRE's initial systematic approach, the Simple Production Readiness Review, made strides in standardizing the SRE engagement model, but was only applicable to services that had already entered the Launch phase. Over time, SRE extended and improved this model. The Early Engagement Model involved SRE earlier in the development lifecycle in order to "design for reliability." As demand for SRE expertise continued to grow, the need for a more scalable engagement model became increasingly apparent. Frameworks for production services were developed to meet this demand: code patterns based on production best practices were standardized and encapsulated in frameworks, so that use of frameworks became a recommended, consistent, and relatively simple way of building production-ready services. All three of the engagement models described are still practiced within Google.
However, the adoption of frameworks is becoming a prominent influence on building production-ready services at Google, as well as profoundly expanding the SRE contribution, lowering service management overhead, and improving baseline service quality across the organization.

[151] See the Wikipedia page on microservices at https://en.wikipedia.org/wiki/Microservices.

[152] Occasionally, there were consulting engagements by SRE teams with some non-onboarded services, but consultations were a best-effort approach and limited in number and scope.

[153] The new model of service management changes the SRE staffing model in two ways: (1) because a lot of service technology is common, it reduces the number of required SREs per service; (2) it enables the creation of production platforms with separation of concerns between production platform support (done by SREs) and service-specific business-logic support, which remains with the development team. These platform teams are staffed based upon the need to maintain the platform rather than upon service count, and can be shared across products.

Copyright © 2017 Google, Inc. Published by O'Reilly Media, Inc. Licensed under CC BY-NC-ND 4.0
1.5 Getting Started with Git - Installing Git

Installing Git

Before you start using Git, you have to install it on your computer. Even if it's already installed, it's a good idea to update it to the latest version. You can install it as a standalone package, through another package installer, or by downloading the source code and compiling it into binaries.

Note: This book was written using Git version 2. Since Git has very good backward compatibility, any newer version should work correctly. Although most of the commands used in this book should also work in much older versions of Git, some of them may behave slightly differently.

Installing on Linux

If you want to install Git on Linux, you can do so through the basic package management tool included with your Linux distribution. For example, if you use Fedora (or a similar RPM-based distribution), you can use dnf:

$ sudo dnf install git-all

If you use a Debian-based distribution such as Ubuntu, run apt:

$ sudo apt install git-all

For more options, you can find instructions for installing Git on various Linux distributions at https://git-scm.com/download/linux.

Installing on Mac

There are several ways to install Git on a Mac.
The easiest is to install the Xcode Command Line Tools. From Mac OS X Mavericks (10.9) onward, we can install them simply by running the git command from the Terminal for the first time:

$ git --version

If we don't already have it installed, it will prompt us to install it.

If we want a more up-to-date version, we can also install Git through an installer. A Git installer for macOS is available for download from the Git website, https://git-scm.com/download/mac .

Figure 7. Git installer for OS X

Installing on Windows

There are also several ways to install Git on Windows. The most official build is available for download from the Git website. We just go to https://git-scm.com/download/win and the download starts automatically. Note that this program is called Git for Windows, which is a separate project from Git itself; for more information about it, see https://gitforwindows.org .

For an automated installation we can use the Git Chocolatey package. Note that the Chocolatey package is community maintained.

Installing from source

Some people find it more useful to install Git from its source code, because that way they get the most recent version. The installers mentioned above tend to lag slightly behind, although as Git has matured in recent years, we won't see any dramatic difference.

If we really want to install Git from source, we need to have the following libraries that Git depends on: autotools, curl, zlib, openssl, expat, and libiconv.
For example, if we have a system that uses dnf (such as Fedora) or apt-get (such as a Debian-based system), we can use one of the following commands to install the minimal dependencies for compiling and installing Git:

$ sudo dnf install dh-autoreconf curl-devel expat-devel gettext-devel \
  openssl-devel perl-devel zlib-devel

$ sudo apt-get install dh-autoreconf libcurl4-gnutls-dev libexpat1-dev \
  gettext libz-dev libssl-dev

In order to be able to add the documentation in various formats (doc, html, info), we will also need the following:

$ sudo dnf install asciidoc xmlto docbook2X

$ sudo apt-get install asciidoc xmlto docbook2x

Note: Users of RHEL and its derivatives, such as CentOS and Scientific Linux, will have to enable the EPEL repository to download the docbook2X package.

If we use a Debian-based distribution (Debian/Ubuntu/Ubuntu derivatives), we also need the install-info package:

$ sudo apt-get install install-info

If we use an RPM-based distribution (Fedora/RHEL/RHEL derivatives), we also need the getopt package (which is already installed on Debian-based distributions):

$ sudo dnf install getopt

Additionally, on Fedora/RHEL/RHEL derivatives, we need to do the following:

$ sudo ln -s /usr/bin/db2x_docbook2texi /usr/bin/docbook2x-texi

because of differences in the binary's name.

Once we have installed all the necessary dependencies, we can go ahead and download the latest release of Git from several places. We can get it from the kernel.org site, https://www.kernel.org/pub/software/scm/git , or from the corresponding GitHub page, https://github.com/git/git/tags . It is generally easier to find the latest version on the GitHub page, but kernel.org also provides release signatures for verifying the download.
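Since kernel.org publishes release signatures alongside the tarballs, it is good practice to verify a download before building. The exact signing keys and file names vary by release, so here is a minimal, self-contained sketch of the same verify-before-unpacking habit using a checksum with sha256sum; the tarball below is a stand-in file created for illustration, not a real download:

```shell
# Stand-in for a downloaded release tarball; in practice we would fetch
# git-2.8.0.tar.gz and its published checksum from the release page.
printf 'stand-in for the release tarball\n' > git-2.8.0.tar.gz

# Normally this .sha256 line comes from the publisher, not from our own machine.
sha256sum git-2.8.0.tar.gz > git-2.8.0.tar.gz.sha256

# Verify before unpacking; prints "git-2.8.0.tar.gz: OK" when the file matches.
sha256sum -c git-2.8.0.tar.gz.sha256
```

For the real tarballs, the GPG signatures on kernel.org provide stronger, publisher-backed verification than a bare checksum.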
We can now compile and install:

$ tar -zxf git-2.8.0.tar.gz
$ cd git-2.8.0
$ make configure
$ ./configure --prefix=/usr
$ make all doc info
$ sudo make install install-doc install-html install-info

After this is done, we can also get Git via Git itself, so that we can pull updated versions:

$ git clone https://git.kernel.org/pub/scm/git/git.git
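Whichever installation route we take, a quick way to confirm that the install succeeded and which Git ended up on our PATH is:

```shell
# Prints the version of the first git found on PATH, e.g. "git version 2.8.0".
git --version
```

If the output still shows an old version after installing from source, check that the installation prefix used above (/usr) precedes the old installation's directory on PATH.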
Chapter 32 - The Evolving SRE Engagement Model

Written by Acacio Cruz and Ashish Bhambhani
Edited by Betsy Beyer and Tim Harvey

SRE Engagement: What, How, and Why

We've discussed in most of the rest of this book what happens when SRE is already in charge of a service.
Few services begin their lifecycle enjoying SRE support, so there needs to be a process for evaluating a service, making sure that it merits SRE support, negotiating how to improve any deficits that bar SRE support, and actually instituting SRE support. We call this process onboarding. If you are in an environment where you are surrounded by a lot of existing services in varying states of perfection, your SRE team will probably be running through a prioritized queue of onboardings for quite a while until the team has finished taking on the highest-value targets.

Although this is very common, and a completely reasonable way of dealing with a fait accompli environment, there are actually at least two better ways of bringing the wisdom of production, and SRE support, to services old and new alike.

In the first case, just as in software engineering—where the earlier the bug is found, the cheaper it is to fix—the earlier an SRE team consultation happens, the better the service will be and the quicker it will feel the benefit. When SRE is engaged during the earliest stages of design, the time to onboard is lowered and the service is more reliable "out of the gate," usually because we don't have to spend the time unwinding suboptimal design or implementation.

Another way, perhaps the best, is to short-circuit the process by which specially created systems with lots of individual variations end up "arriving" at SRE's door. Provide product development with a platform of SRE-validated infrastructure, upon which they can build their systems. This platform will have the double benefit of being both reliable and scalable. This avoids certain classes of cognitive load problems entirely, and by addressing common infrastructure practices, allows product development teams to focus on innovation at the application layer, where it mostly belongs.
In the following sections, we'll spend some time looking at each of these models in turn, beginning with the "classic" one, the PRR-driven model.

The PRR Model

The most typical initial step of SRE engagement is the Production Readiness Review (PRR), a process that identifies the reliability needs of a service based on its specific details. Through a PRR, SREs seek to apply what they've learned and experienced to ensure the reliability of a service operating in production. A PRR is considered a prerequisite for an SRE team to accept responsibility for managing the production aspects of a service.

Figure 32-1 illustrates the lifecycle of a typical service. The Production Readiness Review can be started at any point of the service lifecycle, but the stages at which SRE engagement is applied have expanded over time. This chapter describes the Simple PRR Model, then discusses how its modification into the Early Engagement Model and the Frameworks and SRE Platform structure allowed SRE to scale its engagement process and impact.

Figure 32-1. A typical service lifecycle

The SRE Engagement Model

SRE seeks production responsibility for important services for which it can make concrete contributions to reliability. SRE is concerned with several aspects of a service, which are collectively referred to as production. These aspects include the following:

- System architecture and interservice dependencies
- Instrumentation, metrics, and monitoring
- Emergency response
- Capacity planning
- Change management
- Performance: availability, latency, and efficiency

When SREs engage with a service, we aim to improve it along all of these axes, which makes managing production for the service easier.

Alternative Support

Not all Google services receive close SRE engagement. A couple of factors are at play here: many services don't need high reliability and availability, so support can be provided by other means.
Additionally, by design, the number of development teams that request SRE support exceeds the available bandwidth of SRE teams (see Introduction). When SRE can't provide full-fledged support, it provides other options for making improvements to production, such as documentation and consultation.

Documentation

Development guides are available for internal technologies and clients of widely used systems. Google's Production Guide documents production best practices for services, as determined by the experiences of SRE and development teams alike. Developers can implement the solutions and recommendations in such documentation to improve their services.

Consultation

Developers may also seek SRE consulting to discuss specific services or problem areas. The Launch Coordination Engineering (LCE) team (see Reliable Product Launches at Scale) spends a majority of its time consulting with development teams. SRE teams that aren't specifically dedicated to launch consultations also engage in consultation with development teams.

When a new service or a new feature has been implemented, developers usually consult with SRE for advice about preparing for the Launch phase. Launch consultation usually involves one or two SREs spending a few hours studying the design and implementation at a high level. The SRE consultants then meet with the development team to provide advice on risky areas that need attention and to discuss well-known patterns or solutions that can be incorporated to improve the service in production. Some of this advice may come from the Production Guide mentioned earlier.

Consultation sessions are necessarily broad in scope because it's not possible to gain a deep understanding of a given system in the limited time available. For some development teams, consultation is not sufficient:

- Services that have grown by orders of magnitude since they launched, which now require more time to understand than is feasible through documentation and consultation.
- Services upon which many other services have subsequently come to rely, which now host significantly more traffic from many different clients.

These types of services may have grown to the point at which they begin to encounter significant difficulties in production while simultaneously becoming important to users. In such cases, long-term SRE engagement becomes necessary to ensure that they are properly maintained in production as they grow.

Production Readiness Reviews: Simple PRR Model

When a development team requests that SRE take over production management of a service, SRE gauges both the importance of the service and the availability of SRE teams. If the service merits SRE support, and the SRE team and development organization agree on staffing levels to facilitate this support, SRE initiates a Production Readiness Review with the development team.

The objectives of the Production Readiness Review are as follows:

- Verify that a service meets accepted standards of production setup and operational readiness, and that service owners are prepared to work with SRE and take advantage of SRE expertise.
- Improve the reliability of the service in production, and minimize the number and severity of incidents that might be expected.

A PRR targets all aspects of production that SRE cares about. After sufficient improvements are made and the service is deemed ready for SRE support, an SRE team assumes its production responsibilities.

This brings us to the Production Readiness Review process itself. There are three different but related engagement models (Simple PRR Model, Early Engagement Model, and Frameworks and SRE Platform), which will be discussed in turn. We will first describe the Simple PRR Model, which is usually targeted at a service that is already launched and will be taken over by an SRE team. A PRR follows several phases, much like a development lifecycle, although it may proceed independently in parallel with the development lifecycle.
Engagement

SRE leadership first decides which SRE team is a good fit for taking over the service. Usually one to three SREs are selected or self-nominated to conduct the PRR process. This small group then initiates discussion with the development team. The discussion covers matters such as:

- Establishing an SLO/SLA for the service
- Planning for potentially disruptive design changes required to improve reliability
- Planning and training schedules

The goal is to arrive at a common agreement about the process, end goals, and outcomes that are necessary for the SRE team to engage with the development team and their service.

Analysis

Analysis is the first large segment of work. During this phase, the SRE reviewers learn about the service and begin analyzing it for production shortcomings. They aim to gauge the maturity of the service along the various axes of concern to SRE. They also examine the service's design and implementation to check if it follows production best practices.

Usually, the SRE team establishes and maintains a PRR checklist explicitly for the Analysis phase. The checklist is specific to the service and is generally based on domain expertise, experience with related or similar systems, and best practices from the Production Guide. The SRE team may also consult other teams that have more experience with certain components or dependencies of the service.

A few examples of checklist items include:

- Do updates to the service impact an unreasonably large percentage of the system at once?
- Does the service connect to the appropriate serving instance of its dependencies? For example, end-user requests to a service should not depend on a system that is designed for a batch-processing use case.
- Does the service request a sufficiently high network quality-of-service when talking to a critical remote service?
- Does the service report errors to central logging systems for analysis?
- Does it report all exceptional conditions that result in degraded responses or failures to the end users?
- Are all user-visible request failures well instrumented and monitored, with suitable alerting configured?

The checklist may also include operational standards and best practices followed by a specific SRE team. For example, a perfectly functional service configuration that doesn't follow an SRE team's "gold standard" might be refactored to work better with SRE tools for scalably managing configurations. SREs also look at recent incidents and postmortems for the service, as well as follow-up tasks for the incidents. This evaluation gauges the demands of emergency response for the service and the availability of well-established operational controls.

Improvements and Refactoring

The Analysis phase leads to the identification of recommended improvements for the service. This next phase proceeds as follows:

- Improvements are prioritized based upon importance for service reliability.
- The priorities are discussed and negotiated with the development team, and a plan of execution is agreed upon.
- Both SRE and product development teams participate and assist each other in refactoring parts of the service or implementing additional features.

This phase typically varies the most in duration and amount of effort. How much time and effort this phase will involve depends upon the availability of engineering time for refactoring, the maturity and complexity of the service at the start of the review, and myriad other factors.

Training

Responsibility for managing a service in production is generally assumed by an entire SRE team. To ensure that the team is prepared, the SRE reviewers who led the PRR take ownership of training the team, which includes the documentation necessary to support the service. Typically with the help and participation of the development team, these engineers organize a series of training sessions and exercises.
Instruction can include:

- Design overviews
- Deep dives on various request flows in the system
- A description of the production setup
- Hands-on exercises for various aspects of system operations

When the training is concluded, the SRE team should be prepared to manage the service.

Onboarding

The Training phase unblocks onboarding of the service by the SRE team. It involves a progressive transfer of responsibilities and ownership of various production aspects of the service, including parts of operations, the change management process, access rights, and so forth. The SRE team continues to focus on the various areas of production mentioned earlier. To complete the transition, the development team must be available to back up and advise the SRE team for a period of time as it settles in managing production for the service. This relationship becomes the basis for the ongoing work between the teams.

Continuous Improvement

Active services continuously change in response to new demands and conditions, including user requests for new features, evolving system dependencies, and technology upgrades, in addition to other factors. The SRE team must maintain service reliability standards in the face of these changes by driving continuous improvement. The responsible SRE team naturally learns more about the service in the course of operating the service, reviewing new changes, responding to incidents, and especially when conducting postmortems/root cause analyses. This expertise is shared with the development team as suggestions and proposals for changes to the service whenever new features, components, and dependencies may be added to the service. Lessons from managing the service are also contributed to best practices, which are documented in the Production Guide and elsewhere.

Engaging with Shakespeare

Initially, the developers of the Shakespeare service were responsible for the product, including carrying the pager for emergency response.
However, with growing use of the service and the growth of the revenue coming from the service, SRE support became desirable. The product had already been launched, so SRE conducted a Production Readiness Review. One of the things they found was that the dashboards were not completely covering some of the metrics defined in the SLO, so that needed to be fixed. After all the issues that had been filed had been fixed, SRE took over the pager for the service, though two developers were in the on-call rotation as well. The developers participate in the weekly on-call meeting, discussing last week's problems and how to handle upcoming large-scale maintenance or cluster turndowns. Future plans for the service are also now discussed with the SREs to make sure that new launches will go flawlessly (though Murphy's Law is always looking for opportunities to spoil that).

Evolving the Simple PRR Model: Early Engagement

Thus far, we've discussed the Production Readiness Review as it's used in the Simple PRR Model, which is limited to services that have already entered the Launch phase. There are several limitations and costs associated with this model. For example:

- Additional communication between teams can increase some process overhead for the development team, and cognitive burden for the SRE reviewers.
- The right SRE reviewers must be available, and capable of managing their time and priorities with regard to their existing engagements.
- Work done by SREs must be highly visible and sufficiently reviewed by the development team to ensure effective knowledge sharing. SREs should essentially work as a part of the development team, rather than an external unit.

However, the main limitations of the PRR Model stem from the fact that the service is launched and serving at scale, and the SRE engagement starts very late in the development lifecycle.
If the PRR occurred earlier in the service lifecycle, SRE's opportunity to remedy potential issues in the service would be markedly increased, and both the SRE engagement and the service itself would be more likely to succeed. When the engagement begins late, the resulting drawbacks can pose a significant challenge to both.

Candidates for Early Engagement

The Early Engagement Model introduces SRE earlier in the development lifecycle in order to achieve significant additional advantages. Applying the Early Engagement Model requires identifying the importance and/or business value of a service early in the development lifecycle, and determining if the service will have sufficient scale or complexity to benefit from SRE expertise. Applicable services often have the following characteristics:

- The service implements significant new functionality and will be part of an existing system already managed by SRE.
- The service is a significant rewrite or alternative to an existing system, targeting the same use cases.
- The development team sought SRE advice or approached SRE for takeover upon launch.

The Early Engagement Model essentially immerses SREs in the development process. SRE's focus remains the same, though the means to achieve a better production service are different. SRE participates in Design and later phases, eventually taking over the service any time during or after the Build phase. This model is based on active collaboration between the development and SRE teams.

Benefits of the Early Engagement Model

While the Early Engagement Model does entail certain risks and challenges discussed previously, additional SRE expertise and collaboration during the entire lifecycle of the product creates significant benefits compared to an engagement initiated later in the service lifecycle.
Design phase

SRE collaboration during the Design phase can prevent a variety of problems or incidents from occurring later in production. While design decisions can be reversed or rectified later in the development lifecycle, such changes come at a high cost in terms of effort and complexity. The best production incidents are those that never happen! Occasionally, difficult trade-offs lead to the selection of a less-than-ideal design. Participation in the Design phase means that SREs are aware up front of the trade-offs and are part of the decision to pick a less-than-ideal option. Early SRE involvement aims to minimize future disputes over design choices once the service is in production.

Build and implementation

The Build phase addresses production aspects such as instrumentation and metrics, operational and emergency controls, resource usage, and efficiency. During this phase, SRE can influence and improve the implementation by recommending specific existing libraries and components, or helping build certain controls into the system. SRE participation at this stage helps enable ease of operations in the future and allows SRE to gain operational experience in advance of the launch.

Launch

SRE can also help implement widely used launch patterns and controls. For example, SRE might help implement a "dark launch" setup, in which part of the traffic from existing users is sent to the new service in addition to being sent to the live production service. The responses from the new service are "dark" since they are thrown away and not actually shown to users. Practices such as dark launches allow the team to gain operational insight, resolve issues without impacting existing users, and reduce the risk of encountering issues after launch. A smooth launch is immensely helpful in keeping the operational burden low and maintaining the development momentum after the launch.
Disruptions around launch can easily result in emergency changes to source code and production, and disrupt the development team's work on future features.

Post-launch

Having a stable system at launch time generally leads to fewer conflicting priorities for the development team in terms of choosing between improving service reliability versus adding new features. In later phases of the service, the lessons from earlier phases can better inform refactoring or redesign.

With extended involvement, the SRE team can be ready to take over the new service much sooner than is possible with the Simple PRR Model. The longer and closer engagement between the SRE and development teams also creates a collaborative relationship that can be sustained long term. A positive cross-team relationship fosters a mutual feeling of solidarity, and helps SRE establish ownership of the production responsibility.

Disengaging from a service

Sometimes a service doesn't warrant full-fledged SRE team management—this determination might be made post-launch, or SRE might engage with a service but never officially take it over. This is a positive outcome, because the service has been engineered to be reliable and low maintenance, and can therefore remain with the development team.

It is also possible that SRE engages early with a service that fails to meet the levels of usage projected. In such cases, the SRE effort spent is simply part of the overall business risk that comes with new projects, and a small cost relative to the success of projects that meet expected scale. The SRE team can be reassigned, and lessons learned can be incorporated into the engagement process.

Evolving Services Development: Frameworks and SRE Platform

The Early Engagement Model made strides in evolving SRE engagement beyond the Simple PRR Model, which applied only to services that had already launched. However, there was still progress to be made in scaling SRE engagement to the next level by designing for reliability.
Lessons Learned

Over time, the SRE engagement model described thus far produced several distinct patterns:

- Onboarding each service required two or three SREs and typically lasted two or three quarters. The lead times for a PRR were relatively high (quarters away). The effort level required was proportional to the number of services under review, and was constrained by the insufficient number of SREs available to conduct PRRs. These conditions led to serialization of service takeovers and strict service prioritization.
- Due to differing software practices across services, each production feature was implemented differently. To meet PRR-driven standards, features usually had to be reimplemented specifically for each service or, at best, once for each small subset of services sharing code. These reimplementations were a waste of engineering effort. One canonical example is the implementation of functionally similar logging frameworks repeatedly in the same language because different services didn't implement the same coding structure.
- A review of common service issues and outages revealed certain patterns, but there was no way to easily replicate fixes and improvements across services. Typical examples included service overload situations and data hot-spotting.
- SRE software engineering contributions were often local to the service. Thus, building generic solutions to be reused was difficult. As a consequence, there was no easy way to propagate the lessons learned by individual SRE teams, or best practices, across services that had already been onboarded.

External Factors Affecting SRE

External factors have traditionally pressured the SRE organization and its resources in several ways. Google is increasingly following the industry trend of moving toward microservices. As a result, both the number of requests for SRE support and the cardinality of services to support have increased.
Because each service has a base fixed operational cost, even simple services demand more staffing. Microservices also imply an expectation of lower lead time for deployment, which was not possible with the previous PRR model (which had a lead time of months).

Hiring experienced, qualified SREs is difficult and costly. Despite enormous effort from the recruiting organization, there are never enough SREs to support all the services that need their expertise. Once SREs are hired, their training is also a lengthier process than is typical for development engineers.

Finally, the SRE organization is responsible for serving the needs of the large and growing number of development teams that do not already enjoy direct SRE support. This mandate calls for extending the SRE support model far beyond the original concept and engagement model.

Toward a Structural Solution: Frameworks

To effectively respond to these conditions, it became necessary to develop a model that allowed for the following principles:

Codified best practices
The ability to commit what works well in production to code, so services can simply use this code and become "production ready" by design.

Reusable solutions
Common and easily shareable implementations of techniques used to mitigate scalability and reliability issues.

A common production platform with a common control surface
Uniform sets of interfaces to production facilities, uniform sets of operational controls, and uniform monitoring, logging, and configuration for all services.

Easier automation and smarter systems
A common control surface that enables automation and smart systems at a level not possible before. For example, SREs can readily receive a single view of relevant information for an outage, rather than hand collecting and analyzing mostly raw data from disparate sources (logs, monitoring data, and so on).
Based upon these principles, a set of SRE-supported platform and service frameworks was created, one for each environment we support (Java, C++, Go). Services built using these frameworks share implementations that are designed to work with the SRE-supported platform, and are maintained by both SRE and development teams. The main shift brought about by frameworks was to enable product development teams to design applications using the framework solution that was built and blessed by SRE, as opposed to either retrofitting the application to SRE specifications after the fact, or retrofitting more SREs to support a service that was markedly different from other Google services.

An application typically comprises some business logic, which in turn depends on various infrastructure components. SRE production concerns are largely focused on the infrastructure-related parts of a service. The service frameworks implement infrastructure code in a standardized fashion and address various production concerns. Each concern is encapsulated in one or more framework modules, each of which provides a cohesive solution for a problem domain or infrastructure dependency. Framework modules address the various SRE concerns enumerated earlier, such as:

- Instrumentation and metrics
- Request logging
- Control systems involving traffic and load management

SRE builds framework modules to implement canonical solutions for the concerned production area. As a result, development teams can focus on the business logic, because the framework already takes care of correct infrastructure use. A framework essentially is a prescriptive implementation for using a set of software components and a canonical way of combining these components. The framework can also expose features that control various components in a cohesive manner.
For example, a framework might provide the following:

Business logic organized as well-defined semantic components that can be referenced using standard terms
Standard dimensions for monitoring instrumentation
A standard format for request debugging logs
A standard configuration format for managing load shedding
Capacity of a single server and determination of "overload" that can both use a semantically consistent measure for feedback to various control systems

Frameworks provide multiple upfront gains in consistency and efficiency. They free developers from having to glue together and configure individual components in an ad hoc, service-specific manner, in ever-so-slightly incompatible ways that then have to be manually reviewed by SREs. They drive a single reusable solution for production concerns across services, which means that framework users end up with the same common implementation and minimal configuration differences.

Google supports several major languages for application development, and frameworks are implemented across all of these languages. While different implementations of the framework (say, in C++ versus Java) can't share code, the goal is to expose the same API, behavior, configuration, and controls for identical functionality. Therefore, development teams can choose the language platform that fits their needs and experience, while SREs can still expect the same familiar behavior in production and standard tools to manage the service.

New Service and Management Benefits

The structural approach, founded on service frameworks and a common production platform and control surface, provided a host of new benefits.

Significantly lower operational overhead

A production platform built on top of frameworks with stronger conventions significantly reduced operational overhead, for the following reasons:

It supports strong conformance tests for coding structure, dependencies, tests, coding style guides, and so on.
This functionality also improves user data privacy, testing, and security conformance.

It features built-in service deployment, monitoring, and automation for all services.

It facilitates easier management of large numbers of services, especially microservices, which are growing in number.

It enables much faster deployment: an idea can graduate to fully deployed, SRE-level production quality in a matter of days!

Universal support by design

The constant growth in the number of services at Google means that most of these services can neither warrant SRE engagement nor be maintained by SREs. Regardless, services that don't receive full SRE support can be built to use production features that are developed and maintained by SREs. This practice effectively breaks the SRE staffing barrier. Enabling SRE-supported production standards and tools for all teams improves the overall service quality across Google. Furthermore, all services that are implemented with frameworks automatically benefit from improvements made over time to framework modules.

Faster, lower overhead engagements

The frameworks approach results in faster PRR execution because we can rely upon:

Built-in service features as part of the framework implementation
Faster service onboarding (usually accomplished by a single SRE during one quarter)
Less cognitive burden for the SRE teams managing services built using frameworks

These properties allow SRE teams to lower the assessment and qualification effort for service onboarding, while maintaining a high bar on service production quality.

A new engagement model based on shared responsibility

The original SRE engagement model presented only two options: either full SRE support, or approximately no SRE engagement.[152]

A production platform with a common service structure, conventions, and software infrastructure made it possible for an SRE team to provide support for the "platform" infrastructure, while the development teams provide on-call support for functional issues with the service — that is, for bugs in the application code. Under this model, SREs assume responsibility for the development and maintenance of large parts of service software infrastructure, particularly control systems such as load shedding, overload, automation, traffic management, logging, and monitoring. This model represents a significant departure from the way service management was originally conceived, in two major ways: it entails a new relationship model for the interaction between SRE and development teams, and a new staffing model for SRE-supported service management.[153]

Conclusion

Service reliability can be improved through SRE engagement, in a process that includes systematic review and improvement of its production aspects. Google SRE's initial such systematic approach, the Simple Production Readiness Review, made strides in standardizing the SRE engagement model, but was only applicable to services that had already entered the Launch phase. Over time, SRE extended and improved this model. The Early Engagement Model involved SRE earlier in the development lifecycle in order to "design for reliability." As demand for SRE expertise continued to grow, the need for a more scalable engagement model became increasingly apparent. Frameworks for production services were developed to meet this demand: code patterns based on production best practices were standardized and encapsulated in frameworks, so that use of frameworks became a recommended, consistent, and relatively simple way of building production-ready services. All three of the engagement models described are still practiced within Google.
However, the adoption of frameworks is becoming a prominent influence on building production-ready services at Google, profoundly expanding the SRE contribution, lowering service management overhead, and improving baseline service quality across the organization.

151 See the Wikipedia page on microservices at https://en.wikipedia.org/wiki/Microservices.

152 Occasionally, there were consulting engagements by SRE teams with some non-onboarded services, but consultations were a best-effort approach and limited in number and scope.

153 The new model of service management changes the SRE staffing model in two ways: (1) because a lot of service technology is common, it reduces the number of required SREs per service; (2) it enables the creation of production platforms with separation of concerns between production platform support (done by SREs) and service-specific business-logic support, which remains with the development team. These platform teams are staffed based upon the need to maintain the platform rather than upon service count, and can be shared across products.

Copyright © 2017 Google, Inc. Published by O'Reilly Media, Inc. Licensed under CC BY-NC-ND 4.0
https://git-scm.com/book/tl/v2/Mga-Pangunahing-Kaalaman-sa-Git-Pag-tag
Git - Tagging

2.6 Git Basics - Tagging

Tagging

Like most VCSs, Git has the ability to tag specific points in a repository's history as being important. Typically, people use this functionality to mark release points (v1.0, and so on). In this section, you'll learn how to list existing tags, how to create new tags, and what the different types of tags are.
Listing Your Tags

Listing the existing tags in Git is straightforward. Just type git tag (with optional -l or --list):

$ git tag
v0.1
v1.3

This command lists the tags in alphabetical order; the order in which they are displayed has no real importance.

You can also search for tags that match a particular pattern. The Git source repo, for instance, contains more than 500 tags. If you're interested only in looking at the 1.8.5 series, you can run this:

$ git tag -l "v1.8.5*"
v1.8.5
v1.8.5-rc0
v1.8.5-rc1
v1.8.5-rc2
v1.8.5-rc3
v1.8.5.1
v1.8.5.2
v1.8.5.3
v1.8.5.4
v1.8.5.5

Note: Listing tag wildcards requires the -l or --list option. If you want just the entire list of tags, running the command git tag implicitly assumes you want a listing and provides one; the use of -l or --list in this case is optional. If, however, you supply a wildcard pattern to match tag names, the use of -l or --list is mandatory.

Creating Tags

Git supports two types of tags: lightweight and annotated. A lightweight tag is very much like a branch that doesn't change — it's just a pointer to a specific commit. Annotated tags, however, are stored as full objects in the Git database. They're checksummed; contain the tagger name, email, and date; have a tagging message; and can be signed and verified with GNU Privacy Guard (GPG). It's generally recommended that you create annotated tags so you can have all this information; but if you want a temporary tag or for some reason don't want to keep the other information, lightweight tags are available too.
Annotated Tags

Creating an annotated tag in Git is simple. The easiest way is to specify -a when you run the tag command:

$ git tag -a v1.4 -m "my version 1.4"
$ git tag
v0.1
v1.3
v1.4

The -m specifies a tagging message, which is stored with the tag. If you don't specify a message for an annotated tag, Git launches your editor so you can type it in.

You can see the tag data along with the commit that was tagged by using the git show command:

$ git show v1.4
tag v1.4
Tagger: Ben Straub <ben@straub.cc>
Date:   Sat May 3 20:19:12 2014 -0700

my version 1.4

commit ca82a6dff817ec66f44342007202690a93763949
Author: Scott Chacon <schacon@gee-mail.com>
Date:   Mon Mar 17 21:52:11 2008 -0700

    changed the version number

That shows the tagger information, the date the commit was tagged, and the annotation message before showing the commit information.

Lightweight Tags

Another way to tag commits is with a lightweight tag. This is basically the commit checksum stored in a file — no other information is kept. To create a lightweight tag, don't supply any of the -a, -s, or -m options; just provide a tag name:

$ git tag v1.4-lw
$ git tag
v0.1
v1.3
v1.4
v1.4-lw
v1.5

This time, if you run git show on the tag, you don't see the extra tag information. The command just shows the commit:

$ git show v1.4-lw
commit ca82a6dff817ec66f44342007202690a93763949
Author: Scott Chacon <schacon@gee-mail.com>
Date:   Mon Mar 17 21:52:11 2008 -0700

    changed the version number

Tagging Later

You can also tag commits after you've moved past them.
Suppose your commit history looks like this:

$ git log --pretty=oneline
15027957951b64cf874c3557a0f3547bd83b3ff6 Merge branch 'experiment'
a6b4c97498bd301d84096da251c98a07c7723e65 beginning write support
0d52aaab4479697da7686c15f77a3d64d9165190 one more thing
6d52a271eda8725415634dd79daabbc4d9b6008e Merge branch 'experiment'
0b7434d86859cc7b8c3d5e1dddfed66ff742fcbc added a commit function
4682c3261057305bdd616e23b64b0857d832627b added a todo file
166ae0c4d3f420721acbb115cc33848dfcc2121a started write support
9fceb02d0ae598e95dc970b74767f19372d61af8 updated rakefile
964f16d36dfccde844893cac5b347e7b3d44abbc commit the todo
8a5cbc430f1a9c3d00faaeffd07798508422908a updated readme

Now, suppose you forgot to tag the project at v1.2, which was at the "updated rakefile" commit. You can add it after the fact. To tag that commit, you specify the commit checksum (or part of it) at the end of the command:

$ git tag -a v1.2 9fceb02

You can see that you've tagged the commit:

$ git tag
v0.1
v1.2
v1.3
v1.4
v1.4-lw
v1.5

$ git show v1.2
tag v1.2
Tagger: Scott Chacon <schacon@gee-mail.com>
Date:   Mon Feb 9 15:32:16 2009 -0800

version 1.2

commit 9fceb02d0ae598e95dc970b74767f19372d61af8
Author: Magnus Chacon <mchacon@gee-mail.com>
Date:   Sun Apr 27 20:43:35 2008 -0700

    updated rakefile
...

Sharing Tags

By default, the git push command doesn't transfer tags to remote servers. You have to explicitly push tags to a shared server after you have created them. This process is just like sharing remote branches — you can run git push origin <tagname>:

$ git push origin v1.5
Counting objects: 14, done.
Delta compression using up to 8 threads.
Compressing objects: 100% (12/12), done.
Writing objects: 100% (14/14), 2.05 KiB | 0 bytes/s, done.
Total 14 (delta 3), reused 0 (delta 0)
To git@github.com:schacon/simplegit.git
 * [new tag]         v1.5 -> v1.5

If you have a lot of tags that you want to push at once, you can also use the --tags option with the git push command. This will transfer all of your tags to the remote server that are not already there:

$ git push origin --tags
Counting objects: 1, done.
Writing objects: 100% (1/1), 160 bytes | 0 bytes/s, done.
Total 1 (delta 0), reused 0 (delta 0)
To git@github.com:schacon/simplegit.git
 * [new tag]         v1.4 -> v1.4
 * [new tag]         v1.4-lw -> v1.4-lw

Now, when someone else clones or pulls from your repository, they will get all your tags as well.

Checking out Tags

If you want to view the versions of files a tag is pointing to, you can do a git checkout of that tag, although this puts your repository in "detached HEAD" state, which has some ill side effects:

$ git checkout 2.0.0
Note: checking out '2.0.0'.

You are in 'detached HEAD' state. You can look around, make experimental
changes and commit them, and you can discard any commits you make in this
state without impacting any branches by performing another checkout.

If you want to create a new branch to retain commits you create, you may
do so (now or later) by using -b with the checkout command again. Example:

  git checkout -b <new-branch>

HEAD is now at 99ada87... Merge pull request #89 from schacon/appendix-final

$ git checkout 2.0-beta-0.1
Previous HEAD position was 99ada87... Merge pull request #89 from schacon/appendix-final
HEAD is now at df3f601... add atlas.json and cover image

In "detached HEAD" state, if you make changes and then create a commit, the tag will stay the same, but your new commit won't belong to any branch and will be unreachable, except by the exact commit hash.
Therefore, if you need to make changes — say you're fixing a bug on an older version, for instance — you will generally want to create a branch:

$ git checkout -b version2 v2.0.0
Switched to a new branch 'version2'

If you do this and make a commit, your version2 branch will be slightly different than your v2.0.0 tag since it will move forward with your new changes, so do be careful.
https://doc.rust-lang.org/cargo/reference/credential-provider-protocol.html#example-communication-to-request-a-token-for-reading
Credential Provider Protocol - The Cargo Book

Credential Provider Protocol

This document describes information for building a Cargo credential provider. For information on setting up or using a credential provider, see Registry Authentication.

When using an external credential provider, Cargo communicates with the credential provider using stdin/stdout messages passed as single lines of JSON. Cargo will always execute the credential provider with the --cargo-plugin argument. This enables a credential provider executable to have additional functionality beyond what Cargo needs. Additional arguments are included in the JSON via the args field.

JSON messages

The JSON messages in this document have newlines added for readability. Actual messages must not contain newlines.

Credential hello

Sent by: credential provider
Purpose: used to identify the supported protocols on process startup

{ "v": [1] }

Requests sent by Cargo will include a v field set to one of the versions listed here. If Cargo does not support any of the versions offered by the credential provider, it will issue an error and shut down the credential process.

Registry information

Sent by: Cargo

Not a message by itself. Included in all messages sent by Cargo as the registry field.
{
  // Index URL of the registry
  "index-url": "https://github.com/rust-lang/crates.io-index",
  // Name of the registry in configuration (optional)
  "name": "crates-io",
  // HTTP headers received from attempting to access an authenticated registry (optional)
  "headers": ["WWW-Authenticate: cargo"]
}

Login request

Sent by: Cargo
Purpose: collect and store credentials

{
  // Protocol version
  "v": 1,
  // Action to perform: login
  "kind": "login",
  // Registry information (see Registry information)
  "registry": {"index-url": "sparse+https://registry-url/index/", "name": "my-registry"},
  // User-specified token from stdin or command line (optional)
  "token": "<the token value>",
  // URL that the user could visit to get a token (optional)
  "login-url": "http://registry-url/login",
  // Additional command-line args (optional)
  "args": []
}

If the token field is set, then the credential provider should use the token provided. If the token is not set, then the credential provider should prompt the user for a token.

In addition to the arguments that may be passed to the credential provider in configuration, cargo login also supports passing additional command line args via cargo login -- <additional args>. These additional arguments will be included in the args field after any args from Cargo configuration.
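The token-handling rule above (use the token if Cargo supplied one, otherwise prompt the user) can be sketched as follows. This is a hypothetical illustration, not part of Cargo: the `handle_login` function, the `token_store` dict, and the `prompt` hook are all invented names, and a real provider would store the token securely rather than in memory.

```python
# Hypothetical sketch of handling a v1 "login" request: if Cargo supplied
# a token, store it; otherwise prompt the user (showing login-url if given).
# handle_login, token_store, and prompt are invented for illustration.

def handle_login(request, token_store, prompt=input):
    """Store a credential for the registry named in a login request."""
    index_url = request["registry"]["index-url"]
    token = request.get("token")
    if token is None:
        # No token supplied: ask the user, pointing at login-url if present.
        url = request.get("login-url")
        if url:
            print(f"visit {url} to obtain a token")
        token = prompt("token: ")
    token_store[index_url] = token
    return {"Ok": {"kind": "login"}}  # Login success response


store = {}
req = {
    "v": 1,
    "kind": "login",
    "registry": {"index-url": "sparse+https://registry-url/index/"},
    "token": "<the token value>",
}
print(handle_login(req, store))
```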
Read request

Sent by: Cargo
Purpose: Get the credential for reading crate information

{
  // Protocol version
  "v": 1,
  // Request kind: get credentials
  "kind": "get",
  // Action to perform: read crate information
  "operation": "read",
  // Registry information (see Registry information)
  "registry": {"index-url": "sparse+https://registry-url/index/", "name": "my-registry"},
  // Additional command-line args (optional)
  "args": []
}

Publish request

Sent by: Cargo
Purpose: Get the credential for publishing a crate

{
  // Protocol version
  "v": 1,
  // Request kind: get credentials
  "kind": "get",
  // Action to perform: publish crate
  "operation": "publish",
  // Crate name
  "name": "sample",
  // Crate version
  "vers": "0.1.0",
  // Crate checksum
  "cksum": "...",
  // Registry information (see Registry information)
  "registry": {"index-url": "sparse+https://registry-url/index/", "name": "my-registry"},
  // Additional command-line args (optional)
  "args": []
}

Get success response

Sent by: credential provider
Purpose: Gives the credential to Cargo

{"Ok": {
  // Response kind: this was a get request
  "kind": "get",
  // Token to send to the registry
  "token": "...",
  // Cache control. Can be one of the following:
  // * "never": do not cache
  // * "session": cache for the current cargo session
  // * "expires": cache for the current cargo session until expiration
  "cache": "expires",
  // Unix timestamp (only for "cache": "expires")
  "expiration": 1693942857,
  // Is the token operation independent?
  "operation_independent": true
}}

The token will be sent to the registry as the value of the Authorization HTTP header.

operation_independent indicates whether the token can be cached across different operations (such as publishing or fetching). In general, this should be true unless the provider wants to generate tokens that are scoped to specific operations.
Login success response

Sent by: credential provider
Purpose: Indicates the login was successful

{"Ok": {
  // Response kind: this was a login request
  "kind": "login"
}}

Logout success response

Sent by: credential provider
Purpose: Indicates the logout was successful

{"Ok": {
  // Response kind: this was a logout request
  "kind": "logout"
}}

Failure response (URL not supported)

Sent by: credential provider
Purpose: Gives error information to Cargo

{"Err": {
  "kind": "url-not-supported"
}}

Sent if the credential provider is designed to only handle specific registry URLs and the given URL is not supported. Cargo will attempt another provider if available.

Failure response (not found)

Sent by: credential provider
Purpose: Gives error information to Cargo

{"Err": {
  // Error: The credential could not be found in the provider.
  "kind": "not-found"
}}

Sent if the credential could not be found. This is expected for get requests where the credential is not available, or logout requests where there is nothing found to erase.

Failure response (operation not supported)

Sent by: credential provider
Purpose: Gives error information to Cargo

{"Err": {
  // Error: The requested operation is not supported.
  "kind": "operation-not-supported"
}}

Sent if the credential provider does not support the requested operation. If a provider only supports get and a login is requested, the provider should respond with this error.

Failure response (other)

Sent by: credential provider
Purpose: Gives error information to Cargo

{"Err": {
  // Error: something else has failed
  "kind": "other",
  // Error message string to be displayed
  "message": "free form string error message",
  // Detailed cause chain for the error (optional)
  "caused-by": ["cause 1", "cause 2"]
}}

Example communication to request a token for reading:

Cargo spawns the credential process, capturing stdin and stdout.
Credential process sends the Hello message to Cargo:

{ "v": [1] }

Cargo sends the CredentialRequest message to the credential process (newlines added for readability):

{
  "v": 1,
  "kind": "get",
  "operation": "read",
  "registry": {"index-url": "sparse+https://registry-url/index/"}
}

Credential process sends the CredentialResponse to Cargo (newlines added for readability; following the Get success response format, it is wrapped in "Ok"):

{"Ok": {
  "kind": "get",
  "token": "...",
  "cache": "session",
  "operation_independent": true
}}

Cargo closes the stdin pipe to the credential provider and it exits. Cargo uses the token for the remainder of the session (until Cargo exits) when interacting with this registry.
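The message flow above can be sketched as a minimal provider. This is an illustrative sketch, not a production implementation: it handles only the hello handshake and `get` requests from the protocol described in this document, returns a fixed placeholder token, and answers everything else with the `operation-not-supported` error. The wire format (one JSON object per line over stdin/stdout) follows the spec above.

```python
#!/usr/bin/env python3
# Minimal sketch of a Cargo credential provider speaking protocol v1.
# Illustrative only: it returns a fixed placeholder token for "get"
# requests and rejects other operations with "operation-not-supported".
import json
import sys


def handle_request(req):
    """Build the response object for one request line from Cargo."""
    if req.get("v") != 1:
        return {"Err": {"kind": "other", "message": "unsupported version"}}
    if req.get("kind") == "get":
        return {"Ok": {
            "kind": "get",
            "token": "example-token",  # placeholder, not a real secret
            "cache": "session",
            "operation_independent": True,
        }}
    return {"Err": {"kind": "operation-not-supported"}}


def main():
    # Credential hello: announce supported protocol versions first.
    sys.stdout.write(json.dumps({"v": [1]}) + "\n")
    sys.stdout.flush()
    # Then answer one request per line until Cargo closes stdin.
    for line in sys.stdin:
        if line.strip():
            sys.stdout.write(json.dumps(handle_request(json.loads(line))) + "\n")
            sys.stdout.flush()


# When invoked by Cargo (with --cargo-plugin), run the stdin/stdout loop:
#   main()
# Standalone demonstration of one read request:
demo = handle_request({"v": 1, "kind": "get", "operation": "read",
                       "registry": {"index-url": "sparse+https://registry-url/index/"}})
print(json.dumps(demo))
```

A real provider would also inspect `--cargo-plugin` in its arguments and look up a per-registry secret instead of a constant token.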
https://git-scm.com/book/sv/v2/Grunder-i-Git-Skaffa-ett-Git-f%c3%b6rvar
Git - Getting a Git Repository

2.1 Git Basics - Getting a Git Repository

If you only have the chance to read one chapter to get going with Git, this is the right one. This chapter covers the basic Git commands you need to do the majority of the things you will run into. After reading the chapter, you should be able to configure and initialize a repository, start and stop version control of files, and stage and commit changes.
We will also show you how to set up Git to ignore certain files and file patterns, how to undo mistakes quickly and easily, how to browse the history of your project and view changes between commits, and how to fetch from remote repositories.

Getting a Git Repository

You typically obtain a Git repository in one of two ways: You take a local directory that is currently not under version control and turn it into a Git repository, or You clone an existing Git repository from elsewhere. Either way, you end up with a local repository on your machine, ready to work with.

Initializing a Repository in an Existing Directory

If you have a project directory that is currently not under version control and you want to start managing it with Git, you first go to that directory. This looks a little different depending on which operating system you use:

on Linux: $ cd /home/user/my_project
on Mac: $ cd /Users/user/my_project
on Windows: $ cd /c/user/my_project

and type:

$ git init

This creates a new subdirectory named .git that contains all the necessary files — a skeleton of the Git repository. At this point, nothing in your project is tracked yet. (See Git Internals for more information about exactly which files are contained in the .git directory you just created.)

If you want to start version-controlling existing files (as opposed to an empty directory), you should probably begin tracking those files and make an initial commit. You can accomplish that with a few git add commands that specify the files you want to track, followed by the git commit command:

$ git add *.c
$ git add LICENSE
$ git commit -m 'Initial project version'

We will go over what these commands do shortly. At this point, you have a Git repository with tracked files and an initial commit.

Cloning an Existing Repository

If you want a copy of an existing Git repository — for example, for a project you would like to contribute to — the command you need is git clone.
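The init-and-first-commit sequence above can be exercised end to end in a throwaway directory. File names follow the chapter's example; the inline user identity and temporary directory are only there so the sketch runs in a clean environment.

```shell
# Walk through "git init" -> "git add" -> "git commit" in a scratch directory.
tmp=$(mktemp -d) && cd "$tmp"
git init -q                      # creates the .git skeleton; nothing is tracked yet
printf 'int main(){return 0;}\n' > hello.c
printf 'MIT\n' > LICENSE
git add *.c                      # start tracking the C sources
git add LICENSE
git -c user.name=demo -c user.email=demo@example.com \
    commit -q -m 'Initial project version'
git rev-list --count HEAD        # prints 1: the repository has its first commit
git status --short               # prints nothing: the working tree is clean
```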
If you are familiar with other version control systems such as Subversion, you will notice that the command is "clone" and not "checkout". This is an important distinction — instead of getting just a working copy, Git receives a full copy of nearly all the data that the server has. Every version of every file for the project's history is pulled down by default when you run git clone. In fact, if the server's disk becomes corrupted, you can often use nearly any of the clones on any client to restore the server to the state it was in when the repository was cloned (you may lose some server-side hooks and such, but all the versioned data will be there — see Getting Git on a Server for more details).

You clone a repository with git clone <url>. For example, if you want to clone the Git linkable library called libgit2, you can do so like this:

$ git clone https://github.com/libgit2/libgit2

That creates a directory named libgit2, initializes a .git directory inside it, pulls down all the data for that repository, and checks out a working copy of the latest version. If you go into the newly created libgit2 directory, you will see the project files there, ready to be worked on or used.

If you want to clone the repository into a directory named something other than libgit2, you can specify the new directory name as an extra argument:

$ git clone https://github.com/libgit2/libgit2 mylibgit

That command does the same thing as the previous one, but the target directory is named mylibgit.

Git has a number of different transfer protocols you can use. The previous example uses the https:// protocol, but you may also see git:// and user@server:path/to/repo.git, which uses the SSH transfer protocol. Getting Git on a Server will introduce all the available options the server can configure to access your Git repository, along with the pros and cons of each.

About this site: Patches, suggestions, and comments are welcome. Git is a member of Software Freedom Conservancy.
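The text notes that git clone pulls down every version of every file by default. When the full history is not needed, Git's standard --depth option produces a shallow clone instead; this option is not covered in the chapter above. The sketch below demonstrates it against a throwaway local repository (served via file:// so the depth limit actually applies) rather than the libgit2 URL, purely so it runs offline.

```shell
# Build a two-commit source repository in a scratch directory.
tmp=$(mktemp -d) && cd "$tmp"
git init -q src && cd src
git -c user.name=demo -c user.email=demo@example.com commit -q --allow-empty -m 'first'
git -c user.name=demo -c user.email=demo@example.com commit -q --allow-empty -m 'second'
cd "$tmp"
# --depth 1 fetches only the most recent commit instead of the whole history.
git clone -q --depth 1 "file://$tmp/src" shallow
git -C shallow rev-list --count HEAD   # prints 1: only one commit was fetched
```

A plain local-path clone ignores --depth; the file:// URL forces Git through its normal transport, which honors the shallow limit.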
https://www.linkedin.com/products/odoo-crm/?trk=products_details_guest_similar_products_section_similar_products_section_product_link_result-card_image-click
Odoo CRM | LinkedIn

Odoo CRM — Customer Relationship Management (CRM) Software by Odoo

About: With over 5 million users, Odoo is the #1 open-source CRM software in the world! Sales teams can easily keep track of top leads and opportunities with just a few clicks. Users can personalize their sales cycle, take control of their statistics/forecasts, and create incredible marketing campaign automation to increase their overall sales performance. With Odoo CRM, sales teams are able to analyze the quality of their leads, make faster decisions, and save valuable time by integrating emails from all contacts directly into this fully-integrated app.
Media: Odoo CRM (Tour), Odoo CRM Dashboard, Odoo CRM, Odoo CRM Pipeline Analysis, Odoo CRM Customers

Similar products (all Customer Relationship Management (CRM) Software): Sales Cloud, Zoho CRM, Bigin by Zoho CRM, Experian DataShare, Freshsales, Clickup

Odoo products: Odoo (Business Management Software), Odoo Accounting (Enterprise Accounting Software), Odoo eCommerce (E-Commerce Platforms), Odoo Inventory (Inventory Management Software), Odoo Manufacturing (Manufacturing Execution System (MES) Software), Odoo Point of Sale (Retail POS Systems), Odoo Projects (Business Management Software), Odoo Sign (E-Signature Software), Odoo Website (Website Builder Software)
https://git-scm.com/book/ru/v2/%d0%9e%d1%81%d0%bd%d0%be%d0%b2%d1%8b-Git-%d0%a0%d0%b0%d0%b1%d0%be%d1%82%d0%b0-%d1%81-%d1%82%d0%b5%d0%b3%d0%b0%d0%bc%d0%b8
Git - Tagging

2nd Edition — 2.6 Git Basics: Tagging

Like most VCSs, Git has the ability to tag specific points in a repository's history as being important. Typically, this functionality is used to mark release points (v1.0 and so on). In Git, these marks are called tags.
In this section, you will learn how to list existing tags, how to create new ones or delete existing ones, and what types of tags exist in Git.

Listing Your Tags

Listing the existing tags in Git is straightforward. Just type git tag (the -l and --list parameters are optional):

$ git tag
v1.0
v2.0

This command lists the tags in alphabetical order; the order in which they appear has no real significance. You can also search for tags matching a particular pattern. The Git source repository, for instance, contains more than 500 tags. If you are interested only in looking at the 1.8.5 release series, you can run this:

$ git tag -l "v1.8.5*"
v1.8.5
v1.8.5-rc0
v1.8.5-rc1
v1.8.5-rc2
v1.8.5-rc3
v1.8.5.1
v1.8.5.2
v1.8.5.3
v1.8.5.4
v1.8.5.5

Note: Listing tags by pattern requires the -l or --list parameter. If you just want the entire list of tags, running git tag implicitly assumes a listing and prints the full list; using -l or --list in that case is optional. If, however, you want to filter the list of tags by a pattern, using -l or --list becomes mandatory.

Creating Tags

Git uses two main types of tags: lightweight and annotated. A lightweight tag is very much like a branch that does not change — just a pointer to a specific commit. Annotated tags, however, are stored in the Git database as full objects. They are checksummed; contain the tagger's name, email, and creation date; carry a tagging message; and can be signed and verified with GNU Privacy Guard (GPG). It is generally recommended to create annotated tags so you have all of that information; but if you want a temporary tag or for some reason do not want to keep the other information, lightweight tags will do.

Annotated Tags

Creating an annotated tag in Git is simple.
The easiest way is to specify -a when you run the tag command:

$ git tag -a v1.4 -m "my version 1.4"
$ git tag
v0.1
v1.3
v1.4

The -m option specifies a message that is stored with the tag. If you do not specify a message, Git launches your editor so you can type it in.

You can see the tag data along with the tagged commit using the git show command:

$ git show v1.4
tag v1.4
Tagger: Ben Straub <ben@straub.cc>
Date:   Sat May 3 20:19:12 2014 -0700

my version 1.4

commit ca82a6dff817ec66f44342007202690a93763949
Author: Scott Chacon <schacon@gee-mail.com>
Date:   Mon Mar 17 21:52:11 2008 -0700

    Change version number

That shows the tagger information, the date the tag was created, and the annotation message before the commit information.

Lightweight Tags

A lightweight tag is another way to mark a commit. In essence, it is the commit checksum stored in a file — no other information is kept. To create a lightweight tag, do not pass the -a, -s, or -m options, just provide a tag name:

$ git tag v1.4-lw
$ git tag
v0.1
v1.3
v1.4
v1.4-lw
v1.5

This time, running git show on the tag shows no extra information. The command simply shows the commit:

$ git show v1.4-lw
commit ca82a6dff817ec66f44342007202690a93763949
Author: Scott Chacon <schacon@gee-mail.com>
Date:   Mon Mar 17 21:52:11 2008 -0700

    Change version number

Tagging Later

You can also tag commits after you have moved past them.
Suppose your commit history looks like this:

$ git log --pretty=oneline
15027957951b64cf874c3557a0f3547bd83b3ff6 Merge branch 'experiment'
a6b4c97498bd301d84096da251c98a07c7723e65 Create write support
0d52aaab4479697da7686c15f77a3d64d9165190 One more thing
6d52a271eda8725415634dd79daabbc4d9b6008e Merge branch 'experiment'
0b7434d86859cc7b8c3d5e1dddfed66ff742fcbc Add commit function
4682c3261057305bdd616e23b64b0857d832627b Add todo file
166ae0c4d3f420721acbb115cc33848dfcc2121a Create write support
9fceb02d0ae598e95dc970b74767f19372d61af8 Update rakefile
964f16d36dfccde844893cac5b347e7b3d44abbc Commit the todo
8a5cbc430f1a9c3d00faaeffd07798508422908a Update readme

Now suppose you forgot to tag the project at v1.2, which was at the "Update rakefile" commit. You can add the tag later. To tag that commit, specify its checksum (or part of it) as a parameter to the command:

$ git tag -a v1.2 9fceb02

Check that the commit is tagged:

$ git tag
v0.1
v1.2
v1.3
v1.4
v1.4-lw
v1.5

$ git show v1.2
tag v1.2
Tagger: Scott Chacon <schacon@gee-mail.com>
Date:   Mon Feb 9 15:32:16 2009 -0800

version 1.2

commit 9fceb02d0ae598e95dc970b74767f19372d61af8
Author: Magnus Chacon <mchacon@gee-mail.com>
Date:   Sun Apr 27 20:43:35 2008 -0700

    Update rakefile
...

Sharing Tags

By default, the git push command does not transfer tags to remote servers. After creating tags, you have to explicitly push them to a remote server. The process is just like pushing branches — run git push origin <tagname>.

$ git push origin v1.5
Counting objects: 14, done.
Delta compression using up to 8 threads.
Compressing objects: 100% (12/12), done.
Writing objects: 100% (14/14), 2.05 KiB | 0 bytes/s, done.
Total 14 (delta 3), reused 0 (delta 0)
To git@github.com:schacon/simplegit.git
 * [new tag]         v1.5 -> v1.5

If you have a lot of tags and want to push them all at once, you can use the --tags option to the git push command.
This transfers all of your tags to the remote server (unless they are already there).

$ git push origin --tags
Counting objects: 1, done.
Writing objects: 100% (1/1), 160 bytes | 0 bytes/s, done.
Total 1 (delta 0), reused 0 (delta 0)
To git@github.com:schacon/simplegit.git
 * [new tag]         v1.4 -> v1.4
 * [new tag]         v1.4-lw -> v1.4-lw

Now, when someone clones or pulls from your repository, they will get all your tags as well.

Note: git push pushes both types of tags. Pushing tags with git push <remote> --tags does not distinguish between annotated and lightweight tags. There is currently no option to push only lightweight tags, but if you use git push <remote> --follow-tags, only annotated tags will be pushed.

Deleting Tags

To delete a tag in your local repository, run git tag -d <tagname>. For example, you could remove the lightweight tag created earlier like this:

$ git tag -d v1.4-lw
Deleted tag 'v1.4-lw' (was e7d5add)

Note that deleting a tag locally does not remove it from any remote servers. There are two ways to delete a tag from a remote repository. The first is git push <remote> :refs/tags/<tagname>:

$ git push origin :refs/tags/v1.4-lw
To /git@github.com:schacon/simplegit.git
 - [deleted]         v1.4-lw

Read this as pushing the null value before the colon to the remote tag name, which deletes it. The second way to remove a tag from a remote repository is more intuitive:

$ git push origin --delete <tagname>

Checking out Tags

If you want to view the versions of files a tag points to, you can do a git checkout of that tag. However, this puts your repository in "detached HEAD" state, which has some unpleasant side effects.

$ git checkout v2.0.0
Note: switching to 'v2.0.0'.

You are in 'detached HEAD' state.
You can look around, make experimental changes and commit them, and you can discard any commits you make in this state without impacting any branches by performing another checkout.

If you want to create a new branch to retain commits you create, you may do so (now or later) by using -c with the switch command. Example:

  git switch -c <new-branch-name>

Or undo this operation with:

  git switch -

Turn off this advice by setting config variable advice.detachedHead to false

HEAD is now at 99ada87... Merge pull request #89 from schacon/appendix-final

$ git checkout v2.0-beta-0.1
Previous HEAD position was 99ada87... Merge pull request #89 from schacon/appendix-final
HEAD is now at df3f601... Add atlas.json and cover image

If you make changes and commit them in "detached HEAD" state, the tag will not change, but your new commit will not belong to any branch and will be reachable only by its hash. Therefore, if you need to make changes — say, to fix a bug in an older version — you will usually want to create a branch:

$ git checkout -b version2 v2.0.0
Switched to a new branch 'version2'

If you commit on the version2 branch, it will move forward and diverge from the v2.0.0 tag, so be careful with this.
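The annotated-versus-lightweight distinction described above can be made concrete with git cat-file, which is not used in the chapter. The sketch below builds a throwaway repository (tag names mirror the chapter's examples; the inline user identity exists only so the annotated tag can be created in a clean environment):

```shell
# Show that an annotated tag is a real object while a lightweight tag
# points straight at the commit.
tmp=$(mktemp -d) && cd "$tmp" && git init -q
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m 'v1 work'
git -c user.name=demo -c user.email=demo@example.com \
    tag -a v1.4 -m 'my version 1.4'   # annotated: stored as a full tag object
git tag v1.4-lw                       # lightweight: just a pointer
git cat-file -t v1.4      # prints "tag"    (a separate object with tagger metadata)
git cat-file -t v1.4-lw   # prints "commit" (the ref resolves directly to the commit)
```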
https://web3.okx.com/pl/dex-swap
DEX Swap: Trade crypto with low fees and high liquidity | OKX Wallet

FAQ

What is OKX DEX?
OKX DEX is a decentralized exchange (DEX) and cross-chain bridge aggregator. Its job is to find the best liquidity across chains and enable seamless trading of any asset. The X Routing algorithm searches for the best prices across different DEXs: it compares prices from different liquidity providers (LPs) with the best liquidity pools, splits orders, and accounts for price, slippage, and network gas fees.

What is the difference between a swap and a cross-chain swap?
A swap is an exchange of cryptocurrencies within the same network, where both the deposit and the receipt of tokens happen on one chain. A cross-chain swap is an exchange of assets across different networks — a transaction in which the token you pay with and the token you receive are on different networks.

Why do transactions fail?
1. Insufficient network fees. When the network is congested, network fees can spike, and miners prioritize transactions with higher fees. If your transaction is not picked up by a miner for a long time, it may fail. It is therefore recommended to use the platform's suggested default fee level when initiating a transaction.
2. Slippage exceeds the configured limit. When the market fluctuates, liquidity changes, and the amount you ultimately receive can drop sharply, causing your transaction to be terminated.
3. Duplicate transactions. If you initiate several identical transactions at once while your balance covers only the first one, the subsequent transactions will fail.

What if I don't have enough balance to cover the network fee?
You can buy the relevant tokens on the OKX Exchange and withdraw them to your wallet address, or swap tokens on OKX DEX to cover the network fees.

What is slippage?
Slippage is the difference between the expected price of a trade and the price at which it actually executes. Slippage is a hidden cost of trading and is unavoidable in most transactions, especially on DEXs. It becomes more noticeable during market volatility or when the trading platform lacks sufficient liquidity.

How can I avoid slippage?
Slippage cannot be avoided entirely, but its impact can be limited. First, the best approach is to choose assets with high trading volume and high liquidity. Second, instead of placing one large order, you can execute several smaller trades, which have a better chance of filling. Third, consider adjusting your slippage tolerance during larger price swings, when slippage may exceed the configured limit at execution time. In that case, increasing the tolerance helps the transaction succeed, but it may also prevent you from getting a good price — proceed with caution.

Why does trading some tokens require a higher slippage setting?
Some tokens, such as SAFEMOON, require a higher slippage tolerance because of transaction taxes, which are used to buy back into the liquidity pool, burn tokens, and reward token holders. To help with trading these tokens, OKX DEX introduced an automatic slippage feature that increases the share of successful transactions at the current optimal slippage level.

Swap at the best rate: freely choose trading routes from all major DEX aggregators, while X Routing finds the best quotes across more than 400 DEXs and 30 networks. Fast quotes. Anti-MEV.
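The slippage definition above reduces to simple arithmetic: slippage is the executed price's deviation from the expected price, as a percentage. The numbers below are made up purely for illustration:

```shell
# slippage % = (executed - expected) / expected * 100
# Example: expected fill at 100.0, actual fill at 101.5.
awk 'BEGIN { expected = 100.0; executed = 101.5;
             printf "%.1f%%\n", (executed - expected) / expected * 100 }'
# prints 1.5%
```

A trade fails when this figure exceeds the configured slippage tolerance, which is why raising the tolerance trades execution certainty against price quality.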
https://git-scm.com/book/bg/v2/%d0%9e%d1%81%d0%bd%d0%be%d0%b2%d0%b8-%d0%bd%d0%b0-Git-%d0%a1%d1%8a%d0%b7%d0%b4%d0%b0%d0%b2%d0%b0%d0%bd%d0%b5-%d0%bd%d0%b0-Git-%d1%85%d1%80%d0%b0%d0%bd%d0%b8%d0%bb%d0%b8%d1%89%d0%b5
Git - Getting a Git Repository

2nd Edition — 2.1 Git Basics: Getting a Git Repository

If you need only one chapter to get going with Git, this is it. We will cover every basic command you will need for the vast majority of the time you spend using Git. By the end, you should be able to configure and initialize a repository, start and stop tracking files, and stage and commit your changes. We will also show you how to set up Git to ignore specific files or file types, how to undo mistakes quickly and easily, how to browse the history of your project and the changes between commits, and how to push to and pull from remote repositories.
Getting a Git Repository

You can obtain a Git repository in two main ways: The first takes an existing project or directory that is currently not under version control and imports it into Git. The second clones an existing Git repository from another server. In either case, you get a Git repository on your local machine, ready for work.

Initializing a Repository in an Existing Directory

If you are starting to track an existing project, you need to go to its directory. If you have never done this before, it looks different depending on the operating system:

on Linux: $ cd /home/user/my_project
on macOS: $ cd /Users/user/my_project
on Windows: $ cd C:/Users/user/my_project

and type:

$ git init

This creates a new subdirectory named .git that contains everything Git needs — something like a skeleton of the repository. At this point, none of your files are tracked yet. (See Git Internals for more information about exactly which files are contained in this newly created directory.)

If you want to start version-controlling existing files (as opposed to an empty directory), you will probably want to begin tracking those files and make an initial commit. You can do that with a few git add commands specifying the files you want to track, followed by git commit:

$ git add *.c
$ git add LICENSE
$ git commit -m 'Initial project version'

We will come back to these commands shortly. At this point, you have a Git repository with tracked files and your first commit.

Cloning an Existing Repository

If you want to get a copy of an existing Git repository — for example, a project you would like to contribute to — the command you need is git clone. If you are familiar with other VCSs such as Subversion, you will immediately notice the difference: the command is "clone", not "checkout".
This is an important distinction: instead of simply checking out a working copy, Git receives a full copy of nearly all the data the server has. Every version of every file in the project's history is pulled down by default when you run git clone. In practice, if the server disk gets corrupted, you can often use almost any of the clones on any client to restore the server to the state it was in when it was cloned (you may lose some server-side elements, but all the versioned data you care about will be there; see [_getting_git_on_a_server] for more details).

You clone a repository with git clone <url>. For example, if you want to clone the Git linkable library libgit2, you can do it like this:

$ git clone https://github.com/libgit2/libgit2

This command creates a directory named libgit2, initializes a .git subdirectory inside it, pulls down all the data for that repository, and checks out the files of the latest working version. If you go into the new libgit2 folder, you'll see the project files in there, ready to be worked on.

If you want to clone the repository into a directory named something other than libgit2, you can pass the name as an additional argument to the command:

$ git clone https://github.com/libgit2/libgit2 mylibgit

This command does the same thing as the previous one, but the resulting directory is named mylibgit.

Git can work with a number of different transfer protocols. The previous example uses https://, but you may also see git:// or user@server:path/to/repo.git, which uses SSH as the transfer protocol. [_getting_git_on_a_server] will introduce all the options a server can offer to provide access to your Git repositories, along with the pros and cons of each.

prev | next

About this site Patches, suggestions, and comments are welcome. Git is a member of Software Freedom Conservancy
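git clone is not limited to network URLs; it also accepts a local filesystem path, which makes it easy to try the rename form of the command without a server. The paths, file, and identity below are an illustrative sketch, not from the chapter.

```shell
# Build a small source repository locally, then clone it under a chosen name,
# mirroring the `git clone <url> mylibgit` form above (all names illustrative).
src=$(mktemp -d)
git -C "$src" init
echo 'hello' > "$src/README"
git -C "$src" add README
git -C "$src" -c user.name=Example -c user.email=example@example.com \
    commit -m 'Initial project version'

work=$(mktemp -d)
git clone "$src" "$work/myclone"           # clone into a directory named myclone
git -C "$work/myclone" log --oneline       # the clone carries the full history
```

As the chapter says, the clone is a full copy: the new directory has its own .git subdirectory containing every commit, not just a checkout of the latest files.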
2026-01-13T09:29:18
https://www.linkedin.com/products/netscout-omnis-threat-horizon/?trk=products_details_guest_other_products_by_org_section_product_link_result-card_full-click
Omnis Threat Horizon | LinkedIn

Omnis Threat Horizon, DDoS Protection Software by NETSCOUT

About: NETSCOUT's Omnis Threat Horizon is a free tool composed of highly curated, real-time global threat data presented in a way that allows you to understand how it impacts your organization.

NETSCOUT Omnis Threat Horizon: Omnis Threat Horizon delivers situational awareness across the threat landscape and presents it in a way that organizations can use to understand how it impacts them. Omnis Threat Horizon allows you to view the data however you want to see it: by industry, by vertical, region, or country. Our unparalleled visibility allows you to understand what is happening with malicious traffic across the internet that is relevant to your organization.

DDoS Attacks During COVID-19: Omnis Threat Horizon provides free insight into the rise in DDoS attacks during the COVID-19 pandemic.

Omnis Threat Horizon: Real-time global threat intelligence.
2026-01-13T09:29:18
https://web3.okx.com/nb/dex-swap
DEX Swap: Trade crypto with low fees and high liquidity | OKX Wallet

FAQ

What is OKX DEX Swap? OKX DEX Swap is a decentralized exchange (DEX) aggregator that allows you to trade crypto seamlessly across multiple blockchains. Our X Routing algorithm finds the best prices by comparing liquidity pools, splitting orders, and optimizing for fees and slippage.

How does OKX DEX ensure the best swap prices? OKX DEX uses advanced algorithms to aggregate liquidity from multiple DEXs and liquidity providers (LPs). By splitting orders and considering factors such as price, slippage, and network fees, we ensure you get the best possible rates.

Why do trades fail?

1. Insufficient network fee. During times of network congestion, network fees may skyrocket. Miners prioritize transactions with higher network fees, so if your transaction isn't picked up by a miner for an extended period, it can result in transaction failure. Therefore, when initiating a transaction, it's recommended to use the default values suggested by the platform.

2. Exceeded slippage.
In volatile markets, liquidity depths can change. If the trading amount falls below the required minimum, the contract may halt the transaction, leading to a failed trade due to exceeded slippage.

3. Duplicate transactions. If a user initiates multiple identical transactions simultaneously, but their balance is only sufficient to cover the first transaction, the subsequent transactions will fail.

What to do when I don't have enough balance for the network fee? You can buy the corresponding tokens on the OKX Exchange and withdraw them to your wallet address, or swap tokens on OKX DEX to cover the network fees.

What is slippage? Slippage refers to the difference between the expected price of a trade and its actual execution price. Slippage is an implicit cost of trading and is inevitable in most trades, especially on a DEX. It becomes more prominent during market volatility or when there is insufficient liquidity on the trading platform.

How can slippage be avoided? Slippage cannot be completely avoided, but its impact can be minimized in three ways:

1. The best way is to choose assets with high trading volume and liquidity.
2. You can opt for transactions with smaller amounts, which are more likely to be executed, instead of placing large orders directly.
3. Consider adjusting the slippage tolerance during price fluctuations, when slippage may exceed the preset limit by the time your transaction executes. Increasing the tolerance can avoid transaction failure, but it also means you may not get a good price. Please proceed with caution.

Why does trading some tokens require a higher slippage setting? Some tokens have a higher slippage tolerance due to the presence of transaction taxes, such as SAFEMOON, which are used for liquidity-pool buy-backs, token burning, and rewarding token holders. To help you trade these tokens, OKX DEX has launched automated slippage to improve the success rate of these trades at the current optimal slippage level.
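The effect of a slippage tolerance can be sketched numerically: a tolerance of t% means the trade should revert if it would deliver less than quoted * (1 - t/100). The quoted amount and tolerance below are made-up numbers, and real DEX settlement also involves fees and routing that this sketch ignores.

```shell
# Minimum acceptable output for a swap under a slippage tolerance
# (illustrative numbers; not an OKX API call or a real quote).
quoted=1000          # quoted output amount of the target token
tolerance_pct=0.5    # slippage tolerance in percent
min_received=$(awk -v q="$quoted" -v t="$tolerance_pct" \
    'BEGIN { printf "%.2f", q * (1 - t / 100) }')
echo "minimum received: $min_received"
```

Raising the tolerance lowers this floor, which is why a higher setting makes the trade more likely to succeed but allows a worse execution price.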
Swap at the best price. Freely choose trading routes from all major DEX aggregators, while X Routing finds the best prices across more than 400 DEXs and 30 networks. Fast quotes. Anti-MEV.

©2017 - 2026 WEB3.OKX.COM
2026-01-13T09:29:18
https://git-scm.com/book/zh/v2/Git-%e5%9f%ba%e7%a1%80-%e8%8e%b7%e5%8f%96-Git-%e4%bb%93%e5%ba%93
Git - Getting a Git Repository

The source of this book is hosted on GitHub.
2nd Edition

2.1 Git Basics - Getting a Git Repository

If you can read only one chapter to get going with Git, this is it. This chapter covers every basic command you will use for the various tasks you'll perform with Git. After finishing it, you should be able to configure and initialize a repository, begin and stop tracking files, and stage and commit changes. It will also show you how to configure Git to ignore certain files and file patterns, how to undo mistakes quickly and simply, how to browse your project's history and the differences between commits, and how to push to and pull from remote repositories.

Getting a Git Repository

There are usually two ways to obtain a Git project repository: turn a local directory that is not yet under version control into a Git repository, or clone an existing Git repository from another server. Either way, you end up with a working Git repository on your local machine.

Initializing a Repository in an Existing Directory

If you have a project directory that is not yet under version control and you want to control it with Git, you first need to go into that directory. If you've never done this, it looks slightly different depending on the system:

on Linux: $ cd /home/user/my_project
on macOS: $ cd /Users/user/my_project
on Windows: $ cd /c/user/my_project

Then run:

$ git init

This command creates a subdirectory named .git that contains all the files required by your initialized Git repository; these files are the backbone of the repository. At this point, however, we have only performed an initialization: the files in your project are not yet tracked. (See Git Internals for more information about exactly which files the .git folder contains.)

If you are putting a folder that already contains files (rather than an empty one) under version control, you should start tracking those files and make an initial commit. You can specify the files to track with git add and then run git commit:

$ git add *.c
$ git add LICENSE
$ git commit -m 'initial project version'

We'll explain what each of these commands does a little later. For now, you have a Git repository with tracked files and an initial commit.

Cloning an Existing Repository

If you want a copy of an existing Git repository, say, to contribute to an open source project, the command you need is git clone. If you're familiar with other VCS systems (such as Subversion), note that the command you use is "clone" and not "checkout". This is an important way in which Git differs from other version control systems: Git clones nearly all the data on the Git repository server, rather than copying only the files needed to complete your work. When you run git clone, every version of every file in the remote Git repository is, by default, pulled down. In fact, if your server's disk breaks, you can usually use any clone on any client to rebuild the repository on the server (you may lose some server-side hook settings, but all the versioned data will still be there; see Setting Up Git on a Server for details).

The command to clone a repository is git clone <url>. For example, to clone the Git linkable library libgit2, you can use the following command:

$ git clone https://github.com/libgit2/libgit2

This creates a directory named "libgit2" in the current directory, initializes a .git folder inside it, pulls down all the data from the remote repository into that .git folder, and then checks out a working copy of the latest version of the files. If you go into the newly created libgit2 folder, you'll find all the project files in it, ready for further development and use.

If you want to give the local repository a custom name when cloning, you can specify a new directory name with an extra argument:

$ git clone https://github.com/libgit2/libgit2 mylibgit

This performs the same operation as the previous command, but the target directory is named mylibgit.

Git supports multiple data-transfer protocols. The example above uses the https:// protocol, but you can also use the git:// protocol or the SSH transfer protocol, for example
user@server:path/to/repo.git. Setting Up Git on a Server will introduce how all of these protocols are configured and used on the server side, along with the pros and cons of each.

prev | next
2026-01-13T09:29:18
https://doc.rust-lang.org/cargo/commands/cargo-publish.html#option-cargo-publish---dry-run
cargo publish - The Cargo Book

cargo-publish(1) NAME cargo-publish — Upload a package to the registry SYNOPSIS cargo publish [options] DESCRIPTION This command will create a distributable, compressed .crate file with the source code of the package in the current directory and upload it to a registry. The default registry is https://crates.io. This performs the following steps: Performs a few checks, including: Checks the package.publish key in the manifest for restrictions on which registries you are allowed to publish to. Create a .crate file by following the steps in cargo-package(1). Upload the crate to the registry. The server will perform additional checks on the crate. The client will poll waiting for the package to appear in the index, and may timeout. In that case, you will need to check for completion manually. This timeout does not affect the upload. This command requires you to be authenticated using cargo-login(1) or environment variables of the registry.token and registries.<name>.token config fields. See the reference for more details about packaging and publishing. OPTIONS Publish Options --dry-run Perform all checks without uploading. --no-verify Don't verify the contents by building them. --allow-dirty Allow working directories with uncommitted VCS changes to be packaged. --index index The URL of the registry index to use. --registry registry Name of the registry to publish to. Registry names are defined in Cargo config files. If not specified, and there is a package.publish field in Cargo.toml with a single registry, then it will publish to that registry. Otherwise it will use the default registry, which is defined by the registry.default config key which defaults to crates-io.
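The package.publish check mentioned in the first step above is driven by a key in the package's manifest. A hypothetical Cargo.toml fragment (the crate and registry names are invented for illustration) restricting where the crate may be published could look like:

```toml
[package]
name = "my-crate"            # illustrative crate name
version = "0.1.0"
# Only allow publishing to this named registry (the registry itself is
# defined in Cargo config files); `publish = false` would forbid
# `cargo publish` entirely.
publish = ["my-registry"]
```

With a single registry listed this way, running cargo publish with no --registry flag publishes to that registry, as described above; attempting to publish anywhere else fails the pre-upload checks.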
Package Selection By default, when no package selection options are given, the packages selected depend on the selected manifest file (based on the current working directory if --manifest-path is not given). If the manifest is the root of a workspace then the workspaces default members are selected, otherwise only the package defined by the manifest will be selected. The default members of a workspace can be set explicitly with the workspace.default-members key in the root manifest. If this is not set, a virtual workspace will include all workspace members (equivalent to passing --workspace ), and a non-virtual workspace will include only the root crate itself. -p spec … --package spec … Publish only the specified packages. See cargo-pkgid(1) for the SPEC format. This flag may be specified multiple times and supports common Unix glob patterns like * , ? and [] . However, to avoid your shell accidentally expanding glob patterns before Cargo handles them, you must use single quotes or double quotes around each pattern. --workspace Publish all members in the workspace. --all Deprecated alias for --workspace . --exclude SPEC … Exclude the specified packages. Must be used in conjunction with the --workspace flag. This flag may be specified multiple times and supports common Unix glob patterns like * , ? and [] . However, to avoid your shell accidentally expanding glob patterns before Cargo handles them, you must use single quotes or double quotes around each pattern. Compilation Options --target triple Publish for the specified target architecture. Flag may be specified multiple times. The default is the host architecture. The general format of the triple is <arch><sub>-<vendor>-<sys>-<abi> . Possible values: Any supported target in rustc --print target-list . "host-tuple" , which will internally be substituted by the host’s target. 
This can be particularly useful if you’re cross-compiling some crates, and don’t want to specify your host’s machine as a target (for instance, an xtask in a shared project that may be worked on by many hosts). A path to a custom target specification. See Custom Target Lookup Path for more information. This may also be specified with the build.target config value . Note that specifying this flag makes Cargo run in a different mode where the target artifacts are placed in a separate directory. See the build cache documentation for more details. --target-dir directory Directory for all generated artifacts and intermediate files. May also be specified with the CARGO_TARGET_DIR environment variable, or the build.target-dir config value . Defaults to target in the root of the workspace. Feature Selection The feature flags allow you to control which features are enabled. When no feature options are given, the default feature is activated for every selected package. See the features documentation for more details. -F features --features features Space or comma separated list of features to activate. Features of workspace members may be enabled with package-name/feature-name syntax. This flag may be specified multiple times, which enables all specified features. --all-features Activate all available features of all selected packages. --no-default-features Do not activate the default feature of the selected packages. Manifest Options --manifest-path path Path to the Cargo.toml file. By default, Cargo searches for the Cargo.toml file in the current directory or any parent directory. --locked Asserts that the exact same dependencies and versions are used as when the existing Cargo.lock file was originally generated. Cargo will exit with an error when either of the following scenarios arises: The lock file is missing. Cargo attempted to change the lock file due to a different dependency resolution. 
It may be used in environments where deterministic builds are desired, such as in CI pipelines. --offline Prevents Cargo from accessing the network for any reason. Without this flag, Cargo will stop with an error if it needs to access the network and the network is not available. With this flag, Cargo will attempt to proceed without the network if possible. Beware that this may result in different dependency resolution than online mode. Cargo will restrict itself to crates that are downloaded locally, even if there might be a newer version as indicated in the local copy of the index. See the cargo-fetch(1) command to download dependencies before going offline. May also be specified with the net.offline config value . --frozen Equivalent to specifying both --locked and --offline . --lockfile-path PATH Changes the path of the lockfile from the default ( <workspace_root>/Cargo.lock ) to PATH . PATH must end with Cargo.lock (e.g. --lockfile-path /tmp/temporary-lockfile/Cargo.lock ). Note that providing --lockfile-path will ignore existing lockfile at the default path, and instead will either use the lockfile from PATH , or write a new lockfile into the provided PATH if it doesn’t exist. This flag can be used to run most commands in read-only directories, writing lockfile into the provided PATH . This option is only available on the nightly channel and requires the -Z unstable-options flag to enable (see #14421 ). Miscellaneous Options -j N --jobs N Number of parallel jobs to run. May also be specified with the build.jobs config value . Defaults to the number of logical CPUs. If negative, it sets the maximum number of parallel jobs to the number of logical CPUs plus provided value. If a string default is provided, it sets the value back to defaults. Should not be 0. --keep-going Build as many crates in the dependency graph as possible, rather than aborting the build on the first one that fails to build. 
For example, if the current package depends on dependencies fails and works, one of which fails to build, cargo publish -j1 may or may not build the one that succeeds (depending on which one of the two builds Cargo picked to run first), whereas cargo publish -j1 --keep-going would definitely run both builds, even if the one run first fails. Display Options -v --verbose Use verbose output. May be specified twice for “very verbose” output which includes extra output such as dependency warnings and build script output. May also be specified with the term.verbose config value. -q --quiet Do not print cargo log messages. May also be specified with the term.quiet config value. --color when Control when colored output is used. Valid values: auto (default): Automatically detect if color support is available on the terminal. always: Always display colors. never: Never display colors. May also be specified with the term.color config value. Common Options +toolchain If Cargo has been installed with rustup, and the first argument to cargo begins with +, it will be interpreted as a rustup toolchain name (such as +stable or +nightly). See the rustup documentation for more information about how toolchain overrides work. --config KEY=VALUE or PATH Overrides a Cargo configuration value. The argument should be in TOML syntax of KEY=VALUE, or provided as a path to an extra configuration file. This flag may be specified multiple times. See the command-line overrides section for more information. -C PATH Changes the current working directory before executing any specified operations. This affects things like where cargo looks by default for the project manifest (Cargo.toml), as well as the directories searched for discovering .cargo/config.toml, for example. This option must appear before the command name, for example cargo -C path/to/my-project build. This option is only available on the nightly channel and requires the -Z unstable-options flag to enable (see #10098).
-h --help Prints help information. -Z flag Unstable (nightly-only) flags to Cargo. Run cargo -Z help for details. ENVIRONMENT See the reference for details on environment variables that Cargo reads. EXIT STATUS 0 : Cargo succeeded. 101 : Cargo failed to complete. EXAMPLES Publish the current package: cargo publish SEE ALSO cargo(1) , cargo-package(1) , cargo-login(1)
2026-01-13T09:29:18
https://bit.ly/4k6xWXQ?trk=organization_guest_main-feed-card-text
Control-M Reviews & Ratings 2026

Control-M. Score 9.3 out of 10. 302 Reviews and Ratings. Workload Automation. Starting at $29,000 per year.

What is Control-M? Control-M from BMC is a platform for integrating, automating, and orchestrating application and data workflows in production across complex hybrid technology ecosystems. It provides deep operational capabilities, delivering speed, scale, security, and governance.
2026-01-13T09:29:18
https://sre.google/sre-book/evolving-sre-engagement-model/#id-oAduAS7I8t2uXl-marker
Google SRE - Production Readiness Review: Engagement Insight

Chapter 32 - The Evolving SRE Engagement Model
Written by Acacio Cruz and Ashish Bhambhani
Edited by Betsy Beyer and Tim Harvey

SRE Engagement: What, How, and Why

We've discussed in most of the rest of this book what happens when SRE is already in charge of a service.
Few services begin their lifecycle enjoying SRE support, so there needs to be a process for evaluating a service, making sure that it merits SRE support, negotiating how to improve any deficits that bar SRE support, and actually instituting SRE support. We call this process onboarding . If you are in an environment where you are surrounded by a lot of existing services in varying states of perfection, your SRE team will probably be running through a prioritized queue of onboardings for quite a while until the team has finished taking on the highest-value targets. Although this is very common, and a completely reasonable way of dealing with a fait accompli environment, there are actually at least two better ways of bringing the wisdom of production, and SRE support, to services old and new alike. In the first case, just as in software engineering—where the earlier the bug is found, the cheaper it is to fix—the earlier an SRE team consultation happens, the better the service will be and the quicker it will feel the benefit. When SRE is engaged during the earliest stages of design , the time to onboard is lowered and the service is more reliable "out of the gate," usually because we don't have to spend the time unwinding suboptimal design or implementation. Another way, perhaps the best, is to short-circuit the process by which specially created systems with lots of individual variations end up "arriving" at SRE's door. Provide product development with a platform of SRE-validated infrastructure, upon which they can build their systems. This platform will have the double benefit of being both reliable and scalable. This avoids certain classes of cognitive load problems entirely, and by addressing common infrastructure practices, allows product development teams to focus on innovation at the application layer, where it mostly belongs. 
In the following sections, we'll spend some time looking at each of these models in turn, beginning with the "classic" one, the PRR-driven model. The PRR Model The most typical initial step of SRE engagement is the Production Readiness Review (PRR), a process that identifies the reliability needs of a service based on its specific details. Through a PRR, SREs seek to apply what they've learned and experienced to ensure the reliability of a service operating in production. A PRR is considered a prerequisite for an SRE team to accept responsibility for managing the production aspects of a service. Figure 32-1 illustrates the lifecycle of a typical service. The Production Readiness Review can be started at any point of the service lifecycle, but the stages at which SRE engagement is applied have expanded over time. This chapter describes the Simple PRR Model, then discusses how its modification into the Extended Engagement Model and the Frameworks and SRE Platform structure allowed SRE to scale their engagement process and impact. Figure 32-1. A typical service lifecycle The SRE Engagement Model SRE seeks production responsibility for important services for which it can make concrete contributions to reliability. SRE is concerned with several aspects of a service, which are collectively referred to as production . These aspects include the following : System architecture and interservice dependencies Instrumentation, metrics, and monitoring Emergency response Capacity planning Change management Performance: availability, latency, and efficiency When SREs engage with a service, we aim to improve it along all of these axes, which makes managing production for the service easier. Alternative Support Not all Google services receive close SRE engagement. A couple of factors are at play here: Many services don't need high reliability and availability, so support can be provided by other means. 
By design, the number of development teams that request SRE support exceeds the available bandwidth of SRE teams (see Introduction ). When SRE can't provide full-fledged support, it provides other options for making improvements to production, such as documentation and consultation. Documentation Development guides are available for internal technologies and clients of widely used systems. Google's Production Guide documents production best practices for services , as determined by the experiences of SRE and development teams alike. Developers can implement the solutions and recommendations in such documentation to improve their services. Consultation Developers may also seek SRE consulting to discuss specific services or problem areas. The Launch Coordination Engineering (LCE) team (see Reliable Product Launches at Scale ) spends a majority of its time consulting with development teams. SRE teams that aren't specifically dedicated to launch consultations also engage in consultation with development teams. When a new service or a new feature has been implemented, developers usually consult with SRE for advice about preparing for the Launch phase. Launch consultation usually involves one or two SREs spending a few hours studying the design and implementation at a high level. The SRE consultants then meet with the development team to provide advice on risky areas that need attention and to discuss well-known patterns or solutions that can be incorporated to improve the service in production. Some of this advice may come from the Production Guide mentioned earlier. Consultation sessions are necessarily broad in scope because it's not possible to gain a deep understanding of a given system in the limited time available. For some development teams, consultation is not sufficient: Services that have grown by orders of magnitude since they launched, which now require more time to understand than is feasible through documentation and consultation. 
Services upon which many other services have subsequently come to rely, which now host significantly more traffic from many different clients.

These types of services may have grown to the point at which they begin to encounter significant difficulties in production while simultaneously becoming important to users. In such cases, long-term SRE engagement becomes necessary to ensure that they are properly maintained in production as they grow.

Production Readiness Reviews: Simple PRR Model

When a development team requests that SRE take over production management of a service, SRE gauges both the importance of the service and the availability of SRE teams. If the service merits SRE support, and the SRE team and development organization agree on staffing levels to facilitate this support, SRE initiates a Production Readiness Review with the development team.

The objectives of the Production Readiness Review are as follows:

Verify that a service meets accepted standards of production setup and operational readiness, and that service owners are prepared to work with SRE and take advantage of SRE expertise.
Improve the reliability of the service in production, and minimize the number and severity of incidents that might be expected.

A PRR targets all aspects of production that SRE cares about. After sufficient improvements are made and the service is deemed ready for SRE support, an SRE team assumes its production responsibilities.

This brings us to the Production Readiness Review process itself. There are three different but related engagement models (Simple PRR Model, Early Engagement Model, and Frameworks and SRE Platform), which will be discussed in turn. We will first describe the Simple PRR Model, which is usually targeted at a service that has already launched and will be taken over by an SRE team. A PRR follows several phases, much like a development lifecycle, although it may proceed independently in parallel with the development lifecycle.
Engagement

SRE leadership first decides which SRE team is a good fit for taking over the service. Usually one to three SREs are selected or self-nominated to conduct the PRR process. This small group then initiates discussion with the development team. The discussion covers matters such as:

Establishing an SLO/SLA for the service
Planning for potentially disruptive design changes required to improve reliability
Planning and training schedules

The goal is to arrive at a common agreement about the process, end goals, and outcomes that are necessary for the SRE team to engage with the development team and their service.

Analysis

Analysis is the first large segment of work. During this phase, the SRE reviewers learn about the service and begin analyzing it for production shortcomings. They aim to gauge the maturity of the service along the various axes of concern to SRE. They also examine the service's design and implementation to check if it follows production best practices.

Usually, the SRE team establishes and maintains a PRR checklist explicitly for the Analysis phase. The checklist is specific to the service and is generally based on domain expertise, experience with related or similar systems, and best practices from the Production Guide. The SRE team may also consult other teams that have more experience with certain components or dependencies of the service.

A few examples of checklist items include:

Do updates to the service impact an unreasonably large percentage of the system at once?
Does the service connect to the appropriate serving instance of its dependencies? For example, end-user requests to a service should not depend on a system that is designed for a batch-processing use case.
Does the service request a sufficiently high network quality-of-service when talking to a critical remote service?
Does the service report errors to central logging systems for analysis?
Does it report all exceptional conditions that result in degraded responses or failures to the end users?
Are all user-visible request failures well instrumented and monitored, with suitable alerting configured?

The checklist may also include operational standards and best practices followed by a specific SRE team. For example, a perfectly functional service configuration that doesn't follow an SRE team's "gold standard" might be refactored to work better with SRE tools for scalably managing configurations. SREs also look at recent incidents and postmortems for the service, as well as follow-up tasks for the incidents. This evaluation gauges the demands of emergency response for the service and the availability of well-established operational controls.

Improvements and Refactoring

The Analysis phase leads to the identification of recommended improvements for the service. This next phase proceeds as follows:

Improvements are prioritized based upon importance for service reliability.
The priorities are discussed and negotiated with the development team, and a plan of execution is agreed upon.
Both SRE and product development teams participate and assist each other in refactoring parts of the service or implementing additional features.

This phase typically varies the most in duration and amount of effort. How much time and effort this phase involves depends upon the availability of engineering time for refactoring, the maturity and complexity of the service at the start of the review, and myriad other factors.

Training

Responsibility for managing a service in production is generally assumed by an entire SRE team. To ensure that the team is prepared, the SRE reviewers who led the PRR take ownership of training the team, which includes preparing the documentation necessary to support the service. Typically with the help and participation of the development team, these engineers organize a series of training sessions and exercises.
Instruction can include:

Design overviews
Deep dives on various request flows in the system
A description of the production setup
Hands-on exercises for various aspects of system operations

When the training is concluded, the SRE team should be prepared to manage the service.

Onboarding

The Training phase unblocks onboarding of the service by the SRE team. It involves a progressive transfer of responsibilities and ownership of various production aspects of the service, including parts of operations, the change management process, access rights, and so forth. The SRE team continues to focus on the various areas of production mentioned earlier. To complete the transition, the development team must be available to back up and advise the SRE team for a period of time as it settles into managing production for the service. This relationship becomes the basis for the ongoing work between the teams.

Continuous Improvement

Active services continuously change in response to new demands and conditions, including user requests for new features, evolving system dependencies, and technology upgrades, in addition to other factors. The SRE team must maintain service reliability standards in the face of these changes by driving continuous improvement. The responsible SRE team naturally learns more about the service in the course of operating the service, reviewing new changes, responding to incidents, and especially when conducting postmortems/root cause analyses. This expertise is shared with the development team as suggestions and proposals for changes to the service whenever new features, components, and dependencies may be added to the service. Lessons from managing the service are also contributed to best practices, which are documented in the Production Guide and elsewhere.

Engaging with Shakespeare

Initially, the developers of the Shakespeare service were responsible for the product, including carrying the pager for emergency response.
However, with growing use of the service and the growth of the revenue coming from the service, SRE support became desirable. The product had already been launched, so SRE conducted a Production Readiness Review. One of the things they found was that the dashboards did not completely cover some of the metrics defined in the SLO, so that needed to be fixed. After all the issues that had been filed were fixed, SRE took over the pager for the service, though two developers remained in the on-call rotation as well. The developers participate in the weekly on-call meeting, discussing the previous week's problems and how to handle upcoming large-scale maintenance or cluster turndowns. Future plans for the service are also now discussed with the SREs to make sure that new launches will go flawlessly (though Murphy's law is always looking for opportunities to spoil that).

Evolving the Simple PRR Model: Early Engagement

Thus far, we've discussed the Production Readiness Review as it's used in the Simple PRR Model, which is limited to services that have already entered the Launch phase. There are several limitations and costs associated with this model. For example:

Additional communication between teams can increase some process overhead for the development team, and cognitive burden for the SRE reviewers.
The right SRE reviewers must be available, and capable of managing their time and priorities with regard to their existing engagements.
Work done by SREs must be highly visible and sufficiently reviewed by the development team to ensure effective knowledge sharing. SREs should essentially work as a part of the development team, rather than an external unit.

However, the main limitations of the PRR Model stem from the fact that the service is launched and serving at scale, and the SRE engagement starts very late in the development lifecycle.
If the PRR occurred earlier in the service lifecycle, SRE's opportunity to remedy potential issues in the service would be markedly increased, and the success of the SRE engagement and the future success of the service itself would likely improve. Conversely, engaging only after launch can pose a significant challenge to both.

Candidates for Early Engagement

The Early Engagement Model introduces SRE earlier in the development lifecycle in order to achieve significant additional advantages. Applying the Early Engagement Model requires identifying the importance and/or business value of a service early in the development lifecycle, and determining whether the service will have sufficient scale or complexity to benefit from SRE expertise. Applicable services often have the following characteristics:

The service implements significant new functionality and will be part of an existing system already managed by SRE.
The service is a significant rewrite or alternative to an existing system, targeting the same use cases.
The development team sought SRE advice or approached SRE for takeover upon launch.

The Early Engagement Model essentially immerses SREs in the development process. SRE's focus remains the same, though the means to achieve a better production service are different. SRE participates in Design and later phases, eventually taking over the service any time during or after the Build phase. This model is based on active collaboration between the development and SRE teams.

Benefits of the Early Engagement Model

While the Early Engagement Model does entail certain risks and challenges discussed previously, additional SRE expertise and collaboration during the entire lifecycle of the product creates significant benefits compared to an engagement initiated later in the service lifecycle.
Design phase

SRE collaboration during the Design phase can prevent a variety of problems or incidents from occurring later in production. While design decisions can be reversed or rectified later in the development lifecycle, such changes come at a high cost in terms of effort and complexity. The best production incidents are those that never happen!

Occasionally, difficult trade-offs lead to the selection of a less-than-ideal design. Participation in the Design phase means that SREs are aware up front of the trade-offs and are part of the decision to pick a less-than-ideal option. Early SRE involvement aims to minimize future disputes over design choices once the service is in production.

Build and implementation

The Build phase addresses production aspects such as instrumentation and metrics, operational and emergency controls, resource usage, and efficiency. During this phase, SRE can influence and improve the implementation by recommending specific existing libraries and components, or helping build certain controls into the system. SRE participation at this stage helps enable ease of operations in the future and allows SRE to gain operational experience in advance of the launch.

Launch

SRE can also help implement widely used launch patterns and controls. For example, SRE might help implement a "dark launch" setup, in which part of the traffic from existing users is sent to the new service in addition to being sent to the live production service. The responses from the new service are "dark" since they are thrown away and not actually shown to users. Practices such as dark launches allow the team to gain operational insight, resolve issues without impacting existing users, and reduce the risk of encountering issues after launch. A smooth launch is immensely helpful in keeping the operational burden low and maintaining the development momentum after the launch.
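The dark launch pattern described above can be sketched in a few lines. This is a hypothetical illustration, not Google's implementation; the class and field names are invented for the example.

```python
import random

class DarkLaunchFrontend:
    """Serve every request from the live service; mirror a fraction of
    traffic to the dark-launched service and throw its responses away."""

    def __init__(self, live, dark, mirror_fraction=0.01, rng=random.random):
        self.live = live                      # callable: request -> response
        self.dark = dark                      # callable: request -> response
        self.mirror_fraction = mirror_fraction
        self.rng = rng                        # injectable for deterministic tests
        self.mismatches = 0                   # operational insight only
        self.dark_errors = 0                  # dark failures never reach users

    def handle(self, request):
        live_response = self.live(request)
        if self.rng() < self.mirror_fraction:
            try:
                # Responses from the dark service are compared, then discarded.
                if self.dark(request) != live_response:
                    self.mismatches += 1
            except Exception:
                self.dark_errors += 1         # swallowed, not user-visible
        return live_response                  # users only ever see this
```

Because the dark service's responses, and even its crashes, are discarded, the team gains comparison data and operational experience without putting existing users at risk.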
Disruptions around launch can easily result in emergency changes to source code and production, and disrupt the development team's work on future features.

Post-launch

Having a stable system at launch time generally leads to fewer conflicting priorities for the development team in terms of choosing between improving service reliability and adding new features. In later phases of the service, the lessons from earlier phases can better inform refactoring or redesign.

With extended involvement, the SRE team can be ready to take over the new service much sooner than is possible with the Simple PRR Model. The longer and closer engagement between the SRE and development teams also creates a collaborative relationship that can be sustained long term. A positive cross-team relationship fosters a mutual feeling of solidarity, and helps SRE establish ownership of the production responsibility.

Disengaging from a service

Sometimes a service doesn't warrant full-fledged SRE team management. This determination might be made post-launch, or SRE might engage with a service but never officially take it over. This is a positive outcome, because the service has been engineered to be reliable and low maintenance, and can therefore remain with the development team.

It is also possible that SRE engages early with a service that fails to meet the levels of usage projected. In such cases, the SRE effort spent is simply part of the overall business risk that comes with new projects, and a small cost relative to the success of projects that meet expected scale. The SRE team can be reassigned, and lessons learned can be incorporated into the engagement process.

Evolving Services Development: Frameworks and SRE Platform

The Early Engagement Model made strides in evolving SRE engagement beyond the Simple PRR Model, which applied only to services that had already launched. However, there was still progress to be made in scaling SRE engagement to the next level by designing for reliability.
Lessons Learned

Over time, the SRE engagement model described thus far produced several distinct patterns:

Onboarding each service required two or three SREs and typically lasted two or three quarters. The lead times for a PRR were relatively high (quarters away). The effort level required was proportional to the number of services under review, and was constrained by the insufficient number of SREs available to conduct PRRs. These conditions led to serialization of service takeovers and strict service prioritization.

Due to differing software practices across services, each production feature was implemented differently. To meet PRR-driven standards, features usually had to be reimplemented specifically for each service or, at best, once for each small subset of services sharing code. These reimplementations were a waste of engineering effort. One canonical example is the repeated implementation of functionally similar logging frameworks in the same language, because different services didn't share the same coding structure.

A review of common service issues and outages revealed certain patterns, but there was no way to easily replicate fixes and improvements across services. Typical examples included service overload situations and data hot-spotting.

SRE software engineering contributions were often local to the service, making it difficult to build generic solutions for reuse. As a consequence, there was no easy way to apply the lessons individual SRE teams learned, or new best practices, across services that had already been onboarded.

External Factors Affecting SRE

External factors have traditionally pressured the SRE organization and its resources in several ways. Google is increasingly following the industry trend of moving toward microservices.151 As a result, both the number of requests for SRE support and the cardinality of services to support have increased.
Because each service has a base fixed operational cost, even simple services demand more staffing. Microservices also imply an expectation of lower lead time for deployment, which was not possible with the previous PRR model (which had a lead time of months).

Hiring experienced, qualified SREs is difficult and costly. Despite enormous effort from the recruiting organization, there are never enough SREs to support all the services that need their expertise. Once SREs are hired, their training is also a lengthier process than is typical for development engineers.

Finally, the SRE organization is responsible for serving the needs of the large and growing number of development teams that do not already enjoy direct SRE support. This mandate calls for extending the SRE support model far beyond the original concept and engagement model.

Toward a Structural Solution: Frameworks

To effectively respond to these conditions, it became necessary to develop a model based on the following principles:

Codified best practices: The ability to commit what works well in production to code, so services can simply use this code and become "production ready" by design.

Reusable solutions: Common and easily shareable implementations of techniques used to mitigate scalability and reliability issues.

A common production platform with a common control surface: Uniform sets of interfaces to production facilities, uniform sets of operational controls, and uniform monitoring, logging, and configuration for all services.

Easier automation and smarter systems: A common control surface that enables automation and smart systems at a level not possible before. For example, SREs can readily receive a single view of relevant information for an outage, rather than hand-collecting and analyzing mostly raw data from disparate sources (logs, monitoring data, and so on).
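The "single view" enabled by a common control surface can be sketched as follows. This is a hypothetical illustration: the status fields and function names are invented, and serve only to show how uniform interfaces make cross-service automation straightforward.

```python
from dataclasses import dataclass

@dataclass
class ServiceStatus:
    """Uniform status record that every platform-built service exposes
    through the common control surface (field names are invented)."""
    name: str
    error_rate: float          # fraction of failed requests
    load_shedding_active: bool # overload protection currently engaged?

def outage_view(statuses, error_threshold=0.01):
    """One query across many services, instead of hand-collecting raw
    data from disparate per-service logs and monitoring sources."""
    return sorted(s.name for s in statuses
                  if s.error_rate > error_threshold or s.load_shedding_active)
```

Because every service reports through the same interface, the same tooling works for all of them with no per-service glue, which is precisely what ad hoc, service-specific monitoring setups prevent.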
Based upon these principles, a set of SRE-supported platform and service frameworks was created, one for each environment we support (Java, C++, Go). Services built using these frameworks share implementations that are designed to work with the SRE-supported platform, and are maintained by both SRE and development teams. The main shift brought about by frameworks was to enable product development teams to design applications using the framework solution that was built and blessed by SRE, as opposed to either retrofitting the application to SRE specifications after the fact, or retrofitting more SREs to support a service that was markedly different from other Google services.

An application typically comprises some business logic, which in turn depends on various infrastructure components. SRE production concerns are largely focused on the infrastructure-related parts of a service. The service frameworks implement infrastructure code in a standardized fashion and address various production concerns. Each concern is encapsulated in one or more framework modules, each of which provides a cohesive solution for a problem domain or infrastructure dependency. Framework modules address the various SRE concerns enumerated earlier, such as:

Instrumentation and metrics
Request logging
Control systems involving traffic and load management

SRE builds framework modules to implement canonical solutions for the concerned production area. As a result, development teams can focus on the business logic, because the framework already takes care of correct infrastructure use.

A framework essentially is a prescriptive implementation for using a set of software components and a canonical way of combining these components. The framework can also expose features that control various components in a cohesive manner.
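The separation between framework modules and business logic described above can be illustrated with a minimal sketch. All class names, the in-flight-request overload signal, and the module APIs are invented for this example; a real framework would be far richer.

```python
class Metrics:
    """Framework module: instrumentation and metrics (hypothetical API)."""
    def __init__(self):
        self.requests = 0
        self.errors = 0

class RequestLog:
    """Framework module: request logging."""
    def __init__(self):
        self.entries = []

class LoadManager:
    """Framework module: load management, using a simple in-flight-request
    cap as one possible (invented) overload signal."""
    def __init__(self, max_in_flight):
        self.max_in_flight = max_in_flight
        self.in_flight = 0

class FrameworkServer:
    """The framework combines the modules; the development team supplies
    only the business-logic callable."""
    def __init__(self, business_logic, max_in_flight=100):
        self.logic = business_logic
        self.metrics = Metrics()
        self.request_log = RequestLog()
        self.load = LoadManager(max_in_flight)

    def handle(self, request):
        if self.load.in_flight >= self.load.max_in_flight:
            self.request_log.entries.append((request, "shed"))
            return None                      # load shed: overload protection
        self.load.in_flight += 1
        try:
            response = self.logic(request)   # the only service-specific code
            self.metrics.requests += 1
            self.request_log.entries.append((request, "ok"))
            return response
        except Exception:
            self.metrics.requests += 1
            self.metrics.errors += 1
            self.request_log.entries.append((request, "error"))
            raise
        finally:
            self.load.in_flight -= 1
```

In this sketch, instrumentation, logging, and load management come with the framework and behave identically for every service built on it; the development team writes only the `business_logic` callable.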
For example, a framework might provide the following:

Business logic organized as well-defined semantic components that can be referenced using standard terms
Standard dimensions for monitoring instrumentation
A standard format for request debugging logs
A standard configuration format for managing load shedding
A semantically consistent measure of single-server capacity and of "overload" that can be used as feedback to various control systems

Frameworks provide multiple upfront gains in consistency and efficiency. They free developers from having to glue together and configure individual components in an ad hoc, service-specific manner, in ever-so-slightly incompatible ways, that then have to be manually reviewed by SREs. They drive a single reusable solution for production concerns across services, which means that framework users end up with the same common implementation and minimal configuration differences.

Google supports several major languages for application development, and frameworks are implemented across all of these languages. While different implementations of the framework (say, in C++ versus Java) can't share code, the goal is to expose the same API, behavior, configuration, and controls for identical functionality. Therefore, development teams can choose the language platform that fits their needs and experience, while SREs can still expect the same familiar behavior in production and standard tools to manage the service.

New Service and Management Benefits

The structural approach, founded on service frameworks and a common production platform and control surface, provided a host of new benefits.

Significantly lower operational overhead

A production platform built on top of frameworks with stronger conventions significantly reduced operational overhead, for the following reasons:

It supports strong conformance tests for coding structure, dependencies, tests, coding style guides, and so on.
This functionality also improves user data privacy, testing, and security conformance.
It features built-in service deployment, monitoring, and automation for all services.
It facilitates easier management of large numbers of services, especially microservices, which are growing in number.
It enables much faster deployment: an idea can graduate to fully deployed SRE-level production quality in a matter of days!

Universal support by design

The constant growth in the number of services at Google means that most of these services can neither warrant SRE engagement nor be maintained by SREs. Regardless, services that don't receive full SRE support can be built to use production features that are developed and maintained by SREs. This practice effectively breaks the SRE staffing barrier. Enabling SRE-supported production standards and tools for all teams improves the overall service quality across Google. Furthermore, all services that are implemented with frameworks automatically benefit from improvements made over time to framework modules.

Faster, lower overhead engagements

The frameworks approach results in faster PRR execution because we can rely upon:

Built-in service features as part of the framework implementation
Faster service onboarding (usually accomplished by a single SRE during one quarter)
Less cognitive burden for the SRE teams managing services built using frameworks

These properties allow SRE teams to lower the assessment and qualification effort for service onboarding, while maintaining a high bar on service production quality.

A new engagement model based on shared responsibility

The original SRE engagement model presented only two options: either full SRE support, or approximately no SRE engagement.152

A production platform with a common service structure, conventions, and software infrastructure made it possible for an SRE team to provide support for the "platform" infrastructure, while the development teams provide on-call support for functional issues with the service; that is, for bugs in the application code. Under this model, SREs assume responsibility for the development and maintenance of large parts of service software infrastructure, particularly control systems such as load shedding, overload handling, automation, traffic management, logging, and monitoring.

This model represents a significant departure from the way service management was originally conceived, in two major ways: it entails a new relationship model for the interaction between SRE and development teams, and a new staffing model for SRE-supported service management.153

Conclusion

Service reliability can be improved through SRE engagement, in a process that includes systematic review and improvement of a service's production aspects. Google SRE's initial systematic approach, the Simple Production Readiness Review, made strides in standardizing the SRE engagement model, but was applicable only to services that had already entered the Launch phase.

Over time, SRE extended and improved this model. The Early Engagement Model involved SRE earlier in the development lifecycle in order to "design for reliability." As demand for SRE expertise continued to grow, the need for a more scalable engagement model became increasingly apparent. Frameworks for production services were developed to meet this demand: code patterns based on production best practices were standardized and encapsulated in frameworks, so that use of frameworks became a recommended, consistent, and relatively simple way of building production-ready services. All three of the engagement models described are still practiced within Google.
However, the adoption of frameworks is becoming a prominent influence on building production-ready services at Google, profoundly expanding the SRE contribution, lowering service management overhead, and improving baseline service quality across the organization.

151 See the Wikipedia page on microservices at https://en.wikipedia.org/wiki/Microservices.

152 Occasionally, there were consulting engagements by SRE teams with some non-onboarded services, but consultations were a best-effort approach and limited in number and scope.

153 The new model of service management changes the SRE staffing model in two ways: (1) because a lot of service technology is common, it reduces the number of required SREs per service; (2) it enables the creation of production platforms with separation of concerns between production platform support (done by SREs) and service-specific business-logic support, which remains with the development team. These platform teams are staffed based upon the need to maintain the platform rather than upon service count, and can be shared across products.

Copyright © 2017 Google, Inc. Published by O'Reilly Media, Inc. Licensed under CC BY-NC-ND 4.0
Git - Account setup en configuratie About Trademark Learn Book Cheat Sheet Videos External Links Tools Command Line GUIs Hosting Reference Install Community This book is available in English . Full translation available in azərbaycan dili , български език , Deutsch , Español , فارسی , Français , Ελληνικά , 日本語 , 한국어 , Nederlands , Русский , Slovenščina , Tagalog , Українська , 简体中文 , Partial translations available in Čeština , Македонски , Polski , Српски , Ўзбекча , 繁體中文 , Translations started for Беларуская , Indonesian , Italiano , Bahasa Melayu , Português (Brasil) , Português (Portugal) , Svenska , Türkçe . The source of this book is hosted on GitHub. Patches, suggestions and comments are welcome. Chapters ▾ 1. Aan de slag 1.1 Over versiebeheer 1.2 Een kort historisch overzicht van Git 1.3 Wat is Git? 1.4 De commando-regel 1.5 Git installeren 1.6 Git klaarmaken voor eerste gebruik 1.7 Hulp krijgen 1.8 Samenvatting 2. Git Basics 2.1 Een Git repository verkrijgen 2.2 Wijzigingen aan de repository vastleggen 2.3 De commit geschiedenis bekijken 2.4 Dingen ongedaan maken 2.5 Werken met remotes 2.6 Taggen (Labelen) 2.7 Git aliassen 2.8 Samenvatting 3. Branchen in Git 3.1 Branches in vogelvlucht 3.2 Eenvoudig branchen en mergen 3.3 Branch-beheer 3.4 Branch workflows 3.5 Branches op afstand (Remote branches) 3.6 Rebasen 3.7 Samenvatting 4. Git op de server 4.1 De protocollen 4.2 Git op een server krijgen 4.3 Je publieke SSH sleutel genereren 4.4 De server opzetten 4.5 Git Daemon 4.6 Slimme HTTP 4.7 GitWeb 4.8 GitLab 4.9 Hosting oplossingen van derden 4.10 Samenvatting 5. Gedistribueerd Git 5.1 Gedistribueerde workflows 5.2 Bijdragen aan een project 5.3 Het beheren van een project 5.4 Samenvatting 6. GitHub 6.1 Account setup en configuratie 6.2 Aan een project bijdragen 6.3 Een project onderhouden 6.4 Een organisatie beheren 6.5 GitHub Scripten 6.6 Samenvatting 7. 
Git Tools 7.1 Revisie Selectie 7.2 Interactief stagen 7.3 Stashen en opschonen 7.4 Je werk tekenen 7.5 Zoeken 7.6 Geschiedenis herschrijven 7.7 Reset ontrafeld 7.8 Mergen voor gevorderden 7.9 Rerere 7.10 Debuggen met Git 7.11 Submodules 7.12 Bundelen 7.13 Vervangen 7.14 Het opslaan van inloggegevens 7.15 Samenvatting 8. Git aanpassen 8.1 Git configuratie 8.2 Git attributen 8.3 Git Hooks 8.4 Een voorbeeld van Git-afgedwongen beleid 8.5 Samenvatting 9. Git en andere systemen 9.1 Git als een client 9.2 Migreren naar Git 9.3 Samenvatting 10. Git Binnenwerk 10.1 Binnenwerk en koetswerk (plumbing and porcelain) 10.2 Git objecten 10.3 Git Referenties 10.4 Packfiles 10.5 De Refspec 10.6 Uitwisseling protocollen 10.7 Onderhoud en gegevensherstel 10.8 Omgevingsvariabelen 10.9 Samenvatting A1. Bijlage A: Git in andere omgevingen A1.1 Grafische interfaces A1.2 Git in Visual Studio A1.3 Git in Visual Studio Code A1.4 Git in Eclipse A1.5 Git in Sublime Text A1.6 Git in Bash A1.7 Git in Zsh A1.8 Git in PowerShell A1.9 Samenvatting A2. Bijlage B: Git in je applicaties inbouwen A2.1 Commando-regel Git A2.2 Libgit2 A2.3 JGit A2.4 go-git A2.5 Dulwich A3. Bijlage C: Git Commando’s A3.1 Setup en configuratie A3.2 Projecten ophalen en maken A3.3 Basic Snapshotten A3.4 Branchen en mergen A3.5 Projecten delen en bijwerken A3.6 Inspectie en vergelijking A3.7 Debuggen A3.8 Patchen A3.9 Email A3.10 Externe systemen A3.11 Beheer A3.12 Binnenwerk commando’s (plumbing commando’s) 2nd Edition 6.1 GitHub - Account setup en configuratie GitHub is de grootste host voor Git repositories, en het is het middelpunt van samenwerking voor miljoenen ontwikkelaars en projecten. Het leeuwendeel van alle Git repositories worden op GitHub gehost, en veel open-source projecten gebruiken het voor Git hosting, issue tracking, code review en andere zaken. 
Dus alhoewel het geen direct onderdeel is van het Git open source project, is er een grote kans dat je op enig moment met GitHub wilt of moet interacteren als je beroepsmatig Git gebruikt. Dit hoofdstuk bespreekt hoe je GitHub doelmatig gebruiken kunt. We zullen het inschrijven en het beheren van een account, het maken en gebruiken van Git repositories behandelen, veel gebruikte workflows voor het bijdragen aan projecten en het accepteren van bijdragen aan jouw project, de programmatische interface van GitHub bespreken en veel kleine tips geven om je het leven in het algemeen makkelijker te maken. Als je niet geïnteresseerd bent in het gebruik van GitHub als host voor je eigen projecten of om samen te werken met andere projecten die op GitHub zijn gehost, kan je zonder problemen doorgaan naar Git Tools . Account setup en configuratie Het eerste wat je dient te doen is om een gratis user account aan te maken. Eenvoudigweg https://github.com bezoeken, een gebruikersnaam kiezen die nog niet in gebruik is, een email adres en een wachtwoord opgeven, en op de grote groene “Sign up for GitHub” knop klikken. Figuur 82. Het GitHub aanmeldings-formulier. Het volgende wat je zult zien is de prijslijst voor opwaardeer schema’s, maar deze kan je op dit momment veilig negeren. GitHub zal je een email sturen om het opgegeven adres te verifiëren. Ga dit nu even doen, het is nogal belangrijk (zoals we straks zullen zien). Noot GitHub levert al haar functionaliteit met gratis account, met de beperking dat al je projecten compleet openbaar zijn (iedereen heeft leesrechten). De betaalde diensten van GitHub bevatten een aantal besloten projecten, maar we zullen deze in dit boek niet behandelen. Door op het Octocat logo links boven op het scherm te klikken wordt je naar je dashboard-pagina geleid. Je bent nu klaar om GitHub te gebruiken. 
SSH Access As of right now, you're fully able to connect with Git repositories using the https:// protocol, authenticating with the username and password you just set up. However, to simply clone public projects, you don't even need to sign up - the account we just created comes into play when we fork projects and push to our forks a bit later. If you'd like to use SSH remotes, you'll need to configure a public key. (If you don't already have one, see Generating Your SSH Public Key.) Open up your account settings using the link at the top-right of the window: Figure 83. The "Account settings" link. Then select the "SSH keys" section on the left-hand side. Figure 84. The "SSH keys" link. From there, click the "Add an SSH key" button, give your key a name, paste the contents of your ~/.ssh/id_rsa.pub (or whatever you named it) public-key file into the text area, and click "Add key". Note Be sure to name your SSH key something you can remember. You can name each of your keys (e.g. "My Laptop" or "Work Account") so that if you need to revoke a key later, you can easily tell which one you're looking for. Your Avatar Next, if you wish, you can replace the avatar that is generated for you with an image of your choosing. First go to the "Profile" tab (above the "SSH Keys" tab) and click "Upload new picture". Figure 85. The "Profile" link. We'll use a copy of the Git logo that is on our hard drive, and then we get a chance to crop it. Figure 86. Cropping your avatar Now everyone will see your avatar next to your username. If you happen to have uploaded an avatar to the popular Gravatar service (often used for Wordpress accounts), that avatar will be used by default, and you don't need to do this step. Your Email Addresses The way GitHub maps your Git commits to your user is by email address.
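That matching is done against the author email recorded in each commit, which Git takes from your `user.email` configuration. As a quick local sketch (the repository name `email-demo` and the name and address used here are placeholders):

```shell
# Create a throwaway repository and make one commit, to see the author
# email Git records - the value GitHub compares against the verified
# addresses on your account. All names and addresses are placeholders.
git init -q email-demo
git -C email-demo config user.name  "Example User"
git -C email-demo config user.email "you@example.com"

echo "hello" > email-demo/README
git -C email-demo add README
git -C email-demo commit -q -m "first commit"

# Show the author email of the latest commit:
git -C email-demo log -1 --format='%ae'   # prints: you@example.com
```

If this address isn't one of the (verified) addresses on your GitHub account, the commit won't be linked to your user, which is why the next step matters.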
If you use multiple email addresses in your commits and you want GitHub to map them properly, you'll need to add all of the email addresses you have used to the Emails section of the admin area. Figure 87. Add email addresses In Add email addresses we can see some of the different states that are possible. The top address is verified and set as the primary address, meaning that's where any notifications and receipts will be sent. The second address is verified, so it can be set as the primary address if you wish to switch them. The final address is unverified, which means that you can't make it your primary address. If GitHub sees any of these addresses in commit messages in any repository on the site, it will now be linked to your user. Two-factor Authentication Finally, for extra security, you should definitely set up two-factor authentication, or "2FA". Two-factor authentication is an authentication mechanism that has become more and more popular recently to mitigate the risk of your account being compromised if your password is somehow stolen. Turning it on makes GitHub ask you for two different methods of authentication, so that if one of them is compromised, an attacker will not be able to use your account. You can find the two-factor authentication setup under the Security tab of your account settings. Figure 88. 2FA in the Security Tab If you click the "Set up two-factor authentication" button, it will take you to a configuration page where you can choose to use a phone app to generate your secondary code (a "time-based one-time password"), or you can have GitHub send you a code via SMS each time you need to log in.
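To demystify what such a phone app does, here is a sketch of the time-based one-time password algorithm (RFC 6238, built on the HOTP construction of RFC 4226). This is illustrative, not GitHub's own code: it assumes `openssl` and `xxd` are available, and it takes the shared secret as hex, whereas GitHub displays the secret in base32 during setup.

```shell
# Compute a 6-digit TOTP code: HMAC-SHA1 the 30-second time step
# (as an 8-byte big-endian counter) with the shared secret, then
# dynamically truncate the result.
totp() {
  key_hex=$1
  step=$2
  counter=$(printf '%016x' "$step")
  hmac=$(printf '%s' "$counter" | xxd -r -p |
    openssl dgst -sha1 -mac HMAC -macopt "hexkey:$key_hex" |
    awk '{print $NF}')
  # Dynamic truncation: the low nibble of the last byte picks a
  # 4-byte window; mask the top bit, keep the low 6 decimal digits.
  offset=$(( 0x$(printf '%s' "$hmac" | tail -c 1) * 2 ))
  dbc=$(( 0x$(printf '%s' "$hmac" | cut -c "$((offset + 1))-$((offset + 8))") & 0x7fffffff ))
  printf '%06d\n' $(( dbc % 1000000 ))
}

# Current code for the RFC 4226 test secret ("12345678901234567890"
# in hex); a real app would use the secret tied to your account.
totp 3132333435363738393031323334353637383930 $(( $(date +%s) / 30 ))
```

Because the code depends only on the shared secret and the current 30-second window, the app and GitHub can each compute it independently, with no network connection needed on the phone.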
Once you've chosen your preferred method and followed the instructions for setting up 2FA, your account will be a little more secure, and you will have to provide an additional code along with your password every time you log into GitHub. About this site: Patches, suggestions, and comments are welcome. Git is a member of Software Freedom Conservancy
Agentforce Marketing (formerly Marketing Cloud)

Spark a conversation. Grow lasting relationships. Experiences that adapt. Marketing that responds. Marketing Cloud brings autonomous AI agents, actionable data, and full-funnel workflows together to power two-way conversations across every channel. Leading marketing teams of every size and in every industry drive growth with Salesforce.

What is Marketing Cloud?

Marketing Cloud is a complete marketing platform designed to help you personalize every moment and turn one-way campaigns into two-way conversations. By connecting every department through actionable data and Agentforce, it empowers teams to work together to build lasting customer relationships. No matter your tech stack, company size, or industry, Marketing Cloud can help you drive growth for your business.

How can Marketing Cloud help your business?

The end of "Do Not Reply" marketing: marketing that stops talking at customers and starts working with them.
Journeys powered by unified data: every marketing moment, powered by a single trusted view of the customer.
Experiences that adapt with customers: digital that remembers and engages in real time through a two-way conversation.
Campaigns built at the speed of creativity: automation plus your imagination, powered by unified data and Agentforce.
Insights that turn into impact: marketing that never sleeps but continually learns and improves on its own.

Here's what sets Marketing Cloud apart:
32% increase in overall marketing ROI
34% increase in customer lifetime value
32% increase in customer engagement
27% decrease in cost to acquire new customers
*Salesforce Customer Success Metrics

Discover why Marketing Cloud is the #1 agentic marketing platform.

Next-gen Marketing Cloud: a complete agentic marketing solution designed to help you personalize the right moments and have two-way conversations across the entire customer relationship.
Cross-Channel Marketing: build customer relationships and drive loyalty through better experiences across connected marketing channels.
B2B Marketing Automation: grow revenue by aligning marketing and sales around every account.
Customer Data Platform: drive growth with faster access to centralized data that's seamlessly activated to any channel.
Personalization: drive ROI and CLV with personalized experiences across every customer touchpoint.
Marketing Analytics: optimize cross-channel campaigns and spend with a single view of your marketing performance.
Loyalty Management: increase profitability and loyalty with better engagement and personalized rewards.
Small Business Marketing: grow your small business faster with marketing automation on the Salesforce platform.
Marketing AI: optimize every step of the campaign process with trusted, purpose-built AI.

See how some of the world's biggest companies succeed with Marketing Cloud.

SiriusXM engages the next generation of listeners with Data 360. With Data 360 and Marketing Cloud, SiriusXM harmonizes data across channels to create a 360-degree view of listeners, so it can personalize marketing experiences to keep subscribers engaged.
Formula 1 boosts fan satisfaction to 88% by making every second count. With a shared view of customer data and powerful real-time insights from Salesforce Data 360, sales, service, and marketing pros can delight fans with well-choreographed speed and consistency.

Discover what's new in Marketing:

Demo: Agentforce Marketing. Generate segments in real time, adapt experiences to every individual, and deliver personalized answers in every channel, 24/7.
Guide: See how personalized marketing can lift loyalty and raise ROI. In this e-book, you'll discover the importance of personalization and how you can create a strategy of your own.
Demo: Meet Salesforce Personalization. Power 1:1 experiences in real time with data and AI.
Research: State of Marketing Report. See the top trends in data, AI, and more, from nearly 5,000 marketers worldwide.
Guide: CDP Activation Guide. Learn five ways to engage customers with unlocked data.
Article: Connecting Marketing and Service Experiences. A unified marketing and service strategy can deliver more personalized customer journeys.
Article: Engage across the customer lifecycle with Agentforce. See how agentic automation helps you build lasting customer relationships.
Report: Gartner® 2025 Magic Quadrant™ for Multichannel Marketing Hubs. See why Gartner named Salesforce a Leader for the eighth year in a row and placed it furthest for Completeness of Vision.
Report: Gartner names Salesforce a CDP Leader two years in a row. Learn why Gartner named Salesforce a Leader among 12 other vendors.
Report: Gartner® 2025 Magic Quadrant™ for B2B Marketing Automation Platforms. See why Salesforce was named a Leader for the eighth year in a row.
Webinar: Get Started with Agentforce Marketing. Join our Marketing Cloud AI product leaders to discover how Agentforce for Marketing transforms AI insights into tangible actions, driving real enterprise value.
Webinar: Unlock the Power of First-Party Data with Salesforce and Meta. Discover how to enhance personalization and optimize marketing strategies through the partnership between Meta and Salesforce, leveraging innovations like Data 360 and the Meta Conversions API.
Webinar: AI-Driven Personalization. Learn how to access and activate your data for personalized customer engagement, through customer stories focused on data activation and personalization strategies.
Webinar: Transform Your B2B Marketing Strategy with Data 360. See how Flexo Concepts transformed its B2B marketing with Salesforce Account Engagement.

Couldn't join us at Dreamforce? Marketing is transforming once again, this time thanks to agentic AI. In this year's Agentforce Marketing keynote, we explored how AI agents can help you spark conversations with your customers, acting on every message and interaction in real time.

Join the Moment Marketer Community. Looking to meet and learn from fellow marketing professionals? The #MomentMarketer Community is the place for you.

Salesforce Marketing Cloud FAQ

What is Marketing Cloud?
Marketing Cloud is a complete marketing platform designed to help you personalize every moment of engagement across the customer lifecycle. By connecting every department through actionable data and Agentforce, it empowers teams to work together to build lasting customer relationships. No matter your tech stack, company size, or industry, Marketing Cloud can help you drive growth for your business.

What does Marketing Cloud do?

Marketing Cloud brings agentic marketing to life by giving every customer access to autonomous AI agents that act as always-on collaborators: they generate segments in real time, adapt experiences to every individual, and deliver personalized answers in every channel, 24/7. Unstructured data becomes AI-ready with Data Cloud, which unifies PDFs, videos, CMS content, and more to power automation, real-time analytics, and Agentforce, turning content into a live knowledge base. The end of one-way marketing is here: with Agentforce embedded across email, SMS, and web, "do-not-reply" messages become two-way customer conversations across channels and departments, connecting marketing with sales, service, and commerce. Campaigns can now assemble and optimize themselves in Marketing Cloud: deploy campaigns in hours, not weeks, with always-on execution, performance monitoring, and real-time optimization based on the marketer's goals and guidelines.

What is Agentic Marketing?

Powered by agentic AI, agentic marketing puts autonomous AI agents to work, personalizing and connecting experiences across every touchpoint. Unlike traditional automation, marketers define the strategy, such as "increase loyalty among at-risk customers", and AI agents handle the execution. They create content, build audiences, personalize messaging, optimize performance, and even manage handoffs between teams and channels. So instead of juggling tools and chasing approvals, marketers get time back for what really matters: strategy, creativity, and driving growth.

What is Agentforce Marketing?
Agentforce is a type of artificial intelligence (AI) application designed to execute specialized tasks that help marketers and customers. Agentforce for Marketing Cloud offers marketers pre-built skills to streamline key marketing tasks such as planning and creating campaigns: Agentforce can help marketers create a brief, a target audience segment, and email and SMS content, build a customer journey in Flow, and even provide a campaign summary. For more specialized needs, marketers can create custom agents using Agent Builder.

Does my business need marketing software?

If you run digital marketing campaigns across multiple channels, marketing software can greatly benefit your business through automation, personalization, and data-driven decision-making. Marketing software streamlines processes, improves productivity, enables personalized customer experiences, and provides valuable insights for optimizing marketing strategies. It helps businesses stay competitive in today's digital landscape and grow more efficiently.
Assess your marketing needs, resources, and goals to determine which marketing software is most valuable to your business objectives.

How do I choose the right marketing software?

Choosing the right marketing software involves weighing several key factors. Start by identifying your specific marketing needs and goals. Evaluate the features and capabilities of different software options, ensuring they align with your requirements. Consider factors such as ease of use, scalability, integration with existing systems, and customer support, and take advantage of demos to assess user-friendliness. Read reviews, seek recommendations, and consider the reputation and track record of the software provider. Pay attention to data security, privacy, and compliance with regulations. Finally, evaluate the cost and value proposition to ensure the software fits within your budget and offers a strong return on investment. By carefully assessing these factors, you can choose the marketing software that best suits your business needs.
https://git-scm.com/book/de/v2/Git-Grundlagen-Ein-Git-Repository-anlegen
Git - Getting a Git Repository

The source of this book is hosted on GitHub. Patches, suggestions and comments are welcome.

2.1 Git Basics - Getting a Git Repository

If you can read only one chapter of this excellent book to get going with Git, this is it.
This chapter covers every basic command you need to do the vast majority of the things you'll eventually spend your time doing with Git. By the end of the chapter, you should be able to create and configure a new repository, begin and stop tracking files, and stage and commit changes. We'll also show you how to set up Git to ignore certain files and file patterns, how to undo mistakes quickly and easily, how to browse the history of your project and view changes between commits, and how to push to and pull from remote repositories.

Getting a Git Repository

You have two ways of obtaining a Git repository on your machine. You can take a local directory that is currently not under version control and turn it into a Git repository, or you can clone an existing Git repository from elsewhere. In either case, you end up with a working Git repository on your local machine.

Initializing a Repository in an Existing Directory

If you have a project directory that is currently not under version control and you want to start versioning it with Git, you first need to change into that project's directory. How that looks depends on the system you use:

Linux:
$ cd /home/user/my_project

macOS:
$ cd /Users/user/my_project

Windows:
$ cd C:/Users/user/my_project

Then run the following command there:

$ git init

This creates a subdirectory named .git that contains all of the relevant Git repository data, in other words, a Git repository skeleton. At this point, no files are tracked by Git yet. See Chapter 10, Git Internals, for more information about exactly which files are created in the .git directory.
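As a minimal sketch of what `git init` leaves behind (the `/tmp/init_demo` path is invented for this example and is not from the book):

```shell
# Turn an ordinary directory into a Git repository and inspect the skeleton.
rm -rf /tmp/init_demo
mkdir -p /tmp/init_demo
cd /tmp/init_demo
git init
# The new .git subdirectory holds all repository data; typical entries
# include HEAD, config, objects/ and refs/. Nothing is tracked yet,
# as 'git status' confirms.
ls .git
git status
```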
If you want to start versioning existing files (as opposed to an empty directory), you should begin tracking those files and do an initial commit. You can accomplish that with a few git add commands that specify the files you want to track, followed by a git commit:

$ git add *.c
$ git add LICENSE
$ git commit -m 'Initial project version'

We'll go over what these commands do in just a minute. For now, what matters is that you have created a Git repository and made a first commit.

Cloning an Existing Repository

If you want to get a copy of an existing Git repository, for example, a project you'd like to contribute to, the command you need is git clone. If you're already familiar with another VCS such as Subversion, you'll immediately notice that the command is "clone" and not "checkout". This is an important distinction you should understand: instead of creating just a simple working copy of the project, Git downloads nearly all the data the server holds onto your local machine. Every version of every file in the project's history is downloaded automatically when you run git clone. If your server disk gets corrupted, you can use nearly any of the clones on any client to restore the server to the state it was in when it was cloned. (You may lose some server-side hooks and such, but all the versioned data would be there; see Chapter 4, Git on the Server, for more details.)

You clone a repository with the command git clone [url].
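The add-and-commit sequence above can be sketched end to end. The directory, files, and identity below are made up for the example; note that git commit refuses to run until user.name and user.email are configured:

```shell
rm -rf /tmp/commit_demo
mkdir -p /tmp/commit_demo
cd /tmp/commit_demo
git init -q
# A commit needs an author identity; set one locally for this repository.
git config user.name "Example User"
git config user.email "user@example.com"
# Create something to track, stage it, and make the initial commit.
echo 'int main(void) { return 0; }' > main.c
echo 'MIT' > LICENSE
git add *.c
git add LICENSE
git commit -m 'Initial project version'
# The history now contains exactly one commit:
git log --oneline
```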
For example, to clone the linkable Git library libgit2, you run the following command:

$ git clone https://github.com/libgit2/libgit2

Git then creates a directory named libgit2, initializes a .git directory inside it, downloads all the data for that repository, and checks out a working copy of the latest version. If you change into the new libgit2 directory, you'll find the project files there, ready to be worked with.

If you want to clone the repository into a directory named something other than libgit2, you can do so as follows:

$ git clone https://github.com/libgit2/libgit2 mylibgit

This command does the same thing as the previous one, but this time the target directory is called mylibgit.

Git supports a number of different transfer protocols. The previous example uses the https:// protocol, but you may also come across git:// or user@server:path/to/repo.git, which uses the SSH transfer protocol. Chapter 4, Git on the Server, introduces all the available options the server can set up to access your Git repository, and the pros and cons of each.
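Besides https://, git:// and SSH, git clone also accepts plain local paths, which makes the two-argument form easy to try offline. Everything below (paths, file names, identity) is invented for this sketch:

```shell
# Build a small source repository to stand in for a remote.
rm -rf /tmp/clone_src /tmp/clone_dst /tmp/myclone
mkdir -p /tmp/clone_src
cd /tmp/clone_src
git init -q
git config user.name "Example User"
git config user.email "user@example.com"
echo 'hello' > README
git add README
git commit -q -m 'initial'
# Clone it twice, once per target name, mirroring the difference between
# `git clone .../libgit2` and `git clone .../libgit2 mylibgit`.
cd /tmp
git clone -q /tmp/clone_src clone_dst
git clone -q /tmp/clone_src myclone
# Both clones contain the full history and a checked-out working copy.
ls clone_dst myclone
```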
https://git-scm.com/book/zh-tw/v2/Git-%e5%9f%ba%e7%a4%8e-%e5%8f%96%e5%be%97%e4%b8%80%e5%80%8b-Git-%e5%80%89%e5%84%b2
Git - Getting a Git Repository

2.1 Git Basics - Getting a Git Repository

If you can read only one chapter to learn Git, this is it. This chapter covers the basic commands you'll use for the vast majority of your future work with Git. By the end of it, you should be able to configure and initialize a repository, begin and stop tracking files, and stage and commit changes. It also shows how to tell Git to ignore certain files and file patterns, how to undo mistakes quickly and easily, how to browse your project's history and view changes between commits, and how to push to and pull from remote repositories.

Getting a Git Repository

There are two main ways to get a Git repository. The first is to import an existing project or directory into Git; the second is to clone an existing Git repository from another server.

Initializing a Repository in an Existing Directory

If you want to start tracking an existing project with Git, just change into that project's directory and run:

$ git init

This command creates a subdirectory named .git containing all the files Git needs for the repository, the skeleton of a Git repository. At this point, nothing is tracked by default yet. (To see which files are inside the .git directory you just created, see Chapter 10, Git Internals.)

If your project directory already contains files (it isn't empty), you should start tracking those existing files right away and make a first commit. You can do that with several git add commands to track all the files you want, followed by git commit:

$ git add *.c
$ git add LICENSE
$ git commit -m 'initial project version'

These steps shouldn't take you long. At this point, you have a Git repository that tracks a number of files and contains a first commit.

Cloning an Existing Repository

If you want a copy of an existing Git repository, for example, one you want to start collaborating on, the command you need is git clone. If you're familiar with another version control system such as Subversion, note that the command is clone, not checkout. This is an important difference between Git and other VCSs: Git doesn't just fetch the latest contents of the project; it pulls down nearly all the data the remote repository holds. Every version of every file in the project's history is pulled down by default when you run git clone. In fact, if the server's disk is damaged, you can usually use any client's clone to restore the server to the state it was in when it was cloned. (You may lose some server-side hooks, but all the version data will still be there; see Chapter 4, Git on the Server, for more details.)

The command format for cloning is git clone [url]. For example, to clone the linkable Git library named libgit2, run:

$ git clone https://github.com/libgit2/libgit2

This command creates a directory named "libgit2", initializes a .git directory inside it, pulls down all the data from the remote repository, and checks out the latest version of the project. If you enter the new libgit2 directory, you'll see the project files there, ready for further development or use.

If you want to clone the repository into a directory with a name other than "libgit2", just specify one more argument:

$ git clone https://github.com/libgit2/libgit2 mylibgit

This command does much the same as the previous one, except the locally created repository is named mylibgit.

Git supports several data transfer protocols. The previous example uses the https:// protocol, but you may also see git:// or user@server:path/to/repo.git, which use the SSH transfer protocol. Chapter 4, Git on the Server, covers how these protocols are configured and used on the server side, and the pros and cons of each.
https://doc.rust-lang.org/cargo/commands/cargo-publish.html#option-cargo-publish---no-verify
cargo publish - The Cargo Book

cargo-publish(1)

NAME
cargo-publish — Upload a package to the registry

SYNOPSIS
cargo publish [options]

DESCRIPTION
This command will create a distributable, compressed .crate file with the source code of the package in the current directory and upload it to a registry. The default registry is https://crates.io. This performs the following steps:

1. Performs a few checks, including checking the package.publish key in the manifest for restrictions on which registries you are allowed to publish to.
2. Creates a .crate file by following the steps in cargo-package(1).
3. Uploads the crate to the registry. The server will perform additional checks on the crate.
4. The client polls, waiting for the package to appear in the index, and may time out. In that case, you will need to check for completion manually. This timeout does not affect the upload.

This command requires you to be authenticated, using cargo-login(1) or the environment variables corresponding to the registry.token and registries.<name>.token config fields. See the reference for more details about packaging and publishing.

OPTIONS

Publish Options

--dry-run
Perform all checks without uploading.

--no-verify
Don't verify the contents by building them.

--allow-dirty
Allow working directories with uncommitted VCS changes to be packaged.

--index index
The URL of the registry index to use.

--registry registry
Name of the registry to publish to. Registry names are defined in Cargo config files. If not specified, and there is a package.publish field in Cargo.toml with a single registry, then it will publish to that registry. Otherwise it will use the default registry, which is defined by the registry.default config key, which defaults to crates-io.
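As a sketch of how --registry relates to Cargo config: a named registry is declared in a config file, and registry.default can make it the implicit target. The registry name and index URL below are hypothetical, not real endpoints:

```toml
# .cargo/config.toml -- hypothetical alternate registry named "my-registry".
[registries.my-registry]
index = "sparse+https://registry.example.com/index/"

# Optional: make it the default when `cargo publish` is run with no --registry.
[registry]
default = "my-registry"
```

With this in place, `cargo publish --registry my-registry --dry-run` would run all checks against that registry without uploading; the token would come from `cargo login --registry my-registry` or the corresponding environment variable.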
Package Selection

By default, when no package selection options are given, the packages selected depend on the selected manifest file (based on the current working directory if --manifest-path is not given). If the manifest is the root of a workspace, then the workspace’s default members are selected; otherwise only the package defined by the manifest will be selected. The default members of a workspace can be set explicitly with the workspace.default-members key in the root manifest. If this is not set, a virtual workspace will include all workspace members (equivalent to passing --workspace), and a non-virtual workspace will include only the root crate itself.

-p spec…, --package spec… Publish only the specified packages. See cargo-pkgid(1) for the SPEC format. This flag may be specified multiple times and supports common Unix glob patterns like *, ? and []. However, to avoid your shell accidentally expanding glob patterns before Cargo handles them, you must use single quotes or double quotes around each pattern.

--workspace Publish all members in the workspace.

--all Deprecated alias for --workspace.

--exclude SPEC… Exclude the specified packages. Must be used in conjunction with the --workspace flag. This flag may be specified multiple times and supports common Unix glob patterns like *, ? and []. However, to avoid your shell accidentally expanding glob patterns before Cargo handles them, you must use single quotes or double quotes around each pattern.

Compilation Options

--target triple Publish for the specified target architecture. Flag may be specified multiple times. The default is the host architecture. The general format of the triple is <arch><sub>-<vendor>-<sys>-<abi>. Possible values:

- Any supported target in rustc --print target-list.
- "host-tuple", which will internally be substituted by the host’s target.
This can be particularly useful if you’re cross-compiling some crates and don’t want to specify your host’s machine as a target (for instance, an xtask in a shared project that may be worked on by many hosts).

- A path to a custom target specification. See Custom Target Lookup Path for more information.

This may also be specified with the build.target config value. Note that specifying this flag makes Cargo run in a different mode where the target artifacts are placed in a separate directory. See the build cache documentation for more details.

--target-dir directory Directory for all generated artifacts and intermediate files. May also be specified with the CARGO_TARGET_DIR environment variable, or the build.target-dir config value. Defaults to target in the root of the workspace.

Feature Selection

The feature flags allow you to control which features are enabled. When no feature options are given, the default feature is activated for every selected package. See the features documentation for more details.

-F features, --features features Space or comma separated list of features to activate. Features of workspace members may be enabled with package-name/feature-name syntax. This flag may be specified multiple times, which enables all specified features.

--all-features Activate all available features of all selected packages.

--no-default-features Do not activate the default feature of the selected packages.

Manifest Options

--manifest-path path Path to the Cargo.toml file. By default, Cargo searches for the Cargo.toml file in the current directory or any parent directory.

--locked Asserts that the exact same dependencies and versions are used as when the existing Cargo.lock file was originally generated. Cargo will exit with an error when either of the following scenarios arises:

- The lock file is missing.
- Cargo attempted to change the lock file due to a different dependency resolution.
It may be used in environments where deterministic builds are desired, such as in CI pipelines.

--offline Prevents Cargo from accessing the network for any reason. Without this flag, Cargo will stop with an error if it needs to access the network and the network is not available. With this flag, Cargo will attempt to proceed without the network if possible. Beware that this may result in different dependency resolution than online mode. Cargo will restrict itself to crates that are downloaded locally, even if there might be a newer version as indicated in the local copy of the index. See the cargo-fetch(1) command to download dependencies before going offline. May also be specified with the net.offline config value.

--frozen Equivalent to specifying both --locked and --offline.

--lockfile-path PATH Changes the path of the lockfile from the default (<workspace_root>/Cargo.lock) to PATH. PATH must end with Cargo.lock (e.g. --lockfile-path /tmp/temporary-lockfile/Cargo.lock). Note that providing --lockfile-path will ignore the existing lockfile at the default path, and will instead either use the lockfile from PATH, or write a new lockfile into the provided PATH if it doesn’t exist. This flag can be used to run most commands in read-only directories, writing the lockfile into the provided PATH. This option is only available on the nightly channel and requires the -Z unstable-options flag to enable (see #14421).

Miscellaneous Options

-j N, --jobs N Number of parallel jobs to run. May also be specified with the build.jobs config value. Defaults to the number of logical CPUs. If negative, it sets the maximum number of parallel jobs to the number of logical CPUs plus the provided value. If a string default is provided, it sets the value back to defaults. Should not be 0.

--keep-going Build as many crates in the dependency graph as possible, rather than aborting the build on the first one that fails to build.
For example, if the current package depends on dependencies fails and works, one of which fails to build, cargo publish -j1 may or may not build the one that succeeds (depending on which one of the two builds Cargo picked to run first), whereas cargo publish -j1 --keep-going would definitely run both builds, even if the one run first fails.

Display Options

-v, --verbose Use verbose output. May be specified twice for “very verbose” output which includes extra output such as dependency warnings and build script output. May also be specified with the term.verbose config value.

-q, --quiet Do not print cargo log messages. May also be specified with the term.quiet config value.

--color when Control when colored output is used. Valid values:

- auto (default): Automatically detect if color support is available on the terminal.
- always: Always display colors.
- never: Never display colors.

May also be specified with the term.color config value.

Common Options

+toolchain If Cargo has been installed with rustup, and the first argument to cargo begins with +, it will be interpreted as a rustup toolchain name (such as +stable or +nightly). See the rustup documentation for more information about how toolchain overrides work.

--config KEY=VALUE or PATH Overrides a Cargo configuration value. The argument should be in TOML syntax of KEY=VALUE, or provided as a path to an extra configuration file. This flag may be specified multiple times. See the command-line overrides section for more information.

-C PATH Changes the current working directory before executing any specified operations. This affects things like where cargo looks by default for the project manifest (Cargo.toml), as well as the directories searched for discovering .cargo/config.toml, for example. This option must appear before the command name, for example cargo -C path/to/my-project build. This option is only available on the nightly channel and requires the -Z unstable-options flag to enable (see #10098).
-h, --help Prints help information.

-Z flag Unstable (nightly-only) flags to Cargo. Run cargo -Z help for details.

ENVIRONMENT See the reference for details on environment variables that Cargo reads.

EXIT STATUS

- 0: Cargo succeeded.
- 101: Cargo failed to complete.

EXAMPLES Publish the current package:

$ cargo publish

SEE ALSO cargo(1), cargo-package(1), cargo-login(1)
2026-01-13T09:29:18
https://www.linkedin.com/products/teamviewer-frontline/?trk=products_details_guest_similar_products_section_similar_products_section_product_link_result-card_full-click
TeamViewer Frontline | LinkedIn

TeamViewer Frontline, Workflow Management Software by TeamViewer

About: Empower your frontline workforce with augmented reality! Digitalize and streamline processes for frontline employees in desk-free workspaces with AR-guided workflows, seamlessly integrated with wearables and mobile devices. With TeamViewer Frontline you can increase your company’s productivity, efficiency, and quality along the entire value chain.

This product is intended for: Supply Chain Manager, Director of Logistics, Vice President Logistics, Chief Supply Chain Officer, Manufacturing Executive, Director Installation, Director of Maintenance, Director of Field Operations, Director Technical Training, Operations Manager

Media

TeamViewer Frontline – Empowering Frontline Workers: 80 percent of today’s workers globally are working deskless jobs. Frontline workers are often overlooked when companies focus on digitization. TeamViewer Frontline is here to change this. As the world’s first fully integrated augmented reality (AR) platform for mobile workers, Frontline from TeamViewer enables a truly digitalized industrial workplace, simplifying processes, minimizing error rates, ensuring high-quality output, and increasing productivity along your entire value chain.

TeamViewer Frontline xPick & Samsung SDS – How vision picking transforms warehouse operations

Easy and hands-free: Step-by-step instructions with xInspect: Discover our augmented reality solution xInspect for inspection and maintenance.
It provides hands-free, visual instructions, on-site documentation, and an instant connection to expert support at any time. The container inspection use case shown here originates from the research project PortForward, to which we actively contribute. Together with our partners, we are researching how to make the future of European ports smarter, more environmentally friendly, and more interconnected.

Lewis Hamilton trying out augmented reality software by TeamViewer: Lewis Hamilton, seven-time Formula 1 world champion, using TeamViewer’s Frontline AR solution to look at his past season’s car in a totally new way! Beyond the racetrack, Frontline improves manual work processes across industries on the basis of the latest wearable computing technologies.

How TeamViewer Frontline Supports Deyaar FM’s Facility Management Operations: Thanks to TeamViewer’s augmented-reality-based solution, TeamViewer Frontline xAssist, Deyaar Facilities Management can remotely support its customers quickly and effectively while avoiding unnecessary travel and reducing its carbon footprint. With the use of smart glasses, its remote expert sees exactly what the on-site technician sees and can provide technical expertise.
Find more information about our latest case study here: https://www.teamviewer.com/en-us/success-stories/deyaar-fm/

Featured customers of TeamViewer Frontline: DB Schenker, AUDI AG, GlobalFoundries, DHL Supply Chain, Boehringer Ingelheim, Samsung SDS, BMW Group, Intel Corporation, Ford Motor Company, Coca-Cola HBC, SIG Group, Airbus, Lexus, AGCO Corporation, Liebherr Group, KEMPER GmbH, NSF, Toyota Deutschland GmbH
2026-01-13T09:29:18
https://git-scm.com/book/nl/v2/Git-Tools-Revisie-Selectie
Git - Revision Selection

The source of this book is hosted on GitHub. Patches, suggestions and comments are welcome.
7.1 Git Tools - Revision Selection (2nd Edition)

At this point, you’ve learned most of the day-to-day commands and workflows that you need to maintain or manage a Git repository for tracking your own source code. You’ve accomplished the basic tasks of tracking and committing files, and you’ve harnessed the power of the staging area and lightweight topic branching and merging.
Now you’ll explore a number of very powerful things that Git can do that you may not necessarily use on a day-to-day basis but that you may need at some point.

Revision Selection

Git allows you to refer to a single commit, a set of commits, or a range of commits in a number of ways. They aren’t necessarily obvious but are helpful to know.

Single Revisions

You can obviously refer to a commit by the SHA-1 hash that it’s given, but there are more human-friendly ways to refer to commits as well. This section outlines the various ways you can refer to a single commit.

Short SHA-1

Git is smart enough to figure out which commit you meant to type if you provide the first few characters of its SHA-1 hash, as long as that partial hash is at least four characters long and unambiguous; that is, only one object in the current repository begins with that partial SHA-1. For example, to see a specific commit, suppose you run a git log command and identify the commit where you added certain functionality:

$ git log
commit 734713bc047d87bf7eac9674765ae793478c50d3
Author: Scott Chacon <schacon@gmail.com>
Date:   Fri Jan 2 18:32:33 2009 -0800

    fixed refs handling, added gc auto, updated tests

commit d921970aadf03b3cf0e71becdaab3147ba71cdef
Merge: 1c002dd... 35cfb2b...
Author: Scott Chacon <schacon@gmail.com>
Date:   Thu Dec 11 15:08:43 2008 -0800

    Merge commit 'phedders/rdocs'

commit 1c002dd4b536e7479fe34593e72e6c6c1819e53b
Author: Scott Chacon <schacon@gmail.com>
Date:   Thu Dec 11 14:58:32 2008 -0800

    added some blame and merge stuff

Suppose we’re interested in the commit whose hash begins with 1c002dd....
You can examine that commit with any of the following variations of the git show command (assuming the shorter versions are unambiguous):

$ git show 1c002dd4b536e7479fe34593e72e6c6c1819e53b
$ git show 1c002dd4b536e7479f
$ git show 1c002d

Git can figure out a short, unique abbreviation for your SHA-1 values. If you pass --abbrev-commit to the git log command, the output will use shorter values but keep them unique; it defaults to using seven characters but makes them longer if necessary to keep the SHA-1 unambiguous:

$ git log --abbrev-commit --pretty=oneline
ca82a6d changed the version number
085bb3b removed unnecessary test code
a11bef0 first commit

Generally, eight to ten characters are more than enough to be unique within a project. For example, as of February 2019, the Linux kernel (which is a fairly sizable project) has over 875,000 commits and almost seven million objects in its object database, with no two objects whose SHA-1s are identical in the first 12 characters.

Note: A SHORT NOTE ABOUT SHA-1

A lot of people become concerned at some point that they will, by random happenstance, have two objects in their repository that hash to the same SHA-1 value. What then?

If you do happen to commit an object that hashes to the same SHA-1 value as a previous different object in your repository, Git will see the previous object already in your Git database, assume it was already written, and simply reuse it. If you try to check out that object again at any point, you’ll always get the data of the first object.

However, you should be aware of how ridiculously unlikely this scenario is. The SHA-1 digest size is 20 bytes, or 160 bits. The number of randomly hashed objects needed to ensure a 50% probability of a single collision is about 2^80 (the formula for determining collision probability is p = (n(n-1)/2) * (1/2^160)).
2^80 is 1.2 x 10^24, or one million billion billion. That’s 1,200 times the number of grains of sand on Earth.

Here’s an example to give you an idea of what it would take to get a SHA-1 collision. If all 6.5 billion humans on Earth were programming, and every second each one was producing code that was the equivalent of the entire Linux kernel history (6.5 million Git objects) and pushing it into one enormous Git repository, it would take roughly two years until that repository contained enough objects to have a 50% probability of a single SHA-1 object collision. Thus, every member of your programming team is more likely to be attacked and killed by wolves in unrelated incidents on the same night.

Branch References

One straightforward way to refer to a particular commit is if it’s the commit at the tip of a branch; in that case, you can simply use the branch name in any Git command that expects a reference to a commit. For instance, if you want to see the last commit object on a branch, the following commands are equivalent, assuming that the topic1 branch points to ca82a6d:

$ git show ca82a6dff817ec66f44342007202690a93763949
$ git show topic1

If you want to see which specific SHA-1 a branch points to, or if you want to see what any of these examples boils down to in terms of SHA-1s, you can use a Git plumbing tool called rev-parse. You can see Git Internals for more information about plumbing tools; basically, rev-parse exists for lower-level operations and isn’t designed for day-to-day use. However, it can be helpful sometimes when you need to see what’s really going on. Here you can run rev-parse on your branch:
$ git rev-parse topic1
ca82a6dff817ec66f44342007202690a93763949

RefLog Shortnames

One of the things Git does in the background while you’re working away is keep a “reflog”: a log of where your HEAD and branch references have been for the last few months. You can see your reflog by using git reflog:

$ git reflog
734713b HEAD@{0}: commit: fixed refs handling, added gc auto, updated
d921970 HEAD@{1}: merge phedders/rdocs: Merge made by the 'recursive' strategy.
1c002dd HEAD@{2}: commit: added some blame and merge stuff
1c36188 HEAD@{3}: rebase -i (squash): updating HEAD
95df984 HEAD@{4}: commit: # This is a combination of two commits.
1c36188 HEAD@{5}: rebase -i (squash): updating HEAD
7e05da5 HEAD@{6}: rebase -i (pick): updating HEAD

Every time your branch tip is updated for any reason, Git stores that information for you in this temporary history, and you can use this data to refer to older commits as well. For example, if you want to see the fifth prior value of the HEAD of your repository, you can use the @{5} reference that you see in the reflog output:

$ git show HEAD@{5}

You can also use this syntax to see where a branch was at some specific point in the past. For instance, to see where your master branch was yesterday, you can type:

$ git show master@{yesterday}

That shows you where the tip of your master branch was yesterday. This technique only works for data that’s still in your reflog, so you can’t use it to look up commits older than a few months.
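The reflog mechanics described above are easy to try out yourself. A minimal sketch in a throwaway repository (the identity values are placeholders):

```shell
#!/bin/sh
set -e

# Create a scratch repository with two commits.
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email "you@example.com"
git config user.name "You"
git commit -q --allow-empty -m "first"
git commit -q --allow-empty -m "second"

# HEAD@{0} is where HEAD is now; HEAD@{1} is where it was one step earlier,
# i.e. the "first" commit -- here, the same commit that HEAD~1 names.
git rev-parse HEAD@{1}
git rev-parse HEAD~1

# The reflog, formatted like git log output.
git log -g --oneline
```

Both rev-parse calls print the same hash here, because moving back one reflog entry happens to coincide with moving back one parent. That is not true in general: the reflog tracks where HEAD has been (including resets and rebases), not ancestry.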
To see reflog information formatted like the git log output, you can run git log -g:

$ git log -g master
commit 734713bc047d87bf7eac9674765ae793478c50d3
Reflog: master@{0} (Scott Chacon <schacon@gmail.com>)
Reflog message: commit: fixed refs handling, added gc auto, updated
Author: Scott Chacon <schacon@gmail.com>
Date:   Fri Jan 2 18:32:33 2009 -0800

    fixed refs handling, added gc auto, updated tests

commit d921970aadf03b3cf0e71becdaab3147ba71cdef
Reflog: master@{1} (Scott Chacon <schacon@gmail.com>)
Reflog message: merge phedders/rdocs: Merge made by recursive.
Author: Scott Chacon <schacon@gmail.com>
Date:   Thu Dec 11 15:08:43 2008 -0800

    Merge commit 'phedders/rdocs'

It’s important to note that reflog information is strictly local: it’s a log only of what you’ve done in your repository. The references won’t be the same on someone else’s copy of the repository, and right after you initially clone a repository, you’ll have an empty reflog, as no activity has occurred yet in your repository. Running git show HEAD@{2.months.ago} will only work if you cloned the project at least two months ago; if you cloned it more recently than that, you’ll see only your first local commit.

Tip: Think of the reflog as Git’s version of shell history. If you have a UNIX or Linux background, you can think of the reflog as Git’s version of shell history, which emphasizes that what’s there is clearly relevant only to you and your “session”, and has nothing to do with anyone else who might be working on the same machine.

Ancestry References

The other main way to specify a commit is via its ancestry. If you place a ^ (caret) at the end of a reference, Git resolves it to mean the parent of that commit.
Suppose you look at the history of your project:

$ git log --pretty=format:'%h %s' --graph
* 734713b fixed refs handling, added gc auto, updated tests
*   d921970 Merge commit 'phedders/rdocs'
|\
| * 35cfb2b Some rdoc changes
* | 1c002dd added some blame and merge stuff
|/
* 1c36188 ignore *.gem
* 9b29157 add open3_detach to gemspec file list

Then, you can see the previous commit by specifying HEAD^, which means “the parent of HEAD”:

$ git show HEAD^
commit d921970aadf03b3cf0e71becdaab3147ba71cdef
Merge: 1c002dd... 35cfb2b...
Author: Scott Chacon <schacon@gmail.com>
Date:   Thu Dec 11 15:08:43 2008 -0800

    Merge commit 'phedders/rdocs'

Note: Escaping the caret on Windows. In cmd.exe on Windows, ^ is a special character and needs to be treated differently. You can either double it or put the commit reference in quotes:

$ git show HEAD^     # will NOT work on Windows
$ git show HEAD^^    # OK
$ git show "HEAD^"   # OK

You can also specify a number after the ^ to identify which parent you want; for example, d921970^2 means “the second parent of d921970.” This syntax is useful only for merge commits, which have more than one parent. The first parent of a merge commit is from the branch you were on when you merged (frequently master), while the second parent of a merge commit is from the branch that was merged (say, topic):

$ git show d921970^
commit 1c002dd4b536e7479fe34593e72e6c6c1819e53b
Author: Scott Chacon <schacon@gmail.com>
Date:   Thu Dec 11 14:58:32 2008 -0800

    added some blame and merge stuff

$ git show d921970^2
commit 35cfb2b795a55793d7cc56a6cc2060b4bb732548
Author: Paul Hedderly <paul+git@mjr.org>
Date:   Wed Dec 10 22:22:03 2008 +0000

    Some rdoc changes

The other main ancestry specification is the ~ (tilde). This also refers to the first parent, so HEAD~ and HEAD^ are equivalent. The difference becomes apparent when you specify a number.
HEAD~2 means “the first parent of the first parent,” or “the grandparent”; it traverses the first parents the number of times you specify. For example, in the history listed before, HEAD~3 would be:

$ git show HEAD~3
commit 1c3618887afb5fbcbea25b7c013f4e2114448b8d
Author: Tom Preston-Werner <tom@mojombo.com>
Date:   Fri Nov 7 13:47:59 2008 -0500

    ignore *.gem

This can also be written HEAD~~~, which again is the first parent of the first parent of the first parent:

$ git show HEAD~~~
commit 1c3618887afb5fbcbea25b7c013f4e2114448b8d
Author: Tom Preston-Werner <tom@mojombo.com>
Date:   Fri Nov 7 13:47:59 2008 -0500

    ignore *.gem

You can also combine these syntaxes: you can get the second parent of the previous reference (assuming it was a merge commit) by using HEAD~3^2, and so on.

Commit Ranges

Now that you can specify individual commits, let’s see how to specify ranges of commits. This is particularly useful for managing your branches: if you have a lot of branches, you can use range specifications to answer questions such as, “What work is on this branch that I haven’t yet merged into my main branch?”

Double Dot

The most common range specification is the double-dot syntax. This basically asks Git to resolve a range of commits that are reachable from one commit but aren’t reachable from another. For example, say you have a commit history that looks like the example in Figure 137 (Example history for range selection). Say you want to see what is in your experiment branch that hasn’t yet been merged into your master branch.
You can ask Git to show you a log of just those commits with master..experiment; that means “all commits reachable from experiment that aren’t reachable from master.” To keep these examples brief and clear, the letters of the commit objects from the diagram are used in place of the actual log output, in the order they would be displayed:

$ git log master..experiment
D
C

If, on the other hand, you want to see the opposite (all commits in master that aren’t in experiment), you can reverse the branch names. experiment..master shows you everything in master not reachable from experiment:

$ git log experiment..master
F
E

This is useful if you want to keep the experiment branch up to date and preview what you’re about to merge in. Another frequent use of this syntax is to see what you’re about to push to a remote:

$ git log origin/master..HEAD

This command shows you any commits in your current branch that aren’t in the master branch on your origin remote. If you run a git push and your current branch is tracking origin/master, the commits listed by git log origin/master..HEAD are the commits that will be transferred to the server. You can also leave off one side of the syntax to have Git assume HEAD. For example, you can get the same results as in the previous example by typing git log origin/master..; Git substitutes HEAD if one side is missing.

Multiple Points

The double-dot syntax is useful as a shorthand, but perhaps you want to specify more than two branches to indicate your revision, such as seeing which commits are in any of several branches that aren’t in the branch you’re currently on. Git allows you to do this by using either the ^ character or --not before any reference from which you don’t want to see reachable commits.
Thus, these three commands are equivalent:

$ git log refA..refB
$ git log ^refA refB
$ git log refB --not refA

This is nice because with this syntax you can specify more than two references in your query, which you cannot do with the double-dot syntax. For instance, if you want to see all commits that are reachable from refA or refB but not from refC, you can use either of these:

$ git log refA refB ^refC
$ git log refA refB --not refC

This makes for a very powerful revision query system that should help you figure out what is in your branches.

Triple Dot

The last major range-selection syntax is the triple-dot syntax, which specifies all the commits that are reachable by either of two references but not by both of them. Look back at the example commit history in Figure 137 (Example history for range selection). If you want to see what is in master or experiment but not any common references, you can run:

$ git log master...experiment
F
E
D
C

Again, this gives you normal log output but shows you only the commit information for those four commits, appearing in the traditional commit date ordering. A common switch to use with the log command in this case is --left-right, which shows you which side of the range each commit is in. This helps make the output more useful:

$ git log --left-right master...experiment
< F
< E
> D
> C

With these tools, you can much more easily let Git know what commit or commits you want to inspect.
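The range syntaxes above can be exercised end to end in a scratch repository. A minimal sketch (the branch layout mirrors the experiment example; the identity values are placeholders, and the default branch keeps whatever name your Git assigns it):

```shell
#!/bin/sh
set -e

repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email "you@example.com"
git config user.name "You"

git commit -q --allow-empty -m "base"
git branch experiment
git commit -q --allow-empty -m "E"     # only on the default branch
git checkout -q experiment
git commit -q --allow-empty -m "C"     # only on experiment
git checkout -q -                      # back to the default branch

# Reachable from experiment but not from HEAD (double dot, one side omitted):
git log --oneline ..experiment         # shows C

# Reachable from HEAD but not from experiment:
git log --oneline experiment..         # shows E

# Reachable from either side but not both (triple dot):
git log --oneline --left-right ...experiment   # E marked <, C marked >
```

Because one side of each range is omitted, Git substitutes HEAD, exactly as described for git log origin/master.. above.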
1px 10px 0 rgba(0,0,0,.12)}.VfPpkd-LgbsSe-OWXEXe-MV7yeb:hover{-webkit-box-shadow:0 2px 4px -1px rgba(0,0,0,.2),0 4px 5px 0 rgba(0,0,0,.14),0 1px 10px 0 rgba(0,0,0,.12);box-shadow:0 2px 4px -1px rgba(0,0,0,.2),0 4px 5px 0 rgba(0,0,0,.14),0 1px 10px 0 rgba(0,0,0,.12)}.VfPpkd-LgbsSe-OWXEXe-MV7yeb:not(:disabled):active{-webkit-box-shadow:0 5px 5px -3px rgba(0,0,0,.2),0 8px 10px 1px rgba(0,0,0,.14),0 3px 14px 2px rgba(0,0,0,.12);box-shadow:0 5px 5px -3px rgba(0,0,0,.2),0 8px 10px 1px rgba(0,0,0,.14),0 3px 14px 2px rgba(0,0,0,.12)}.VfPpkd-LgbsSe-OWXEXe-MV7yeb:disabled{-webkit-box-shadow:0 0 0 0 rgba(0,0,0,.2),0 0 0 0 rgba(0,0,0,.14),0 0 0 0 rgba(0,0,0,.12);box-shadow:0 0 0 0 rgba(0,0,0,.2),0 0 0 0 rgba(0,0,0,.14),0 0 0 0 rgba(0,0,0,.12)}.VfPpkd-LgbsSe-OWXEXe-INsAgc{font-family:Roboto,sans-serif;font-size:.875rem;letter-spacing:.0892857143em;font-weight:500;text-transform:uppercase;height:36px;border-radius:4px;padding:0 15px 0 15px;border-width:1px}.VfPpkd-LgbsSe-OWXEXe-INsAgc:not(:disabled){color:#6200ee}.VfPpkd-LgbsSe-OWXEXe-INsAgc:disabled{color:rgba(0,0,0,.38)}.VfPpkd-LgbsSe-OWXEXe-INsAgc .VfPpkd-kBDsod{font-size:1.125rem;width:1.125rem;height:1.125rem}.VfPpkd-LgbsSe-OWXEXe-INsAgc .VfPpkd-Jh9lGc::before{background-color:#6200ee}.VfPpkd-LgbsSe-OWXEXe-INsAgc .VfPpkd-Jh9lGc::after{background-color:#6200ee}.VfPpkd-LgbsSe-OWXEXe-INsAgc:hover .VfPpkd-Jh9lGc::before,.VfPpkd-LgbsSe-OWXEXe-INsAgc.VfPpkd-ksKsZd-XxIAqe-OWXEXe-ZmdkE .VfPpkd-Jh9lGc::before{opacity:.04}.VfPpkd-LgbsSe-OWXEXe-INsAgc.VfPpkd-ksKsZd-mWPk3d-OWXEXe-AHe6Kc-XpnDCe .VfPpkd-Jh9lGc::before,.VfPpkd-LgbsSe-OWXEXe-INsAgc:not(.VfPpkd-ksKsZd-mWPk3d):focus .VfPpkd-Jh9lGc::before{-webkit-transition-duration:75ms;transition-duration:75ms;opacity:.12}.VfPpkd-LgbsSe-OWXEXe-INsAgc:not(.VfPpkd-ksKsZd-mWPk3d) .VfPpkd-Jh9lGc::after{-webkit-transition:opacity .15s linear;transition:opacity .15s linear}.VfPpkd-LgbsSe-OWXEXe-INsAgc:not(.VfPpkd-ksKsZd-mWPk3d):active 
.VfPpkd-Jh9lGc::after{-webkit-transition-duration:75ms;transition-duration:75ms;opacity:.12}.VfPpkd-LgbsSe-OWXEXe-INsAgc.VfPpkd-ksKsZd-mWPk3d{--mdc-ripple-fg-opacity:var(--mdc-outlined-button-pressed-state-layer-opacity,0.12)}.VfPpkd-LgbsSe-OWXEXe-INsAgc .VfPpkd-Jh9lGc{border-radius:4px}.VfPpkd-LgbsSe-OWXEXe-INsAgc .VfPpkd-J1Ukfc-LhBDec{border-radius:2px}.VfPpkd-LgbsSe-OWXEXe-INsAgc .VfPpkd-J1Ukfc-LhBDec::after{border-radius:4px}.VfPpkd-LgbsSe-OWXEXe-INsAgc:not(:disabled){border-color:rgba(0,0,0,.12)}.VfPpkd-LgbsSe-OWXEXe-INsAgc:disabled{border-color:rgba(0,0,0,.12)}.VfPpkd-LgbsSe-OWXEXe-INsAgc.VfPpkd-LgbsSe-OWXEXe-Bz112c-UbuQg{padding:0 11px 0 15px}.VfPpkd-LgbsSe-OWXEXe-INsAgc.VfPpkd-LgbsSe-OWXEXe-Bz112c-M1Soyc{padding:0 15px 0 11px}.VfPpkd-LgbsSe-OWXEXe-INsAgc .VfPpkd-Jh9lGc{top:-1px;left:-1px;bottom:-1px;right:-1px;border-width:1px}.VfPpkd-LgbsSe-OWXEXe-INsAgc .VfPpkd-RLmnJb{left:-1px;width:calc(100% + 2px)}.nCP5yc{font-family:"Google Sans",Roboto,Arial,sans-serif;font-size:.875rem;letter-spacing:.0107142857em;font-weight:500;text-transform:none;-webkit-transition:border .28s cubic-bezier(.4,0,.2,1),-webkit-box-shadow .28s cubic-bezier(.4,0,.2,1);transition:border .28s cubic-bezier(.4,0,.2,1),-webkit-box-shadow .28s cubic-bezier(.4,0,.2,1);transition:border .28s cubic-bezier(.4,0,.2,1),box-shadow .28s cubic-bezier(.4,0,.2,1);transition:border .28s cubic-bezier(.4,0,.2,1),box-shadow .28s cubic-bezier(.4,0,.2,1),-webkit-box-shadow .28s cubic-bezier(.4,0,.2,1);-webkit-box-shadow:none;box-shadow:none}.nCP5yc 
.VfPpkd-Jh9lGc{height:100%;position:absolute;overflow:hidden;width:100%;z-index:0}.nCP5yc:not(:disabled){background-color:rgb(26,115,232);background-color:var(--gm-fillbutton-container-color,rgb(26,115,232))}.nCP5yc:not(:disabled){color:#fff;color:var(--gm-fillbutton-ink-color,#fff)}.nCP5yc:disabled{background-color:rgba(60,64,67,.12);background-color:var(--gm-fillbutton-disabled-container-color,rgba(60,64,67,.12))}.nCP5yc:disabled{color:rgba(60,64,67,.38);color:var(--gm-fillbutton-disabled-ink-color,rgba(60,64,67,.38))}.nCP5yc .VfPpkd-Jh9lGc::before,.nCP5yc .VfPpkd-Jh9lGc::after{background-color:rgb(32,33,36);background-color:var(--gm-fillbutton-state-color,rgb(32,33,36))}.nCP5yc:hover .VfPpkd-Jh9lGc::before,.nCP5yc.VfPpkd-ksKsZd-XxIAqe-OWXEXe-ZmdkE .VfPpkd-Jh9lGc::before{opacity:.16;opacity:var(--mdc-ripple-hover-opacity,.16)}.nCP5yc.VfPpkd-ksKsZd-mWPk3d-OWXEXe-AHe6Kc-XpnDCe .VfPpkd-Jh9lGc::before,.nCP5yc:not(.VfPpkd-ksKsZd-mWPk3d):focus .VfPpkd-Jh9lGc::before{-webkit-transition-duration:75ms;transition-duration:75ms;opacity:.24;opacity:var(--mdc-ripple-focus-opacity,.24)}.nCP5yc:not(.VfPpkd-ksKsZd-mWPk3d) .VfPpkd-Jh9lGc::after{-webkit-transition:opacity .15s linear;transition:opacity .15s linear}.nCP5yc:not(.VfPpkd-ksKsZd-mWPk3d):active .VfPpkd-Jh9lGc::after{-webkit-transition-duration:75ms;transition-duration:75ms;opacity:.2;opacity:var(--mdc-ripple-press-opacity,.2)}.nCP5yc.VfPpkd-ksKsZd-mWPk3d{--mdc-ripple-fg-opacity:var(--mdc-ripple-press-opacity,0.2)}.nCP5yc .VfPpkd-BFbNVe-bF1uUb{opacity:0}.nCP5yc .VfPpkd-UdE5de-uDEFge .VfPpkd-JGcpL-uI4vCe-LkdAo,.nCP5yc .VfPpkd-UdE5de-uDEFge .VfPpkd-JGcpL-IdXvz-LkdAo-Bd00G{stroke:#fff}@media (-ms-high-contrast:active),screen and (forced-colors:active){.nCP5yc .VfPpkd-UdE5de-uDEFge .VfPpkd-JGcpL-uI4vCe-LkdAo,.nCP5yc .VfPpkd-UdE5de-uDEFge .VfPpkd-JGcpL-IdXvz-LkdAo-Bd00G{stroke:CanvasText}}.nCP5yc:hover{-webkit-box-shadow:0 1px 2px 0 rgba(60,64,67,.3),0 1px 3px 1px rgba(60,64,67,.15);box-shadow:0 1px 2px 0 rgba(60,64,67,.3),0 
1px 3px 1px rgba(60,64,67,.15);-webkit-box-shadow:0 1px 2px 0 var(--gm-fillbutton-keyshadow-color,rgba(60,64,67,.3)),0 1px 3px 1px var(--gm-fillbutton-ambientshadow-color,rgba(60,64,67,.15));box-shadow:0 1px 2px 0 var(--gm-fillbutton-keyshadow-color,rgba(60,64,67,.3)),0 1px 3px 1px var(--gm-fillbutton-ambientshadow-color,rgba(60,64,67,.15))}.nCP5yc:hover .VfPpkd-BFbNVe-bF1uUb{opacity:0}.nCP5yc:active{-webkit-box-shadow:0 1px 2px 0 rgba(60,64,67,.3),0 2px 6px 2px rgba(60,64,67,.15);box-shadow:0 1px 2px 0 rgba(60,64,67,.3),0 2px 6px 2px rgba(60,64,67,.15);-webkit-box-shadow:0 1px 2px 0 var(--gm-fillbutton-keyshadow-color,rgba(60,64,67,.3)),0 2px 6px 2px var(--gm-fillbutton-ambientshadow-color,rgba(60,64,67,.15));box-shadow:0 1px 2px 0 var(--gm-fillbutton-keyshadow-color,rgba(60,64,67,.3)),0 2px 6px 2px var(--gm-fillbutton-ambientshadow-color,rgba(60,64,67,.15))}.nCP5yc:active .VfPpkd-BFbNVe-bF1uUb{opacity:0}.nCP5yc:disabled{-webkit-box-shadow:none;box-shadow:none}.nCP5yc:disabled:hover .VfPpkd-Jh9lGc::before,.nCP5yc:disabled.VfPpkd-ksKsZd-XxIAqe-OWXEXe-ZmdkE .VfPpkd-Jh9lGc::before{opacity:0;opacity:var(--mdc-ripple-hover-opacity,0)}.nCP5yc:disabled.VfPpkd-ksKsZd-mWPk3d-OWXEXe-AHe6Kc-XpnDCe .VfPpkd-Jh9lGc::before,.nCP5yc:disabled:not(.VfPpkd-ksKsZd-mWPk3d):focus .VfPpkd-Jh9lGc::before{-webkit-transition-duration:75ms;transition-duration:75ms;opacity:0;opacity:var(--mdc-ripple-focus-opacity,0)}.nCP5yc:disabled:not(.VfPpkd-ksKsZd-mWPk3d) .VfPpkd-Jh9lGc::after{-webkit-transition:opacity .15s linear;transition:opacity .15s linear}.nCP5yc:disabled:not(.VfPpkd-ksKsZd-mWPk3d):active .VfPpkd-Jh9lGc::after{-webkit-transition-duration:75ms;transition-duration:75ms;opacity:0;opacity:var(--mdc-ripple-press-opacity,0)}.nCP5yc:disabled.VfPpkd-ksKsZd-mWPk3d{--mdc-ripple-fg-opacity:var(--mdc-ripple-press-opacity,0)}.nCP5yc:disabled .VfPpkd-BFbNVe-bF1uUb{opacity:0}.Rj2Mlf{font-family:"Google 
Sans",Roboto,Arial,sans-serif;font-size:.875rem;letter-spacing:.0107142857em;font-weight:500;text-transform:none;-webkit-transition:border .28s cubic-bezier(.4,0,.2,1),-webkit-box-shadow .28s cubic-bezier(.4,0,.2,1);transition:border .28s cubic-bezier(.4,0,.2,1),-webkit-box-shadow .28s cubic-bezier(.4,0,.2,1);transition:border .28s cubic-bezier(.4,0,.2,1),box-shadow .28s cubic-bezier(.4,0,.2,1);transition:border .28s cubic-bezier(.4,0,.2,1),box-shadow .28s cubic-bezier(.4,0,.2,1),-webkit-box-shadow .28s cubic-bezier(.4,0,.2,1);-webkit-box-shadow:none;box-shadow:none}.Rj2Mlf .VfPpkd-Jh9lGc{height:100%;position:absolute;overflow:hidden;width:100%;z-index:0}.Rj2Mlf:not(:disabled){color:rgb(26,115,232);color:var(--gm-hairlinebutton-ink-color,rgb(26,115,232))}.Rj2Mlf:not(:disabled){border-color:rgb(218,220,224);border-color:var(--gm-hairlinebutton-outline-color,rgb(218,220,224))}.Rj2Mlf:not(:disabled):hover{border-color:rgb(218,220,224);border-color:var(--gm-hairlinebutton-outline-color,rgb(218,220,224))}.Rj2Mlf:not(:disabled).VfPpkd-ksKsZd-mWPk3d-OWXEXe-AHe6Kc-XpnDCe,.Rj2Mlf:not(:disabled):not(.VfPpkd-ksKsZd-mWPk3d):focus{border-color:rgb(23,78,166);border-color:var(--gm-hairlinebutton-outline-color--stateful,rgb(23,78,166))}.Rj2Mlf:not(:disabled):active,.Rj2Mlf:not(:disabled):focus:active{border-color:rgb(218,220,224);border-color:var(--gm-hairlinebutton-outline-color,rgb(218,220,224))}.Rj2Mlf:disabled{color:rgba(60,64,67,.38);color:var(--gm-hairlinebutton-disabled-ink-color,rgba(60,64,67,.38))}.Rj2Mlf:disabled{border-color:rgba(60,64,67,.12);border-color:var(--gm-hairlinebutton-disabled-outline-color,rgba(60,64,67,.12))}.Rj2Mlf:hover:not(:disabled),.Rj2Mlf.VfPpkd-ksKsZd-mWPk3d-OWXEXe-AHe6Kc-XpnDCe:not(:disabled),.Rj2Mlf:not(.VfPpkd-ksKsZd-mWPk3d):focus:not(:disabled),.Rj2Mlf:active:not(:disabled){color:rgb(23,78,166);color:var(--gm-hairlinebutton-ink-color--stateful,rgb(23,78,166))}.Rj2Mlf .VfPpkd-BFbNVe-bF1uUb{opacity:0}.Rj2Mlf .VfPpkd-UdE5de-uDEFge 
.VfPpkd-JGcpL-uI4vCe-LkdAo,.Rj2Mlf .VfPpkd-UdE5de-uDEFge .VfPpkd-JGcpL-IdXvz-LkdAo-Bd00G{stroke:rgb(26,115,232)}@media (-ms-high-contrast:active),screen and (forced-colors:active){.Rj2Mlf .VfPpkd-UdE5de-uDEFge .VfPpkd-JGcpL-uI4vCe-LkdAo,.Rj2Mlf .VfPpkd-UdE5de-uDEFge .VfPpkd-JGcpL-IdXvz-LkdAo-Bd00G{stroke:CanvasText}}.Rj2Mlf .VfPpkd-Jh9lGc::before,.Rj2Mlf .VfPpkd-Jh9lGc::after{background-color:rgb(26,115,232);background-color:var(--gm-hairlinebutton-state-color,rgb(26,115,232))}.Rj2Mlf:hover .VfPpkd-Jh9lGc::before,.Rj2Mlf.VfPpkd-ksKsZd-XxIAqe-OWXEXe-ZmdkE .VfPpkd-Jh9lGc::before{opacity:.04;opacity:var(--mdc-ripple-hover-opacity,.04)}.Rj2Mlf.VfPpkd-ksKsZd-mWPk3d-OWXEXe-AHe6Kc-XpnDCe .VfPpkd-Jh9lGc::before,.Rj2Mlf:not(.VfPpkd-ksKsZd-mWPk3d):focus .VfPpkd-Jh9lGc::before{-webkit-transition-duration:75ms;transition-duration:75ms;opacity:.12;opacity:var(--mdc-ripple-focus-opacity,.12)}.Rj2Mlf:not(.VfPpkd-ksKsZd-mWPk3d) .VfPpkd-Jh9lGc::after{-webkit-transition:opacity .15s linear;transition:opacity .15s linear}.Rj2Mlf:not(.VfPpkd-ksKsZd-mWPk3d):active .VfPpkd-Jh9lGc::after{-webkit-transition-duration:75ms;transition-duration:75ms;opacity:.12;opacity:var(--mdc-ripple-press-opacity,.12)}.Rj2Mlf.VfPpkd-ksKsZd-mWPk3d{--mdc-ripple-fg-opacity:var(--mdc-ripple-press-opacity,0.12)}.Rj2Mlf:disabled:hover .VfPpkd-Jh9lGc::before,.Rj2Mlf:disabled.VfPpkd-ksKsZd-XxIAqe-OWXEXe-ZmdkE .VfPpkd-Jh9lGc::before{opacity:0;opacity:var(--mdc-ripple-hover-opacity,0)}.Rj2Mlf:disabled.VfPpkd-ksKsZd-mWPk3d-OWXEXe-AHe6Kc-XpnDCe .VfPpkd-Jh9lGc::before,.Rj2Mlf:disabled:not(.VfPpkd-ksKsZd-mWPk3d):focus .VfPpkd-Jh9lGc::before{-webkit-transition-duration:75ms;transition-duration:75ms;opacity:0;opacity:var(--mdc-ripple-focus-opacity,0)}.Rj2Mlf:disabled:not(.VfPpkd-ksKsZd-mWPk3d) .VfPpkd-Jh9lGc::after{-webkit-transition:opacity .15s linear;transition:opacity .15s linear}.Rj2Mlf:disabled:not(.VfPpkd-ksKsZd-mWPk3d):active 
.VfPpkd-Jh9lGc::after{-webkit-transition-duration:75ms;transition-duration:75ms;opacity:0;opacity:var(--mdc-ripple-press-opacity,0)}.Rj2Mlf:disabled.VfPpkd-ksKsZd-mWPk3d{--mdc-ripple-fg-opacity:var(--mdc-ripple-press-opacity,0)}.b9hyVd{font-family:"Google Sans",Roboto,Arial,sans-serif;font-size:.875rem;letter-spacing:.0107142857em;font-weight:500;text-transform:none;-webkit-transition:border .28s cubic-bezier(.4,0,.2,1),-webkit-box-shadow .28s cubic-bezier(.4,0,.2,1);transition:border .28s cubic-bezier(.4,0,.2,1),-webkit-box-shadow .28s cubic-bezier(.4,0,.2,1);transition:border .28s cubic-bezier(.4,0,.2,1),box-shadow .28s cubic-bezier(.4,0,.2,1);transition:border .28s cubic-bezier(.4,0,.2,1),box-shadow .28s cubic-bezier(.4,0,.2,1),-webkit-box-shadow .28s cubic-bezier(.4,0,.2,1);border-width:0;-webkit-box-shadow:0 1px 2px 0 rgba(60,64,67,.3),0 1px 3px 1px rgba(60,64,67,.15);box-shadow:0 1px 2px 0 rgba(60,64,67,.3),0 1px 3px 1px rgba(60,64,67,.15);-webkit-box-shadow:0 1px 2px 0 var(--gm-protectedbutton-keyshadow-color,rgba(60,64,67,.3)),0 1px 3px 1px var(--gm-protectedbutton-ambientshadow-color,rgba(60,64,67,.15));box-shadow:0 1px 2px 0 var(--gm-protectedbutton-keyshadow-color,rgba(60,64,67,.3)),0 1px 3px 1px var(--gm-protectedbutton-ambientshadow-color,rgba(60,64,67,.15))}.b9hyVd 
.VfPpkd-Jh9lGc{height:100%;position:absolute;overflow:hidden;width:100%;z-index:0}.b9hyVd:not(:disabled){background-color:#fff;background-color:var(--gm-protectedbutton-container-color,#fff)}.b9hyVd:not(:disabled){color:rgb(26,115,232);color:var(--gm-protectedbutton-ink-color,rgb(26,115,232))}.b9hyVd:disabled{background-color:rgba(60,64,67,.12);background-color:var(--gm-protectedbutton-disabled-container-color,rgba(60,64,67,.12))}.b9hyVd:disabled{color:rgba(60,64,67,.38);color:var(--gm-protectedbutton-disabled-ink-color,rgba(60,64,67,.38))}.b9hyVd:hover:not(:disabled),.b9hyVd.VfPpkd-ksKsZd-mWPk3d-OWXEXe-AHe6Kc-XpnDCe:not(:disabled),.b9hyVd:not(.VfPpkd-ksKsZd-mWPk3d):focus:not(:disabled),.b9hyVd:active:not(:disabled){color:rgb(23,78,166);color:var(--gm-protectedbutton-ink-color--stateful,rgb(23,78,166))}.b9hyVd .VfPpkd-BFbNVe-bF1uUb{opacity:0}.b9hyVd .VfPpkd-UdE5de-uDEFge .VfPpkd-JGcpL-uI4vCe-LkdAo,.b9hyVd .VfPpkd-UdE5de-uDEFge .VfPpkd-JGcpL-IdXvz-LkdAo-Bd00G{stroke:rgb(26,115,232)}@media (-ms-high-contrast:active),screen and (forced-colors:active){.b9hyVd .VfPpkd-UdE5de-uDEFge .VfPpkd-JGcpL-uI4vCe-LkdAo,.b9hyVd .VfPpkd-UdE5de-uDEFge .VfPpkd-JGcpL-IdXvz-LkdAo-B
2026-01-13T09:29:18
https://www.linkedin.com/jobs/bmc-software-jobs-worldwide?f_C=308106%2C1597&trk=top-card_top-card-primary-button-top-card-primary-cta
75 BMC Software jobs in Worldwide

Filters: Company: BMC Software (75), CorreLog, Inc., BMC Helix (59), hackajob (18), Oracle (7). Date posted: Any time (75), Past month (74), Past week (42), Past 24 hours (6). Job type: Full-time (75). Experience level: Associate (2), Mid-Senior level (11). Workplace: On-site (51), Hybrid (16), Remote (8).

Corporate Initiatives Manager - REMOTE, BMC Software, United States, 2 weeks ago
Staff Accountant, BMC Software, Pune, Maharashtra, India, 1 week ago
Business Development Representative - Houston, TX, BMC Software, Houston, TX, 1 week ago

Sign in to view all job postings.
2026-01-13T09:29:18
https://web3.okx.com/dex-swap
DEX Swap: Trade Crypto with Low Fees & High Liquidity | OKX Wallet

Swap FAQ

What is OKX DEX Swap?
OKX DEX Swap is a decentralized exchange (DEX) aggregator that allows you to trade crypto seamlessly across multiple blockchains. Our X Routing algorithm finds the best prices by comparing liquidity pools, splitting orders, and optimizing for fees and slippage.

How does OKX DEX ensure the best swap prices?
OKX DEX uses advanced algorithms to aggregate liquidity from multiple DEXs and liquidity providers (LPs). By splitting orders and considering factors such as price, slippage, and network fees, we ensure you get the best possible rates.

Why do trades fail?
1. Insufficient network fee. During times of network congestion, network fees may skyrocket. Miners prioritize transactions with higher network fees, so if your transaction isn’t picked up by a miner for an extended period, it can result in transaction failure. Therefore, when initiating a transaction, it’s recommended to use the default values recommended by the platform.
2. Exceeded slippage. In volatile markets, liquidity depths can change.
If the trading amount falls below the required minimum, the contract may halt the transaction, leading to a failed transaction due to exceeded slippage.
3. Duplicate transactions. If a user initiates multiple identical transactions simultaneously, but their balance is only sufficient to cover the first transaction, the subsequent transactions will fail.

What should I do when I don’t have enough balance for the network fee?
You can buy the corresponding tokens on the OKX Exchange and withdraw them to your wallet address, or swap tokens on OKX DEX to cover the network fees.

What is slippage?
Slippage is the difference between the expected price of a trade and its actual execution price. It is an implicit cost of trading and is unavoidable in most trades, especially on a DEX. Slippage becomes more pronounced during market volatility or when there is insufficient liquidity on the trading platform.

How can slippage be avoided?
Slippage cannot be completely avoided, but its impact can be minimized in three ways:
1. The best option is to choose assets with high trading volume and liquidity.
2. You can opt for smaller transactions, which are more likely to be executed, instead of placing large orders directly.
3. Consider adjusting the slippage tolerance during price fluctuations, when slippage may exceed the preset limit before your transaction is executed. Increasing the tolerance can prevent transaction failure, but it also means you may get a worse price, so proceed with caution.

Why does trading some tokens require a higher slippage setting?
Some tokens require a higher slippage tolerance because they carry transaction taxes (SAFEMOON, for example) that fund liquidity-pool buy-backs, token burning, and rewards for token holders. To help you trade these tokens, OKX DEX has launched automated slippage, which improves the success rate of these trades at the current optimal slippage level.
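The tolerance arithmetic in the answers above can be made concrete with a short sketch. The function names and numbers below are illustrative only, not OKX's routing code; the sketch computes the minimum output a swap will accept for a given tolerance, and the realized slippage of an execution:

```python
def min_received(quoted_out: float, slippage_tolerance: float) -> float:
    """Lowest output amount a swap accepts before failing.

    slippage_tolerance is a fraction, e.g. 0.005 for 0.5%.
    """
    if not 0.0 <= slippage_tolerance < 1.0:
        raise ValueError("slippage tolerance must be in [0, 1)")
    return quoted_out * (1.0 - slippage_tolerance)


def realized_slippage(quoted_out: float, executed_out: float) -> float:
    """Fraction lost between the quoted output and the executed output."""
    return (quoted_out - executed_out) / quoted_out


# A quote of 1000 USDC out with a 0.5% tolerance fails below roughly 995 USDC.
floor = min_received(1000.0, 0.005)

# An execution at 992 USDC implies 0.8% realized slippage, which would
# have exceeded that tolerance and failed the trade.
exceeded = realized_slippage(1000.0, 992.0) > 0.005
print(round(floor, 6), exceeded)
```

Raising the tolerance lowers the accepted floor, which is why tokens with built-in transaction taxes need a higher setting for the trade to go through at all.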
Swap with the best price
Freely choose trading routes from all major DEX aggregators, while X Routing continues to find the best quotes across 400+ DEXs and 30+ networks. Fast quotes. Anti-MEV.
2026-01-13T09:29:18
https://doc.rust-lang.org/cargo/reference/source-replacement.html#configuration
Source Replacement - The Cargo Book

Source Replacement

This document is about redirecting communication with registries or repositories of git-based dependencies to another data source, such as a server mirroring the original registry or an exact local copy. If you want to patch individual dependencies, see the overriding dependencies section of this documentation. If you want to control how Cargo makes network requests, see the [http] and [net] configuration.

A source is a provider that contains crates that may be included as dependencies for a package. Cargo supports the ability to replace one source with another to express strategies such as:

Vendoring: custom sources can be defined which represent crates on the local filesystem. These sources are subsets of the source that they’re replacing and can be checked into packages if necessary.
Mirroring: sources can be replaced with an equivalent version which acts as a cache for crates.io itself.

Cargo has a core assumption about source replacement that the source code is exactly the same from both sources. Note that this also means that a replacement source is not allowed to have crates which are not present in the original source. As a consequence, source replacement is not appropriate for situations such as patching a dependency or a private registry. Cargo supports patching dependencies through the usage of the [patch] key, and private registry support is described in the Registries chapter.

When using source replacement, running commands that need to contact the registry directly [1] requires passing the --registry option. This helps avoid any ambiguity about which registry to contact, and will use the authentication token for the specified registry.
Configuration

Configuration of replacement sources is done through .cargo/config.toml, and the full set of available keys is:

# The `source` table is where all keys related to source-replacement
# are stored.
[source]

# Under the `source` table are a number of other tables whose keys are a
# name for the relevant source. For example this section defines a new
# source, called `my-vendor-source`, which comes from a directory
# located at `vendor` relative to the directory containing this
# `.cargo/config.toml` file
[source.my-vendor-source]
directory = "vendor"

# The crates.io default source for crates is available under the name
# "crates-io", and here we use the `replace-with` key to indicate that it's
# replaced with our source above.
#
# The `replace-with` key can also reference an alternative registry name
# defined in the `[registries]` table.
[source.crates-io]
replace-with = "my-vendor-source"

# Each source has its own table where the key is the name of the source
[source.the-source-name]
# Indicate that `the-source-name` will be replaced with `another-source`,
# defined elsewhere
replace-with = "another-source"

# Several kinds of sources can be specified (described in more detail below):
registry = "https://example.com/path/to/index"
local-registry = "path/to/registry"
directory = "path/to/vendor"

# Git sources can optionally specify a branch/tag/rev as well
git = "https://example.com/path/to/repo"
# branch = "master"
# tag = "v1.0.1"
# rev = "313f44e8"

Registry Sources

A “registry source” is one that works like crates.io itself. It’s an index that conforms to the specification at https://doc.rust-lang.org/cargo/reference/registry-index.html with a configuration file indicating where to download crates from.
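To illustrate the replacement mechanics, here is a small sketch (not Cargo's actual implementation) of resolving a source name through a chain of replace-with entries like the ones in the configuration above, with a guard against accidental cycles:

```python
def resolve_source(sources: dict, name: str) -> str:
    """Follow `replace-with` links until reaching a terminal source.

    `sources` maps a source name to its config table (a dict), mirroring
    the [source.<name>] sections of .cargo/config.toml.
    """
    seen = set()
    while name in sources and "replace-with" in sources[name]:
        if name in seen:
            raise ValueError(f"cycle detected in source replacement at {name!r}")
        seen.add(name)
        name = sources[name]["replace-with"]
    return name


# Mirrors the example config: crates-io is replaced by my-vendor-source.
config = {
    "crates-io": {"replace-with": "my-vendor-source"},
    "my-vendor-source": {"directory": "vendor"},
}
print(resolve_source(config, "crates-io"))  # my-vendor-source
```

A name with no `replace-with` key resolves to itself, which matches the idea that replacement is transparent to the dependency graph.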
Registry sources can use either the git or the sparse HTTP protocol:

# Git protocol
registry = "ssh://git@example.com/path/to/index.git"

# Sparse HTTP protocol
registry = "sparse+https://example.com/path/to/index"

# HTTPS git protocol
registry = "https://example.com/path/to/index"

Local Registry Sources

A “local registry source” is intended to be a subset of another registry source, but available on the local filesystem (aka vendoring). Local registries are downloaded ahead of time, typically sync’d with a Cargo.lock, and are made up of a set of *.crate files and an index like the normal registry has. The primary way to manage and create local registry sources is through the cargo-local-registry subcommand, available on crates.io and installable with cargo install cargo-local-registry. Local registries are contained within one directory and hold a number of *.crate files downloaded from crates.io as well as an index directory with the same format as the crates.io-index project (populated with just the entries for the crates that are present).

Directory Sources

A “directory source” is similar to a local registry source in that it contains a number of crates available on the local filesystem, suitable for vendoring dependencies. Directory sources are primarily managed by the cargo vendor subcommand. They are distinct from local registries in that they contain the unpacked version of the *.crate files, making it more suitable in some situations to check everything into source control.

A directory source is just a directory containing a number of other directories which contain the source code for crates (the unpacked version of *.crate files). Currently no restriction is placed on the name of each directory. Each crate in a directory source also has an associated metadata file indicating the checksum of each file in the crate, to protect against accidental modifications.

Git Sources

Git sources represent repositories used by git-based dependencies.
They’re used to specify which git-based dependencies should be replaced with alternative sources. Git sources are not related to git registries, and can’t be used to replace registry sources.

[1] Examples of such commands are in Publishing Commands.
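The per-file checksum metadata mentioned for directory sources can be sketched as follows. This is an illustrative fingerprinting routine, not Cargo's own code; the actual checksum file `cargo vendor` writes also records a whole-package checksum, which is omitted here:

```python
import hashlib
import json
import tempfile
from pathlib import Path


def checksum_manifest(crate_dir: str) -> str:
    """Return a JSON manifest mapping each file's relative path to its
    SHA-256 digest, so accidental edits to vendored sources are detectable."""
    root = Path(crate_dir)
    files = {
        str(p.relative_to(root)): hashlib.sha256(p.read_bytes()).hexdigest()
        for p in sorted(root.rglob("*"))
        if p.is_file()
    }
    return json.dumps({"files": files}, indent=2, sort_keys=True)


# Example: build a tiny fake unpacked crate and fingerprint it.
with tempfile.TemporaryDirectory() as d:
    Path(d, "src").mkdir()
    Path(d, "src", "lib.rs").write_text("// vendored source\n")
    manifest = json.loads(checksum_manifest(d))
    print(list(manifest["files"]))  # ['src/lib.rs']
```

Verifying a vendored tree is then just recomputing the manifest and comparing it against the stored one.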
2026-01-13T09:29:18
https://www.linkedin.com/products/zensar-ruptive/
Ruptive | LinkedIn

Ruptive, Digital Experience Platforms (DXP), by Zensar Technologies

About
Ruptive is the world's first Human Experience Innovation platform. Designed to ignite the creation of dynamic personas, unmistakably hard-to-forget experiences, and a culture of innovation from the outside in—this is how you stay disruptive. Ruptive is an exclusive service provided by Indigo Slate, a Zensar Company.

Media
The Disruptive Experience Makers

Similar products (all Digital Experience Platforms): HCL Digital Experience, Progress Sitefinity, MosChip DigitalSky GenAIoT™, Liferay DXP, Optimizely One, Bloomreach Experience

Zensar Technologies products: Cloud transformation (Enterprise Cloud Platforms), Digital Applications (Digital Adoption Platforms)

LinkedIn © 2026
https://git-scm.com/book/ru/v2/%d0%92%d0%b2%d0%b5%d0%b4%d0%b5%d0%bd%d0%b8%d0%b5-%d0%a7%d1%82%d0%be-%d1%82%d0%b0%d0%ba%d0%be%d0%b5-Git%3F
Git - What is Git? (Pro Git, 2nd Edition)

The source of this book is hosted on GitHub. Patches, suggestions and comments are welcome.
1.3 Getting Started - What is Git?

What is Git?

So, what is Git in a nutshell? This is an important section to absorb, because if you understand what Git is and the fundamentals of how it works, then using Git effectively will probably be much easier for you. As you learn Git, try to clear your mind of the things you may know about other version control systems, such as Subversion and Perforce.
Doing so will help you avoid certain problems when using the tool. Git stores and thinks about information very differently from these other systems, even though the user interface is fairly similar, and understanding those differences will help you avoid confusion while using it.

Snapshots, Not Differences

The major difference between Git and any other VCS (Subversion and friends included) is the way Git thinks about its data. Conceptually, most other systems store information as a list of file-based changes. These systems (CVS, Subversion, Perforce, Bazaar, and so on) think of the information they store as a set of files and the changes made to each file over time (this is commonly described as delta-based version control).

Figure 4. Storing data as changes to a base version of each file

Git doesn't think of or store its data this way. Instead, Git thinks of its data more like a series of snapshots of a miniature filesystem. Every time you commit, that is, save the state of your project in Git, the system basically takes a picture of what all your files look like at that moment and stores a reference to that snapshot. To be efficient, if files have not changed, Git doesn't store the file again, just a link to the previous identical file it has already stored. Git thinks about its data more like a stream of snapshots.

Figure 5. Storing data as snapshots of the project over time

This is a very important distinction between Git and nearly all other version control systems. It makes Git reconsider almost every aspect of version control that most other systems copied from the previous generation. This makes Git more like a miniature filesystem with some incredibly powerful tools built on top of it, rather than simply a VCS.
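The snapshot model can be observed directly in a throwaway repository. This is a minimal sketch (the file names are invented for illustration) showing that an unchanged file is stored once and shared between consecutive snapshots:

```shell
set -e
cd "$(mktemp -d)"
git init -q
git config user.email "you@example.com"
git config user.name "You"

echo "unchanged" > a.txt
echo "v1" > b.txt
git add . && git commit -qm "first snapshot"

echo "v2" > b.txt
git add . && git commit -qm "second snapshot"

# Both commits point to full trees (snapshots), but the blob behind
# a.txt is identical in each, so it is stored only once:
git rev-parse HEAD:a.txt HEAD~1:a.txt   # same object id twice
git rev-parse HEAD:b.txt HEAD~1:b.txt   # two different object ids
```

The "link to the previous identical file" from the text is visible here: both snapshots reference the very same object id for the unchanged file.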
When we cover branch management in the Git Branching chapter, we'll see what benefits this approach to data brings to Git.

Nearly Every Operation Is Local

Most operations in Git need only local files and resources to operate; generally, no information is needed from another computer on your network. If you're used to centralized version control systems, where most operations suffer from network latency, this aspect of Git will make you think that the gods of speed have blessed Git with unworldly powers. Because the entire history of the project is right there on your local disk, most operations seem almost instantaneous.

For example, to browse the history of the project, Git doesn't need to go out to a server to get the history and display it; it simply reads the data directly from your local database. This means you see the project history almost instantly. If you want to see the changes introduced between the current version of a file and the version from a month ago, Git can look up the file from a month ago and compute the difference locally, instead of either asking a remote server to do it or pulling an older version of the file from the server to do the calculation locally.

This also means there is very little you can't do when you're offline or without VPN access. If you're on an airplane or a train and want to do a little work, you can commit happily (to your local copy, remember?) until you get a network connection, at which point everything can be synchronized. If you go home and can't get your VPN client working, you can still work. In many other systems, doing so is either very difficult or impossible.
In Perforce, for example, you can't do much when you aren't connected to the server; in Subversion and CVS, you can edit files, but you can't commit changes to your database (because your database is offline). This may not seem like a huge deal, but you may be surprised what a big difference it can make.

Git Has Integrity

Everything in Git is checksummed before it is stored and is then referred to by that checksum. This means it's impossible to change the contents of any file or directory without Git knowing about it. This functionality is built into Git at the lowest levels and is integral to its philosophy. You can't lose information in transit or get file corruption without Git being able to detect it.

The mechanism that Git uses for this checksumming is called a SHA-1 hash. This is a 40-character string composed of hexadecimal characters (0-9 and a-f), calculated based on the contents of a file or directory structure. A SHA-1 hash looks something like this:

24b9da6552252987aa493b52f8696cd6d3b00373

You will see these hash values all over the place in Git, because it uses them everywhere. In fact, Git stores everything in its database not by file name but by the hash value of its contents.

Git Generally Only Adds Data

When you do actions in Git, nearly all of them only add data to the Git database. It is very hard to get the system to erase data or to do anything that cannot later be undone. As with any VCS, you can lose or mess up changes you haven't committed yet, but after you commit a snapshot into Git, it is very difficult to lose anything, especially if you regularly synchronize your database with another repository. This makes using Git a joy, because we know we can experiment without the danger of serious problems.
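The checksumming described above can be reproduced outside of Git: a blob's object id is the SHA-1 of a small header ("blob", the content length in bytes, a NUL byte) followed by the content itself. A sketch, assuming a Unix shell with sha1sum available:

```shell
# Git object id of an empty blob: SHA-1 over the header "blob 0\0"
# followed by no content at all.
printf 'blob 0\0' | sha1sum
# → e69de29bb2d1d6434b8b29ae775ad8c2e48c5391  (the well-known empty-blob id)

# This matches what Git itself reports for an empty file:
#   printf '' | git hash-object --stdin
```

Because the id is derived purely from the content, changing even one byte of a file produces an entirely different id, which is exactly why corruption cannot slip past Git unnoticed.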
For a deeper exploration of how Git stores its data and how you can recover data that seems lost, see Undoing Things.

The Three States

Pay attention now: here is the main thing to remember about Git if you want the rest of your learning process to go smoothly. Git has three main states that your files can reside in: modified, staged, and committed:

- Modified means that you have changed the file but have not committed it to your database yet.
- Staged means that you have marked a modified file in its current version to go into your next commit snapshot.
- Committed means that the data is safely stored in your local database.

This leads us to the three main sections of a Git project: the working tree, the staging area, and the Git directory.

Figure 6. Working tree, staging area, and Git directory

The working tree is a single checkout of one version of the project. These files are pulled out of the compressed database in the Git directory and placed on disk for you to use or modify. The staging area is a file, generally contained in your Git directory, that stores information about what will go into your next commit. Its technical name in Git parlance is the "index", but the phrase "staging area" works just as well. The Git directory is where Git stores the metadata and object database for your project. This is the most important part of Git, and it is what is copied when you clone a repository from another computer.

The basic Git workflow goes something like this:

1. You modify files in your working tree.
2. You selectively stage just those changes you want to be part of your next commit, which adds snapshots of only those changes to the staging area.
3. You do a commit, which takes the files as they are in the staging area and stores that snapshot permanently in your Git directory.

If a particular version of a file is in the Git directory, it's considered committed.
If a file has been modified and was added to the staging area, it is staged. And if it was changed since it was last checked out but has not been staged, it is modified. In the Git Basics chapter, you'll learn more about these states and how you can either take advantage of them or skip the staging part entirely.
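The three states map directly onto everyday commands. A minimal sketch of the basic workflow (the file name is invented for illustration):

```shell
set -e
cd "$(mktemp -d)"
git init -q
git config user.email "you@example.com"
git config user.name "You"

echo "draft" > notes.txt    # working tree only: untracked
git status --short          # shows "?? notes.txt"
git add notes.txt           # staged: snapshot recorded in the index
git status --short          # shows "A  notes.txt"
git commit -qm "add notes"  # committed: stored in the Git directory
git status --short          # prints nothing: working tree matches HEAD
```

Each `git status` call reports which of the three states the file is in, which makes it a handy way to internalize the modified/staged/committed distinction.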
https://www.linkedin.com/products/topdesk/?trk=products_details_guest_similar_products_section_similar_products_section_product_link_result-card_full-click
TOPdesk | LinkedIn

TOPdesk, Service Desk Software by TOPdesk

About
TOPdesk is the only service management solution that combines a ready-to-go tool with dedicated in-house experts. We don't just create software for consistently great service delivery; we make sure you can achieve your goals, step by step. Whether you want to focus on self-service or automate your ticket routing, our people have loads of best practices to share. Once set up, you can easily adjust the tool yourself, so you can continue to learn and improve in your own time. Need more help? With 25+ years' experience helping organizations like yours, our 900 in-house specialists in 11 countries are rooting for your success and just a phone call away. Let's make service happen together!

This product is intended for: Information Technology Manager, Director of Information Technology, Technical Manager, Chief Information Officer, Application Manager, Senior Information Technology Manager, Information Communication Technology Manager, Service Desk Manager, Senior Information Technology Technician, Information Technology Service Desk Manager

Process requests faster with TOPdesk's Incident Management and Self-Service Portal

In your organization, you want to keep everything running as smoothly as possible. But people have requests and things break down. How do you manage these requests and malfunctions? In TOPdesk, your users log their tickets in the Self-Service Portal. You manage and process these requests in the Incident Management module.
In this short video, you'll see how these modules work.

Self-service Portal

TOPdesk's Self-Service Portal is a knowledge base, web shop and customer portal in one. It's a place where your customers quickly find what they need, when they need it, without your help. And above all: it reduces the repetitive calls to your service desk, giving you time for the work you truly enjoy doing.

Incident Management

With so many questions for your IT department, managing your information is essential. TOPdesk's Incident Management system lets you easily register all incoming tickets. Simply open an Incident card, enter the most important details, and save. Is the incident resolved? Your customers are notified automatically.

Dashboard and reporting

Do you know how your IT department is performing? And whether you're meeting your SLAs? If you deliver great services, you'll need the numbers to prove it. TOPdesk's Dashboard and Reporting feature helps you visualize your department's performance. You'll get one clear overview of your key metrics, helping you actively step up your service. In this video, we'll show you how this feature works.

The path towards service excellence

Our expertise goes beyond software. Having helped thousands of customers, we've seen what works and what doesn't. Whether you're a small service desk looking to make quick changes, or a complex organization with big plans, we'll partner with you to make your service delivery more successful.
Featured customers of TOPdesk: KLM Royal Dutch Airlines, De Goudse, Ctac, Hörmann (UK) Ltd, Sword Group, Cebeo, DSM, University of Memphis, Keele University, Royal FloraHolland, Delft University of Technology, stad Sint-Truiden, Unidesk, Armagh City, Banbridge and Craigavon Borough Council, Carpenters Group, Collège Boréal, Kent Community Health NHS Foundation Trust, OGD ict-diensten, Aston University, District School Board of Niagara

Similar products (all Service Desk Software): Jira Service Management, Freshservice, Intercom, Atomicwork, SolarWinds Service Desk, Service Desk
https://sre.google/sre-book/evolving-sre-engagement-model/#id-lM2uPS8FgT0Tdukn-marker
Google SRE, Chapter 32: The Evolving SRE Engagement Model

Written by Acacio Cruz and Ashish Bhambhani
Edited by Betsy Beyer and Tim Harvey

SRE Engagement: What, How, and Why

We've discussed in most of the rest of this book what happens when SRE is already in charge of a service.
Few services begin their lifecycle enjoying SRE support, so there needs to be a process for evaluating a service, making sure that it merits SRE support, negotiating how to improve any deficits that bar SRE support, and actually instituting SRE support. We call this process onboarding . If you are in an environment where you are surrounded by a lot of existing services in varying states of perfection, your SRE team will probably be running through a prioritized queue of onboardings for quite a while until the team has finished taking on the highest-value targets. Although this is very common, and a completely reasonable way of dealing with a fait accompli environment, there are actually at least two better ways of bringing the wisdom of production, and SRE support, to services old and new alike. In the first case, just as in software engineering—where the earlier the bug is found, the cheaper it is to fix—the earlier an SRE team consultation happens, the better the service will be and the quicker it will feel the benefit. When SRE is engaged during the earliest stages of design , the time to onboard is lowered and the service is more reliable "out of the gate," usually because we don't have to spend the time unwinding suboptimal design or implementation. Another way, perhaps the best, is to short-circuit the process by which specially created systems with lots of individual variations end up "arriving" at SRE's door. Provide product development with a platform of SRE-validated infrastructure, upon which they can build their systems. This platform will have the double benefit of being both reliable and scalable. This avoids certain classes of cognitive load problems entirely, and by addressing common infrastructure practices, allows product development teams to focus on innovation at the application layer, where it mostly belongs. 
In the following sections, we'll spend some time looking at each of these models in turn, beginning with the "classic" one, the PRR-driven model.

The PRR Model

The most typical initial step of SRE engagement is the Production Readiness Review (PRR), a process that identifies the reliability needs of a service based on its specific details. Through a PRR, SREs seek to apply what they've learned and experienced to ensure the reliability of a service operating in production. A PRR is considered a prerequisite for an SRE team to accept responsibility for managing the production aspects of a service.

Figure 32-1 illustrates the lifecycle of a typical service. The Production Readiness Review can be started at any point of the service lifecycle, but the stages at which SRE engagement is applied have expanded over time. This chapter describes the Simple PRR Model, then discusses how its modification into the Extended Engagement Model and the Frameworks and SRE Platform structure allowed SRE to scale their engagement process and impact.

Figure 32-1. A typical service lifecycle

The SRE Engagement Model

SRE seeks production responsibility for important services for which it can make concrete contributions to reliability. SRE is concerned with several aspects of a service, which are collectively referred to as production. These aspects include the following:

- System architecture and interservice dependencies
- Instrumentation, metrics, and monitoring
- Emergency response
- Capacity planning
- Change management
- Performance: availability, latency, and efficiency

When SREs engage with a service, we aim to improve it along all of these axes, which makes managing production for the service easier.

Alternative Support

Not all Google services receive close SRE engagement. A couple of factors are at play here:

- Many services don't need high reliability and availability, so support can be provided by other means.
- By design, the number of development teams that request SRE support exceeds the available bandwidth of SRE teams (see Introduction).

When SRE can't provide full-fledged support, it provides other options for making improvements to production, such as documentation and consultation.

Documentation

Development guides are available for internal technologies and clients of widely used systems. Google's Production Guide documents production best practices for services, as determined by the experiences of SRE and development teams alike. Developers can implement the solutions and recommendations in such documentation to improve their services.

Consultation

Developers may also seek SRE consulting to discuss specific services or problem areas. The Launch Coordination Engineering (LCE) team (see Reliable Product Launches at Scale) spends a majority of its time consulting with development teams. SRE teams that aren't specifically dedicated to launch consultations also engage in consultation with development teams.

When a new service or a new feature has been implemented, developers usually consult with SRE for advice about preparing for the Launch phase. Launch consultation usually involves one or two SREs spending a few hours studying the design and implementation at a high level. The SRE consultants then meet with the development team to provide advice on risky areas that need attention and to discuss well-known patterns or solutions that can be incorporated to improve the service in production. Some of this advice may come from the Production Guide mentioned earlier.

Consultation sessions are necessarily broad in scope because it's not possible to gain a deep understanding of a given system in the limited time available. For some development teams, consultation is not sufficient:

- Services that have grown by orders of magnitude since they launched, which now require more time to understand than is feasible through documentation and consultation.
- Services upon which many other services have subsequently come to rely, which now host significantly more traffic from many different clients.

These types of services may have grown to the point at which they begin to encounter significant difficulties in production while simultaneously becoming important to users. In such cases, long-term SRE engagement becomes necessary to ensure that they are properly maintained in production as they grow.

Production Readiness Reviews: Simple PRR Model

When a development team requests that SRE take over production management of a service, SRE gauges both the importance of the service and the availability of SRE teams. If the service merits SRE support, and the SRE team and development organization agree on staffing levels to facilitate this support, SRE initiates a Production Readiness Review with the development team.

The objectives of the Production Readiness Review are as follows:

- Verify that a service meets accepted standards of production setup and operational readiness, and that service owners are prepared to work with SRE and take advantage of SRE expertise.
- Improve the reliability of the service in production, and minimize the number and severity of incidents that might be expected.

A PRR targets all aspects of production that SRE cares about. After sufficient improvements are made and the service is deemed ready for SRE support, an SRE team assumes its production responsibilities. This brings us to the Production Readiness Review process itself. There are three different but related engagement models (Simple PRR Model, Early Engagement Model, and Frameworks and SRE Platform), which will be discussed in turn. We will first describe the Simple PRR Model, which is usually targeted at a service that is already launched and will be taken over by an SRE team. A PRR follows several phases, much like a development lifecycle, although it may proceed independently in parallel with the development lifecycle.
Engagement SRE leadership first decides which SRE team is a good fit for taking over the service. Usually one to three SREs are selected or self-nominated to conduct the PRR process. This small group then initiates discussion with the development team. The discussion covers matters such as: Establishing an SLO/SLA for the service Planning for potentially disruptive design changes required to improve reliability Planning and training schedules The goal is to arrive at a common agreement about the process, end goals, and outcomes that are necessary for the SRE team to engage with the development team and their service. Analysis Analysis is the first large segment of work. During this phase, the SRE reviewers learn about the service and begin analyzing it for production shortcomings. They aim to gauge the maturity of the service along the various axes of concern to SRE. They also examine the service's design and implementation to check if it follows production best practices. Usually, the SRE team establishes and maintains a PRR checklist explicitly for the Analysis phase. The checklist is specific to the service and is generally based on domain expertise, experience with related or similar systems, and best practices from the Production Guide. The SRE team may also consult other teams that have more experience with certain components or dependencies of the service. A few examples of checklist items include: Do updates to the service impact an unreasonably large percentage of the system at once? Does the service connect to the appropriate serving instance of its dependencies? For example, end-user requests to a service should not depend on a system that is designed for a batch-processing use case. Does the service request a sufficiently high network quality-of-service when talking to a critical remote service? Does the service report errors to central logging systems for analysis? 
Does it report all exceptional conditions that result in degraded responses or failures to the end users? Are all user-visible request failures well instrumented and monitored, with suitable alerting configured? The checklist may also include operational standards and best practices followed by a specific SRE team. For example, a perfectly functional service configuration that doesn't follow an SRE team's "gold standard" might be refactored to work better with SRE tools for scalably managing configurations. SREs also look at recent incidents and postmortems for the service, as well as follow-up tasks for the incidents. This evaluation gauges the demands of emergency response for the service and the availability of well-established operational controls. Improvements and Refactoring The Analysis phase leads to the identification of recommended improvements for the service. This next phase proceeds as follows: Improvements are prioritized based upon importance for service reliability. The priorities are discussed and negotiated with the development team, and a plan of execution is agreed upon. Both SRE and product development teams participate and assist each other in refactoring parts of the service or implementing additional features. This phase typically varies the most in duration and amount of effort. How much time and effort this phase will involve depends upon the availability of engineering time for refactoring, the maturity and complexity of the service at the start of the review, and myriad other factors. Training Responsibility for managing a service in production is generally assumed by an entire SRE team. To ensure that the team is prepared, the SRE reviewers who led the PRR take ownership of training the team, which includes the documentation necessary to support the service. Typically with the help and participation of the development team, these engineers organize a series of training sessions and exercises. 
Instruction can include:

- Design overviews
- Deep dives on various request flows in the system
- A description of the production setup
- Hands-on exercises for various aspects of system operations

When the training is concluded, the SRE team should be prepared to manage the service.

Onboarding

The Training phase unblocks onboarding of the service by the SRE team. It involves a progressive transfer of responsibilities and ownership of various production aspects of the service, including parts of operations, the change management process, access rights, and so forth. The SRE team continues to focus on the various areas of production mentioned earlier. To complete the transition, the development team must be available to back up and advise the SRE team for a period of time as it settles in to managing production for the service. This relationship becomes the basis for the ongoing work between the teams.

Continuous Improvement

Active services continuously change in response to new demands and conditions, including user requests for new features, evolving system dependencies, and technology upgrades, in addition to other factors. The SRE team must maintain service reliability standards in the face of these changes by driving continuous improvement. The responsible SRE team naturally learns more about the service in the course of operating it, reviewing new changes, responding to incidents, and especially when conducting postmortems and root cause analyses. This expertise is shared with the development team as suggestions and proposals for changes to the service whenever new features, components, and dependencies may be added. Lessons from managing the service are also contributed to best practices, which are documented in the Production Guide and elsewhere.

Engaging with Shakespeare

Initially, the developers of the Shakespeare service were responsible for the product, including carrying the pager for emergency response.
However, with growing use of the service and the growth of the revenue coming from it, SRE support became desirable. Because the product had already been launched, SRE conducted a Production Readiness Review. One of the findings was that the dashboards did not completely cover some of the metrics defined in the SLO, so that gap needed to be fixed. After all the issues that had been filed were resolved, SRE took over the pager for the service, though two developers remained in the on-call rotation as well. The developers participate in the weekly on-call meeting, discussing the previous week's problems and how to handle upcoming large-scale maintenance or cluster turndowns. Future plans for the service are now also discussed with the SREs to make sure that new launches go flawlessly (though Murphy's law is always looking for opportunities to spoil that).

Evolving the Simple PRR Model: Early Engagement

Thus far, we've discussed the Production Readiness Review as it's used in the Simple PRR Model, which is limited to services that have already entered the Launch phase. There are several limitations and costs associated with this model. For example:

- Additional communication between teams can increase process overhead for the development team, and cognitive burden for the SRE reviewers.
- The right SRE reviewers must be available, and capable of managing their time and priorities with regard to their existing engagements.
- Work done by SREs must be highly visible and sufficiently reviewed by the development team to ensure effective knowledge sharing. SREs should essentially work as a part of the development team, rather than as an external unit.

However, the main limitations of the Simple PRR Model stem from the fact that the service is already launched and serving at scale, so the SRE engagement starts very late in the development lifecycle.
The resulting drawbacks can pose a significant challenge to the success of the SRE engagement and the future success of the service itself. If the PRR occurred earlier in the service lifecycle, SRE's opportunity to remedy potential issues in the service would be markedly increased, and both the engagement and the service itself would be more likely to succeed.

Candidates for Early Engagement

The Early Engagement Model introduces SRE earlier in the development lifecycle in order to achieve significant additional advantages. Applying this model requires identifying the importance and/or business value of a service early in the development lifecycle, and determining whether the service will have sufficient scale or complexity to benefit from SRE expertise. Applicable services often have the following characteristics:

- The service implements significant new functionality and will be part of an existing system already managed by SRE.
- The service is a significant rewrite or alternative to an existing system, targeting the same use cases.
- The development team sought SRE advice or approached SRE for takeover upon launch.

The Early Engagement Model essentially immerses SREs in the development process. SRE's focus remains the same, though the means to achieve a better production service are different. SRE participates in the Design and later phases, eventually taking over the service at any time during or after the Build phase. This model is based on active collaboration between the development and SRE teams.

Benefits of the Early Engagement Model

While the Early Engagement Model does entail certain risks and challenges discussed previously, additional SRE expertise and collaboration during the entire lifecycle of the product creates significant benefits compared to an engagement initiated later in the service lifecycle.
Design phase

SRE collaboration during the Design phase can prevent a variety of problems or incidents from occurring later in production. While design decisions can be reversed or rectified later in the development lifecycle, such changes come at a high cost in terms of effort and complexity. The best production incidents are those that never happen!

Occasionally, difficult trade-offs lead to the selection of a less-than-ideal design. Participation in the Design phase means that SREs are aware up front of the trade-offs and are part of the decision to pick a less-than-ideal option. Early SRE involvement aims to minimize future disputes over design choices once the service is in production.

Build and implementation

The Build phase addresses production aspects such as instrumentation and metrics, operational and emergency controls, resource usage, and efficiency. During this phase, SRE can influence and improve the implementation by recommending specific existing libraries and components, or by helping build certain controls into the system. SRE participation at this stage helps enable ease of operations in the future and allows SRE to gain operational experience in advance of the launch.

Launch

SRE can also help implement widely used launch patterns and controls. For example, SRE might help implement a "dark launch" setup, in which part of the traffic from existing users is sent to the new service in addition to being sent to the live production service. The responses from the new service are "dark": they are thrown away and not actually shown to users. Practices such as dark launches allow the team to gain operational insight, resolve issues without impacting existing users, and reduce the risk of encountering issues after launch. A smooth launch is immensely helpful in keeping the operational burden low and maintaining development momentum after the launch.
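The dark launch pattern just described can be sketched in a few lines. This is a minimal illustration under assumed names (the backends, the `Metrics` stand-in, and the mirror fraction are all hypothetical), not Google's implementation:

```python
import random

class Metrics:
    """Tiny stand-in for a real metrics client (hypothetical)."""
    def __init__(self):
        self.counters = {}

    def increment(self, name):
        self.counters[name] = self.counters.get(name, 0) + 1

metrics = Metrics()

def handle_request(request, live_backend, dark_backend, mirror_fraction=0.05):
    """Serve the user from the live backend; mirror a sample to the new one.

    The dark response is discarded (never shown to users); mismatches and
    failures are only counted, so the team gains operational insight
    without putting existing users at risk.
    """
    response = live_backend(request)  # the only user-visible call

    if random.random() < mirror_fraction:
        try:
            dark_response = dark_backend(request)
            if dark_response != response:
                metrics.increment("dark_launch_mismatches")
        except Exception:
            # A failure in the new service must never affect existing users.
            metrics.increment("dark_launch_errors")

    return response
```

Raising `mirror_fraction` gradually lets the team compare response parity and load behavior well before any user ever sees a response from the new service.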
Disruptions around launch can easily result in emergency changes to source code and production, and can disrupt the development team's work on future features.

Post-launch

Having a stable system at launch time generally leads to fewer conflicting priorities for the development team in terms of choosing between improving service reliability and adding new features. In later phases of the service, the lessons from earlier phases can better inform refactoring or redesign.

With extended involvement, the SRE team can be ready to take over the new service much sooner than is possible with the Simple PRR Model. The longer and closer engagement between the SRE and development teams also creates a collaborative relationship that can be sustained long term. A positive cross-team relationship fosters a mutual feeling of solidarity, and helps SRE establish ownership of the production responsibility.

Disengaging from a service

Sometimes a service doesn't warrant full-fledged SRE team management—this determination might be made post-launch, or SRE might engage with a service but never officially take it over. This is a positive outcome, because the service has been engineered to be reliable and low maintenance, and can therefore remain with the development team.

It is also possible that SRE engages early with a service that fails to meet the levels of usage projected. In such cases, the SRE effort spent is simply part of the overall business risk that comes with new projects, and a small cost relative to the success of projects that meet expected scale. The SRE team can be reassigned, and lessons learned can be incorporated into the engagement process.

Evolving Services Development: Frameworks and SRE Platform

The Early Engagement Model made strides in evolving SRE engagement beyond the Simple PRR Model, which applied only to services that had already launched. However, there was still progress to be made in scaling SRE engagement to the next level by designing for reliability.
Lessons Learned

Over time, the SRE engagement model described thus far produced several distinct patterns:

- Onboarding each service required two or three SREs and typically lasted two or three quarters. The lead times for a PRR were relatively high (quarters away). The effort level required was proportional to the number of services under review, and was constrained by the insufficient number of SREs available to conduct PRRs. These conditions led to serialization of service takeovers and strict service prioritization.
- Due to differing software practices across services, each production feature was implemented differently. To meet PRR-driven standards, features usually had to be reimplemented specifically for each service or, at best, once for each small subset of services sharing code. These reimplementations were a waste of engineering effort. One canonical example is the repeated implementation of functionally similar logging frameworks in the same language, because different services didn't share the same coding structure.
- A review of common service issues and outages revealed certain patterns, but there was no way to easily replicate fixes and improvements across services. Typical examples included service overload situations and data hot-spotting.
- SRE software engineering contributions were often local to the service, so building generic solutions for reuse was difficult. As a consequence, there was no easy way to propagate new lessons learned by individual SRE teams, or best practices, across services that had already been onboarded.

External Factors Affecting SRE

External factors have traditionally pressured the SRE organization and its resources in several ways.

Google is increasingly following the industry trend of moving toward microservices.[151] As a result, both the number of requests for SRE support and the cardinality of services to support have increased.
Because each service has a base fixed operational cost, even simple services demand more staffing. Microservices also imply an expectation of lower lead time for deployment, which was not possible with the previous PRR model (which had a lead time of months).

Hiring experienced, qualified SREs is difficult and costly. Despite enormous effort from the recruiting organization, there are never enough SREs to support all the services that need their expertise. Once SREs are hired, their training is also a lengthier process than is typical for development engineers.

Finally, the SRE organization is responsible for serving the needs of the large and growing number of development teams that do not already enjoy direct SRE support. This mandate calls for extending the SRE support model far beyond the original concept and engagement model.

Toward a Structural Solution: Frameworks

To effectively respond to these conditions, it became necessary to develop a model built on the following principles:

Codified best practices
  The ability to commit what works well in production to code, so services can simply use this code and become "production ready" by design.

Reusable solutions
  Common and easily shareable implementations of techniques used to mitigate scalability and reliability issues.

A common production platform with a common control surface
  Uniform sets of interfaces to production facilities, uniform sets of operational controls, and uniform monitoring, logging, and configuration for all services.

Easier automation and smarter systems
  A common control surface that enables automation and smart systems at a level not possible before. For example, SREs can readily obtain a single view of relevant information for an outage, rather than hand-collecting and analyzing mostly raw data from disparate sources (logs, monitoring data, and so on).
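The "single view" benefit of a common control surface can be pictured with a toy sketch: if every service exposes the same status interface, one generic loop replaces per-service hand collection. The interface and field names below are invented for illustration only:

```python
class ServiceStatus:
    """Hypothetical uniform status surface that every service exposes."""
    def __init__(self, name, qps, error_ratio):
        self.name = name
        self.qps = qps
        self.error_ratio = error_ratio

def outage_view(services, error_threshold=0.01):
    """Build one aggregated outage view across all services.

    Because every service exposes the same surface, a single generic
    query works for all of them; no per-service data wrangling needed.
    """
    return {
        svc.name: {"qps": svc.qps, "error_ratio": svc.error_ratio}
        for svc in services
        if svc.error_ratio > error_threshold
    }
```

The point is not the three-line loop itself, but that it is only possible once every service speaks the same interface.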
Based upon these principles, a set of SRE-supported platform and service frameworks was created, one for each environment we support (Java, C++, Go). Services built using these frameworks share implementations that are designed to work with the SRE-supported platform, and are maintained by both SRE and development teams. The main shift brought about by frameworks was to enable product development teams to design applications using the framework solution that was built and blessed by SRE, as opposed to either retrofitting the application to SRE specifications after the fact, or retrofitting more SREs to support a service that was markedly different from other Google services.

An application typically comprises some business logic, which in turn depends on various infrastructure components. SRE production concerns are largely focused on the infrastructure-related parts of a service. The service frameworks implement infrastructure code in a standardized fashion and address various production concerns. Each concern is encapsulated in one or more framework modules, each of which provides a cohesive solution for a problem domain or infrastructure dependency. Framework modules address the various SRE concerns enumerated earlier, such as:

- Instrumentation and metrics
- Request logging
- Control systems involving traffic and load management

SRE builds framework modules to implement canonical solutions for the production area of concern. As a result, development teams can focus on the business logic, because the framework already takes care of correct infrastructure use. A framework is essentially a prescriptive implementation for using a set of software components and a canonical way of combining those components. The framework can also expose features that control various components in a cohesive manner.
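One way to picture framework modules is as standard wrappers around service-specific business logic. The sketch below is a deliberately simplified assumption of how such wiring might look (the real frameworks are far richer, and all class names here are invented): an instrumentation module and a load-management module are composed around a handler by the framework, so the development team writes only the handler.

```python
class OverloadError(Exception):
    """Raised when the load-management module sheds a request."""

class InstrumentationModule:
    """Production concern: request counting (stands in for real metrics)."""
    def __init__(self):
        self.requests = 0

    def before(self, request):
        self.requests += 1

    def after(self, request, response):
        pass

class LoadManagementModule:
    """Production concern: cap concurrent work and shed the excess."""
    def __init__(self, max_in_flight):
        self.max_in_flight = max_in_flight
        self.in_flight = 0

    def before(self, request):
        if self.in_flight >= self.max_in_flight:
            raise OverloadError("shedding load")
        self.in_flight += 1

    def after(self, request, response):
        self.in_flight -= 1

class ServiceFramework:
    """Wires standard modules around the service-specific handler."""
    def __init__(self, handler, modules):
        self.handler = handler
        self.modules = modules

    def serve(self, request):
        for m in self.modules:
            m.before(request)
        response = self.handler(request)  # the only service-specific code
        for m in reversed(self.modules):
            m.after(request, response)
        return response
```

Under this structure, every service instrumented through the framework gets the same counters and the same shedding behavior for free, which is exactly the reuse the text describes.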
For example, a framework might provide the following:

- Business logic organized as well-defined semantic components that can be referenced using standard terms
- Standard dimensions for monitoring instrumentation
- A standard format for request debugging logs
- A standard configuration format for managing load shedding
- A semantically consistent measure of single-server capacity and of "overload" that can be used for feedback to various control systems

Frameworks provide multiple upfront gains in consistency and efficiency. They free developers from having to glue together and configure individual components in an ad hoc, service-specific manner, in ever-so-slightly incompatible ways that then have to be manually reviewed by SREs. They drive a single reusable solution for production concerns across services, which means that framework users end up with the same common implementation and minimal configuration differences.

Google supports several major languages for application development, and frameworks are implemented in all of these languages. While different implementations of the framework (say, in C++ versus Java) can't share code, the goal is to expose the same API, behavior, configuration, and controls for identical functionality. Therefore, development teams can choose the language platform that fits their needs and experience, while SREs can still expect the same familiar behavior in production and standard tools to manage the service.

New Service and Management Benefits

The structural approach, founded on service frameworks and a common production platform and control surface, provided a host of new benefits.

Significantly lower operational overhead

A production platform built on top of frameworks with stronger conventions significantly reduced operational overhead, for the following reasons:

- It supports strong conformance tests for coding structure, dependencies, tests, coding style guides, and so on.
  This functionality also improves user data privacy, testing, and security conformance.
- It features built-in service deployment, monitoring, and automation for all services.
- It facilitates easier management of large numbers of services, especially microservices, which are growing in number.
- It enables much faster deployment: an idea can graduate to fully deployed SRE-level production quality in a matter of days!

Universal support by design

The constant growth in the number of services at Google means that most of these services can neither warrant SRE engagement nor be maintained by SREs. Regardless, services that don't receive full SRE support can be built to use production features that are developed and maintained by SREs. This practice effectively breaks the SRE staffing barrier. Enabling SRE-supported production standards and tools for all teams improves the overall service quality across Google. Furthermore, all services that are implemented with frameworks automatically benefit from improvements made over time to framework modules.

Faster, lower overhead engagements

The frameworks approach results in faster PRR execution because we can rely upon:

- Built-in service features as part of the framework implementation
- Faster service onboarding (usually accomplished by a single SRE during one quarter)
- Less cognitive burden for the SRE teams managing services built using frameworks

These properties allow SRE teams to lower the assessment and qualification effort for service onboarding, while maintaining a high bar on service production quality.

A new engagement model based on shared responsibility

The original SRE engagement model presented only two options: either full SRE support, or approximately no SRE engagement.[152]
A production platform with a common service structure, conventions, and software infrastructure made it possible for an SRE team to provide support for the "platform" infrastructure, while the development teams provide on-call support for functional issues with the service—that is, for bugs in the application code. Under this model, SREs assume responsibility for the development and maintenance of large parts of service software infrastructure, particularly control systems such as load shedding, overload handling, automation, traffic management, logging, and monitoring.

This model represents a significant departure from the way service management was originally conceived, in two major ways: it entails a new relationship model for the interaction between SRE and development teams, and a new staffing model for SRE-supported service management.[153]

Conclusion

Service reliability can be improved through SRE engagement, in a process that includes systematic review and improvement of a service's production aspects. Google SRE's initial systematic approach, the Simple Production Readiness Review, made strides in standardizing the SRE engagement model, but was applicable only to services that had already entered the Launch phase.

Over time, SRE extended and improved this model. The Early Engagement Model involved SRE earlier in the development lifecycle in order to "design for reliability." As demand for SRE expertise continued to grow, the need for a more scalable engagement model became increasingly apparent. Frameworks for production services were developed to meet this demand: code patterns based on production best practices were standardized and encapsulated in frameworks, so that using frameworks became a recommended, consistent, and relatively simple way of building production-ready services.

All three of the engagement models described are still practiced within Google.
However, the adoption of frameworks is becoming a prominent influence on building production-ready services at Google, profoundly expanding the SRE contribution, lowering service management overhead, and improving baseline service quality across the organization.

[151] See the Wikipedia page on microservices: https://en.wikipedia.org/wiki/Microservices.

[152] Occasionally, there were consulting engagements by SRE teams with some non-onboarded services, but consultations were a best-effort approach and limited in number and scope.

[153] The new model of service management changes the SRE staffing model in two ways: (1) because a lot of service technology is common, it reduces the number of required SREs per service; (2) it enables the creation of production platforms with separation of concerns between production platform support (done by SREs) and service-specific business-logic support, which remains with the development team. These platform teams are staffed based upon the need to maintain the platform rather than upon service count, and can be shared across products.

Copyright © 2017 Google, Inc. Published by O'Reilly Media, Inc. Licensed under CC BY-NC-ND 4.0
Middleware: `client.query('SET search_path TO ' + schema)`.\n3. Done.\n\nGiven the conflict \"database-per-tenant\" vs \"switches schema\", I will provide the **Database-per-Tenant** solution (as requested by the architectural term) but add a note explaining that if they actually wanted Schema-per-tenant (single DB), the change is minimal.\n\nLet's stick to the Database-per-Tenant implementation. It's the more robust interpretation of \"SaaS\".\n\n**Files:**\n- `utils/db.js` (The Pool Manager)\n- `middleware/tenant.js` (The Middleware)\n- `app.js` (Express App)\n- `tenants.js` (Mock config)\n\n**Mock Data:**\nTenant `acme`: DB `acme_db`\nTenant `globex`: DB `globex_db`\n\n**Subdomain logic:**\nRequest to `acme.localhost:3000`.\nHost header: `acme.localhost:3000`.\nSplit: `['acme', 'localhost', '3000']`.\nSubdomain: `acme`.\n\nRequest to `localhost`.\nSplit: `['localhost']`.\nNo subdomain.\n\nOkay, ready to generate. This is a robust architectural pattern for SaaS applications. Since you requested a **database-per-tenant** setup, it is important to clarify a technical constraint: In Node.js with `pg` (node-postgres), a standard connection pool is tied to a specific database connection string.\n\nTherefore, to support this architecture, you cannot have one single global pool. Instead, you need a **Pool Manager** that maintains a cache of connection pools—creating a new pool the first time a specific tenant is accessed and reusing it for subsequent requests to that same tenant.\n\nHere is a complete, scalable implementation.\n\n### 1. Project Structure\n```text\n/src\n /config\n tenants.js # Maps subdomains to DB credentials\n /middleware\n tenant.js # Extracts subdomain & attaches DB pool to req\n /utils\n poolManager.js # Caches and manages connection pools\n app.js # Express app setup\n index.js # Entry point\n```\n\n### 2. 
The Implementation\n\n#### Step 1: Mock Tenant Configuration\nIn a real app, you would query a central \"System\" database to get these credentials. For this example, we use a config file.\n\n**`src/config/tenants.js`**\n```javascript\nmodule.exports = {\n // Maps subdomain (e.g., 'acme') to their specific Postgres DB connection string\n 'acme': {\n host: 'localhost',\n database: 'acme_db', // Separate DB for Acme\n user: 'postgres',\n password: 'password',\n port: 5432\n },\n 'globex': {\n host: 'localhost',\n database: 'globex_db', // Separate DB for Globex\n user: 'postgres',\n password: 'password',\n port: 5432\n },\n \n // A default for cases like 'localhost' or 'www' to prevent crashes\n '_default': {\n // Fallback DB or logic to return 404\n }\n};\n```\n\n#### Step 2: The Connection Pool Manager\nThis class ensures we don't create a new connection for every single request. It lazily creates a pool per tenant and caches it.\n\n**`src/utils/poolManager.js`**\n```javascript\nconst { Pool } = require('pg');\n\nclass PoolManager {\n constructor() {\n // A Map to store active pools: Key = tenantId, Value = Pool Instance\n this.pools = new Map();\n }\n\n /**\n * Gets or creates a pool for a specific tenant's configuration.\n */\n getPool(tenantConfig) {\n // Create a unique key based on the database connection details\n const poolKey = `${tenantConfig.host}:${tenantConfig.database}`;\n\n if (this.pools.has(poolKey)) {\n return this.pools.get(poolKey);\n }\n\n // Create a new pool for this tenant\n const pool = new Pool({\n host: tenantConfig.host,\n database: tenantConfig.database,\n user: tenantConfig.user,\n password: tenantConfig.password,\n port: tenantConfig.port,\n // Standard pool settings (adjust based on your server capacity)\n max: 20, // Max connections per tenant\n idleTimeoutMillis: 30000,\n connectionTimeoutMillis: 2000,\n });\n\n this.pools.set(poolKey, pool);\n \n // Optional: Handle pool errors to prevent crashes\n pool.on('error', (err) => {\n 
console.error('Unexpected error on idle client', err);\n process.exit(-1);\n });\n\n return pool;\n }\n\n // Helper to close all pools gracefully (e.g., on shutdown)\n async closeAll() {\n const closePromises = [];\n for (const [key, pool] of this.pools) {\n closePromises.push(pool.end());\n }\n await Promise.all(closePromises);\n this.pools.clear();\n }\n}\n\nmodule.exports = new PoolManager(); // Singleton instance\n```\n\n#### Step 3: Tenant Resolution Middleware\nThis middleware parses the subdomain, looks up the DB credentials, fetches (or creates) the pool, and attaches it to the request object.\n\n**`src/middleware/tenant.js`**\n```javascript\nconst tenantConfigs = require('../config/tenants');\nconst poolManager = require('../utils/poolManager');\n\nconst resolveTenant = (req, res, next) => {\n try {\n // 1. Extract Subdomain\n const host = req.headers.host;\n // Remove port if present (e.g., localhost:3000)\n const hostname = host.split(':')[0]; \n const parts = hostname.split('.');\n\n let subdomain;\n\n // Handle 'localhost' or IP addresses\n if (parts.length === 1 || (parts.length === 4 && parts[0].match(/^\\d+$/))) {\n // For local dev, you might use 'tenant.localhost' or rely on headers\n // Here we assume a default or look for a custom header\n subdomain = req.headers['x-tenant-subdomain'] || '_default'; \n } else {\n // Standard domain: subdomain.domain.tld\n subdomain = parts[0];\n }\n\n // 2. Validate Tenant\n const tenantConfig = tenantConfigs[subdomain];\n\n if (!tenantConfig || subdomain === '_default') {\n return res.status(404).json({ error: 'Tenant not found' });\n }\n\n // 3. Get Connection Pool\n // If the pool doesn't exist, PoolManager creates it. \n // If it exists, it reuses it.\n const pool = poolManager.getPool(tenantConfig);\n\n // 4. Attach to Request\n // We attach the POOL, not a client. 
\n // The controller will use the pool to run queries.\n req.tenantPool = pool;\n req.tenantId = subdomain;\n\n next();\n } catch (error) {\n console.error('Tenant Resolution Error:', error);\n res.status(500).json({ error: 'Internal Server Error' });\n }\n};\n\nmodule.exports = resolveTenant;\n```\n\n#### Step 4: Express Application Setup\nWe apply the middleware globally or to specific routes.\n\n**`src/app.js`**\n```javascript\nconst express = require('express');\nconst resolveTenant = require('./middleware/tenant');\n\nconst app = express();\n\napp.use(express.json());\n\n// Apply Tenant Resolution Middleware to all API routes\napp.use('/api', resolveTenant);\n\n// Example Route: Get User Data\napp.get('/api/users', async (req, res) => {\n // req.tenantPool is now a PG Pool connected specifically to the Tenant's DB\n try {\n // We use the pool directly. The pool manages acquiring/releasing connections internally.\n const result = await req.tenantPool.query('SELECT * FROM users LIMIT 10');\n \n res.json({\n tenant: req.tenantId,\n data: result.rows\n });\n } catch (err) {\n console.error(`Query error for tenant ${req.tenantId}:`, err);\n res.status(500).json({ error: 'Database query failed' });\n }\n});\n\n// Example Route: Create a Transaction\napp.post('/api/transfer', async (req, res) => {\n const { amount } = req.body;\n \n // For transactions, we need a specific client from the pool\n const client = await req.tenantPool.connect();\n \n try {\n await client.query('BEGIN');\n \n // Perform logic\n await client.query('UPDATE accounts SET balance = balance - $1 WHERE id = 1', [amount]);\n await client.query('UPDATE accounts SET balance = balance + $1 WHERE id = 2', [amount]);\n \n await client.query('COMMIT');\n res.json({ success: true, tenant: req.tenantId });\n } catch (e) {\n await client.query('ROLLBACK');\n throw e;\n } finally {\n // Crucial: Release client back to the pool\n client.release();\n }\n});\n\nmodule.exports = app;\n```\n\n#### Step 5: Entry 
Point\n\n**`src/index.js`**\n```javascript\nconst app = require('./app');\nconst poolManager = require('./utils/poolManager');\n\nconst PORT = process.env.PORT || 3000;\n\nconst server = app.listen(PORT, () => {\n console.log(`Server running on port ${PORT}`);\n console.log(`Access tenants via: http://acme.localhost:${PORT}/api/users`);\n});\n\n// Graceful Shutdown\nprocess.on('SIGINT', async () => {\n console.log('Shutting down gracefully...');\n server.close(async () => {\n // Close all tenant connection pools\n await poolManager.closeAll();\n console.log('All connection pools closed.');\n process.exit(0);\n });\n});\n```\n\n### How to Test Locally\n\nTo test `acme.localhost` and `globex.localhost` on your machine, you need to edit your `/etc/hosts` file (on Mac/Linux) or `C:\\Windows\\System32\\drivers\\etc\\hosts` (on Windows).\n\nAdd these lines:\n```text\n127.0.0.1 acme.localhost\n127.0.0.1 globex.localhost\n```\n\nNow you can access:\n1. `http://acme.localhost:3000/api/users` (Connects to `acme_db`)\n2. `http://globex.localhost:3000/api/users` (Connects to `globex_db`)\n\n### Key Considerations for Scalability\n\n1. **Connection Limits**: If you have 1,000 tenants, and each pool has 20 connections, that is 20,000 connections. This will crash a standard Postgres instance (default limit is often 100).\n * *Solution*: Use an external connection pooler like **PgBouncer** in front of your database, or significantly lower the `max` pool size per tenant in `PoolManager` (e.g., 2-5 connections) since Node.js handles async I/O efficiently.\n2. **Schema vs Database**: Your prompt mentioned switching the **schema**, but also **database-per-tenant**. The code above implements separate databases.\n * *If you meant One Database, Multiple Schemas*: You only need **one** global pool. The middleware would look like `await req.pool.query('SET search_path TO tenant_schema')`.\n3. 
**Migration Management**: With database-per-tenant, you must script your migrations to loop through all active tenant databases and apply schema changes. With schema-per-tenant, migrations are often easier (one transaction)."}],"string":"[\n {\n \"role\": \"system\",\n \"content\": \"\"\n },\n {\n \"role\": \"user\",\n \"content\": \"I need to build a scalable REST API with Node.js and Express for a multi-tenant SaaS app. It should use a database-per-tenant setup with a shared connection pool, and I want middleware that figures out the tenant from the subdomain and switches the Postgres schema for each request.\"\n },\n {\n \"role\": \"assistant\",\n \"content\": \" The user wants a scalable REST API using Node.js and Express for a multi-tenant SaaS application.\\nKey requirements:\\n1. **Database-per-tenant:** Each tenant gets their own database, not just a schema within a single database.\\n2. **Shared connection pool:** Managing multiple databases efficiently without opening/closing connections constantly. This is tricky with `pg` (node-postgres) because a standard pool is bound to one database connection string.\\n3. **Middleware:** Identify tenant from subdomain (e.g., `tenant1.app.com`).\\n4. **Switching:** The \\\"switch\\\" implies connecting to the specific tenant's database for that request.\\n\\n**Correction/Refinement of Requirements:**\\nThe prompt says \\\"database-per-tenant setup... and switches the Postgres *schema* for each request.\\\"\\nThere's a contradiction here.\\n- **Database-per-tenant:** One DB per tenant (e.g., `tenant1_db`, `tenant2_db`).\\n- **Schema-per-tenant:** One DB for all, multiple schemas (e.g., `public`, `tenant1`, `tenant2`) within that DB.\\n\\n*Analysis of the prompt's specific phrasing:*\\n\\\"Database-per-tenant setup with a shared connection pool... and switches the Postgres schema for each request.\\\"\\nUsually, \\\"Database-per-tenant\\\" means separate databases. 
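The idle-pool eviction mentioned under Connection Limits is deliberately left out of `PoolManager` above. A minimal sketch of the idea, using a plain `Map` with last-use timestamps; the `IdleCache` name and the `destroy` callback are illustrative assumptions (not part of the code above), and in a real integration `destroy` would call `pool.end()`:

```javascript
// Hypothetical idle-eviction cache: entries unused for longer than maxIdleMs
// are destroyed on the next sweep() call (e.g., run on a setInterval timer).
class IdleCache {
  constructor(maxIdleMs, destroy) {
    this.maxIdleMs = maxIdleMs;
    this.destroy = destroy;          // called with the evicted value
    this.entries = new Map();        // key -> { value, lastUsed }
  }

  // Returns the cached value, creating it via `create()` on first access,
  // and refreshes the last-used timestamp either way.
  get(key, create) {
    let entry = this.entries.get(key);
    if (!entry) {
      entry = { value: create() };
      this.entries.set(key, entry);
    }
    entry.lastUsed = Date.now();
    return entry.value;
  }

  // Evicts (and destroys) every entry idle for longer than maxIdleMs.
  sweep(now = Date.now()) {
    for (const [key, entry] of this.entries) {
      if (now - entry.lastUsed > this.maxIdleMs) {
        this.destroy(entry.value);
        this.entries.delete(key);
      }
    }
  }
}

module.exports = IdleCache;
```

Wiring this into `PoolManager` would mean keying the cache by `host:database`, passing `(pool) => pool.end()` as `destroy`, and sweeping on a timer; the trade-off is that a returning tenant pays the pool-creation cost again after eviction.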
**Analyze the Request:**
   * **Goal:** Create a custom React hook `useFormValidator`.
   * **Context:** A trading form (implies high frequency, critical performance).
   * **Key Requirement 1:** Fast performance ("super fast").
   * **Key Requirement 2:** Use Zod for validation.
   * **Key Requirement 3:** Offload validation to a Web Worker to prevent UI blocking (non-blocking main thread).
   * **Input:** User typing quickly.

2. **Deconstruct the Architecture:**
   * *React Side:* Needs to hold state (values, errors, isValid, isSubmitting). Needs to communicate with the worker.
   * *Worker Side:* Needs the Zod schema. Needs to receive data. Needs to perform validation. Needs to send back results.
   * *Zod Schema:* Needs to be defined outside but passed into the worker context.

3. **Challenge: Transferring Zod Schemas to Workers:**
   * Zod schemas are complex objects (classes, closures). You can't simply `postMessage` a Zod schema instance because it contains functions and circular references that aren't cloneable.
   * *Solution A:* Create the worker from a Blob URL string containing the schema. (Bad for flexibility, hard to maintain.)
   * *Solution B:* Define the schema inside a separate worker file and import it there. (Standard approach.)
   * *Solution C:* Send the raw data to the worker, and the worker *has* the schema hardcoded.
   * *Refinement for a Reusable Hook:* The hook needs to accept a schema. To make this dynamic and worker-friendly, we likely need a factory function that creates the worker script on the fly (using `Blob` and `URL.createObjectURL`), or we ask the user to provide a specific worker instance.
   * *Decision:* To make it "custom" and reusable, the best approach for a generic `useFormValidator` that takes a schema is to create a dedicated Worker file that handles the logic, or (simpler for this example) define the worker logic as a string/blob inside the hook generation process, or assume a standard worker structure.
   * *Better Decision for clarity/robustness:* I will define a separate worker file (conceptually), but for the sake of a copy-pasteable answer, I'll likely use the `Blob` approach or just show the code structure for a separate file.
   * *Wait, Zod serialization issue:* If I define the schema in the main thread, I can't send it to the worker. The schema *must* live in the worker, or I must parse the schema definition (JSON) and rebuild it (which Zod supports via `.parse()`, but creating the schema object is still tricky).
   * *Best Practical Approach:* The user provides a Zod schema in the main thread. The worker must be initialized with the *logic* to validate using that schema.
   * *Actually, the most robust pattern for dynamic Zod + Workers:*
     1. Main thread has the schema.
     2. We create a worker from a Blob.
     3. Inside the Blob code, we import Zod (requires a bundler setup or importing from a CDN within the Blob). This is complex.
     4. *Alternative:* The worker is pre-defined. The hook accepts the worker path.
     5. *Let's stick to the most modern/reliable pattern for this specific prompt:*
        * Create a `validation.worker.js`.
        * This worker imports Zod and accepts a serialized version of the schema, or simply has the schema hardcoded?
        * *Dynamic approach:* Since the prompt asks for a generic hook `useFormValidator`, hardcoding the schema in the worker defeats the purpose of a generic hook.
        * *Workaround:* We can't pass the schema object.
        * *Let's assume:* The user is okay with defining the schema *inside* the worker for this specific example, or we use a "Factory" approach where we pass the schema file/code to the hook.
        * *Let's go with the Factory approach:* We'll create the worker using a `Blob` and `URL.createObjectURL`. Inside the Blob's code, we will need to import Zod. This implies using `importScripts` or a bundled output.
   * *Let's try a simpler, highly effective approach for the answer:*
     1. Define the Zod schema in a separate file (or the same file).
     2. Create a Worker that imports that schema.
     3. The hook manages the Worker instance.
   * *Wait, let's look at the "super fast" requirement.* Creating/destroying workers on every keystroke is slow. We need a *persistent* worker.
   * *Debouncing:* Even with a worker, validating on every keystroke might be overkill. We should probably debounce the input, but the prompt specifically asks for Web Workers to handle the speed. So we rely on the worker.

4. **Drafting the Solution Steps:**
   * **Step 1: The Worker.** Needs to handle incoming `data`, validate against a `schema`, and return `result`.
   * **Step 2: The Hook (`useFormValidator`).**
     * State: `values`, `errors`, `isValidating`.
     * Ref: To hold the worker instance.
     * Effect: To initialize the worker on mount, clean up on unmount.
     * Handler: `handleChange` updates state and posts message to worker.
     * Effect: Listen for worker messages to update `errors`.

5. **Addressing the Zod-in-Worker Problem (Crucial Technica
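One consequence of the "users typing quickly" requirement discussed above: worker results arrive asynchronously, so a slow validation of an old value can overwrite the result for a newer value. A minimal sketch (my addition, not part of the original notes) of a sequence guard that discards stale worker responses — `createSequenceGuard` is a hypothetical helper name:

```javascript
// Hypothetical helper: guard against stale Web Worker validation results
// when the user types faster than the worker responds. Each request gets a
// monotonically increasing id; a result is applied only if its id still
// matches the most recent request.
function createSequenceGuard() {
  let latest = 0;
  return {
    next() {
      // Call when posting a validation request to the worker.
      return ++latest;
    },
    isCurrent(id) {
      // Call when a result message arrives; stale ids return false.
      return id === latest;
    },
  };
}
```

In the hook, `next()` would tag each `postMessage` payload, and the `onmessage` handler would call `isCurrent(event.data.id)` before committing errors to state.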
2026-01-13T09:29:18
https://git-scm.com/book/nl/v2/GitHub-Een-organisatie-beheren
Git — Pro Git, 2nd Edition. This book is available in English; its source is hosted on GitHub, and patches, suggestions and comments are welcome.

6.4 GitHub - Managing an organization

Managing an organization

In addition to accounts for individuals, GitHub also has what are called Organizations. Like personal accounts, organizational accounts have a namespace where all their projects exist, but many other things are different. These accounts represent a group of people with shared ownership of projects, and there are many tools to manage subgroups of those people. Normally these accounts are used for open-source groups (such as “perl” or “rails”) or companies (such as “google” or “twitter”).

Organization basics

An organization is fairly easy to create; just click the “+” icon at the top right of any GitHub page and choose “New organization” from the menu.

Figure 126. The “New organization” menu item.

First you need to give your organization a name and provide an email address that serves as the primary point of contact for the group. Then, if you want, you can invite other users to be co-owners of the account. Follow these steps and you will soon be the owner of a brand-new organization. Like personal accounts, organizations are free if everything you plan to store there will be open source.

As an owner in an organization, when you fork a repository, you get the choice of forking it into your organization's namespace. When you create new repositories, you can create them either under your personal account or under any of the organizations you are an owner of. You also automatically “watch” every new repository created under these organizations.

Just as with Your Avatar, you can upload an avatar for your organization to give it a personal touch. And just as with personal accounts, you have a landing page for the organization that lists all of its repositories and can be viewed by other people.

Now let's cover some of the things that are a bit different with an organizational account.

Teams

Organizations are associated with individual people by means of teams, which are simply groupings of individual personal accounts and repositories within the organization, along with what kind of access those people have to those repositories.

For example, say your company has three repositories: frontend, backend, and deployscripts. You might want to give your HTML/CSS/JavaScript developers access to frontend and maybe backend, and give your Operations people access to backend and deployscripts. Teams make this easy, without having to manage the collaborators for every individual repository.

The Organization page shows you a simple dashboard of all the repositories, users and teams that fall under the organization.

Figure 127. The Organization page.

To manage your teams, you can click on the Teams sidebar on the right-hand side of the Organization page. This leads you to a page you can use to add members to the team, add repositories to the team, or manage the settings and access levels for the team. Each team can have read-only, read/write, or administrative access to the repositories. You can change that level by clicking the “Settings” button on the Team page.

Figure 128. The Team page.

When you invite someone to a team, they get an email letting them know they have been invited. In addition, team @-mentions (such as @acmecorp/frontend) work much the same as they do for individual users, except that all members of the team are then subscribed to the thread. This is useful if you want the attention of someone on a team, but you don't know exactly whom to ask.

A user can belong to any number of teams, so don't limit yourself to access-control teams only. Special-interest teams like ux, css, or refactoring are useful for certain kinds of questions, and others like legal and colorblind for entirely different ones.

Audit Log

Organizations also give owners access to all the information about what has happened within the organization. You can go to the Audit Log tab and see what events have taken place at an organization level, who initiated them, and where in the world it happened.

Figure 129. The audit log.

You can also filter down to specific types of events, specific places, or specific people.
2026-01-13T09:29:18
https://doc.rust-lang.org/cargo/reference/config.html#net
The Cargo Book — Configuration

This document explains how Cargo's configuration system works, as well as the available configuration keys. For configuration of a package through its manifest, see the manifest format.

Hierarchical structure

Cargo allows local configuration for a particular package as well as global configuration. It looks for configuration files in the current directory and all parent directories. If, for example, Cargo were invoked in /projects/foo/bar/baz, then the following configuration files would be probed for and unified in this order:

1. /projects/foo/bar/baz/.cargo/config.toml
2. /projects/foo/bar/.cargo/config.toml
3. /projects/foo/.cargo/config.toml
4. /projects/.cargo/config.toml
5. /.cargo/config.toml
6. $CARGO_HOME/config.toml, which defaults to:
   * Windows: %USERPROFILE%\.cargo\config.toml
   * Unix: $HOME/.cargo/config.toml

With this structure, you can specify configuration per-package, and even possibly check it into version control. You can also specify personal defaults with a configuration file in your home directory.

If a key is specified in multiple config files, the values will get merged together. Numbers, strings, and booleans will use the value in the deeper config directory, taking precedence over ancestor directories, where the home directory is the lowest priority. Arrays will be joined together, with higher-precedence items being placed later in the merged array.

At present, when being invoked from a workspace, Cargo does not read config files from crates within the workspace. i.e.
if a workspace has two crates in it, named /projects/foo/bar/baz/mylib and /projects/foo/bar/baz/mybin, and there are Cargo configs at /projects/foo/bar/baz/mylib/.cargo/config.toml and /projects/foo/bar/baz/mybin/.cargo/config.toml, Cargo does not read those configuration files if it is invoked from the workspace root (/projects/foo/bar/baz/).

Note: Cargo also reads config files without the .toml extension, such as .cargo/config. Support for the .toml extension was added in version 1.39 and is the preferred form. If both files exist, Cargo will use the file without the extension.

Configuration format

Configuration files are written in the TOML format (like the manifest), with simple key-value pairs inside of sections (tables). The following is a quick overview of all settings, with detailed descriptions found below.

```toml
paths = ["/path/to/override"]  # path dependency overrides

[alias]                        # command aliases
b = "build"
c = "check"
t = "test"
r = "run"
rr = "run --release"
recursive_example = "rr --example recursions"
space_example = ["run", "--release", "--", "\"command list\""]

[build]
jobs = 1                       # number of parallel jobs, defaults to # of CPUs
rustc = "rustc"                # the rust compiler tool
rustc-wrapper = "…"            # run this wrapper instead of `rustc`
rustc-workspace-wrapper = "…"  # run this wrapper instead of `rustc` for workspace members
rustdoc = "rustdoc"            # the doc generator tool
target = "triple"              # build for the target triple (ignored by `cargo install`)
target-dir = "target"          # path of where to place generated artifacts
build-dir = "target"           # path of where to place intermediate build artifacts
rustflags = ["…", "…"]         # custom flags to pass to all compiler invocations
rustdocflags = ["…", "…"]      # custom flags to pass to rustdoc
incremental = true             # whether or not to enable incremental compilation
dep-info-basedir = "…"         # path for the base directory for targets in depfiles

[credential-alias]
# Provides a way to define aliases for credential providers.
my-alias = ["/usr/bin/cargo-credential-example", "--argument", "value", "--flag"]

[doc]
browser = "chromium"           # browser to use with `cargo doc --open`,
                               # overrides the `BROWSER` environment variable

[env]
# Set ENV_VAR_NAME=value for any process run by Cargo
ENV_VAR_NAME = "value"
# Set even if already present in environment
ENV_VAR_NAME_2 = { value = "value", force = true }
# `value` is relative to the parent of `.cargo/config.toml`, env var will be the full absolute path
ENV_VAR_NAME_3 = { value = "relative/path", relative = true }

[future-incompat-report]
frequency = 'always'           # when to display a notification about a future incompat report

[cache]
auto-clean-frequency = "1 day" # How often to perform automatic cache cleaning

[cargo-new]
vcs = "none"                   # VCS to use ('git', 'hg', 'pijul', 'fossil', 'none')

[http]
debug = false                  # HTTP debugging
proxy = "host:port"            # HTTP proxy in libcurl format
ssl-version = "tlsv1.3"        # TLS version to use
ssl-version.max = "tlsv1.3"    # maximum TLS version
ssl-version.min = "tlsv1.1"    # minimum TLS version
timeout = 30                   # timeout for each HTTP request, in seconds
low-speed-limit = 10           # network timeout threshold (bytes/sec)
cainfo = "cert.pem"            # path to Certificate Authority (CA) bundle
proxy-cainfo = "cert.pem"      # path to proxy Certificate Authority (CA) bundle
check-revoke = true            # check for SSL certificate revocation
multiplexing = true            # HTTP/2 multiplexing
user-agent = "…"               # the user-agent header

[install]
root = "/some/path"            # `cargo install` destination directory

[net]
retry = 3                      # network retries
git-fetch-with-cli = true      # use the `git` executable for git operations
offline = true                 # do not access the network

[net.ssh]
known-hosts = ["..."]          # known SSH host keys

[patch.<registry>]
# Same keys as for [patch] in Cargo.toml

[profile.<name>]               # Modify profile settings via config.
inherits = "dev"               # Inherits settings from [profile.dev].
opt-level = 0                  # Optimization level.
debug = true                   # Include debug info.
split-debuginfo = '...'        # Debug info splitting behavior.
strip = "none"                 # Removes symbols or debuginfo.
debug-assertions = true        # Enables debug assertions.
overflow-checks = true         # Enables runtime integer overflow checks.
lto = false                    # Sets link-time optimization.
panic = 'unwind'               # The panic strategy.
incremental = true             # Incremental compilation.
codegen-units = 16             # Number of code generation units.
rpath = false                  # Sets the rpath linking option.

[profile.<name>.build-override]  # Overrides build-script settings.
# Same keys for a normal profile.

[profile.<name>.package.<name>]  # Override profile for a package.
# Same keys for a normal profile (minus `panic`, `lto`, and `rpath`).

[resolver]
incompatible-rust-versions = "allow"  # Specifies how the resolver reacts to these

[registries.<name>]            # registries other than crates.io
index = "…"                    # URL of the registry index
token = "…"                    # authentication token for the registry
credential-provider = "cargo:token"  # The credential provider for this registry.

[registries.crates-io]
protocol = "sparse"            # The protocol to use to access crates.io.

[registry]
default = "…"                  # name of the default registry
token = "…"                    # authentication token for crates.io
credential-provider = "cargo:token"  # The credential provider for crates.io.
global-credential-providers = ["cargo:token"]  # The credential providers to use by default.

[source.<name>]                # source definition and replacement
replace-with = "…"             # replace this source with the given named source
directory = "…"                # path to a directory source
registry = "…"                 # URL to a registry source
local-registry = "…"           # path to a local registry source
git = "…"                      # URL of a git repository source
branch = "…"                   # branch name for the git repository
tag = "…"                      # tag name for the git repository
rev = "…"                      # revision for the git repository

[target.<triple>]
linker = "…"                   # linker to use
runner = "…"                   # wrapper to run executables
rustflags = ["…", "…"]         # custom flags for `rustc`
rustdocflags = ["…", "…"]      # custom flags for `rustdoc`

[target.<cfg>]
linker = "…"                   # linker to use
runner = "…"                   # wrapper to run executables
rustflags = ["…", "…"]         # custom flags for `rustc`

[target.<triple>.<links>]      # `links` build script override
rustc-link-lib = ["foo"]
rustc-link-search = ["/path/to/foo"]
rustc-flags = "-L /some/path"
rustc-cfg = ['key="value"']
rustc-env = { key = "value" }
rustc-cdylib-link-arg = ["…"]
metadata_key1 = "value"
metadata_key2 = "value"

[term]
quiet = false                  # whether cargo output is quiet
verbose = false                # whether cargo provides verbose output
color = 'auto'                 # whether cargo colorizes output
hyperlinks = true              # whether cargo inserts links into output
unicode = true                 # whether cargo can render output using non-ASCII unicode characters
progress.when = 'auto'         # whether cargo shows progress bar
progress.width = 80            # width of progress bar
progress.term-integration = true  # whether cargo reports progress to terminal emulator
```

Environment variables

Cargo can also be configured through environment variables in addition to the TOML configuration files. For each configuration key of the form foo.bar, the environment variable CARGO_FOO_BAR can also be used to define the value. Keys are converted to uppercase; dots and dashes are converted to underscores.
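That key-to-variable mapping can be sketched as a tiny helper (illustrative only, not part of Cargo; `cargoEnvVar` is my own name):

```javascript
// Illustrative helper: derive the environment-variable name for a Cargo
// config key — prefix with CARGO_, uppercase the key, and convert dots
// and dashes to underscores.
function cargoEnvVar(key) {
  return "CARGO_" + key.toUpperCase().replace(/[.-]/g, "_");
}

console.log(cargoEnvVar("build.jobs")); // CARGO_BUILD_JOBS
console.log(cargoEnvVar("target.x86_64-unknown-linux-gnu.runner"));
// CARGO_TARGET_X86_64_UNKNOWN_LINUX_GNU_RUNNER
```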
For example, the target.x86_64-unknown-linux-gnu.runner key can also be defined by the CARGO_TARGET_X86_64_UNKNOWN_LINUX_GNU_RUNNER environment variable.

Environment variables will take precedence over TOML configuration files. Currently only integer, boolean, string and some array values are supported to be defined by environment variables. The descriptions below indicate which keys support environment variables; otherwise they are not supported due to technical issues.

In addition to the system above, Cargo recognizes a few other specific environment variables.

Command-line overrides

Cargo also accepts arbitrary configuration overrides through the --config command-line option. The argument should be in TOML syntax of KEY=VALUE or provided as a path to an extra configuration file:

```console
# With `KEY=VALUE` in TOML syntax
cargo --config net.git-fetch-with-cli=true fetch

# With a path to a configuration file
cargo --config ./path/to/my/extra-config.toml fetch
```

The --config option may be specified multiple times, in which case the values are merged in left-to-right order, using the same merging logic that is used when multiple configuration files apply. Configuration values specified this way take precedence over environment variables, which take precedence over configuration files.

When the --config option is provided as an extra configuration file, the configuration file loaded this way follows the same precedence rules as other options specified directly with --config.

Some examples of what it looks like using Bourne shell syntax:

```console
# Most shells will require escaping.
cargo --config http.proxy=\"http://example.com\" …

# Spaces may be used.
cargo --config "net.git-fetch-with-cli = true" …

# TOML array example. Single quotes make it easier to read and write.
cargo --config 'build.rustdocflags = ["--html-in-header", "header.html"]' …

# Example of a complex TOML key.
cargo --config "target.'cfg(all(target_arch = \"arm\", target_os = \"none\"))'.runner = 'my-runner'" …

# Example of overriding a profile setting.
cargo --config profile.dev.package.image.opt-level=3 …
```

Config-relative paths

Paths in config files may be absolute, relative, or a bare name without any path separators. Paths for executables without a path separator will use the PATH environment variable to search for the executable. Paths for non-executables will be relative to where the config value is defined.

In particular, the rules are:

* For environment variables, paths are relative to the current working directory.
* For config values loaded directly from the --config KEY=VALUE option, paths are relative to the current working directory.
* For config files, paths are relative to the parent directory of the directory where the config files were defined, no matter whether those files come from the hierarchical probing or the --config <path> option.

Note: To maintain consistency with existing .cargo/config.toml probing behavior, it is by design that a path in a config file passed via --config <path> is also relative to two levels up from the config file itself. To avoid unexpected results, the rule of thumb is to put your extra config files at the same level as the discovered .cargo/config.toml in your project. For instance, given a project /my/project, it is recommended to put config files under /my/project/.cargo or a new directory at the same level, such as /my/project/.config.

```toml
# Relative path examples.

[target.x86_64-unknown-linux-gnu]
runner = "foo"  # Searches `PATH` for `foo`.

[source.vendored-sources]
# Directory is relative to the parent where `.cargo/config.toml` is located.
# For example, `/my/project/.cargo/config.toml` would result in `/my/project/vendor`.
directory = "vendor"
```

Executable paths with arguments

Some Cargo commands invoke external programs, which can be configured as a path and some number of arguments.
The value may be an array of strings like ['/path/to/program', 'somearg'] or a space-separated string like '/path/to/program somearg'. If the path to the executable contains a space, the list form must be used. If Cargo is passing other arguments to the program, such as a path to open or run, they will be passed after the last specified argument in the value of an option of this format. If the specified program does not have path separators, Cargo will search PATH for its executable.

Credentials

Configuration values with sensitive information are stored in the $CARGO_HOME/credentials.toml file. This file is automatically created and updated by cargo login and cargo logout when using the cargo:token credential provider. Tokens are used by some Cargo commands such as cargo publish for authenticating with remote registries. Care should be taken to protect the tokens and to keep them secret. It follows the same format as Cargo config files.

```toml
[registry]
token = "…"  # Access token for crates.io

[registries.<name>]
token = "…"  # Access token for the named registry
```

As with most other config values, tokens may be specified with environment variables. The token for crates.io may be specified with the CARGO_REGISTRY_TOKEN environment variable. Tokens for other registries may be specified with environment variables of the form CARGO_REGISTRIES_<name>_TOKEN, where <name> is the name of the registry in all capital letters.

Note: Cargo also reads and writes credential files without the .toml extension, such as .cargo/credentials. Support for the .toml extension was added in version 1.39. In version 1.68, Cargo writes to the file with the extension by default. However, for backward compatibility reasons, when both files exist, Cargo will read and write the file without the extension.

Configuration keys

This section documents all configuration keys.
Descriptions for keys with variable parts are annotated with angled brackets like target.<triple>, where the <triple> part can be any target triple like target.x86_64-pc-windows-msvc. paths Type: array of strings (paths) Default: none Environment: not supported An array of paths to local packages which are to be used as overrides for dependencies. For more information see the Overriding Dependencies guide. [alias] Type: string or array of strings Default: see below Environment: CARGO_ALIAS_<name> The [alias] table defines CLI command aliases. For example, running cargo b is an alias for running cargo build. Each key in the table is the subcommand, and the value is the actual command to run. The value may be an array of strings, where the first element is the command and the following are arguments. It may also be a string, which will be split on spaces into subcommand and arguments. The following aliases are built into Cargo: [alias] b = "build" c = "check" d = "doc" t = "test" r = "run" rm = "remove" Aliases are not allowed to redefine existing built-in commands. Aliases are recursive: [alias] rr = "run --release" recursive_example = "rr --example recursions" [build] The [build] table controls build-time operations and compiler settings. build.jobs Type: integer or string Default: number of logical CPUs Environment: CARGO_BUILD_JOBS Sets the maximum number of compiler processes to run in parallel. If negative, it sets the maximum number of compiler processes to the number of logical CPUs plus the provided value. Should not be 0. If the string "default" is provided, it sets the value back to the default. Can be overridden with the --jobs CLI option. build.rustc Type: string (program path) Default: "rustc" Environment: CARGO_BUILD_RUSTC or RUSTC Sets the executable to use for rustc. build.rustc-wrapper Type: string (program path) Default: none Environment: CARGO_BUILD_RUSTC_WRAPPER or RUSTC_WRAPPER Sets a wrapper to execute instead of rustc.
The first argument passed to the wrapper is the path to the actual executable to use (i.e., build.rustc , if that is set, or "rustc" otherwise). build.rustc-workspace-wrapper Type: string (program path) Default: none Environment: CARGO_BUILD_RUSTC_WORKSPACE_WRAPPER or RUSTC_WORKSPACE_WRAPPER Sets a wrapper to execute instead of rustc , for workspace members only. When building a single-package project without workspaces, that package is considered to be the workspace. The first argument passed to the wrapper is the path to the actual executable to use (i.e., build.rustc , if that is set, or "rustc" otherwise). It affects the filename hash so that artifacts produced by the wrapper are cached separately. If both rustc-wrapper and rustc-workspace-wrapper are set, then they will be nested: the final invocation is $RUSTC_WRAPPER $RUSTC_WORKSPACE_WRAPPER $RUSTC . build.rustdoc Type: string (program path) Default: "rustdoc" Environment: CARGO_BUILD_RUSTDOC or RUSTDOC Sets the executable to use for rustdoc . build.target Type: string or array of strings Default: host platform Environment: CARGO_BUILD_TARGET The default target platform triples to compile to. Possible values: Any supported target in rustc --print target-list . "host-tuple" , which will internally be substituted by the host’s target. This can be particularly useful if you’re cross-compiling some crates, and don’t want to specify your host’s machine as a target (for instance, an xtask in a shared project that may be worked on by many hosts). A path to a custom target specification. See Custom Target Lookup Path for more information. Can be overridden with the --target CLI option. [build] target = ["x86_64-unknown-linux-gnu", "i686-unknown-linux-gnu"] build.target-dir Type: string (path) Default: "target" Environment: CARGO_BUILD_TARGET_DIR or CARGO_TARGET_DIR The path to where all compiler output is placed. The default if not specified is a directory named target located at the root of the workspace. 
Can be overridden with the --target-dir CLI option. For more information see the build cache documentation. build.build-dir Type: string (path) Default: Defaults to the value of build.target-dir Environment: CARGO_BUILD_BUILD_DIR The directory where intermediate build artifacts will be stored. Intermediate artifacts are produced by Rustc/Cargo during the build process. This option supports path templating. Available template variables: {workspace-root} resolves to the root of the current workspace. {cargo-cache-home} resolves to CARGO_HOME. {workspace-path-hash} resolves to a hash of the manifest path. For more information see the build cache documentation. build.rustflags Type: string or array of strings Default: none Environment: CARGO_BUILD_RUSTFLAGS or CARGO_ENCODED_RUSTFLAGS or RUSTFLAGS Extra command-line flags to pass to rustc. The value may be an array of strings or a space-separated string. There are four mutually exclusive sources of extra flags. They are checked in order, with the first one being used: CARGO_ENCODED_RUSTFLAGS environment variable. RUSTFLAGS environment variable. All matching target.<triple>.rustflags and target.<cfg>.rustflags config entries joined together. build.rustflags config value. Additional flags may also be passed with the cargo rustc command. If the --target flag (or build.target) is used, then the flags will only be passed to the compiler for the target. Things being built for the host, such as build scripts or proc macros, will not receive the args. Without --target, the flags will be passed to all compiler invocations (including build scripts and proc macros) because dependencies are shared. If you have args that you do not want to pass to build scripts or proc macros and are building for the host, pass --target with the host triple. It is not recommended to pass in flags that Cargo itself usually manages. For example, the flags driven by profiles are best handled by setting the appropriate profile setting.
Caution : Due to the low-level nature of passing flags directly to the compiler, this may cause a conflict with future versions of Cargo which may issue the same or similar flags on its own which may interfere with the flags you specify. This is an area where Cargo may not always be backwards compatible. build.rustdocflags Type: string or array of strings Default: none Environment: CARGO_BUILD_RUSTDOCFLAGS or CARGO_ENCODED_RUSTDOCFLAGS or RUSTDOCFLAGS Extra command-line flags to pass to rustdoc . The value may be an array of strings or a space-separated string. There are four mutually exclusive sources of extra flags. They are checked in order, with the first one being used: CARGO_ENCODED_RUSTDOCFLAGS environment variable. RUSTDOCFLAGS environment variable. All matching target.<triple>.rustdocflags config entries joined together. build.rustdocflags config value. Additional flags may also be passed with the cargo rustdoc command. Caution : Due to the low-level nature of passing flags directly to the compiler, this may cause a conflict with future versions of Cargo which may issue the same or similar flags on its own which may interfere with the flags you specify. This is an area where Cargo may not always be backwards compatible. build.incremental Type: bool Default: from profile Environment: CARGO_BUILD_INCREMENTAL or CARGO_INCREMENTAL Whether or not to perform incremental compilation . The default if not set is to use the value from the profile . Otherwise this overrides the setting of all profiles. The CARGO_INCREMENTAL environment variable can be set to 1 to force enable incremental compilation for all profiles, or 0 to disable it. This env var overrides the config setting. build.dep-info-basedir Type: string (path) Default: none Environment: CARGO_BUILD_DEP_INFO_BASEDIR Strips the given path prefix from dep info file paths. This config setting is intended to convert absolute paths to relative paths for tools that require relative paths. 
The setting itself is a config-relative path. So, for example, a value of "." would strip all paths starting with the parent directory of the .cargo directory. build.pipelining This option is deprecated and unused. Cargo always has pipelining enabled. [credential-alias] Type: string or array of strings Default: empty Environment: CARGO_CREDENTIAL_ALIAS_<name> The [credential-alias] table defines credential provider aliases. These aliases can be referenced as an element of the registry.global-credential-providers array, or as a credential provider for a specific registry under registries.<NAME>.credential-provider . If specified as a string, the value will be split on spaces into path and arguments. For example, to define an alias called my-alias : [credential-alias] my-alias = ["/usr/bin/cargo-credential-example", "--argument", "value", "--flag"] See Registry Authentication for more information. [doc] The [doc] table defines options for the cargo doc command. doc.browser Type: string or array of strings ( program path with args ) Default: BROWSER environment variable, or, if that is missing, opening the link in a system specific way This option sets the browser to be used by cargo doc , overriding the BROWSER environment variable when opening documentation with the --open option. [cargo-new] The [cargo-new] table defines defaults for the cargo new command. cargo-new.name This option is deprecated and unused. cargo-new.email This option is deprecated and unused. cargo-new.vcs Type: string Default: "git" or "none" Environment: CARGO_CARGO_NEW_VCS Specifies the source control system to use for initializing a new repository. Valid values are git , hg (for Mercurial), pijul , fossil or none to disable this behavior. Defaults to git , or none if already inside a VCS repository. Can be overridden with the --vcs CLI option. [env] The [env] section allows you to set additional environment variables for build scripts, rustc invocations, cargo run and cargo build . 
[env] OPENSSL_DIR = "/opt/openssl" By default, the variables specified will not override values that already exist in the environment. This behavior can be changed by setting the force flag. Setting the relative flag evaluates the value as a config-relative path that is relative to the parent directory of the .cargo directory that contains the config.toml file. The value of the environment variable will be the full absolute path. [env] TMPDIR = { value = "/home/tmp", force = true } OPENSSL_DIR = { value = "vendor/openssl", relative = true } [future-incompat-report] The [future-incompat-report] table controls settings for future-incompat reporting. future-incompat-report.frequency Type: string Default: "always" Environment: CARGO_FUTURE_INCOMPAT_REPORT_FREQUENCY Controls how often we display a notification to the terminal when a future incompat report is available. Possible values: always (default): Always display a notification when a command (e.g. cargo build) produces a future incompat report. never: Never display a notification. [cache] The [cache] table defines settings for cargo’s caches. Global caches When running cargo commands, Cargo will automatically track which files you are using within the global cache. Periodically, Cargo will delete files that have not been used for some period of time. It will delete files that have to be downloaded from the network if they have not been used in 3 months. Files that can be generated without network access will be deleted if they have not been used in 1 month. The automatic deletion of files only occurs when running commands that are already doing a significant amount of work, such as all of the build commands (cargo build, cargo test, cargo check, etc.), and cargo fetch. Automatic deletion is disabled if cargo is offline such as with --offline or --frozen to avoid deleting artifacts that may need to be used if you are offline for a long period of time.
Note: This tracking is currently only implemented for the global cache in Cargo’s home directory. This includes registry indexes and source files downloaded from registries and git dependencies. Support for tracking build artifacts is not yet implemented, and tracked in cargo#13136. Additionally, there is an unstable feature to support manually triggering cache cleaning, and to further customize the configuration options. See the Unstable chapter for more information. cache.auto-clean-frequency Type: string Default: "1 day" Environment: CARGO_CACHE_AUTO_CLEAN_FREQUENCY This option defines how often Cargo will automatically delete unused files in the global cache. This does not define how old the files must be; those thresholds are described above. It supports the following settings: "never" — Never deletes old files. "always" — Checks to delete old files every time Cargo runs. An integer followed by “seconds”, “minutes”, “hours”, “days”, “weeks”, or “months” — Checks to delete old files at most once in the given time frame. [http] The [http] table defines settings for HTTP behavior. This includes fetching crate dependencies and accessing remote git repositories. http.debug Type: boolean Default: false Environment: CARGO_HTTP_DEBUG If true, enables debugging of HTTP requests. The debug information can be seen by setting the CARGO_LOG=network=debug environment variable (or use network=trace for even more information). Be wary when posting logs from this output in a public location. The output may include headers with authentication tokens which you don’t want to leak! Be sure to review logs before posting them. http.proxy Type: string Default: none Environment: CARGO_HTTP_PROXY or HTTPS_PROXY or https_proxy or http_proxy Sets an HTTP and HTTPS proxy to use. The format is in libcurl format as in [protocol://]host[:port]. If not set, Cargo will also check the http.proxy setting in your global git configuration.
If none of those are set, the HTTPS_PROXY or https_proxy environment variables set the proxy for HTTPS requests, and http_proxy sets it for HTTP requests. http.timeout Type: integer Default: 30 Environment: CARGO_HTTP_TIMEOUT or HTTP_TIMEOUT Sets the timeout for each HTTP request, in seconds. http.cainfo Type: string (path) Default: none Environment: CARGO_HTTP_CAINFO Path to a Certificate Authority (CA) bundle file, used to verify TLS certificates. If not specified, Cargo attempts to use the system certificates. http.proxy-cainfo Type: string (path) Default: falls back to http.cainfo if not set Environment: CARGO_HTTP_PROXY_CAINFO Path to a Certificate Authority (CA) bundle file, used to verify proxy TLS certificates. http.check-revoke Type: boolean Default: true (Windows) false (all others) Environment: CARGO_HTTP_CHECK_REVOKE This determines whether or not TLS certificate revocation checks should be performed. This only works on Windows. http.ssl-version Type: string or min/max table Default: none Environment: CARGO_HTTP_SSL_VERSION This sets the minimum TLS version to use. It takes a string, with one of the possible values of "default" , "tlsv1" , "tlsv1.0" , "tlsv1.1" , "tlsv1.2" , or "tlsv1.3" . This may alternatively take a table with two keys, min and max , which each take a string value of the same kind that specifies the minimum and maximum range of TLS versions to use. The default is a minimum version of "tlsv1.0" and a max of the newest version supported on your platform, typically "tlsv1.3" . http.low-speed-limit Type: integer Default: 10 Environment: CARGO_HTTP_LOW_SPEED_LIMIT This setting controls timeout behavior for slow connections. If the average transfer speed in bytes per second is below the given value for http.timeout seconds (default 30 seconds), then the connection is considered too slow and Cargo will abort and retry. 
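Taken together, the [http] keys above might be combined in a config file like the following sketch. The proxy host and CA bundle path are hypothetical placeholders for illustration only, not values from this document:

```toml
# Illustrative [http] section (hypothetical proxy host and CA bundle path).
[http]
proxy = "proxy.example.com:8080"   # libcurl [protocol://]host[:port] format
timeout = 60                       # per-request timeout in seconds (default 30)
cainfo = "ca-bundle.crt"           # config-relative path to a CA bundle file
ssl-version = { min = "tlsv1.2", max = "tlsv1.3" }  # min/max table form
```

Each of these keys could equally be set through its corresponding CARGO_HTTP_* environment variable.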
http.multiplexing Type: boolean Default: true Environment: CARGO_HTTP_MULTIPLEXING When true , Cargo will attempt to use the HTTP2 protocol with multiplexing. This allows multiple requests to use the same connection, usually improving performance when fetching multiple files. If false , Cargo will use HTTP 1.1 without pipelining. http.user-agent Type: string Default: Cargo’s version Environment: CARGO_HTTP_USER_AGENT Specifies a custom user-agent header to use. The default if not specified is a string that includes Cargo’s version. [install] The [install] table defines defaults for the cargo install command. install.root Type: string (path) Default: Cargo’s home directory Environment: CARGO_INSTALL_ROOT Sets the path to the root directory for installing executables for cargo install . Executables go into a bin directory underneath the root. To track information of installed executables, some extra files, such as .crates.toml and .crates2.json , are also created under this root. The default if not specified is Cargo’s home directory (default .cargo in your home directory). Can be overridden with the --root command-line option. [net] The [net] table controls networking configuration. net.retry Type: integer Default: 3 Environment: CARGO_NET_RETRY Number of times to retry possibly spurious network errors. net.git-fetch-with-cli Type: boolean Default: false Environment: CARGO_NET_GIT_FETCH_WITH_CLI If this is true , then Cargo will use the git executable to fetch registry indexes and git dependencies. If false , then it uses a built-in git library. Setting this to true can be helpful if you have special authentication requirements that Cargo does not support. See Git Authentication for more information about setting up git authentication. net.offline Type: boolean Default: false Environment: CARGO_NET_OFFLINE If this is true , then Cargo will avoid accessing the network, and attempt to proceed with locally cached data. 
If false , Cargo will access the network as needed, and generate an error if it encounters a network error. Can be overridden with the --offline command-line option. net.ssh The [net.ssh] table contains settings for SSH connections. net.ssh.known-hosts Type: array of strings Default: see description Environment: not supported The known-hosts array contains a list of SSH host keys that should be accepted as valid when connecting to an SSH server (such as for SSH git dependencies). Each entry should be a string in a format similar to OpenSSH known_hosts files. Each string should start with one or more hostnames separated by commas, a space, the key type name, a space, and the base64-encoded key. For example: [net.ssh] known-hosts = [ "example.com ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFO4Q5T0UV0SQevair9PFwoxY9dl4pQl3u5phoqJH3cF" ] Cargo will attempt to load known hosts keys from common locations supported in OpenSSH, and will join those with any listed in a Cargo configuration file. If any matching entry has the correct key, the connection will be allowed. Cargo comes with the host keys for github.com built-in. If those ever change, you can add the new keys to the config or known_hosts file. See Git Authentication for more details. [patch] Just as you can override dependencies using [patch] in Cargo.toml , you can override them in the cargo configuration file to apply those patches to any affected build. The format is identical to the one used in Cargo.toml . Since .cargo/config.toml files are not usually checked into source control, you should prefer patching using Cargo.toml where possible to ensure that other developers can compile your crate in their own environments. Patching through cargo configuration files is generally only appropriate when the patch section is automatically generated by an external build tool. If a given dependency is patched both in a cargo configuration file and a Cargo.toml file, the patch in the configuration file is used. 
If multiple configuration files patch the same dependency, standard cargo configuration merging is used, which prefers the value defined closest to the current directory, with $HOME/.cargo/config.toml taking the lowest precedence. Relative path dependencies in such a [patch] section are resolved relative to the configuration file they appear in. [profile] The [profile] table can be used to globally change profile settings, and override settings specified in Cargo.toml . It has the same syntax and options as profiles specified in Cargo.toml . See the Profiles chapter for details about the options. [profile.<name>.build-override] Environment: CARGO_PROFILE_<name>_BUILD_OVERRIDE_<key> The build-override table overrides settings for build scripts, proc macros, and their dependencies. It has the same keys as a normal profile. See the overrides section for more details. [profile.<name>.package.<name>] Environment: not supported The package table overrides settings for specific packages. It has the same keys as a normal profile, minus the panic , lto , and rpath settings. See the overrides section for more details. profile.<name>.codegen-units Type: integer Default: See profile docs. Environment: CARGO_PROFILE_<name>_CODEGEN_UNITS See codegen-units . profile.<name>.debug Type: integer or boolean Default: See profile docs. Environment: CARGO_PROFILE_<name>_DEBUG See debug . profile.<name>.split-debuginfo Type: string Default: See profile docs. Environment: CARGO_PROFILE_<name>_SPLIT_DEBUGINFO See split-debuginfo . profile.<name>.debug-assertions Type: boolean Default: See profile docs. Environment: CARGO_PROFILE_<name>_DEBUG_ASSERTIONS See debug-assertions . profile.<name>.incremental Type: boolean Default: See profile docs. Environment: CARGO_PROFILE_<name>_INCREMENTAL See incremental . profile.<name>.lto Type: string or boolean Default: See profile docs. Environment: CARGO_PROFILE_<name>_LTO See lto . 
profile.<name>.overflow-checks Type: boolean Default: See profile docs. Environment: CARGO_PROFILE_<name>_OVERFLOW_CHECKS See overflow-checks. profile.<name>.opt-level Type: integer or string Default: See profile docs. Environment: CARGO_PROFILE_<name>_OPT_LEVEL See opt-level. profile.<name>.panic Type: string Default: See profile docs. Environment: CARGO_PROFILE_<name>_PANIC See panic. profile.<name>.rpath Type: boolean Default: See profile docs. Environment: CARGO_PROFILE_<name>_RPATH See rpath. profile.<name>.strip Type: string or boolean Default: See profile docs. Environment: CARGO_PROFILE_<name>_STRIP See strip. [resolver] The [resolver] table overrides dependency resolution behavior for local development (e.g. excludes cargo install). resolver.incompatible-rust-versions Type: string Default: See resolver docs Environment: CARGO_RESOLVER_INCOMPATIBLE_RUST_VERSIONS When resolving which version of a dependency to use, select how versions with incompatible package.rust-version values are treated. Values include: allow: treat rust-version-incompatible versions like any other version; fallback: only consider rust-version-incompatible versions if no other version matched. Can be overridden by: the --ignore-rust-version CLI option; setting the dependency’s version requirement higher than any version with a compatible rust-version; or specifying the version to cargo update with --precise. See the resolver chapter for more details. MSRV: allow is supported on any version; fallback is respected as of 1.84. [registries] The [registries] table is used for specifying additional registries. It consists of a sub-table for each named registry. registries.<name>.index Type: string (url) Default: none Environment: CARGO_REGISTRIES_<name>_INDEX Specifies the URL of the index for the registry. registries.<name>.token Type: string Default: none Environment: CARGO_REGISTRIES_<name>_TOKEN Specifies the authentication token for the given registry.
This value should only appear in the credentials file. This is used for registry commands like cargo publish that require authentication. Can be overridden with the --token command-line option. registries.<name>.credential-provider Type: string or array of path and arguments Default: none Environment: CARGO_REGISTRIES_<name>_CREDENTIAL_PROVIDER Specifies the credential provider for the given registry. If not set, the providers in registry.global-credential-providers will be used. If specified as a string, path and arguments will be split on spaces. For paths or arguments that contain spaces, use an array. If the value exists in the [credential-alias] table, the alias will be used. See Registry Authentication for more information. registries.crates-io.protocol Type: string Default: "sparse" Environment: CARGO_REGISTRIES_CRATES_IO_PROTOCOL Specifies the protocol used to access crates.io. Allowed values are git or sparse . git causes Cargo to clone the entire index of all packages ever published to crates.io from https://github.com/rust-lang/crates.io-index/ . This can have performance implications due to the size of the index. sparse is a newer protocol which uses HTTPS to download only what is necessary from https://index.crates.io/ . This can result in a significant performance improvement for resolving new dependencies in most situations. More information about registry protocols may be found in the Registries chapter . [registry] The [registry] table controls the default registry used when one is not specified. registry.index This value is no longer accepted and should not be used. registry.default Type: string Default: "crates-io" Environment: CARGO_REGISTRY_DEFAULT The name of the registry (from the registries table ) to use by default for registry commands like cargo publish . Can be overridden with the --registry command-line option. 
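As a sketch of how a named registry and registry.default fit together (the registry name and index URL here are hypothetical, for illustration only):

```toml
# Define an extra registry and make it the default for registry commands.
[registries.my-registry]                               # hypothetical name
index = "sparse+https://registry.example.com/index/"   # hypothetical index URL

[registry]
default = "my-registry"  # used by e.g. `cargo publish` unless --registry is given
```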
registry.credential-provider Type: string or array of path and arguments Default: none Environment: CARGO_REGISTRY_CREDENTIAL_PROVIDER Specifies the credential provider for crates.io. If not set, the providers in registry.global-credential-providers will be used. If specified as a string, path and arguments will be split on spaces. For paths or arguments that contain spaces, use an array. If the value exists in the [credential-alias] table, the alias will be used. See Registry Authentication for more information. registry.token Type: string Default: none Environment: CARGO_REGISTRY_TOKEN Specifies the authentication token for crates.io. This value should only appear in the credentials file. This is used for registry commands like cargo publish that require authentication. Can be overridden with the --token command-line option. registry.global-credential-providers Type: array Default: ["cargo:token"] Environment: CARGO_REGISTRY_GLOBAL_CREDENTIAL_PROVIDERS Specifies the list of global credential providers. If a credential provider is not set for a specific registry using registries.<name>.credential-provider, Cargo will use the credential providers in this list. Providers toward the end of the list have precedence. Path and arguments are split on spaces. If the path or arguments contain spaces, the credential provider should be defined in the [credential-alias] table and referenced here by its alias. See Registry Authentication for more information. [source] The [source] table defines the registry sources available. See Source Replacement for more information. It consists of a sub-table for each named source. A source should only define one kind (directory, registry, local-registry, or git). source.<name>.replace-with Type: string Default: none Environment: not supported If set, replace this source with the given named source or named registry.
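A common use of replace-with is vendoring, redirecting crates.io to a directory source. A minimal sketch, assuming a vendor directory such as one produced by cargo vendor:

```toml
# Redirect the crates.io source to a local directory of vendored crates.
[source.crates-io]
replace-with = "vendored-sources"

[source.vendored-sources]
directory = "vendor"   # config-relative path, as in the earlier example
```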
source.<name>.directory Type: string (path) Default: none Environment: not supported Sets the path to a directory to use as a directory source. source.<name>.registry Type: string (url) Default: none Environment: not supported Sets the URL to use for a registry source. source.<name>.local-registry Type: string (path) Default: none Environment: not supported Sets the path to a directory to use as a local registry source. source.<name>.git Type: string (url) Default: none Environment: not supported Sets the URL to use for a git repository source. source.<name>.branch Type: string Default: none Environment: not supported Sets the branch name to use for a git repository. If none of branch , tag , or rev is set, defaults to the master branch. source.<name>.tag Type: string Default: none Environment: not supported Sets the tag name to use for a git repository. If none of branch , tag , or rev is set, defaults to the master branch. source.<name>.rev Type: string Default: none Environment: not supported Sets the revision to use for a git repository. If none of branch , tag , or rev is set, defaults to the master branch. [target] The [target] table is used for specifying settings for specific platform targets. It consists of a sub-table which is either a platform triple or a cfg() expression . The given values will be used if the target platform matches either the <triple> value or the <cfg> expression. [target.thumbv7m-none-eabi] linker = "arm-none-eabi-gcc" runner = "my-emulator" rustflags = ["…", "…"] [target.'cfg(all(target_arch = "arm", target_os = "none"))'] runner = "my-arm-wrapper" rustflags = ["…", "…"] cfg values come from those built-in to the compiler (run rustc --print=cfg to view) and extra --cfg flags passed to rustc (such as those defined in RUSTFLAGS ). Do not try to match on debug_assertions , test , Cargo features like feature="foo" , or values set by build scripts . If using a target spec JSON file, the <triple> value is the filename stem. 
For example --target foo/bar.json would match [target.bar]. target.<triple>.ar This option is deprecated and unused. target.<triple>.linker Type: string (program path) Default: none Environment: CARGO_TARGET_<triple>_LINKER Specifies the linker which is passed to rustc (via -C linker) when building for the <triple>. By default, the linker is not overridden. target.<cfg>.linker This is similar to the target linker, but using a cfg() expression. If both a <triple> and <cfg> linker match, the <triple> will take precedence. It is an error if more than one <cfg> linker matches the current target. target.<triple>.runner Type: string or array of strings (program path with args) Default: none Environment: CARGO_TARGET_<triple>_RUNNER If a runner is provided, executables for the target <triple> will be executed by invoking the specified runner with the actual executable passed as an argument. This applies to cargo run, cargo test and cargo bench commands. By default, compiled executables are executed directly. target.<cfg>.runner This is similar to the target runner, but using a cfg() expression. If both a <triple> and <cfg> runner match, the <triple> will take precedence. It is an error if more than one <cfg> runner matches the current target. target.<triple>.rustflags Type: string or array of strings Default: none Environment: CARGO_TARGET_<triple>_RUSTFLAGS Passes a set of custom flags to the compiler for this <triple>. The value may be an array of strings or a space-separated string. See build.rustflags for more details on the different ways to specify extra flags. target.<cfg>.rustflags This is similar to the target rustflags, but using a cfg() expression. If several <cfg> and <triple> entries match the current target, the flags are joined together. target.<triple>.rustdocflags Type: string or array of strings Default: none Environment: CARGO_TARGET_<triple>_RUSTDOCFLAGS Passes a set of custom flags to the compiler for this <triple>.
The value may be an array of strings or a space-separated string. See build.rustdocflags for more details on the different ways to specify extra flags. target.<triple>.<links> The links sub-table provides a way to override a build script. When specified, the build script for the given links library will not be run, and the given values will be used instead. [target.x86_64-unknown-linux-gnu.foo] rustc-link-lib = ["foo"] rustc-link-search = ["/path/to/foo"] rustc-flags = "-L /some/path" rustc-cfg = ['key="value"'] rustc-env = {key = "value"} rustc-cdylib-link-arg = ["…"] metadata_key1 = "value" metadata_key2 = "value" [term] The [term] table controls terminal output and interaction. term.quiet Type: boolean Default: false Environment: CARGO_TERM_QUIET Controls whether or not log messages are displayed by Cargo. Specifying the --quiet flag will override and force quiet output. Specifying the --verbose flag will override and disable quiet output. term.verbose Type: boolean Default: false Environment: CARGO_TERM_VERBOSE Controls whether or not extra detailed messages are displayed by Cargo. Specifying the --quiet flag will override and disable verbose output. Specifying the --verbose flag will override and force verbose output. term.color Type: string Default: "auto" Environment: CARGO_TERM_COLOR Controls whether or not colored output is used in the terminal. Possible values: auto (default): Automatically detect if color support is available on the terminal. always: Always display colors. never: Never display colors. Can be overridden with the --color command-line option. term.hyperlinks Type: bool Default: auto-detect Environment: CARGO_TERM_HYPERLINKS Controls whether or not hyperlinks are used in the terminal. term.unicode Type: bool Default: auto-detect Environment: CARGO_TERM_UNICODE Controls whether output can be rendered using non-ASCII unicode characters.
term.progress.when
Type: string
Default: "auto"
Environment: CARGO_TERM_PROGRESS_WHEN
Controls whether or not the progress bar is shown in the terminal. Possible values:
auto (default): Intelligently guess whether to show the progress bar.
always: Always show the progress bar.
never: Never show the progress bar.

term.progress.width
Type: integer
Default: none
Environment: CARGO_TERM_PROGRESS_WIDTH
Sets the width for the progress bar.

term.progress.term-integration
Type: bool
Default: auto-detect
Environment: CARGO_TERM_PROGRESS_TERM_INTEGRATION
Reports progress to the terminal emulator for display in places like the task bar.
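As a sketch, the [term] settings above could be combined in .cargo/config.toml like this; the chosen values are illustrative examples, not recommendations:

```toml
[term]
verbose = false        # same as not passing --verbose
color = "always"       # equivalent to `--color always`
progress.when = "auto" # let Cargo guess whether to draw the progress bar
progress.width = 80    # only consulted when the progress bar is shown
```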
https://uk.linkedin.com/company/optalysys?trk=organization_guest_main-feed-card_feed-actor-name
Optalysys | LinkedIn
Optalysys
Computer Hardware Manufacturing · Leeds, England · 3,604 followers
Enabling a world of secure computing.

About us
🔒 Enabling a world of secure computing. Harnessing the power of light, we push the boundaries of what is possible in data security by bringing speed to always-encrypted data technology. Through optical computing, we accelerate Fully Homomorphic Encryption (FHE) beyond the capabilities of electrical computing, making secure data sharing and collaboration a reality for everyone, everywhere.
Website: http://www.optalysys.com
Industry: Computer Hardware Manufacturing
Company size: 51-200 employees
Headquarters: Leeds, England
Type: Privately Held
Founded: 2013
Specialties: deep learning, hardware, processors, optical, Quantum Resilient Encryption, FHE, Photonic computing, Full Homomorphic Encryption, and Cryptography
Locations: New Station Street Platform, Leeds, England LS1 4JB, GB (primary); Victoria Street, Runway East, Bristol, England BS1 6PU, GB
Employees at Optalysys: Peter White, Phil Cheetham, Tim Wilkinson, Tim Whitfield

Updates

Optalysys · 2w
❄️ Season's greetings from the team here at Optalysys ❄️

Optalysys · 1mo (edited)
💂🏼‍♀️ Greetings from London: we're here at Economic Crime Congress 2025 with 600+ leaders of government, industry, and civil society focusing on real-world collaboration... 📍 Find us on stand 6 - come for the swag bag, stay for the privacy-enhancing technologies chat. And head to the Connect Zone at 1.20pm for Nic Lawrence's talk on that exact topic!
UK Finance | #ECC2025 | QEII Centre | Joseph Wilson | Robbyn Snow

Nic Lawrence (reposted by Optalysys) · 1mo
Tomorrow I will be talking at the Economic Crime Congress in London - outlining how Optalysys technology can help businesses and public sector groups collaborate securely. The setting at the QE2 centre, just round the corner from the Palace of Westminster, is spectacular! Together with Robbyn Snow and Joseph Wilson we will also have a stand and look forward to some good conversations! #securecomputing #fhe

Optalysys · 1mo
📍 Next stop: tomorrow's UK Finance Economic Crime Congress 2025 at QEII Centre 💷 Find us on stand 6 (floor 2) all day, and then in the Connect Zone at 1.20pm, where VP of Product, Nic Lawrence, will be giving a lightning talk on ‘Privacy-enhancing technologies (PETs) for real-world intelligence sharing’. The team in attendance is Joseph Wilson, Nic Lawrence and Robbyn Snow. Come and chat to us about your privacy problems 💬 #ecc2025 #economiccrime #PETs #privacy

Optalysys · 1mo
🏆 We’ve been named as a TechRound Top 100 Startup in UK & Europe! Recognition in the TechRound100 2025 highlights our groundbreaking approach to secure, scalable computing across the UK and Europe 🌟 and we are incredibly proud to be included.
Read the full announcement: https://lnkd.in/ek3rBX7x
Read the full entry: https://lnkd.in/eJkUPFh6
View the full list: https://lnkd.in/eZf4i4ru
Thank you to TechRound and esteemed judges Mary Pedler (INPUT Global), Andrey Kuznetsov (Haqq Network), SAMUEL OLAMIDE (EdgeTrace), Furkat Kasimov, Andrei Komissarov (OpenWay and @DEVAR), Ayoola Samagbeyi, and David Soffer (TechRound).
#TechRound100 #tech #leaders
Optalysys Named Among TechRound's Top 100 Startups in UK & Europe - Optalysys (https://optalysys.com)

Optalysys · 1mo
This time last week we were wrapping up an incredible week in Buenos Aires for DevConnect ARG 2025 🇦🇷 It was an amazing opportunity to connect with builders, innovators, friends and new faces from across the Ethereum ecosystem. Our key takeaways:
- Privacy has landed and it’s here to stay
- Privacy & scalability are the bridge to enterprise adoption
- Crypto 🤝 AI: it's happening!
- That Buenos Aires was the perfect backdrop
Read on for our full roundup ⬇️ DevConnect ARG 🇦🇷 2025 roundup: Privacy + scalability = Ethereum for the masses (Optalysys on LinkedIn)

Optalysys · 1mo (edited)
Happy Thanksgiving to our friends in North America! 🍂 🦃 💛 🧡 🤎 🙏 Thank you for all you do to help extend Optalysys' reach to the world 🌎 #gratitude #thanksgiving #colleagues

Optalysys · 1mo
💷 What does the 2025 Autumn Budget mean for UK businesses, investment, and growth? Stuart Mellis, CFO of Optalysys, comments on the future of Business Asset Disposal Relief 🔴 #ukbudget #leak #announcement

FieldHouse Associates · 1mo
🚰 The mistaken publication of the Chancellor's plans an hour before her speech today solidified this Budget's position as the leakiest in recent memory... But after a tumultuous road to the announcement, what does the 2025 Autumn Budget mean for UK businesses, investment, and growth? A piece by Patricia Cullen in Entrepreneur UK sets out the initial response from business leaders and investors to the policies announced 🗣️
📌 Martin Jacob, Professor of Accounting and Control at IESE Business School, explores the increasing complexity of the UK tax system.
📌 Timothy Mills, Managing Partner at ACF Investors, delves into the positive decision to raise EIS limits.
📌 Duncan Johnson, CEO of Northern Gritstone, and Stuart Mellis, CFO of Optalysys, comment on the future of Business Asset Disposal Relief.
📌 Tanya Suarez Ph.D., CEO of IoT Tribe, discusses the UK's competitive position in the global tech sector.
📌 And John Dunlop, Head of Tax at Marriott Harrison, explores the proposed reduction in EOT tax relief.
Read the full article via the link: https://lnkd.in/eB7qDcXz (Budget Meets Backlash | Entrepreneur, entrepreneur.com)

Optalysys · 1mo (edited)
🟨 We just popped in to Zama's CoFHE Shop at #devconnect, in preparation for our session tomorrow. Come for the caffeine, stay for the talks ☕
💬 Hardware Accelerated FHE in Practice: Talk + Technical Workshop 💬
Featuring Nic Lawrence (VP Product, Optalysys) and Joseph Wilson (Head of Strategic Innovation, Optalysys).
📍 Rosedal Café, Godoy Cruz 2905
⏰ TUES 18th Nov | 14:00 - 15:30
🎟️ https://luma.com/9vfkncep
In part one, join Nic as he shares...
- a detailed look at the first enterprise-grade FHE acceleration server
- an analysis of some DeFi use cases and how SW tools can help developers and architects to predict system performance and hardware requirements (tools are available to try)
- a look ahead to an exciting photonic future for confidential computing
In part two, Joe will lead a technical workshop, including...
- showcasing the Optalysys testnet, built on Zama’s fhEVM architecture + Optalysys’ LightLocker Node FHE co-processor
- an encrypted online game, using a dApp running on the Optalysys testnet.
Everyone is invited to join us for this interactive session. The team looks forward to sharing ideas and discussing real-world deployment of confidential blockchains over coffee and snacks.
🟡 About the Zama CoFHE Shop: Your favorite homebase is back for Ethereum Foundation's DevConnect ARG week.
The Zama CoFHE Shop is open from Monday, November 17th to Friday, November 21st, 7.30AM to 8.30PM (with evening afterwork events). Come enjoy good coffee, attend cool meetups and meet the Zama team. #FHE #coffee #talktalktalk #hardware

Optalysys · 1mo
📍 DevConnect is a go! 👋 from Marcella Arthur, Aditi Vyakarnam, Joseph Wilson, Nic Lawrence and Robbyn Snow. 🆕 Track our trip on our new #telegram (🔎 Optalysys): @TeamOptalysys.

Funding: Optalysys, 4 total rounds; last round Series A, Aug 17, 2023, US$27.5M. Investors: Lingotto, imec.xpand + 2 other investors (see Crunchbase).
https://git-scm.com/book/pl/v2/Git-na-serwerze-GitLab
Git - GitLab
2nd Edition

4.8 Git on the Server - GitLab

GitWeb is pretty simplistic though. If you’re looking for a more modern, fully featured Git server, there are several open source solutions out there that you can install instead. As GitLab is one of the more popular ones, we’ll cover installing and using it as an example. This is a bit more complex than the GitWeb option and likely requires more maintenance, but it is a much more fully featured option.

Installation

GitLab is a database-backed web application, so its installation is a bit more involved than some other git servers.
Fortunately, this process is very well-documented and supported. There are a few methods you can pursue to install GitLab. To get something up and running quickly, you can download a virtual machine image or a one-click installer from https://bitnami.com/stack/gitlab , and tweak the configuration to match your particular environment. One nice touch Bitnami has included is the login screen (accessed by typing alt-→); it tells you the IP address and default username and password for the installed GitLab.

Figure 50. The Bitnami GitLab virtual machine login screen.

For anything else, follow the guidance in the GitLab Community Edition readme, which can be found at https://gitlab.com/gitlab-org/gitlab-ce/tree/master . There you’ll find assistance for installing GitLab using Chef recipes, a virtual machine on Digital Ocean, and RPM and DEB packages (which, as of this writing, are in beta). There’s also “unofficial” guidance on getting GitLab running with non-standard operating systems and databases, a fully-manual installation script, and many other topics.

Administration

GitLab’s administration interface is accessed over the web. Simply point your browser to the hostname or IP address where GitLab is installed, and log in as an admin user. The default username is admin@local.host , and the default password is 5iveL!fe (which you will be prompted to change as soon as you enter it). Once logged in, click the “Admin area” icon in the menu at the top right.

Figure 51. The “Admin area” item in the GitLab menu.

Users

Users in GitLab are accounts that correspond to people. User accounts don’t have a lot of complexity; mainly it’s a collection of personal information attached to login data. Each user account comes with a namespace , which is a logical grouping of projects that belong to that user. If the user jane had a project named project , that project’s url would be http://server/jane/project .

Figure 52. The GitLab user administration screen.
Removing a user can be done in two ways. “Blocking” a user prevents them from logging into the GitLab instance, but all of the data under that user’s namespace will be preserved, and commits signed with that user’s email address will still link back to their profile. “Destroying” a user, on the other hand, completely removes them from the database and filesystem. All projects and data in their namespace are removed, and any groups they own will also be removed. This is obviously a much more permanent and destructive action, and its uses are rare.

Groups

A GitLab group is an assemblage of projects, along with data about how users can access those projects. Each group has a project namespace (the same way that users do), so if the group training has a project materials , its url would be http://server/training/materials .

Figure 53. The GitLab group administration screen.

Each group is associated with a number of users, each of which has a level of permissions for the group’s projects and the group itself. These range from “Guest” (issues and chat only) to “Owner” (full control of the group, its members, and its projects). The types of permissions are too numerous to list here, but GitLab has a helpful link on the administration screen.

Projects

A GitLab project roughly corresponds to a single git repository. Every project belongs to a single namespace, either a user or a group. If the project belongs to a user, the owner of the project has direct control over who has access to the project; if the project belongs to a group, the group’s user-level permissions will also take effect. Every project also has a visibility level, which controls who has read access to that project’s pages and repository. If a project is Private , the project’s owner must explicitly grant access to specific users. An Internal project is visible to any logged-in user, and a Public project is visible to anyone.
Note that this controls both git “fetch” access as well as access to the web UI for that project.

Hooks

GitLab includes support for hooks, at both a project and system level. For either of these, the GitLab server will perform an HTTP POST with some descriptive JSON whenever relevant events occur. This is a great way to connect your git repositories and GitLab instance to the rest of your development automation, such as CI servers, chat rooms, or deployment tools.

Basic Usage

The first thing you’ll want to do with GitLab is create a new project. This is accomplished by clicking the “+” icon on the toolbar. You’ll be asked for the project’s name, which namespace it should belong to, and what its visibility level should be. Most of what you specify here isn’t permanent, and can be re-adjusted later through the settings interface. Click “Create Project”, and you’re done.

Once the project exists, you’ll probably want to connect it with a local Git repository. Each project is accessible over HTTPS or SSH, either of which can be used to configure a Git remote. The URLs are visible at the top of the project’s home page. For an existing local repository, this command will create a remote named gitlab to the hosted location:

$ git remote add gitlab https://server/namespace/project.git

If you don’t have a local copy of the repository, you can simply do this:

$ git clone https://server/namespace/project.git

The web UI provides access to several useful views of the repository itself. Each project’s home page shows recent activity, and links along the top will lead you to views of the project’s files and commit log.

Working Together

The simplest way of working together on a GitLab project is by giving another user direct push access to the git repository. You can add a user to a project by going to the “Members” section of that project’s settings, and associating the new user with an access level (the different access levels are discussed a bit in Groups ).
By giving a user an access level of “Developer” or above, that user can push commits and branches directly to the repository with impunity. Another, more decoupled way of collaboration is by using merge requests. This feature enables any user that can see a project to contribute to it in a controlled way. Users with direct access can simply create a branch, push commits to it, and open a merge request from their branch back into master or any other branch. Users who don’t have push permissions for a repository can “fork” it (create their own copy), push commits to that copy, and open a merge request from their fork back to the main project. This model allows the owner to be in full control of what goes into the repository and when, while allowing contributions from untrusted users.

Merge requests and issues are the main units of long-lived discussion in GitLab. Each merge request allows a line-by-line discussion of the proposed change (which supports a lightweight kind of code review), as well as a general overall discussion thread. Both can be assigned to users, or organized into milestones.

This section is focused mainly on the Git-related features of GitLab, but as a mature project, it provides many other features to help your team work together, such as project wikis and system maintenance tools. One benefit to GitLab is that, once the server is set up and running, you’ll rarely need to tweak a configuration file or access the server via SSH; most administration and general usage can be accomplished through the in-browser interface.
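The branch-then-merge-request flow described above can be sketched entirely locally. In this sketch a bare repository stands in for the GitLab server, and the paths, names, and branch name are illustrative assumptions, not anything GitLab prescribes:

```shell
# A local bare repo plays the role of the GitLab-hosted project (assumption).
git init --bare /tmp/gitlab-demo.git

# Clone it, just as you would clone https://server/namespace/project.git
git clone /tmp/gitlab-demo.git /tmp/work
cd /tmp/work
git config user.name "Demo" && git config user.email "demo@example.com"

# First commit on the default branch
echo "hello" > README.md
git add README.md && git commit -m "initial commit"
git push origin HEAD

# A contributor with Developer access pushes a topic branch...
git checkout -b fix-typo
echo "fixed" >> README.md
git commit -am "fix a typo"
git push origin fix-typo
# ...then opens a merge request from fix-typo in the GitLab web UI.
```

Contributors without push access would instead fork the project in the UI and push the topic branch to their own copy; the git commands are the same, only the remote URL differs.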
https://doc.rust-lang.org/cargo/commands/cargo-publish.html#cargo-publish1
cargo publish - The Cargo Book

cargo-publish(1)

NAME
cargo-publish — Upload a package to the registry

SYNOPSIS
cargo publish [options]

DESCRIPTION
This command will create a distributable, compressed .crate file with the source code of the package in the current directory and upload it to a registry. The default registry is https://crates.io . This performs the following steps:
1. Performs a few checks, including checking the package.publish key in the manifest for restrictions on which registries you are allowed to publish to.
2. Creates a .crate file by following the steps in cargo-package(1).
3. Uploads the crate to the registry. The server will perform additional checks on the crate.
4. The client will poll waiting for the package to appear in the index, and may time out. In that case, you will need to check for completion manually. This timeout does not affect the upload.

This command requires you to be authenticated, either with cargo-login(1) or via the environment variables corresponding to the registry.token and registries.<name>.token config fields. See the reference for more details about packaging and publishing.

OPTIONS

Publish Options

--dry-run
Perform all checks without uploading.

--no-verify
Don’t verify the contents by building them.

--allow-dirty
Allow working directories with uncommitted VCS changes to be packaged.

--index index
The URL of the registry index to use.

--registry registry
Name of the registry to publish to. Registry names are defined in Cargo config files. If not specified, and there is a package.publish field in Cargo.toml with a single registry, then it will publish to that registry. Otherwise it will use the default registry, which is defined by the registry.default config key, which defaults to crates-io.
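The package.publish restriction checked in the first step is set in Cargo.toml. A minimal sketch, where the package and registry names are illustrative assumptions (the registry name must be defined in your Cargo config):

```toml
[package]
name = "my-crate"          # hypothetical package
version = "0.1.0"
edition = "2021"
# Only allow publishing to this private registry;
# use `publish = false` to forbid publishing entirely.
publish = ["my-registry"]
```

With this manifest, `cargo publish` with no --registry flag would target my-registry, and publishing to crates.io would be rejected by the pre-upload checks.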
Package Selection

By default, when no package selection options are given, the packages selected depend on the selected manifest file (based on the current working directory if --manifest-path is not given). If the manifest is the root of a workspace then the workspace's default members are selected, otherwise only the package defined by the manifest will be selected. The default members of a workspace can be set explicitly with the workspace.default-members key in the root manifest. If this is not set, a virtual workspace will include all workspace members (equivalent to passing --workspace), and a non-virtual workspace will include only the root crate itself.

-p spec…
--package spec…
Publish only the specified packages. See cargo-pkgid(1) for the SPEC format. This flag may be specified multiple times and supports common Unix glob patterns like *, ? and []. However, to avoid your shell accidentally expanding glob patterns before Cargo handles them, you must use single quotes or double quotes around each pattern.

--workspace
Publish all members in the workspace.

--all
Deprecated alias for --workspace.

--exclude SPEC…
Exclude the specified packages. Must be used in conjunction with the --workspace flag. This flag may be specified multiple times and supports common Unix glob patterns like *, ? and []. However, to avoid your shell accidentally expanding glob patterns before Cargo handles them, you must use single quotes or double quotes around each pattern.

Compilation Options

--target triple
Publish for the specified target architecture. The flag may be specified multiple times. The default is the host architecture. The general format of the triple is <arch><sub>-<vendor>-<sys>-<abi>. Possible values:
- Any supported target in rustc --print target-list.
- "host-tuple", which will internally be substituted by the host’s target.
This can be particularly useful if you’re cross-compiling some crates, and don’t want to specify your host’s machine as a target (for instance, an xtask in a shared project that may be worked on by many hosts).
- A path to a custom target specification. See Custom Target Lookup Path for more information.
This may also be specified with the build.target config value. Note that specifying this flag makes Cargo run in a different mode where the target artifacts are placed in a separate directory. See the build cache documentation for more details.

--target-dir directory
Directory for all generated artifacts and intermediate files. May also be specified with the CARGO_TARGET_DIR environment variable, or the build.target-dir config value. Defaults to target in the root of the workspace.

Feature Selection

The feature flags allow you to control which features are enabled. When no feature options are given, the default feature is activated for every selected package. See the features documentation for more details.

-F features
--features features
Space or comma separated list of features to activate. Features of workspace members may be enabled with package-name/feature-name syntax. This flag may be specified multiple times, which enables all specified features.

--all-features
Activate all available features of all selected packages.

--no-default-features
Do not activate the default feature of the selected packages.

Manifest Options

--manifest-path path
Path to the Cargo.toml file. By default, Cargo searches for the Cargo.toml file in the current directory or any parent directory.

--locked
Asserts that the exact same dependencies and versions are used as when the existing Cargo.lock file was originally generated. Cargo will exit with an error when either of the following scenarios arises:
- The lock file is missing.
- Cargo attempted to change the lock file due to a different dependency resolution.
It may be used in environments where deterministic builds are desired, such as in CI pipelines.

--offline
Prevents Cargo from accessing the network for any reason. Without this flag, Cargo will stop with an error if it needs to access the network and the network is not available. With this flag, Cargo will attempt to proceed without the network if possible. Beware that this may result in different dependency resolution than online mode. Cargo will restrict itself to crates that are downloaded locally, even if there might be a newer version as indicated in the local copy of the index. See the cargo-fetch(1) command to download dependencies before going offline. May also be specified with the net.offline config value.

--frozen
Equivalent to specifying both --locked and --offline.

--lockfile-path PATH
Changes the path of the lockfile from the default (<workspace_root>/Cargo.lock) to PATH. PATH must end with Cargo.lock (e.g. --lockfile-path /tmp/temporary-lockfile/Cargo.lock). Note that providing --lockfile-path will ignore any existing lockfile at the default path, and instead will either use the lockfile from PATH, or write a new lockfile into the provided PATH if it doesn’t exist. This flag can be used to run most commands in read-only directories, writing the lockfile into the provided PATH. This option is only available on the nightly channel and requires the -Z unstable-options flag to enable (see #14421).

Miscellaneous Options

-j N
--jobs N
Number of parallel jobs to run. May also be specified with the build.jobs config value. Defaults to the number of logical CPUs. If negative, it sets the maximum number of parallel jobs to the number of logical CPUs plus the provided value. If a string default is provided, it sets the value back to defaults. Should not be 0.

--keep-going
Build as many crates in the dependency graph as possible, rather than aborting the build on the first one that fails to build.
For example, if the current package depends on dependencies fails and works, one of which fails to build, cargo publish -j1 may or may not build the one that succeeds (depending on which of the two builds Cargo picked to run first), whereas cargo publish -j1 --keep-going would definitely run both builds, even if the one run first fails.

Display Options

-v
--verbose
Use verbose output. May be specified twice for “very verbose” output which includes extra output such as dependency warnings and build script output. May also be specified with the term.verbose config value.

-q
--quiet
Do not print cargo log messages. May also be specified with the term.quiet config value.

--color when
Control when colored output is used. Valid values:

- auto (default): Automatically detect if color support is available on the terminal.
- always: Always display colors.
- never: Never display colors.

May also be specified with the term.color config value.

Common Options

+toolchain
If Cargo has been installed with rustup, and the first argument to cargo begins with +, it will be interpreted as a rustup toolchain name (such as +stable or +nightly). See the rustup documentation for more information about how toolchain overrides work.

--config KEY=VALUE or PATH
Overrides a Cargo configuration value. The argument should be in TOML syntax of KEY=VALUE, or provided as a path to an extra configuration file. This flag may be specified multiple times. See the command-line overrides section for more information.

-C PATH
Changes the current working directory before executing any specified operations. This affects things like where cargo looks by default for the project manifest (Cargo.toml), as well as the directories searched for discovering .cargo/config.toml, for example. This option must appear before the command name, for example cargo -C path/to/my-project build. This option is only available on the nightly channel and requires the -Z unstable-options flag to enable (see #10098).
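The term.* config values mentioned above can be collected in a config file instead of being passed per invocation; a minimal sketch (the values shown are illustrative defaults, not recommendations):

```toml
# Config-file equivalents of the display options above.
[term]
verbose = false  # -v / --verbose
quiet = false    # -q / --quiet
color = "auto"   # --color auto|always|never
```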
-h
--help
Prints help information.

-Z flag
Unstable (nightly-only) flags to Cargo. Run cargo -Z help for details.

ENVIRONMENT

See the reference for details on environment variables that Cargo reads.

EXIT STATUS

0: Cargo succeeded.
101: Cargo failed to complete.

EXAMPLES

Publish the current package:

cargo publish

SEE ALSO

cargo(1), cargo-package(1), cargo-login(1)
Configuration - The Cargo Book

This document explains how Cargo’s configuration system works, as well as the available configuration keys. For configuration of a package through its manifest, see the manifest format.

Hierarchical structure

Cargo allows local configuration for a particular package as well as global configuration. It looks for configuration files in the current directory and all parent directories. If, for example, Cargo were invoked in /projects/foo/bar/baz, then the following configuration files would be probed for and unified in this order:

/projects/foo/bar/baz/.cargo/config.toml
/projects/foo/bar/.cargo/config.toml
/projects/foo/.cargo/config.toml
/projects/.cargo/config.toml
/.cargo/config.toml
$CARGO_HOME/config.toml, which defaults to:
  Windows: %USERPROFILE%\.cargo\config.toml
  Unix: $HOME/.cargo/config.toml

With this structure, you can specify configuration per-package, and even possibly check it into version control. You can also specify personal defaults with a configuration file in your home directory.

If a key is specified in multiple config files, the values will get merged together. For numbers, strings, and booleans, the value in the deeper config directory takes precedence over ancestor directories, with the home directory having the lowest priority. Arrays will be joined together, with higher-precedence items placed later in the merged array.

At present, when being invoked from a workspace, Cargo does not read config files from crates within the workspace. i.e.
if a workspace has two crates in it, named /projects/foo/bar/baz/mylib and /projects/foo/bar/baz/mybin, and there are Cargo configs at /projects/foo/bar/baz/mylib/.cargo/config.toml and /projects/foo/bar/baz/mybin/.cargo/config.toml, Cargo does not read those configuration files if it is invoked from the workspace root (/projects/foo/bar/baz/).

Note: Cargo also reads config files without the .toml extension, such as .cargo/config. Support for the .toml extension was added in version 1.39 and is the preferred form. If both files exist, Cargo will use the file without the extension.

Configuration format

Configuration files are written in the TOML format (like the manifest), with simple key-value pairs inside of sections (tables). The following is a quick overview of all settings, with detailed descriptions found below.

```toml
paths = ["/path/to/override"] # path dependency overrides

[alias]     # command aliases
b = "build"
c = "check"
t = "test"
r = "run"
rr = "run --release"
recursive_example = "rr --example recursions"
space_example = ["run", "--release", "--", "\"command list\""]

[build]
jobs = 1                      # number of parallel jobs, defaults to # of CPUs
rustc = "rustc"               # the rust compiler tool
rustc-wrapper = "…"           # run this wrapper instead of `rustc`
rustc-workspace-wrapper = "…" # run this wrapper instead of `rustc` for workspace members
rustdoc = "rustdoc"           # the doc generator tool
target = "triple"             # build for the target triple (ignored by `cargo install`)
target-dir = "target"         # path of where to place generated artifacts
build-dir = "target"          # path of where to place intermediate build artifacts
rustflags = ["…", "…"]        # custom flags to pass to all compiler invocations
rustdocflags = ["…", "…"]     # custom flags to pass to rustdoc
incremental = true            # whether or not to enable incremental compilation
dep-info-basedir = "…"        # path for the base directory for targets in depfiles

[credential-alias]
# Provides a way to define aliases for credential providers.
my-alias = ["/usr/bin/cargo-credential-example", "--argument", "value", "--flag"]

[doc]
browser = "chromium" # browser to use with `cargo doc --open`,
                     # overrides the `BROWSER` environment variable

[env]
# Set ENV_VAR_NAME=value for any process run by Cargo
ENV_VAR_NAME = "value"
# Set even if already present in environment
ENV_VAR_NAME_2 = { value = "value", force = true }
# `value` is relative to the parent of `.cargo/config.toml`, env var will be the full absolute path
ENV_VAR_NAME_3 = { value = "relative/path", relative = true }

[future-incompat-report]
frequency = 'always' # when to display a notification about a future incompat report

[cache]
auto-clean-frequency = "1 day" # How often to perform automatic cache cleaning

[cargo-new]
vcs = "none" # VCS to use ('git', 'hg', 'pijul', 'fossil', 'none')

[http]
debug = false               # HTTP debugging
proxy = "host:port"         # HTTP proxy in libcurl format
ssl-version = "tlsv1.3"     # TLS version to use
ssl-version.max = "tlsv1.3" # maximum TLS version
ssl-version.min = "tlsv1.1" # minimum TLS version
timeout = 30                # timeout for each HTTP request, in seconds
low-speed-limit = 10        # network timeout threshold (bytes/sec)
cainfo = "cert.pem"         # path to Certificate Authority (CA) bundle
proxy-cainfo = "cert.pem"   # path to proxy Certificate Authority (CA) bundle
check-revoke = true         # check for SSL certificate revocation
multiplexing = true         # HTTP/2 multiplexing
user-agent = "…"            # the user-agent header

[install]
root = "/some/path" # `cargo install` destination directory

[net]
retry = 3                 # network retries
git-fetch-with-cli = true # use the `git` executable for git operations
offline = true            # do not access the network

[net.ssh]
known-hosts = ["..."] # known SSH host keys

[patch.<registry>]
# Same keys as for [patch] in Cargo.toml

[profile.<name>]         # Modify profile settings via config.
inherits = "dev"         # Inherits settings from [profile.dev].
opt-level = 0            # Optimization level.
debug = true             # Include debug info.
split-debuginfo = '...'  # Debug info splitting behavior.
strip = "none"           # Removes symbols or debuginfo.
debug-assertions = true  # Enables debug assertions.
overflow-checks = true   # Enables runtime integer overflow checks.
lto = false              # Sets link-time optimization.
panic = 'unwind'         # The panic strategy.
incremental = true       # Incremental compilation.
codegen-units = 16       # Number of code generation units.
rpath = false            # Sets the rpath linking option.

[profile.<name>.build-override]  # Overrides build-script settings.
# Same keys for a normal profile.

[profile.<name>.package.<name>]  # Override profile for a package.
# Same keys for a normal profile (minus `panic`, `lto`, and `rpath`).

[resolver]
incompatible-rust-versions = "allow" # Specifies how resolver reacts to these

[registries.<name>]  # registries other than crates.io
index = "…"          # URL of the registry index
token = "…"          # authentication token for the registry
credential-provider = "cargo:token" # The credential provider for this registry.

[registries.crates-io]
protocol = "sparse"  # The protocol to use to access crates.io.

[registry]
default = "…"        # name of the default registry
token = "…"          # authentication token for crates.io
credential-provider = "cargo:token"           # The credential provider for crates.io.
global-credential-providers = ["cargo:token"] # The credential providers to use by default.

[source.<name>]      # source definition and replacement
replace-with = "…"   # replace this source with the given named source
directory = "…"      # path to a directory source
registry = "…"       # URL to a registry source
local-registry = "…" # path to a local registry source
git = "…"            # URL of a git repository source
branch = "…"         # branch name for the git repository
tag = "…"            # tag name for the git repository
rev = "…"            # revision for the git repository

[target.<triple>]
linker = "…"              # linker to use
runner = "…"              # wrapper to run executables
rustflags = ["…", "…"]    # custom flags for `rustc`
rustdocflags = ["…", "…"] # custom flags for `rustdoc`

[target.<cfg>]
linker = "…"            # linker to use
runner = "…"            # wrapper to run executables
rustflags = ["…", "…"]  # custom flags for `rustc`

[target.<triple>.<links>] # `links` build script override
rustc-link-lib = ["foo"]
rustc-link-search = ["/path/to/foo"]
rustc-flags = "-L /some/path"
rustc-cfg = ['key="value"']
rustc-env = {key = "value"}
rustc-cdylib-link-arg = ["…"]
metadata_key1 = "value"
metadata_key2 = "value"

[term]
quiet = false          # whether cargo output is quiet
verbose = false        # whether cargo provides verbose output
color = 'auto'         # whether cargo colorizes output
hyperlinks = true      # whether cargo inserts links into output
unicode = true         # whether cargo can render output using non-ASCII unicode characters
progress.when = 'auto' # whether cargo shows progress bar
progress.width = 80    # width of progress bar
progress.term-integration = true # whether cargo reports progress to terminal emulator
```

Environment variables

Cargo can also be configured through environment variables in addition to the TOML configuration files. For each configuration key of the form foo.bar the environment variable CARGO_FOO_BAR can also be used to define the value. Keys are converted to uppercase, and dots and dashes are converted to underscores.
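As a hedged illustration of that name mapping (net.retry is a real key documented below; the value is arbitrary):

```toml
# Setting this key in a config file ...
[net]
retry = 5
# ... is equivalent to exporting CARGO_NET_RETRY=5 in the environment
# (the environment variable takes precedence over config files).
```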
For example, the target.x86_64-unknown-linux-gnu.runner key can also be defined by the CARGO_TARGET_X86_64_UNKNOWN_LINUX_GNU_RUNNER environment variable.

Environment variables take precedence over TOML configuration files. Currently only integer, boolean, string, and some array values can be defined by environment variables. Descriptions below indicate which keys support environment variables; otherwise they are not supported due to technical issues.

In addition to the system above, Cargo recognizes a few other specific environment variables.

Command-line overrides

Cargo also accepts arbitrary configuration overrides through the --config command-line option. The argument should be in TOML syntax of KEY=VALUE or provided as a path to an extra configuration file:

```shell
# With `KEY=VALUE` in TOML syntax
cargo --config net.git-fetch-with-cli=true fetch

# With a path to a configuration file
cargo --config ./path/to/my/extra-config.toml fetch
```

The --config option may be specified multiple times, in which case the values are merged in left-to-right order, using the same merging logic that is used when multiple configuration files apply. Configuration values specified this way take precedence over environment variables, which take precedence over configuration files.

When the --config option is provided as a path to an extra configuration file, the configuration loaded this way follows the same precedence rules as other options specified directly with --config.

Some examples of what it looks like using Bourne shell syntax:

```shell
# Most shells will require escaping.
cargo --config http.proxy=\"http://example.com\" …

# Spaces may be used.
cargo --config "net.git-fetch-with-cli = true" …

# TOML array example. Single quotes make it easier to read and write.
cargo --config 'build.rustdocflags = ["--html-in-header", "header.html"]' …

# Example of a complex TOML key.
cargo --config "target.'cfg(all(target_arch = \"arm\", target_os = \"none\"))'.runner = 'my-runner'" …

# Example of overriding a profile setting.
cargo --config profile.dev.package.image.opt-level=3 …
```

Config-relative paths

Paths in config files may be absolute, relative, or a bare name without any path separators. Paths for executables without a path separator will use the PATH environment variable to search for the executable. Paths for non-executables will be relative to where the config value is defined. In particular, the rules are:

- For environment variables, paths are relative to the current working directory.
- For config values loaded directly from the --config KEY=VALUE option, paths are relative to the current working directory.
- For config files, paths are relative to the parent directory of the directory where the config files were defined, regardless of whether those files come from hierarchical probing or the --config <path> option.

Note: To maintain consistency with existing .cargo/config.toml probing behavior, it is by design that a path in a config file passed via --config <path> is also relative to two levels up from the config file itself. To avoid unexpected results, the rule of thumb is to put your extra config files at the same level as the discovered .cargo/config.toml in your project. For instance, given a project /my/project, it is recommended to put config files under /my/project/.cargo or a new directory at the same level, such as /my/project/.config.

```toml
# Relative path examples.

[target.x86_64-unknown-linux-gnu]
runner = "foo"  # Searches `PATH` for `foo`.

[source.vendored-sources]
# Directory is relative to the parent where `.cargo/config.toml` is located.
# For example, `/my/project/.cargo/config.toml` would result in `/my/project/vendor`.
directory = "vendor"
```

Executable paths with arguments

Some Cargo commands invoke external programs, which can be configured as a path and some number of arguments.
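For instance, a runner wrapper with arguments might be configured like this (a hedged sketch; the qemu-aarch64 program and its -L flag are illustrative choices, not prescribed by Cargo):

```toml
# Run cross-compiled test binaries under qemu user emulation.
# The list form is required if the executable path contains a space.
[target.aarch64-unknown-linux-gnu]
runner = ["qemu-aarch64", "-L", "/usr/aarch64-linux-gnu"]
```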
The value may be an array of strings like ['/path/to/program', 'somearg'] or a space-separated string like '/path/to/program somearg'. If the path to the executable contains a space, the list form must be used. If Cargo is passing other arguments to the program, such as a path to open or run, they will be passed after the last specified argument in the value of an option of this format. If the specified program does not have path separators, Cargo will search PATH for its executable.

Credentials

Configuration values with sensitive information are stored in the $CARGO_HOME/credentials.toml file. This file is automatically created and updated by cargo login and cargo logout when using the cargo:token credential provider. Tokens are used by some Cargo commands, such as cargo publish, for authenticating with remote registries. Care should be taken to protect the tokens and to keep them secret. It follows the same format as Cargo config files.

```toml
[registry]
token = "…"   # Access token for crates.io

[registries.<name>]
token = "…"   # Access token for the named registry
```

As with most other config values, tokens may be specified with environment variables. The token for crates.io may be specified with the CARGO_REGISTRY_TOKEN environment variable. Tokens for other registries may be specified with environment variables of the form CARGO_REGISTRIES_<name>_TOKEN, where <name> is the name of the registry in all capital letters.

Note: Cargo also reads and writes credential files without the .toml extension, such as .cargo/credentials. Support for the .toml extension was added in version 1.39. In version 1.68, Cargo writes to the file with the extension by default. However, for backward compatibility reasons, when both files exist, Cargo will read and write the file without the extension.

Configuration keys

This section documents all configuration keys.
Descriptions for keys with variable parts are annotated with angled brackets like target.<triple>, where the <triple> part can be any target triple like target.x86_64-pc-windows-msvc.

paths
Type: array of strings (paths)
Default: none
Environment: not supported

An array of paths to local packages which are to be used as overrides for dependencies. For more information see the Overriding Dependencies guide.

[alias]
Type: string or array of strings
Default: see below
Environment: CARGO_ALIAS_<name>

The [alias] table defines CLI command aliases. For example, running cargo b is an alias for running cargo build. Each key in the table is the subcommand, and the value is the actual command to run. The value may be an array of strings, where the first element is the command and the following are arguments. It may also be a string, which will be split on spaces into subcommand and arguments. The following aliases are built-in to Cargo:

```toml
[alias]
b = "build"
c = "check"
d = "doc"
t = "test"
r = "run"
rm = "remove"
```

Aliases are not allowed to redefine existing built-in commands. Aliases are recursive:

```toml
[alias]
rr = "run --release"
recursive_example = "rr --example recursions"
```

[build]

The [build] table controls build-time operations and compiler settings.

build.jobs
Type: integer or string
Default: number of logical CPUs
Environment: CARGO_BUILD_JOBS

Sets the maximum number of compiler processes to run in parallel. If negative, it sets the maximum number of compiler processes to the number of logical CPUs plus the provided value. Should not be 0. If a string default is provided, it sets the value back to the default. Can be overridden with the --jobs CLI option.

build.rustc
Type: string (program path)
Default: "rustc"
Environment: CARGO_BUILD_RUSTC or RUSTC

Sets the executable to use for rustc.

build.rustc-wrapper
Type: string (program path)
Default: none
Environment: CARGO_BUILD_RUSTC_WRAPPER or RUSTC_WRAPPER

Sets a wrapper to execute instead of rustc.
The first argument passed to the wrapper is the path to the actual executable to use (i.e., build.rustc, if that is set, or "rustc" otherwise).

build.rustc-workspace-wrapper
Type: string (program path)
Default: none
Environment: CARGO_BUILD_RUSTC_WORKSPACE_WRAPPER or RUSTC_WORKSPACE_WRAPPER

Sets a wrapper to execute instead of rustc, for workspace members only. When building a single-package project without workspaces, that package is considered to be the workspace. The first argument passed to the wrapper is the path to the actual executable to use (i.e., build.rustc, if that is set, or "rustc" otherwise). It affects the filename hash so that artifacts produced by the wrapper are cached separately. If both rustc-wrapper and rustc-workspace-wrapper are set, they will be nested: the final invocation is $RUSTC_WRAPPER $RUSTC_WORKSPACE_WRAPPER $RUSTC.

build.rustdoc
Type: string (program path)
Default: "rustdoc"
Environment: CARGO_BUILD_RUSTDOC or RUSTDOC

Sets the executable to use for rustdoc.

build.target
Type: string or array of strings
Default: host platform
Environment: CARGO_BUILD_TARGET

The default target platform triples to compile to. Possible values:

- Any supported target in rustc --print target-list.
- "host-tuple", which will internally be substituted by the host’s target. This can be particularly useful if you’re cross-compiling some crates and don’t want to specify your host machine as a target (for instance, an xtask in a shared project that may be worked on by many hosts).
- A path to a custom target specification. See Custom Target Lookup Path for more information.

Can be overridden with the --target CLI option.

```toml
[build]
target = ["x86_64-unknown-linux-gnu", "i686-unknown-linux-gnu"]
```

build.target-dir
Type: string (path)
Default: "target"
Environment: CARGO_BUILD_TARGET_DIR or CARGO_TARGET_DIR

The path to where all compiler output is placed. The default if not specified is a directory named target located at the root of the workspace.
Can be overridden with the --target-dir CLI option. For more information see the build cache documentation.

build.build-dir
Type: string (path)
Default: the value of build.target-dir
Environment: CARGO_BUILD_BUILD_DIR

The directory where intermediate build artifacts will be stored. Intermediate artifacts are produced by rustc/Cargo during the build process. This option supports path templating. Available template variables:

- {workspace-root} resolves to the root of the current workspace.
- {cargo-cache-home} resolves to CARGO_HOME.
- {workspace-path-hash} resolves to a hash of the manifest path.

For more information see the build cache documentation.

build.rustflags
Type: string or array of strings
Default: none
Environment: CARGO_BUILD_RUSTFLAGS or CARGO_ENCODED_RUSTFLAGS or RUSTFLAGS

Extra command-line flags to pass to rustc. The value may be an array of strings or a space-separated string. There are four mutually exclusive sources of extra flags. They are checked in order, with the first one being used:

1. CARGO_ENCODED_RUSTFLAGS environment variable.
2. RUSTFLAGS environment variable.
3. All matching target.<triple>.rustflags and target.<cfg>.rustflags config entries joined together.
4. build.rustflags config value.

Additional flags may also be passed with the cargo rustc command. If the --target flag (or build.target) is used, then the flags will only be passed to the compiler for the target. Things being built for the host, such as build scripts or proc macros, will not receive the args. Without --target, the flags will be passed to all compiler invocations (including build scripts and proc macros) because dependencies are shared. If you have args that you do not want to pass to build scripts or proc macros and are building for the host, pass --target with the host triple. It is not recommended to pass in flags that Cargo itself usually manages. For example, the flags driven by profiles are best handled by setting the appropriate profile setting.
Caution: Due to the low-level nature of passing flags directly to the compiler, this may cause a conflict with future versions of Cargo which may issue the same or similar flags on its own, interfering with the flags you specify. This is an area where Cargo may not always be backwards compatible.

build.rustdocflags
Type: string or array of strings
Default: none
Environment: CARGO_BUILD_RUSTDOCFLAGS or CARGO_ENCODED_RUSTDOCFLAGS or RUSTDOCFLAGS

Extra command-line flags to pass to rustdoc. The value may be an array of strings or a space-separated string. There are four mutually exclusive sources of extra flags. They are checked in order, with the first one being used:

1. CARGO_ENCODED_RUSTDOCFLAGS environment variable.
2. RUSTDOCFLAGS environment variable.
3. All matching target.<triple>.rustdocflags config entries joined together.
4. build.rustdocflags config value.

Additional flags may also be passed with the cargo rustdoc command.

Caution: Due to the low-level nature of passing flags directly to the compiler, this may cause a conflict with future versions of Cargo which may issue the same or similar flags on its own, interfering with the flags you specify. This is an area where Cargo may not always be backwards compatible.

build.incremental
Type: bool
Default: from profile
Environment: CARGO_BUILD_INCREMENTAL or CARGO_INCREMENTAL

Whether or not to perform incremental compilation. The default if not set is to use the value from the profile. Otherwise this overrides the setting of all profiles. The CARGO_INCREMENTAL environment variable can be set to 1 to force enable incremental compilation for all profiles, or 0 to disable it. This env var overrides the config setting.

build.dep-info-basedir
Type: string (path)
Default: none
Environment: CARGO_BUILD_DEP_INFO_BASEDIR

Strips the given path prefix from dep-info file paths. This config setting is intended to convert absolute paths to relative paths for tools that require relative paths.
The setting itself is a config-relative path. So, for example, a value of "." would strip all paths starting with the parent directory of the .cargo directory.

build.pipelining

This option is deprecated and unused. Cargo always has pipelining enabled.

[credential-alias]
Type: string or array of strings
Default: empty
Environment: CARGO_CREDENTIAL_ALIAS_<name>

The [credential-alias] table defines credential provider aliases. These aliases can be referenced as an element of the registry.global-credential-providers array, or as a credential provider for a specific registry under registries.<NAME>.credential-provider. If specified as a string, the value will be split on spaces into path and arguments. For example, to define an alias called my-alias:

```toml
[credential-alias]
my-alias = ["/usr/bin/cargo-credential-example", "--argument", "value", "--flag"]
```

See Registry Authentication for more information.

[doc]

The [doc] table defines options for the cargo doc command.

doc.browser
Type: string or array of strings (program path with args)
Default: BROWSER environment variable, or, if that is missing, opening the link in a system-specific way

This option sets the browser to be used by cargo doc, overriding the BROWSER environment variable when opening documentation with the --open option.

[cargo-new]

The [cargo-new] table defines defaults for the cargo new command.

cargo-new.name

This option is deprecated and unused.

cargo-new.email

This option is deprecated and unused.

cargo-new.vcs
Type: string
Default: "git" or "none"
Environment: CARGO_CARGO_NEW_VCS

Specifies the source control system to use for initializing a new repository. Valid values are git, hg (for Mercurial), pijul, fossil, or none to disable this behavior. Defaults to git, or none if already inside a VCS repository. Can be overridden with the --vcs CLI option.

[env]

The [env] section allows you to set additional environment variables for build scripts, rustc invocations, cargo run, and cargo build.
```toml
[env]
OPENSSL_DIR = "/opt/openssl"
```

By default, the variables specified will not override values that already exist in the environment. This behavior can be changed by setting the force flag. Setting the relative flag evaluates the value as a config-relative path that is relative to the parent directory of the .cargo directory that contains the config.toml file. The value of the environment variable will be the full absolute path.

```toml
[env]
TMPDIR = { value = "/home/tmp", force = true }
OPENSSL_DIR = { value = "vendor/openssl", relative = true }
```

[future-incompat-report]

The [future-incompat-report] table controls settings for future incompat reporting.

future-incompat-report.frequency
Type: string
Default: "always"
Environment: CARGO_FUTURE_INCOMPAT_REPORT_FREQUENCY

Controls how often a notification is displayed on the terminal when a future incompat report is available. Possible values:

- always (default): Always display a notification when a command (e.g. cargo build) produces a future incompat report.
- never: Never display a notification.

[cache]

The [cache] table defines settings for cargo’s caches.

Global caches

When running cargo commands, Cargo will automatically track which files you are using within the global cache. Periodically, Cargo will delete files that have not been used for some period of time. It will delete files that had to be downloaded from the network if they have not been used in 3 months. Files that can be generated without network access will be deleted if they have not been used in 1 month.

The automatic deletion of files only occurs when running commands that are already doing a significant amount of work, such as all of the build commands (cargo build, cargo test, cargo check, etc.), and cargo fetch. Automatic deletion is disabled if cargo is offline, such as with --offline or --frozen, to avoid deleting artifacts that may need to be used if you are offline for a long period of time.
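The cadence of this automatic cleaning is configurable via the cache.auto-clean-frequency key documented below; a hedged sketch (the value shown is illustrative):

```toml
# Check for unused cache files at most once a week instead of the default "1 day".
[cache]
auto-clean-frequency = "1 week"
```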
Note: This tracking is currently only implemented for the global cache in Cargo’s home directory. This includes registry indexes and source files downloaded from registries and git dependencies. Support for tracking build artifacts is not yet implemented, and is tracked in cargo#13136. Additionally, there is an unstable feature to support manually triggering cache cleaning, and to further customize the configuration options. See the Unstable chapter for more information.

cache.auto-clean-frequency
Type: string
Default: "1 day"
Environment: CARGO_CACHE_AUTO_CLEAN_FREQUENCY

This option defines how often Cargo will automatically delete unused files in the global cache. This does not define how old the files must be; those thresholds are described above. It supports the following settings:

- "never" — Never deletes old files.
- "always" — Checks to delete old files every time Cargo runs.
- An integer followed by "seconds", "minutes", "hours", "days", "weeks", or "months" — Checks to delete old files at most once per the given time frame.

[http]

The [http] table defines settings for HTTP behavior. This includes fetching crate dependencies and accessing remote git repositories.

http.debug
Type: boolean
Default: false
Environment: CARGO_HTTP_DEBUG

If true, enables debugging of HTTP requests. The debug information can be seen by setting the CARGO_LOG=network=debug environment variable (or use network=trace for even more information). Be wary when posting logs from this output in a public location. The output may include headers with authentication tokens which you don’t want to leak! Be sure to review logs before posting them.

http.proxy
Type: string
Default: none
Environment: CARGO_HTTP_PROXY or HTTPS_PROXY or https_proxy or http_proxy

Sets an HTTP and HTTPS proxy to use. The format is in libcurl format as in [protocol://]host[:port]. If not set, Cargo will also check the http.proxy setting in your global git configuration.
If none of those are set, the HTTPS_PROXY or https_proxy environment variables set the proxy for HTTPS requests, and http_proxy sets it for HTTP requests. http.timeout Type: integer Default: 30 Environment: CARGO_HTTP_TIMEOUT or HTTP_TIMEOUT Sets the timeout for each HTTP request, in seconds. http.cainfo Type: string (path) Default: none Environment: CARGO_HTTP_CAINFO Path to a Certificate Authority (CA) bundle file, used to verify TLS certificates. If not specified, Cargo attempts to use the system certificates. http.proxy-cainfo Type: string (path) Default: falls back to http.cainfo if not set Environment: CARGO_HTTP_PROXY_CAINFO Path to a Certificate Authority (CA) bundle file, used to verify proxy TLS certificates. http.check-revoke Type: boolean Default: true (Windows) false (all others) Environment: CARGO_HTTP_CHECK_REVOKE This determines whether or not TLS certificate revocation checks should be performed. This only works on Windows. http.ssl-version Type: string or min/max table Default: none Environment: CARGO_HTTP_SSL_VERSION This sets the minimum TLS version to use. It takes a string, with one of the possible values of "default" , "tlsv1" , "tlsv1.0" , "tlsv1.1" , "tlsv1.2" , or "tlsv1.3" . This may alternatively take a table with two keys, min and max , which each take a string value of the same kind that specifies the minimum and maximum range of TLS versions to use. The default is a minimum version of "tlsv1.0" and a max of the newest version supported on your platform, typically "tlsv1.3" . http.low-speed-limit Type: integer Default: 10 Environment: CARGO_HTTP_LOW_SPEED_LIMIT This setting controls timeout behavior for slow connections. If the average transfer speed in bytes per second is below the given value for http.timeout seconds (default 30 seconds), then the connection is considered too slow and Cargo will abort and retry. 
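The [http] options above can be combined in one table; here is a sketch (the proxy host, CA-bundle path, and values shown are illustrative, not recommendations):

```toml
# .cargo/config.toml
[http]
proxy = "http://proxy.example.com:3128"   # libcurl-style [protocol://]host[:port]
timeout = 60                              # seconds per request (default 30)
cainfo = "/etc/ssl/certs/ca-bundle.crt"   # CA bundle used to verify TLS certificates
low-speed-limit = 10                      # abort/retry below 10 B/s for `timeout` seconds
ssl-version.min = "tlsv1.2"               # min/max table form of http.ssl-version
ssl-version.max = "tlsv1.3"
```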
http.multiplexing Type: boolean Default: true Environment: CARGO_HTTP_MULTIPLEXING When true , Cargo will attempt to use the HTTP2 protocol with multiplexing. This allows multiple requests to use the same connection, usually improving performance when fetching multiple files. If false , Cargo will use HTTP 1.1 without pipelining. http.user-agent Type: string Default: Cargo’s version Environment: CARGO_HTTP_USER_AGENT Specifies a custom user-agent header to use. The default if not specified is a string that includes Cargo’s version. [install] The [install] table defines defaults for the cargo install command. install.root Type: string (path) Default: Cargo’s home directory Environment: CARGO_INSTALL_ROOT Sets the path to the root directory for installing executables for cargo install . Executables go into a bin directory underneath the root. To track information of installed executables, some extra files, such as .crates.toml and .crates2.json , are also created under this root. The default if not specified is Cargo’s home directory (default .cargo in your home directory). Can be overridden with the --root command-line option. [net] The [net] table controls networking configuration. net.retry Type: integer Default: 3 Environment: CARGO_NET_RETRY Number of times to retry possibly spurious network errors. net.git-fetch-with-cli Type: boolean Default: false Environment: CARGO_NET_GIT_FETCH_WITH_CLI If this is true , then Cargo will use the git executable to fetch registry indexes and git dependencies. If false , then it uses a built-in git library. Setting this to true can be helpful if you have special authentication requirements that Cargo does not support. See Git Authentication for more information about setting up git authentication. net.offline Type: boolean Default: false Environment: CARGO_NET_OFFLINE If this is true , then Cargo will avoid accessing the network, and attempt to proceed with locally cached data. 
If false , Cargo will access the network as needed, and generate an error if it encounters a network error. Can be overridden with the --offline command-line option. net.ssh The [net.ssh] table contains settings for SSH connections. net.ssh.known-hosts Type: array of strings Default: see description Environment: not supported The known-hosts array contains a list of SSH host keys that should be accepted as valid when connecting to an SSH server (such as for SSH git dependencies). Each entry should be a string in a format similar to OpenSSH known_hosts files. Each string should start with one or more hostnames separated by commas, a space, the key type name, a space, and the base64-encoded key. For example: [net.ssh] known-hosts = [ "example.com ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFO4Q5T0UV0SQevair9PFwoxY9dl4pQl3u5phoqJH3cF" ] Cargo will attempt to load known hosts keys from common locations supported in OpenSSH, and will join those with any listed in a Cargo configuration file. If any matching entry has the correct key, the connection will be allowed. Cargo comes with the host keys for github.com built-in. If those ever change, you can add the new keys to the config or known_hosts file. See Git Authentication for more details. [patch] Just as you can override dependencies using [patch] in Cargo.toml , you can override them in the cargo configuration file to apply those patches to any affected build. The format is identical to the one used in Cargo.toml . Since .cargo/config.toml files are not usually checked into source control, you should prefer patching using Cargo.toml where possible to ensure that other developers can compile your crate in their own environments. Patching through cargo configuration files is generally only appropriate when the patch section is automatically generated by an external build tool. If a given dependency is patched both in a cargo configuration file and a Cargo.toml file, the patch in the configuration file is used. 
If multiple configuration files patch the same dependency, standard cargo configuration merging is used, which prefers the value defined closest to the current directory, with $HOME/.cargo/config.toml taking the lowest precedence. Relative path dependencies in such a [patch] section are resolved relative to the configuration file they appear in. [profile] The [profile] table can be used to globally change profile settings, and override settings specified in Cargo.toml . It has the same syntax and options as profiles specified in Cargo.toml . See the Profiles chapter for details about the options. [profile.<name>.build-override] Environment: CARGO_PROFILE_<name>_BUILD_OVERRIDE_<key> The build-override table overrides settings for build scripts, proc macros, and their dependencies. It has the same keys as a normal profile. See the overrides section for more details. [profile.<name>.package.<name>] Environment: not supported The package table overrides settings for specific packages. It has the same keys as a normal profile, minus the panic , lto , and rpath settings. See the overrides section for more details. profile.<name>.codegen-units Type: integer Default: See profile docs. Environment: CARGO_PROFILE_<name>_CODEGEN_UNITS See codegen-units . profile.<name>.debug Type: integer or boolean Default: See profile docs. Environment: CARGO_PROFILE_<name>_DEBUG See debug . profile.<name>.split-debuginfo Type: string Default: See profile docs. Environment: CARGO_PROFILE_<name>_SPLIT_DEBUGINFO See split-debuginfo . profile.<name>.debug-assertions Type: boolean Default: See profile docs. Environment: CARGO_PROFILE_<name>_DEBUG_ASSERTIONS See debug-assertions . profile.<name>.incremental Type: boolean Default: See profile docs. Environment: CARGO_PROFILE_<name>_INCREMENTAL See incremental . profile.<name>.lto Type: string or boolean Default: See profile docs. Environment: CARGO_PROFILE_<name>_LTO See lto . 
profile.<name>.overflow-checks
Type: boolean
Default: See profile docs.
Environment: CARGO_PROFILE_<name>_OVERFLOW_CHECKS
See overflow-checks.

profile.<name>.opt-level
Type: integer or string
Default: See profile docs.
Environment: CARGO_PROFILE_<name>_OPT_LEVEL
See opt-level.

profile.<name>.panic
Type: string
Default: See profile docs.
Environment: CARGO_PROFILE_<name>_PANIC
See panic.

profile.<name>.rpath
Type: boolean
Default: See profile docs.
Environment: CARGO_PROFILE_<name>_RPATH
See rpath.

profile.<name>.strip
Type: string or boolean
Default: See profile docs.
Environment: CARGO_PROFILE_<name>_STRIP
See strip.

[resolver]
The [resolver] table overrides dependency resolution behavior for local development (e.g. it excludes cargo install).

resolver.incompatible-rust-versions
Type: string
Default: See resolver docs.
Environment: CARGO_RESOLVER_INCOMPATIBLE_RUST_VERSIONS
When resolving which version of a dependency to use, this selects how versions with an incompatible package.rust-version are treated. Values include:
allow: treat rust-version-incompatible versions like any other version
fallback: only consider rust-version-incompatible versions if no other version matched
This can be overridden with:
the --ignore-rust-version CLI option
setting the dependency's version requirement higher than any version with a compatible rust-version
specifying the version to cargo update with --precise
See the resolver chapter for more details.
MSRV: allow is supported on any version; fallback is respected as of 1.84.

[registries]
The [registries] table is used for specifying additional registries. It consists of a sub-table for each named registry.

registries.<name>.index
Type: string (url)
Default: none
Environment: CARGO_REGISTRIES_<name>_INDEX
Specifies the URL of the index for the registry.

registries.<name>.token
Type: string
Default: none
Environment: CARGO_REGISTRIES_<name>_TOKEN
Specifies the authentication token for the given registry.
This value should only appear in the credentials file. This is used for registry commands like cargo publish that require authentication. Can be overridden with the --token command-line option. registries.<name>.credential-provider Type: string or array of path and arguments Default: none Environment: CARGO_REGISTRIES_<name>_CREDENTIAL_PROVIDER Specifies the credential provider for the given registry. If not set, the providers in registry.global-credential-providers will be used. If specified as a string, path and arguments will be split on spaces. For paths or arguments that contain spaces, use an array. If the value exists in the [credential-alias] table, the alias will be used. See Registry Authentication for more information. registries.crates-io.protocol Type: string Default: "sparse" Environment: CARGO_REGISTRIES_CRATES_IO_PROTOCOL Specifies the protocol used to access crates.io. Allowed values are git or sparse . git causes Cargo to clone the entire index of all packages ever published to crates.io from https://github.com/rust-lang/crates.io-index/ . This can have performance implications due to the size of the index. sparse is a newer protocol which uses HTTPS to download only what is necessary from https://index.crates.io/ . This can result in a significant performance improvement for resolving new dependencies in most situations. More information about registry protocols may be found in the Registries chapter . [registry] The [registry] table controls the default registry used when one is not specified. registry.index This value is no longer accepted and should not be used. registry.default Type: string Default: "crates-io" Environment: CARGO_REGISTRY_DEFAULT The name of the registry (from the registries table ) to use by default for registry commands like cargo publish . Can be overridden with the --registry command-line option. 
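A config that defines a named registry and makes it the default for registry commands might look like this (the registry name and index URL are hypothetical):

```toml
# .cargo/config.toml
[registries.my-registry]
index = "sparse+https://registry.example.com/index/"

[registry]
default = "my-registry"   # used by `cargo publish` unless --registry is given
```

Note that the token for the registry belongs in the credentials file, not here.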
registry.credential-provider Type: string or array of path and arguments Default: none Environment: CARGO_REGISTRY_CREDENTIAL_PROVIDER Specifies the credential provider for crates.io . If not set, the providers in registry.global-credential-providers will be used. If specified as a string, path and arguments will be split on spaces. For paths or arguments that contain spaces, use an array. If the value exists in the [credential-alias] table, the alias will be used. See Registry Authentication for more information. registry.token Type: string Default: none Environment: CARGO_REGISTRY_TOKEN Specifies the authentication token for crates.io . This value should only appear in the credentials file. This is used for registry commands like cargo publish that require authentication. Can be overridden with the --token command-line option. registry.global-credential-providers Type: array Default: ["cargo:token"] Environment: CARGO_REGISTRY_GLOBAL_CREDENTIAL_PROVIDERS Specifies the list of global credential providers. If credential provider is not set for a specific registry using registries.<name>.credential-provider , Cargo will use the credential providers in this list. Providers toward the end of the list have precedence. Path and arguments are split on spaces. If the path or arguments contains spaces, the credential provider should be defined in the [credential-alias] table and referenced here by its alias. See Registry Authentication for more information. [source] The [source] table defines the registry sources available. See Source Replacement for more information. It consists of a sub-table for each named source. A source should only define one kind (directory, registry, local-registry, or git). source.<name>.replace-with Type: string Default: none Environment: not supported If set, replace this source with the given named source or named registry. 
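As a sketch of replace-with in practice, the following redirects crates.io to a vendored directory source, which is the shape of config that cargo vendor prints (the "vendor" path is illustrative):

```toml
# .cargo/config.toml — use locally vendored crates instead of crates.io
[source.crates-io]
replace-with = "vendored-sources"

[source.vendored-sources]
directory = "vendor"   # a directory source, e.g. as created by `cargo vendor`
```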
source.<name>.directory Type: string (path) Default: none Environment: not supported Sets the path to a directory to use as a directory source. source.<name>.registry Type: string (url) Default: none Environment: not supported Sets the URL to use for a registry source. source.<name>.local-registry Type: string (path) Default: none Environment: not supported Sets the path to a directory to use as a local registry source. source.<name>.git Type: string (url) Default: none Environment: not supported Sets the URL to use for a git repository source. source.<name>.branch Type: string Default: none Environment: not supported Sets the branch name to use for a git repository. If none of branch , tag , or rev is set, defaults to the master branch. source.<name>.tag Type: string Default: none Environment: not supported Sets the tag name to use for a git repository. If none of branch , tag , or rev is set, defaults to the master branch. source.<name>.rev Type: string Default: none Environment: not supported Sets the revision to use for a git repository. If none of branch , tag , or rev is set, defaults to the master branch. [target] The [target] table is used for specifying settings for specific platform targets. It consists of a sub-table which is either a platform triple or a cfg() expression . The given values will be used if the target platform matches either the <triple> value or the <cfg> expression. [target.thumbv7m-none-eabi] linker = "arm-none-eabi-gcc" runner = "my-emulator" rustflags = ["…", "…"] [target.'cfg(all(target_arch = "arm", target_os = "none"))'] runner = "my-arm-wrapper" rustflags = ["…", "…"] cfg values come from those built-in to the compiler (run rustc --print=cfg to view) and extra --cfg flags passed to rustc (such as those defined in RUSTFLAGS ). Do not try to match on debug_assertions , test , Cargo features like feature="foo" , or values set by build scripts . If using a target spec JSON file, the <triple> value is the filename stem. 
For example --target foo/bar.json would match [target.bar].

target.<triple>.ar
This option is deprecated and unused.

target.<triple>.linker
Type: string (program path)
Default: none
Environment: CARGO_TARGET_<triple>_LINKER
Specifies the linker which is passed to rustc (via -C linker) when the <triple> is being compiled for. By default, the linker is not overridden.

target.<cfg>.linker
This is similar to the target linker, but using a cfg() expression. If both a <triple> and <cfg> linker match, the <triple> will take precedence. It is an error if more than one <cfg> linker matches the current target.

target.<triple>.runner
Type: string or array of strings (program path with args)
Default: none
Environment: CARGO_TARGET_<triple>_RUNNER
If a runner is provided, executables for the target <triple> will be executed by invoking the specified runner with the actual executable passed as an argument. This applies to cargo run, cargo test and cargo bench commands. By default, compiled executables are executed directly.

target.<cfg>.runner
This is similar to the target runner, but using a cfg() expression. If both a <triple> and <cfg> runner match, the <triple> will take precedence. It is an error if more than one <cfg> runner matches the current target.

target.<triple>.rustflags
Type: string or array of strings
Default: none
Environment: CARGO_TARGET_<triple>_RUSTFLAGS
Passes a set of custom flags to the compiler for this <triple>. The value may be an array of strings or a space-separated string. See build.rustflags for more details on the different ways to specify extra flags.

target.<cfg>.rustflags
This is similar to the target rustflags, but using a cfg() expression. If several <cfg> and <triple> entries match the current target, the flags are joined together.

target.<triple>.rustdocflags
Type: string or array of strings
Default: none
Environment: CARGO_TARGET_<triple>_RUSTDOCFLAGS
Passes a set of custom flags to the compiler for this <triple>.
The value may be an array of strings or a space-separated string. See build.rustdocflags for more details on the different ways to specify extra flags.

target.<triple>.<links>
The links sub-table provides a way to override a build script. When specified, the build script for the given links library will not be run, and the given values will be used instead.

[target.x86_64-unknown-linux-gnu.foo]
rustc-link-lib = ["foo"]
rustc-link-search = ["/path/to/foo"]
rustc-flags = "-L /some/path"
rustc-cfg = ['key="value"']
rustc-env = {key = "value"}
rustc-cdylib-link-arg = ["…"]
metadata_key1 = "value"
metadata_key2 = "value"

[term]
The [term] table controls terminal output and interaction.

term.quiet
Type: boolean
Default: false
Environment: CARGO_TERM_QUIET
Controls whether or not log messages are displayed by Cargo. Specifying the --quiet flag will override and force quiet output. Specifying the --verbose flag will override and disable quiet output.

term.verbose
Type: boolean
Default: false
Environment: CARGO_TERM_VERBOSE
Controls whether or not extra detailed messages are displayed by Cargo. Specifying the --quiet flag will override and disable verbose output. Specifying the --verbose flag will override and force verbose output.

term.color
Type: string
Default: "auto"
Environment: CARGO_TERM_COLOR
Controls whether or not colored output is used in the terminal. Possible values:
auto (default): Automatically detect if color support is available on the terminal.
always: Always display colors.
never: Never display colors.
Can be overridden with the --color command-line option.

term.hyperlinks
Type: bool
Default: auto-detect
Environment: CARGO_TERM_HYPERLINKS
Controls whether or not hyperlinks are used in the terminal.

term.unicode
Type: bool
Default: auto-detect
Environment: CARGO_TERM_UNICODE
Controls whether output can be rendered using non-ASCII unicode characters.
term.progress.when
Type: string
Default: "auto"
Environment: CARGO_TERM_PROGRESS_WHEN
Controls whether or not the progress bar is shown in the terminal. Possible values:
auto (default): Intelligently guess whether to show the progress bar.
always: Always show the progress bar.
never: Never show the progress bar.

term.progress.width
Type: integer
Default: none
Environment: CARGO_TERM_PROGRESS_WIDTH
Sets the width of the progress bar.

term.progress.term-integration
Type: bool
Default: auto-detect
Environment: CARGO_TERM_PROGRESS_TERM_INTEGRATION
Reports progress to the terminal emulator for display in places like the task bar.
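Pulling the [term] options together, a plain-output setup such as one might use in CI could look like this (the particular values are just one plausible combination, not a recommendation):

```toml
# .cargo/config.toml — non-interactive, ASCII-only terminal output
[term]
color = "never"           # or "auto" / "always"; --color overrides this
unicode = false           # stick to ASCII characters
progress.when = "never"   # no progress bar in logs
```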
https://doc.rust-lang.org/cargo/appendix/glossary.html#manifest
Appendix: Glossary - The Cargo Book Glossary Artifact An artifact is the file or set of files created as a result of the compilation process. This includes linkable libraries, executable binaries, and generated documentation. Cargo Cargo is the Rust package manager , and the primary topic of this book. Cargo.lock See lock file . Cargo.toml See manifest . Crate A Rust crate is either a library or an executable program, referred to as either a library crate or a binary crate , respectively. Every target defined for a Cargo package is a crate . Loosely, the term crate may refer to either the source code of the target or to the compiled artifact that the target produces. It may also refer to a compressed package fetched from a registry . The source code for a given crate may be subdivided into modules . Edition A Rust edition is a developmental landmark of the Rust language. The edition of a package is specified in the Cargo.toml manifest , and individual targets can specify which edition they use. See the Edition Guide for more information. Feature The meaning of feature depends on the context: A feature is a named flag which allows for conditional compilation. A feature can refer to an optional dependency, or an arbitrary name defined in a Cargo.toml manifest that can be checked within source code. Cargo has unstable feature flags which can be used to enable experimental behavior of Cargo itself. The Rust compiler and Rustdoc have their own unstable feature flags (see The Unstable Book and The Rustdoc Book ). CPU targets have target features which specify capabilities of a CPU. Index The index is the searchable list of crates in a registry . Lock file The Cargo.lock lock file is a file that captures the exact version of every dependency used in a workspace or package . 
It is automatically generated by Cargo. See Cargo.toml vs Cargo.lock . Manifest A manifest is a description of a package or a workspace in a file named Cargo.toml . A virtual manifest is a Cargo.toml file that only describes a workspace, and does not include a package. Member A member is a package that belongs to a workspace . Module Rust’s module system is used to organize code into logical units called modules , which provide isolated namespaces within the code. The source code for a given crate may be subdivided into one or more separate modules. This is usually done to organize the code into areas of related functionality or to control the visible scope (public/private) of symbols within the source (structs, functions, and so on). A Cargo.toml file is primarily concerned with the package it defines, its crates, and the packages of the crates on which they depend. Nevertheless, you will see the term “module” often when working with Rust, so you should understand its relationship to a given crate. Package A package is a collection of source files and a Cargo.toml manifest file which describes the package. A package has a name and version which is used for specifying dependencies between packages. A package contains multiple targets , each of which is a crate . The Cargo.toml file describes the type of the crates (binary or library) within the package, along with some metadata about each one — how each is to be built, what their direct dependencies are, etc., as described throughout this book. The package root is the directory where the package’s Cargo.toml manifest is located. (Compare with workspace root .) The package ID specification , or SPEC , is a string used to uniquely reference a specific version of a package from a specific source. Small to medium sized Rust projects will only need a single package, though it is common for them to have multiple crates. 
Larger projects may involve multiple packages, in which case Cargo workspaces can be used to manage common dependencies and other related metadata between the packages. Package manager Broadly speaking, a package manager is a program (or collection of related programs) in a software ecosystem that automates the process of obtaining, installing, and upgrading artifacts. Within a programming language ecosystem, a package manager is a developer-focused tool whose primary functionality is to download library artifacts and their dependencies from some central repository; this capability is often combined with the ability to perform software builds (by invoking the language-specific compiler). Cargo is the package manager within the Rust ecosystem. Cargo downloads your Rust package ’s dependencies ( artifacts known as crates ), compiles your packages, makes distributable packages, and (optionally) uploads them to crates.io , the Rust community’s package registry . Package registry See registry . Project Another name for a package . Registry A registry is a service that contains a collection of downloadable crates that can be installed or used as dependencies for a package . The default registry in the Rust ecosystem is crates.io . The registry has an index which contains a list of all crates, and tells Cargo how to download the crates that are needed. Source A source is a provider that contains crates that may be included as dependencies for a package . There are several kinds of sources: Registry source — See registry . Local registry source — A set of crates stored as compressed files on the filesystem. See Local Registry Sources . Directory source — A set of crates stored as uncompressed files on the filesystem. See Directory Sources . Path source — An individual package located on the filesystem (such as a path dependency ) or a set of multiple packages (such as path overrides ). 
Git source — Packages located in a git repository (such as a git dependency or git source ). See Source Replacement for more information. Spec See package ID specification . Target The meaning of the term target depends on the context: Cargo Target — Cargo packages consist of targets which correspond to artifacts that will be produced. Packages can have library, binary, example, test, and benchmark targets. The list of targets are configured in the Cargo.toml manifest , often inferred automatically by the directory layout of the source files. Target Directory — Cargo places built artifacts in the target directory. By default this is a directory named target at the workspace root, or the package root if not using a workspace. The directory may be changed with the --target-dir command-line option, the CARGO_TARGET_DIR environment variable , or the build.target-dir config option . For more information see the build cache documentation. Target Architecture — The OS and machine architecture for the built artifacts are typically referred to as a target . Target Triple — A triple is a specific format for specifying a target architecture. Triples may be referred to as a target triple which is the architecture for the artifact produced, and the host triple which is the architecture that the compiler is running on. The target triple can be specified with the --target command-line option or the build.target config option . The general format of the triple is <arch><sub>-<vendor>-<sys>-<abi> where: arch = The base CPU architecture, for example x86_64 , i686 , arm , thumb , mips , etc. sub = The CPU sub-architecture, for example arm has v7 , v7s , v5te , etc. vendor = The vendor, for example unknown , apple , pc , nvidia , etc. sys = The system name, for example linux , windows , darwin , etc. none is typically used for bare-metal without an OS. abi = The ABI, for example gnu , android , eabi , etc. Some parameters may be omitted. 
Run rustc --print target-list for a list of supported targets. Test Targets Cargo test targets generate binaries which help verify proper operation and correctness of code. There are two types of test artifacts: Unit test — A unit test is an executable binary compiled directly from a library or a binary target. It contains the entire contents of the library or binary code, and runs #[test] annotated functions, intended to verify individual units of code. Integration test target — An integration test target is an executable binary compiled from a test target which is a distinct crate whose source is located in the tests directory or specified by the [[test]] table in the Cargo.toml manifest . It is intended to only test the public API of a library, or execute a binary to verify its operation. Workspace A workspace is a collection of one or more packages that share common dependency resolution (with a shared Cargo.lock lock file ), output directory, and various settings such as profiles. A virtual workspace is a workspace where the root Cargo.toml manifest does not define a package, and only lists the workspace members . The workspace root is the directory where the workspace’s Cargo.toml manifest is located. (Compare with package root .)
https://web3.okx.com/fi/dex-swap
DEX Swap: Trade crypto with low fees and deep liquidity | OKX Wallet

What is OKX DEX Swap? OKX DEX Swap is a decentralized exchange (DEX) aggregator that allows you to trade crypto seamlessly across multiple blockchains. Our X Routing algorithm finds the best prices by comparing liquidity pools, splitting orders, and optimizing for fees and slippage. How does OKX DEX ensure the best swap prices? OKX DEX uses advanced algorithms to aggregate liquidity from multiple DEXs and liquidity providers (LPs). By splitting orders and taking into account factors such as price, slippage, and network fees, we ensure that you get the best possible prices. Why do trades fail? 1. Insufficient network fee. During times of network congestion, network fees may skyrocket. Miners prioritize transactions with higher network fees, so if your transaction isn't picked up by a miner for an extended period, it can result in transaction failure. 
Therefore, when initiating a transaction, it's recommended to use the default values suggested by the platform. 2. Exceeded slippage. In volatile markets, liquidity depths can change. If the trading amount falls below the required minimum, the contract may halt the transaction, leading to a failed transaction due to exceeded slippage. 3. Duplicate transactions. If a user initiates multiple identical transactions simultaneously, but their balance is only sufficient to cover the first transaction, the subsequent transactions will fail. What to do when I don't have enough balance for the network fee? You can buy the corresponding tokens on the OKX Exchange and withdraw them to your wallet address, or swap tokens on OKX DEX to cover the network fees. What is slippage? Slippage refers to the price difference between the expected price of a trade and its actual execution price. Slippage is an implicit cost of trading and is inevitable in most trades, especially on a DEX. Slippage becomes more prominent during market volatility or when there is insufficient liquidity on the trading platform. How can slippage be avoided? Slippage cannot be completely avoided, but its impact can be minimized in three ways: 1. The best way is to choose assets with high trading volume and liquidity. 2. You can opt for transactions with smaller amounts, which are more likely to be executed, instead of placing large orders directly. 3. Consider adjusting the slippage tolerance during price fluctuations, where slippage may exceed the preset limit when your transaction is executed. In this case, increasing slippage could avoid transaction failure, but it also means you may not get a good price. Please proceed with caution. Why does trading some tokens require a higher slippage setting? Some tokens have a higher slippage tolerance due to the presence of transaction taxes, such as SAFEMOON, which are used for liquidity pool buy-back, token burning, and rewarding token holders. 
To help you trade these tokens, OKX DEX has introduced automated slippage, which improves the success rate of such trades by applying the current optimal slippage level.

Swap at the best price. Choose your trading routes freely from all the major DEX aggregators; X Routing finds the best deals across more than 400 DEXes and more than 30 networks. Fast quotes. MEV protection.
2026-01-13T09:29:19
https://git-scm.com/book/ko/v2/Git%ec%9d%98-%ea%b8%b0%ec%b4%88-%ed%83%9c%ea%b7%b8
Git - Tags

The source of this book is hosted on GitHub. Patches, suggestions and comments are welcome.
2nd Edition

2.6 Git Basics - Tagging

Tagging

Like other VCSs, Git supports tags. People typically use them to mark release points (v1.0 and so on). This section covers how to list tags, how to create them, and what the different types of tags are.

Listing tags

First, you can check which tags already exist with the git tag command (-l or --list is optional):

$ git tag
v0.1
v1.3

This command lists the tags in alphabetical order; the order has no real significance. You can also search for tags matching a pattern. The Git source repository, for instance, contains more than 500 tags. If you only want to see the 1.8.5 series, run:

$ git tag -l "v1.8.5*"
v1.8.5
v1.8.5-rc0
v1.8.5-rc1
v1.8.5-rc2
v1.8.5-rc3
v1.8.5.1
v1.8.5.2
v1.8.5.3
v1.8.5.4
v1.8.5.5

Note: listing tags with a wildcard requires the -l or --list option. If you simply run git tag to see the full list of tags, the output is the same as if -l or --list had been applied. But when you search the tag list with a wildcard, you must supply -l or --list to get the result you want.

Creating tags

Git supports two kinds of tags: lightweight and annotated. A lightweight tag is much like a branch, except that it doesn't move to the latest commit the way a branch does; it's simply a pointer to a specific commit. An annotated tag, by contrast, is stored in the Git database along with the tagger's name, email, and date, plus a tag message, and it can be signed with GPG (GNU Privacy Guard). It's generally recommended to create annotated tags so all of this information is available, but if you want a temporary tag or don't need to keep that information, lightweight tags work too.

Annotated tags

Creating an annotated tag is simple: add the -a option when you run the tag command:

$ git tag -a v1.4 -m "my version 1.4"
$ git tag
v0.1
v1.3
v1.4

The -m option stores a message along with the tag. If you don't supply a message, Git launches your editor so you can type one in.

You can see the tag data along with the tagged commit using the git show command:

$ git show v1.4
tag v1.4
Tagger: Ben Straub <ben@straub.cc>
Date: Sat May 3 20:19:12 2014 -0700

my version 1.4

commit ca82a6dff817ec66f44342007202690a93763949
Author: Scott Chacon <schacon@gee-mail.com>
Date: Mon Mar 17 21:52:11 2008 -0700

changed the version number

Before showing the commit information, it shows who created the tag, when, and what the tag message was.

Lightweight tags

A lightweight tag is basically just the commit checksum stored in a file; no other information is kept.
To create a lightweight tag, don't use the -a, -s, or -m options; just provide a name:

$ git tag v1.4-lw
$ git tag
v0.1
v1.3
v1.4
v1.4-lw
v1.5

If you run git show on this tag, you won't see any extra tag information. The command simply shows the commit:

$ git show v1.4-lw
commit ca82a6dff817ec66f44342007202690a93763949
Author: Scott Chacon <schacon@gee-mail.com>
Date: Mon Mar 17 21:52:11 2008 -0700

changed the version number

Tagging later

You can also tag past commits. Suppose your commit history looks like this:

$ git log --pretty=oneline
15027957951b64cf874c3557a0f3547bd83b3ff6 Merge branch 'experiment'
a6b4c97498bd301d84096da251c98a07c7723e65 beginning write support
0d52aaab4479697da7686c15f77a3d64d9165190 one more thing
6d52a271eda8725415634dd79daabbc4d9b6008e Merge branch 'experiment'
0b7434d86859cc7b8c3d5e1dddfed66ff742fcbc added a commit function
4682c3261057305bdd616e23b64b0857d832627b added a todo file
166ae0c4d3f420721acbb115cc33848dfcc2121a started write support
9fceb02d0ae598e95dc970b74767f19372d61af8 updated rakefile
964f16d36dfccde844893cac5b347e7b3d44abbc commit the todo
8a5cbc430f1a9c3d00faaeffd07798508422908a updated readme

Even if you failed to tag the "updated rakefile" commit as v1.2 at the time, you can add the tag later. To tag a specific commit, specify its checksum at the end of the command (you don't need to use the full checksum):

$ git tag -a v1.2 9fceb02

Now you can confirm the tag was created:

$ git tag
v0.1
v1.2
v1.3
v1.4
v1.4-lw
v1.5

$ git show v1.2
tag v1.2
Tagger: Scott Chacon <schacon@gee-mail.com>
Date: Mon Feb 9 15:32:16 2009 -0800

version 1.2

commit 9fceb02d0ae598e95dc970b74767f19372d61af8
Author: Magnus Chacon <mchacon@gee-mail.com>
Date: Sun Apr 27 20:43:35 2008 -0700

updated rakefile
...

Sharing tags

The git push command doesn't automatically transfer tags to remote servers. Once you've created a tag, you have to push it to the server explicitly. The process is the same as sharing a branch: run git push origin <tagname>.

$ git push origin v1.5
Counting objects: 14, done.
Delta compression using up to 8 threads.
Compressing objects: 100% (12/12), done.
Writing objects: 100% (14/14), 2.05 KiB | 0 bytes/s, done.
Total 14 (delta 3), reused 0 (delta 0)
To git@github.com:schacon/simplegit.git
 * [new tag] v1.5 -> v1.5

If you want to push several tags at once, add the --tags option to the git push command. This transfers all of your tags that aren't already on the remote server:

$ git push origin --tags
Counting objects: 1, done.
Writing objects: 100% (1/1), 160 bytes | 0 bytes/s, done.
Total 1 (delta 0), reused 0 (delta 0)
To git@github.com:schacon/simplegit.git
 * [new tag] v1.4 -> v1.4
 * [new tag] v1.4-lw -> v1.4-lw

Now, when someone clones or pulls from your repository, all of the tags come along as well.

Checking out tags

If a tag points to a specific version and you want to check out the files at that version, you can do the following. Note that checking out a tag (rather than a branch) puts you in "detached HEAD" state, where some Git operations behave differently than they do on a branch:

$ git checkout 2.0.0
Note: checking out '2.0.0'.

You are in 'detached HEAD' state. You can look around, make experimental
changes and commit them, and you can discard any commits you make in this
state without impacting any branches by performing another checkout.

If you want to create a new branch to retain commits you create, you may
do so (now or later) by using -b with the checkout command again. Example:

  git checkout -b <new-branch>

HEAD is now at 99ada87... Merge pull request #89 from schacon/appendix-final

$ git checkout 2.0-beta-0.1
Previous HEAD position was 99ada87... Merge pull request #89 from schacon/appendix-final
HEAD is now at df3f601... add atlas.json and cover image

In "detached HEAD" state, if you do some work and make a commit, the tag stays where it was, but your new commit sits on its own with no easy way to reach it again, other than remembering its exact hash. So if you need to make changes on top of a tag, say for a meaningful bug fix, you should create a branch first:

$ git checkout -b version2 v2.0.0
Switched to a new branch 'version2'

If you create a branch this way and then commit to the version2 branch, the branch moves forward, but the v2.0.0 tag still points to the same commit, so the two now point to different commits.
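The difference between the two tag types can be seen directly in the object database: a lightweight tag resolves straight to a commit, while an annotated tag resolves to a separate tag object. A minimal sketch in a throwaway repository (the tag names reuse the chapter's examples; this is not part of the book's example repo):

```shell
# Set up a throwaway repository with one commit to tag.
repo=$(mktemp -d)
cd "$repo"
git init -q
git -c user.name=A -c user.email=a@example.com \
    commit -q --allow-empty -m "initial"

# Lightweight tag: just a named pointer to the commit.
git tag v1.4-lw
# Annotated tag: a full tag object with tagger, date, and message.
git -c user.name=A -c user.email=a@example.com \
    tag -a v1.4 -m "my version 1.4"

git cat-file -t "$(git rev-parse v1.4-lw)"   # prints: commit
git cat-file -t "$(git rev-parse v1.4)"      # prints: tag
```

This is also why git show on a lightweight tag only prints the commit, while on an annotated tag it prints the tag object first.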
2026-01-13T09:29:19
https://www.linkedin.com/posts/bmc-software_were-genuinely-grateful-every-review-every-activity-7416493576290775040-M1WK
BMC Software's Post | BMC Software, 329,940 followers, 18h

We're genuinely grateful. Every review, every comment, and every bit of feedback you've shared about Control-M made this possible. Thanks to you, we're a Customers' Choice for Service Orchestration and Automation Platforms in the 2025 Gartner® Peer Insights™ Voice of the Customer Report.

194 reactions, 2 comments

BMC Software (18h): See what customers are saying: https://bit.ly/4qb5Gai
Lucia Londero (16m): 🏆 ‼️

More from this author: 2025 Highlights: Wins, Proof Points, and Momentum (BMC Software, 2w); Control-M Ranks #1 for the 8th Consecutive Year.
What EMA and Gartner® Say About Orchestration (BMC Software, 1mo); BMC Named a Leader in the 2025 Gartner® Magic Quadrant™ for SOAP (BMC Software, 2mo)
2026-01-13T09:29:19
https://www.linkedin.com/products/zohocorp-zoho-cliq/
Zoho Cliq | LinkedIn

Zoho Cliq, Employee Communications Software by Zoho

About: Zoho Cliq is a business communication tool that simplifies and streamlines your business communication with organized conversations, information that's easy to find, and connections to the tools you love. Simply put, it improves your team's overall productivity and, in turn, your business's performance.

Media:

Zoho Cliq 3.0: Cliq isn't just remote-work friendly; it's now hybrid-work friendly. Whether at the office or at home, Zoho Cliq helps strengthen communication in your organization to make work happen.

Bring your office into Cliq with Remote Work: Working remotely shouldn't take away the little quirks you enjoy about working in the office. From getting into work and greeting your co-workers to quickly checking whether someone is available for a discussion or taking a quick coffee break together, Cliq's Remote Work ensures you don't miss out on the little things.

Zoho Cliq Meetings: Host larger audio/video conferences. The more, the merrier: you can add up to 1,000 participants to a single audio call and start the discussion.

Manage your projects from one place: A single home for all your tasks and workflows. Bring your task management and productivity apps together.

Build your own bots in Zoho Cliq: A bot is a system-powered contact or colleague you can interact with as you would with any other person. A bot can be programmed to respond to your queries, perform actions on your behalf, and notify you of significant events.
You can build your bots in Cliq with very few lines of code.

Featured customers of Zoho Cliq: CRCC Asia (Education Administration Programs, 16,269 followers); Purolite (Healthcare & Life Sciences, Biotechnology Research, 12,778 followers); GPSS Group (Renewables & Environment, 1,372 followers); Betsson Georgia (Gambling Facilities and Casinos, 10,587 followers); FundsIndia (Investment Management, 28,368 followers); GameVision Studios (Computer Games, 994 followers)

Similar products (Employee Communications Software): HCL Sametime; Workvivo by Zoom; Flip; Poppulo; Staffbase; Firstup

Zoho products: Bigin by Zoho CRM (Customer Relationship Management Software); Zoho Analytics (Business Intelligence Software); Zoho Commerce (E-Commerce Platforms); Zoho DataPrep (Data Preparation Tools); Zoho Forms (Online Form Builder Software); Zoho FSM (Field Service Management Software); Zoho One (Business Management Software); Zoho People (Human Resources Management Systems); Zoho RPA (Robotic Process Automation Software); Zoho Webinar (Webinar Software)
2026-01-13T09:29:19
https://git-scm.com/book/nl/v2/Gedistribueerd-Git-Het-beheren-van-een-project
Git - Maintaining a Project
2nd Edition

5.3 Distributed Git - Maintaining a Project

Maintaining a Project

In addition to knowing how to contribute effectively to a project, it's also useful to know how to maintain one. This can consist of accepting and applying patches generated with format-patch and emailed to you, or integrating changes in remote branches of repositories you've added as remotes of your project.
Whether you maintain a canonical repository or want to help by verifying or approving patches, you need to know how to accept contributed work in a way that is clearest for other contributors and sustainable for you in the long run.

Working in topic branches

When you're thinking about integrating new work, it's generally a good idea to try it out in a topic branch: a temporary branch made specifically to try out that new work. That way, it's easy to tweak a patch individually and set it aside if it's not working, until you have time to come back to it. If you create a simple branch name based on the theme of the work you're trying out, such as ruby_client or something similar, you can easily remember it when you abandon it for a while and come back later. The maintainer of the Git project tends to namespace these branches as well, such as sc/ruby_client, where sc is short for the person who contributed the work. As you'll remember, you can create a branch based on your master branch like this:

$ git branch sc/ruby_client master

Or, if you also want to switch to it immediately, you can use the checkout -b option:

$ git checkout -b sc/ruby_client master

Now you're ready to add the contributed work to this topic branch and determine whether you want to merge it into your longer-term branches.

Applying patches from email

If you receive a patch over email that you need to integrate into your project, you should apply the patch in your topic branch to evaluate it. There are two ways to apply an emailed patch: with git apply or with git am.
Applying a patch with apply

If you received the patch from someone who generated it with git diff or some variation of the Unix diff command (which is not recommended; see the next section), you can apply it with the git apply command. Assuming you saved the patch as /tmp/patch-ruby-client.patch, you can apply it like this:

$ git apply /tmp/patch-ruby-client.patch

This modifies the files in your working directory. It's almost identical to running a patch -p1 command to apply the patch, although it's more paranoid and accepts fewer fuzzy matches than patch. It also handles file adds, deletes, and renames if they're described in the git diff format, which patch does not. Finally, git apply follows an "apply all or abort all" model, where either everything or nothing is applied. This is in contrast to patch, which can apply patches partially, leaving your working directory in a weird state. Overall, git apply is more conservative than patch. It won't create a commit for you; after running it, you must manually stage and commit the introduced changes.

You can also use git apply to see whether a patch applies cleanly before you actually apply it, by running git apply --check with the patch:

$ git apply --check 0001-seeing-if-this-helps-the-gem.patch
error: patch failed: ticgit.gemspec:1
error: ticgit.gemspec: patch does not apply

If there is no output, the patch should apply cleanly. This command also exits with a non-zero status if the check fails, so you can use it in scripts if you want.

Applying a patch with am

If the contributor is a Git user and was kind enough to use the format-patch command to generate the patch, your job is easier because the patch contains author information and a commit message for you.
If you can, encourage your contributors to use format-patch instead of diff to generate patches for you. You should only have to use git apply for legacy patches and the like. To apply a patch generated with format-patch, you use git am (the command is named am because it's used to "apply a series of patches from a mailbox"). Technically, git am is built to read an mbox file, which is a simple plain-text format for storing one or more email messages in a single text file. It looks something like this:

From 330090432754092d704da8e76ca5c05c198e71a8 Mon Sep 17 00:00:00 2001
From: Jessica Smith <jessica@example.com>
Date: Sun, 6 Apr 2008 10:17:23 -0700
Subject: [PATCH 1/2] add limit to log function

Limit log functionality to the first 20

This is the beginning of the output of the git format-patch command that you saw in the previous section; it's also a valid mbox email format. If someone emailed you the patch properly using git send-email and you downloaded it in mbox format, you can point git am at that mbox file and it will start applying all the patches it finds. If your mail client can save multiple emails in mbox format, you can save an entire patch series to a file and then use git am to apply them one at a time. However, if someone uploaded a patch file generated with git format-patch to a ticketing system or similar, you can save the file locally and pass that saved file to git am to apply it:

$ git am 0001-limit-log-function.patch
Applying: add limit to log function

You can see that it applied cleanly and automatically created a new commit for you.
The author information is taken from the email's From and Date headers, and the commit message is taken from the Subject and the body (before the patch) of the email itself. For example, if this patch had been applied from the mbox example above, the generated commit would look something like this:

$ git log --pretty=fuller -1
commit 6c5e70b984a60b3cecd395edd5b48a7575bf58e0
Author: Jessica Smith <jessica@example.com>
AuthorDate: Sun Apr 6 10:17:23 2008 -0700
Commit: Scott Chacon <schacon@gmail.com>
CommitDate: Thu Apr 9 09:19:06 2009 -0700

add limit to log function

Limit log functionality to the first 20

The Commit information shows the person who applied the patch and when it was applied. The Author information shows the person who originally created the patch and when it was created.

But it's possible the patch won't apply cleanly. Perhaps your main branch has diverged too far from the branch the patch was built from, or the patch depends on another patch you haven't applied yet. In that case, the git am process will fail and ask you what you want to do:

$ git am 0001-seeing-if-this-helps-the-gem.patch
Applying: seeing if this helps the gem
error: patch failed: ticgit.gemspec:1
error: ticgit.gemspec: patch does not apply
Patch failed at 0001.
When you have resolved this problem run "git am --resolved".
If you would prefer to skip this patch, instead run "git am --skip".
To restore the original branch and stop patching run "git am --abort".

This command puts conflict markers in any files it has trouble with, just like a conflicted merge or rebase operation.
You resolve the problem in much the same way: edit the file to resolve the conflict, stage the file, and then run git am --resolved to continue with the next patch:

$ (fix the file)
$ git add ticgit.gemspec
$ git am --resolved
Applying: seeing if this helps the gem

If you want Git to apply a bit more intelligence to resolving the conflict, you can pass it a -3 option, which makes Git attempt a three-way merge. This option isn't on by default because it doesn't work if the commit the patch says it's based on isn't in your repository. If you do have that commit (if the patch was based on a public commit), the -3 option is generally much smarter about applying a conflicting patch:

$ git am -3 0001-seeing-if-this-helps-the-gem.patch
Applying: seeing if this helps the gem
error: patch failed: ticgit.gemspec:1
error: ticgit.gemspec: patch does not apply
Using index info to reconstruct a base tree...
Falling back to patching base and 3-way merge...
No changes -- Patch already applied.

In this case, without the -3 option it would have been considered a conflict. Because the -3 option was used, the patch applied cleanly.

If you're applying a number of patches from an mbox, you can also run the am command in interactive mode, which stops at each patch it finds and asks whether you want to apply it:

$ git am -3 -i mbox
Commit Body is:
--------------------------
seeing if this helps the gem
--------------------------
Apply? [y]es/[n]o/[e]dit/[v]iew patch/[a]ccept all

This is nice when you have a number of patches saved up, because you can view a patch first if you can't remember what it is, or skip a patch you've already applied. When all the patches for your topic branch have been applied and committed to your branch, you can decide whether and how to integrate them into a longer-running branch.
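The format-patch/am round trip described above can be sketched end to end with two throwaway repositories. All names and paths here are illustrative, not taken from the chapter's example project:

```shell
work=$(mktemp -d)
cd "$work"

# Contributor's repository with a base commit.
git init -q src
git -C src -c user.name=Jessica -c user.email=jessica@example.com \
    commit -q --allow-empty -m "initial"

# Maintainer clones before the contributor's change exists.
git clone -q src dst

# Contributor commits a change and exports it with format-patch.
echo "limit log output to 20 entries" > src/log.txt
git -C src add log.txt
git -C src -c user.name=Jessica -c user.email=jessica@example.com \
    commit -q -m "add limit to log function"
git -C src format-patch -1 -o "$work" >/dev/null

# Maintainer applies the emailed patch; am creates the commit automatically,
# preserving the contributor's authorship and commit message.
git -C dst -c user.name=Scott -c user.email=scott@example.com \
    am "$work"/0001-*.patch
git -C dst log -1 --pretty=%s
```

The final log line shows the commit subject "add limit to log function" carried over from the patch file, exactly as the chapter's git log --pretty=fuller example illustrates with separate Author and Commit fields.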
Checking out remote branches

If your contribution came from a Git user who set up their own repository, pushed a number of commits to it, and sent you the URL of the repository and the name of the remote branch the changes are in, you can add it as a remote and do the merge locally. For instance, if Jessica sends you an email saying she has a great new feature in the ruby-client branch of her repository, you can test it by adding the remote and checking out that branch locally:

$ git remote add jessica git://github.com/jessica/myproject.git
$ git fetch jessica
$ git checkout -b rubyclient jessica/ruby-client

If she emails you again later with another branch containing another great feature, you can fetch and check it out right away because you already have the remote set up. This is most useful when you work with a person regularly. If someone has only a single patch to contribute once in a while, accepting it over email may be less time-consuming than requiring everyone to run their own server and then having to continually add and remove remotes for a few patches. You're also unlikely to want hundreds of remotes, each for someone who contributes only a patch or two. On the other hand, scripts and hosted services may make this easier; it depends largely on how you develop and how your contributors develop.

An added advantage of this approach is that you get the history of the commits as well. Although you may have legitimate merge issues, you know where in your history their work is based; a proper three-way merge is the default, rather than having to supply a -3 and hope the patch was generated from a public commit you have access to.
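The remote-based flow above can be reproduced locally, with a throwaway repository standing in for Jessica's server (all paths, branch names, and commit messages here are illustrative):

```shell
work=$(mktemp -d)
cd "$work"

# "Jessica's" repository, with her feature on a ruby-client branch.
git init -q jess
git -C jess -c user.name=Jessica -c user.email=jessica@example.com \
    commit -q --allow-empty -m "initial"
git -C jess checkout -q -b ruby-client
git -C jess -c user.name=Jessica -c user.email=jessica@example.com \
    commit -q --allow-empty -m "ruby client feature"

# Maintainer: add her repository as a remote, fetch, and check out
# her branch locally for review.
git init -q mine
cd mine
git remote add jessica "$work/jess"
git fetch -q jessica
git checkout -q -b rubyclient jessica/ruby-client
git log -1 --pretty=%s
```

The last command prints the subject of Jessica's tip commit, confirming the full commit history came along with the fetch, which is precisely the advantage over emailed patches noted above.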
If you only work with a person occasionally but still want to pull from them this way, you can give the URL of the remote repository directly to the git pull command. This does a one-time pull and doesn't save the URL as a remote reference:

$ git pull https://github.com/onetimeguy/project
From https://github.com/onetimeguy/project
 * branch HEAD -> FETCH_HEAD
Merge made by the 'recursive' strategy.

Determining what is introduced

Now you have a topic branch containing contributed work, and you can decide what you want to do with it. This section revisits a few commands to show how you can use them to review exactly what you would introduce by merging this into your main branch.

It's often helpful to get an overview of all the commits that are in this branch but not in your master branch. You can exclude commits that are already in the master branch by adding the --not option before the branch name. This does the same thing as the master..contrib format we used earlier. For example, if your contributor sends you two patches and you've created a branch called contrib and applied those patches there, you can run this:

$ git log contrib --not master
commit 5b6235bd297351589efc4d73316f0a68d484f118
Author: Scott Chacon <schacon@gmail.com>
Date: Fri Oct 24 09:53:59 2008 -0700

seeing if this helps the gem

commit 7482e0d16d04bea79d0dba8988cc78df655f16a0
Author: Scott Chacon <schacon@gmail.com>
Date: Mon Oct 22 19:38:36 2008 -0700

updated the gemspec to hopefully work better

To see what changes each commit introduces, remember that you can pass the -p option to git log, which appends the introduced diff to each commit. To see a full diff of what would happen if you merged this topic branch into another branch, you may have to use a strange trick to get the correct results.
You may think to run this:

$ git diff master

This command gives you a diff, but it may be misleading. If your master branch has moved forward since you created the topic branch from it, you'll get seemingly strange results. This happens because Git directly compares the snapshot of the last commit of the topic branch you're on with the snapshot of the last commit on the master branch. For example, if you've added a line in a file on the master branch, a direct comparison of the snapshots will look like the topic branch is going to remove that line. If master is a direct ancestor of your topic branch, this isn't a problem; but if the two histories have diverged, the diff will look like you're adding all the new stuff in your topic branch and removing everything unique to the master branch. What you really want to see are the changes added to the topic branch — the work you'll introduce if you merge this branch with master. You do that by having Git compare the last commit on your topic branch with the first common ancestor it has with the master branch. Technically, you can do that by explicitly figuring out the common ancestor and then running your diff on it:

$ git merge-base contrib master
36c7dba2c95e6bbb78dfa822519ecfec6e1ca649
$ git diff 36c7db

or, more concisely:

$ git diff $(git merge-base contrib master)

However, neither of those is particularly convenient, so Git provides another shorthand for doing the same thing: the triple-dot syntax. In the context of the diff command, you can put three periods after another branch to do a diff between the last commit of the branch you're on and its common ancestor with the other branch:

$ git diff master...contrib

This command shows you only the work your current topic branch has introduced since its common ancestor with master.
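The contrast between the plain two-endpoint diff and the triple-dot form is easy to reproduce in a throwaway repository (branch and file names here are invented for the demo):

```shell
# Demo: two-endpoint diff vs. triple-dot diff. Names are invented.
set -e
cd "$(mktemp -d)"
git init -q -b master .
git config user.email you@example.com
git config user.name You
echo one > file.txt
git add file.txt
git commit -qm "initial"

# Topic branch adds a new file
git checkout -qb contrib
echo contrib-work > contrib.txt
git add contrib.txt
git commit -qm "contrib work"

# Meanwhile, master moves forward
git checkout -q master
echo two >> file.txt
git commit -qam "master moves on"

# Endpoint-to-endpoint: looks as if contrib would remove master's new line
git diff master contrib -- file.txt

# Triple-dot: only what contrib introduced since the common ancestor
git diff --stat master...contrib
```

The first diff reports master's own "two" line as a removal, while the triple-dot diff mentions only contrib.txt.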
That is a very useful syntax to remember.

Integrating Contributed Work

When all the work in your topic branch is ready to be integrated into a more mainline branch, the question is how to do it. Furthermore, what overall workflow do you want to use to maintain your project? You have a number of choices, so we'll cover a few of them.

Merging Workflows

One basic workflow is to simply merge all that work directly into your master branch. In this scenario, you have a master branch that contains basically stable code. When you have work in a topic branch that you've done or that someone has contributed and you've verified, you merge it into your master branch, delete the topic branch, and repeat the process. For instance, if we have a repository with work in two branches named ruby_client and php_client that looks like History with several topic branches., and we merge ruby_client first and then php_client, your history will end up looking like After a topic branch merge..

Figure 73. History with several topic branches.
Figure 74. After a topic branch merge.

That is probably the simplest workflow, but it becomes problematic if you're dealing with larger or more stable projects where you want to be very careful about what you introduce. If you have a more important project, you might want to use a two-phase merge cycle. In this scenario, you have two long-running branches, master and develop, in which you determine that master is updated only when a very stable release is cut and all new code is integrated into the develop branch. You regularly push both of these branches to the public repository. Each time you have a new topic branch to merge in (Before a topic branch merge.), you merge it into develop (After a topic branch merge.); then, when you tag a release, you fast-forward master to wherever the now-stable develop branch is (After a project release.
).

Figure 75. Before a topic branch merge.
Figure 76. After a topic branch merge.
Figure 77. After a project release.

This way, when people clone your project's repository, they can either check out master to build the latest stable version and keep up to date easily on that, or they can check out develop, which contains the most cutting-edge content. You can also extend this concept by having an integrate branch where all the work is merged together. Then, when the codebase on that branch is stable and passes tests, you merge it into the develop branch; and when that has proven itself stable for a while, you fast-forward your master branch.

Large-Merging Workflows

The Git project has four long-running branches: master, next, and pu (proposed updates) for new work, and maint for maintenance backports. When new work is introduced by contributors, it's collected into topic branches in the maintainer's repository in a manner similar to what we've described (see Managing a complex series of parallel contributed topic branches.). Here, the topics are evaluated to determine whether they're safe and ready for further consumption or whether they still need more work. If they're safe, they're merged into next, and that branch is pushed up so everyone can try the integrated topics.

Figure 78. Managing a complex series of parallel contributed topic branches.

If the topics still need work, they're merged into pu instead. When it's determined that they're totally stable, the topics are re-merged into master. The next and pu branches are then rebuilt from master. This means master almost always moves forward, next is rebased occasionally, and pu is rebased even more often:

Figure 79. Merging contributed topic branches into long-running integration branches.
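The basic cycle described above — merge a topic into develop, delete the topic branch, and fast-forward master at release time — can be sketched in a scratch repository; all branch, file, and commit names below are invented:

```shell
# Sketch of a two-branch (master/develop) merge cycle. Names invented.
set -e
cd "$(mktemp -d)"
git init -q -b master .
git config user.email you@example.com
git config user.name You
echo base > app.txt
git add app.txt
git commit -qm "initial"
git branch develop

# New work happens on a topic branch cut from develop
git checkout -qb topic develop
echo feature >> app.txt
git commit -qam "add feature"

# After review, merge the topic into develop and delete it
git checkout -q develop
git merge -q --no-edit topic
git branch -d topic

# When a release is cut, fast-forward master to the now-stable develop
git checkout -q master
git merge -q --ff-only develop
git log --oneline -1
```

The --ff-only guard makes the release step fail loudly if master has somehow diverged from develop instead of silently creating a merge commit.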
When a topic branch has finally been merged into master, it's removed from the repository. The Git project also has a maint branch, forked off from the last release, to provide backported patches in case a maintenance release is required. Thus, when you clone the Git repository, you have four branches that you can check out to evaluate the project in different stages of development, depending on how cutting-edge you want to be or how you want to contribute; and the maintainer has a structured workflow to help them vet new contributions. The Git project's workflow is specialized. To understand it clearly, you could check out the Git Maintainer's guide.

Rebasing and Cherry-Picking Workflows

Other maintainers prefer to rebase or cherry-pick contributed work on top of their master branch, rather than merging it in, to keep a mostly linear history. When you have work in a topic branch and have decided that you want to integrate it, you move to that branch and run the rebase command to rebuild the changes on top of your current master (or develop, and so on) branch. If that works well, you can fast-forward your master branch, and you'll end up with a linear project history. The other way to move introduced work from one branch to another is to cherry-pick it. A cherry-pick in Git is like a rebase for a single commit. It takes the patch that was introduced in a commit and tries to reapply it on the branch you're currently on. This is useful if you have a number of commits on a topic branch and you want to integrate only one of them, or if you have only one commit on a topic branch and you'd prefer to cherry-pick it rather than run rebase. For example, suppose you have a project that looks like this:

Figure 80. Example history before a cherry-pick.
If you want to pull commit e43a6 into your master branch, you can run:

$ git cherry-pick e43a6
Finished one cherry-pick.
[master]: created a0a41a9: "More friendly message when locking the index fails."
 3 files changed, 17 insertions(+), 3 deletions(-)

This pulls in the same change introduced in e43a6, but you get a new SHA-1 value, because the date applied is different. Now your history looks like this:

Figure 81. History after cherry-picking a commit on a topic branch.

Now you can remove your topic branch and drop the commits you didn't want to pull in.

Rerere

If you're doing lots of merging and rebasing, or if you're maintaining a long-lived topic branch, Git has a feature called "rerere" that can help. Rerere stands for "reuse recorded resolution" — it's a way of shortcutting manual conflict resolution. When rerere is enabled, Git will keep a set of pre- and post-images from successful merges, and if it notices a conflict that looks exactly like one you've already fixed, it will just use the fix from last time, without bothering you with it. This feature comes in two parts: a configuration setting and a command. The configuration setting is rerere.enabled, and it's handy enough to put in your global config:

$ git config --global rerere.enabled true

From now on, whenever you do a merge that resolves conflicts, the resolution will be recorded in the cache in case you need it in the future. If you need to, you can interact with the rerere cache using the git rerere command. When it's invoked on its own, Git checks its database of resolutions, tries to find a match with any current merge conflicts, and resolves them (although this is done automatically if rerere.enabled is set to true).
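Here is a minimal, self-contained reproduction of rerere replaying a recorded resolution. It enables rerere per-repository rather than globally so it doesn't touch your real configuration; the branch names and file contents are invented:

```shell
# Demo: rerere records a conflict resolution and replays it. Names invented.
set -e
cd "$(mktemp -d)"
git init -q -b master .
git config user.email you@example.com
git config user.name You
git config rerere.enabled true   # per-repository; the text uses --global

echo base > hello.txt
git add hello.txt
git commit -qm "base"

# Two branches change the same line, guaranteeing a conflict
git checkout -qb other
echo theirs > hello.txt
git commit -qam "their change"
git checkout -q master
echo ours > hello.txt
git commit -qam "our change"

# First merge conflicts; resolve by hand and commit (rerere records it)
git merge other || true
echo merged > hello.txt
git add hello.txt
git commit -qm "merge with manual resolution"

# Undo the merge and redo it: rerere reuses the recorded resolution
git reset -q --hard HEAD~1
git merge other || true
cat hello.txt   # prints "merged" without any manual fixing
```

The second merge still stops and reports a conflict, but the working file already contains the recorded resolution, so all that's left is to stage and commit it.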
There are also subcommands to see what will be recorded, to erase specific resolutions from the cache, and to clear the entire cache. We'll cover rerere in more detail in Rerere.

Tagging Your Releases

When you've decided to cut a release, you'll probably want to create a tag so you can re-create that release at any point going forward. You can create a new tag as described in Git Basics. If you decide to sign the tag as the maintainer, the tagging may look something like this:

$ git tag -s v1.5 -m 'my signed 1.5 tag'
You need a passphrase to unlock the secret key for
user: "Scott Chacon <schacon@gmail.com>"
1024-bit DSA key, ID F721C45A, created 2009-02-09

If you do sign your tags, you may have the problem of distributing the public PGP key used to sign them. The maintainer of the Git project has solved this issue by including their public key as a blob in the repository and then adding a tag that points directly to that content.
To do this, you can figure out which key you want by running gpg --list-keys:

$ gpg --list-keys
/Users/schacon/.gnupg/pubring.gpg
---------------------------------
pub   1024D/F721C45A 2009-02-09 [expires: 2010-02-09]
uid                  Scott Chacon <schacon@gmail.com>
sub   2048g/45D02282 2009-02-09 [expires: 2010-02-09]

Then, you can import the key directly into the Git database by exporting it and piping it through git hash-object, which writes a new blob with those contents into Git and gives you back the SHA-1 of the blob:

$ gpg -a --export F721C45A | git hash-object -w --stdin
659ef797d181633c87ec71ac3f9ba29fe5775b92

Now that you have the contents of your key in Git, you can create a tag that points directly to it by specifying the new SHA-1 value that the hash-object command gave you:

$ git tag -a maintainer-pgp-pub 659ef797d181633c87ec71ac3f9ba29fe5775b92

If you run git push --tags, the maintainer-pgp-pub tag will be shared with everyone. If anyone wants to verify a tag, they can import your PGP key directly by pulling the blob out of the database and importing it into GPG:

$ git show maintainer-pgp-pub | gpg --import

They can use that key to verify all your signed tags. Also, if you include instructions in the tag message, running git show <tag> will give your end users more specific instructions about tag verification.

Generating a Build Number

Because Git doesn't have monotonically increasing numbers like v123 or the equivalent to go with each commit, if you want a human-readable name to go with a commit, you can run git describe on that commit.
In response, Git gives you a string with the name of the nearest tag before that commit, followed by the number of commits since that tag, followed finally by a partial SHA-1 value of the commit being described (prefixed with a "g" meaning Git):

$ git describe master
v1.6.2-rc1-20-g8c5b85c

This way, you can export a snapshot or "build" and name it something understandable to people. In fact, if you build Git from source code cloned from the Git repository, git --version gives you something that looks like this. If you're describing a commit that you have directly tagged, you simply get the tag name. The git describe command favors annotated tags (tags created with the -a or -s flag); if you also want to make use of lightweight (non-annotated) tags, add the --tags option to the command. You can also use this string as the target of a git checkout or git show command, with the caveat that it relies on the abbreviated SHA-1 value at the end, so it may not remain valid forever. For instance, the Linux kernel recently jumped from 8 to 10 characters to ensure SHA-1 uniqueness, which invalidated older git describe output names.

Preparing a Release

Now you want to release a build. One of the things you'll want to do is create an archive of the latest snapshot of your code for those poor souls who don't use Git. The command to do this is git archive:

$ git archive master --prefix='project/' | gzip > `git describe master`.tar.gz
$ ls *.tar.gz
v1.6.2-rc1-20-g8c5b85c.tar.gz

If someone opens that tarball, they get the latest snapshot of your project under a project directory.
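The describe-and-archive sequence can be tried end to end in a scratch repository (the tag and file names below are invented for the demo):

```shell
# Demo: git describe naming and using it for an archive. Names invented.
set -e
cd "$(mktemp -d)"
git init -q -b master .
git config user.email you@example.com
git config user.name You
echo a > f.txt
git add f.txt
git commit -qm "first"
git tag -a v1.0 -m "release v1.0"   # annotated tag, which describe prefers
echo b >> f.txt
git commit -qam "second"

# Nearest annotated tag, commits since it, then "g" plus an abbreviated SHA-1
git describe master

# Name an archive after the describe output
git archive master --prefix='project/' | gzip > "$(git describe master)".tar.gz
ls *.tar.gz
```

Here describe prints something of the form v1.0-1-g<sha>, since exactly one commit follows the tag.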
You can also create a zip archive in much the same way, but by passing the --format=zip option to git archive:

$ git archive master --prefix='project/' --format=zip > `git describe master`.zip

You now have a nice tarball and a zip archive of your project release that you can upload to your website or email to people.

The Shortlog

It's time to email the mailing list of people who want to know what's happening in your project. A nice way of quickly getting a sort of changelog of what has been added to your project since your last release or email is to use the git shortlog command. It summarizes all the commits in the range you give it; for example, the following gives you a summary of all the commits since your last release, if your last release was named v1.0.1:

$ git shortlog --no-merges master --not v1.0.1
Chris Wanstrath (6):
      Add support for annotated tags to Grit::Tag
      Add packed-refs annotated tag support.
      Add Grit::Commit#to_patch
      Update version and History.txt
      Remove stray `puts`
      Make ls_tree ignore nils

Tom Preston-Werner (4):
      fix dates in history
      dynamic version method
      Version bump to 1.0.2
      Regenerated gemspec for version 1.0.2

You get a clean summary of all the commits since v1.0.1, grouped by author, that you can email to your list.

About this site: Patches, suggestions, and comments are welcome. Git is a member of Software Freedom Conservancy.
https://doc.rust-lang.org/cargo/reference/source-replacement.html#registry-sources
Source Replacement

This document is about redirecting communication with registries or repositories of git-based dependencies to another data source, such as a server mirroring the original registry or an exact local copy. If you want to patch individual dependencies, see the overriding dependencies section of this documentation. If you want to control how Cargo makes network requests, see the [http] and [net] configuration.

A source is a provider that contains crates that may be included as dependencies for a package. Cargo supports the ability to replace one source with another to express strategies such as:

Vendoring — custom sources can be defined which represent crates on the local filesystem. These sources are subsets of the source that they're replacing and can be checked into packages if necessary.

Mirroring — sources can be replaced with an equivalent version which acts as a cache for crates.io itself.

Cargo has a core assumption about source replacement: the source code is exactly the same from both sources. Note that this also means that a replacement source is not allowed to have crates which are not present in the original source. As a consequence, source replacement is not appropriate for situations such as patching a dependency or a private registry. Cargo supports patching dependencies through the usage of the [patch] key, and private registry support is described in the Registries chapter.

When using source replacement, running commands that need to contact the registry directly [1] requires passing the --registry option. This helps avoid any ambiguity about which registry to contact, and will use the authentication token for the specified registry.
Configuration

Configuration of replacement sources is done through .cargo/config.toml, and the full set of available keys are:

# The `source` table is where all keys related to source-replacement
# are stored.
[source]

# Under the `source` table are a number of other tables whose keys are a
# name for the relevant source. For example this section defines a new
# source, called `my-vendor-source`, which comes from a directory
# located at `vendor` relative to the directory containing this
# `.cargo/config.toml` file
[source.my-vendor-source]
directory = "vendor"

# The crates.io default source for crates is available under the name
# "crates-io", and here we use the `replace-with` key to indicate that it's
# replaced with our source above.
#
# The `replace-with` key can also reference an alternative registry name
# defined in the `[registries]` table.
[source.crates-io]
replace-with = "my-vendor-source"

# Each source has its own table where the key is the name of the source
[source.the-source-name]

# Indicate that `the-source-name` will be replaced with `another-source`,
# defined elsewhere
replace-with = "another-source"

# Several kinds of sources can be specified (described in more detail below):
registry = "https://example.com/path/to/index"
local-registry = "path/to/registry"
directory = "path/to/vendor"

# Git sources can optionally specify a branch/tag/rev as well
git = "https://example.com/path/to/repo"
# branch = "master"
# tag = "v1.0.1"
# rev = "313f44e8"

Registry Sources

A "registry source" is one that works like crates.io itself. It's an index that conforms to the specification at https://doc.rust-lang.org/cargo/reference/registry-index.html with a configuration file indicating where to download crates from.
Registry sources can use either the git or sparse HTTP protocol:

# Git protocol
registry = "ssh://git@example.com/path/to/index.git"

# Sparse HTTP protocol
registry = "sparse+https://example.com/path/to/index"

# HTTPS git protocol
registry = "https://example.com/path/to/index"

Local Registry Sources

A "local registry source" is intended to be a subset of another registry source, but available on the local filesystem (aka vendoring). Local registries are downloaded ahead of time, typically sync'd with a Cargo.lock, and are made up of a set of *.crate files and an index like the normal registry has. The primary way to manage and create local registry sources is through the cargo-local-registry subcommand, which is available on crates.io and can be installed with cargo install cargo-local-registry. Local registries are contained within one directory and contain a number of *.crate files downloaded from crates.io as well as an index directory with the same format as the crates.io-index project (populated with just the entries for the crates that are present).

Directory Sources

A "directory source" is similar to a local registry source in that it contains a number of crates available on the local filesystem, suitable for vendoring dependencies. Directory sources are primarily managed by the cargo vendor subcommand. Directory sources are distinct from local registries, though, in that they contain the unpacked version of the *.crate files, making it more suitable in some situations to check everything into source control. A directory source is just a directory containing a number of other directories which contain the source code for crates (the unpacked version of *.crate files). Currently no restriction is placed on the name of each directory. Each crate in a directory source also has an associated metadata file indicating the checksum of each file in the crate to protect against accidental modifications.

Git Sources

Git sources represent repositories used by git-based dependencies.
They're used to specify which git-based dependencies should be replaced with alternative sources. Git sources are not related to the git registries, and can't be used to replace registry sources.

[1]: Examples of such commands are in Publishing Commands.
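As an illustrative sketch (every source name and URL below is invented), replacing a git source follows the same [source] table shape shown in the Configuration section: define the original git source, point it at a replacement with replace-with, and define the replacement source:

```toml
# .cargo/config.toml — hypothetical example; all names and URLs are invented.

# The git source a git-based dependency resolves to
[source.upstream-widgets]
git = "https://example.com/widgets/repo"
branch = "master"
replace-with = "widgets-mirror"

# An equivalent mirror of the same repository
[source.widgets-mirror]
git = "https://mirror.example.internal/widgets/repo"
branch = "master"
```

Per the core assumption above, the mirror must contain exactly the same source code as the original repository.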
https://git-scm.com/book/nl/v2/GitHub-Samenvatting
Git - Summary

The source of this book is hosted on GitHub. Patches, suggestions and comments are welcome.

6.6 GitHub - Summary

You are now a GitHub user. You know how to create an account, maintain an organization, and create and push to repositories; how to contribute to other people's projects; and how to accept contributions from others. In the next chapter, you'll learn more powerful tools and get tips for dealing with complex situations, which will make you a true master of Git.
https://doc.rust-lang.org/cargo/commands/cargo-publish.html#options
cargo-publish(1)

NAME

cargo-publish — Upload a package to the registry

SYNOPSIS

cargo publish [options]

DESCRIPTION

This command will create a distributable, compressed .crate file with the source code of the package in the current directory and upload it to a registry. The default registry is https://crates.io. This performs the following steps:

1. Perform a few checks, including:
   - Check the package.publish key in the manifest for restrictions on which registries you are allowed to publish to.
2. Create a .crate file by following the steps in cargo-package(1).
3. Upload the crate to the registry. The server will perform additional checks on the crate.
4. The client will poll, waiting for the package to appear in the index, and may time out. In that case, you will need to check for completion manually. This timeout does not affect the upload.

This command requires you to be authenticated using cargo-login(1) or environment variables of the registry.token and registries.<name>.token config fields. See the reference for more details about packaging and publishing.

OPTIONS

Publish Options

--dry-run
    Perform all checks without uploading.

--no-verify
    Don't verify the contents by building them.

--allow-dirty
    Allow working directories with uncommitted VCS changes to be packaged.

--index index
    The URL of the registry index to use.

--registry registry
    Name of the registry to publish to. Registry names are defined in Cargo config files. If not specified, and there is a package.publish field in Cargo.toml with a single registry, then it will publish to that registry. Otherwise it will use the default registry, which is defined by the registry.default config key, which defaults to crates-io.
Package Selection

By default, when no package selection options are given, the packages selected depend on the selected manifest file (based on the current working directory if --manifest-path is not given). If the manifest is the root of a workspace then the workspace's default members are selected, otherwise only the package defined by the manifest will be selected. The default members of a workspace can be set explicitly with the workspace.default-members key in the root manifest. If this is not set, a virtual workspace will include all workspace members (equivalent to passing --workspace), and a non-virtual workspace will include only the root crate itself.

-p spec…
--package spec…
    Publish only the specified packages. See cargo-pkgid(1) for the SPEC format. This flag may be specified multiple times and supports common Unix glob patterns like *, ? and []. However, to avoid your shell accidentally expanding glob patterns before Cargo handles them, you must use single quotes or double quotes around each pattern.

--workspace
    Publish all members in the workspace.

--all
    Deprecated alias for --workspace.

--exclude SPEC…
    Exclude the specified packages. Must be used in conjunction with the --workspace flag. This flag may be specified multiple times and supports common Unix glob patterns like *, ? and []. However, to avoid your shell accidentally expanding glob patterns before Cargo handles them, you must use single quotes or double quotes around each pattern.

Compilation Options

--target triple
    Publish for the specified target architecture. Flag may be specified multiple times. The default is the host architecture. The general format of the triple is <arch><sub>-<vendor>-<sys>-<abi>. Possible values:
    - Any supported target in rustc --print target-list.
    - "host-tuple", which will internally be substituted by the host's target.
      This can be particularly useful if you're cross-compiling some crates, and don't want to specify your host's machine as a target (for instance, an xtask in a shared project that may be worked on by many hosts).
    - A path to a custom target specification. See Custom Target Lookup Path for more information.

    This may also be specified with the build.target config value. Note that specifying this flag makes Cargo run in a different mode where the target artifacts are placed in a separate directory. See the build cache documentation for more details.

--target-dir directory
    Directory for all generated artifacts and intermediate files. May also be specified with the CARGO_TARGET_DIR environment variable, or the build.target-dir config value. Defaults to target in the root of the workspace.

Feature Selection

The feature flags allow you to control which features are enabled. When no feature options are given, the default feature is activated for every selected package. See the features documentation for more details.

-F features
--features features
    Space or comma separated list of features to activate. Features of workspace members may be enabled with package-name/feature-name syntax. This flag may be specified multiple times, which enables all specified features.

--all-features
    Activate all available features of all selected packages.

--no-default-features
    Do not activate the default feature of the selected packages.

Manifest Options

--manifest-path path
    Path to the Cargo.toml file. By default, Cargo searches for the Cargo.toml file in the current directory or any parent directory.

--locked
    Asserts that the exact same dependencies and versions are used as when the existing Cargo.lock file was originally generated. Cargo will exit with an error when either of the following scenarios arises:
    - The lock file is missing.
    - Cargo attempted to change the lock file due to a different dependency resolution.
    It may be used in environments where deterministic builds are desired, such as in CI pipelines.

--offline
    Prevents Cargo from accessing the network for any reason. Without this flag, Cargo will stop with an error if it needs to access the network and the network is not available. With this flag, Cargo will attempt to proceed without the network if possible. Beware that this may result in different dependency resolution than online mode. Cargo will restrict itself to crates that are downloaded locally, even if there might be a newer version as indicated in the local copy of the index. See the cargo-fetch(1) command to download dependencies before going offline. May also be specified with the net.offline config value.

--frozen
    Equivalent to specifying both --locked and --offline.

--lockfile-path PATH
    Changes the path of the lockfile from the default (<workspace_root>/Cargo.lock) to PATH. PATH must end with Cargo.lock (e.g. --lockfile-path /tmp/temporary-lockfile/Cargo.lock). Note that providing --lockfile-path will ignore an existing lockfile at the default path, and instead will either use the lockfile from PATH, or write a new lockfile into the provided PATH if it doesn't exist. This flag can be used to run most commands in read-only directories, writing the lockfile into the provided PATH. This option is only available on the nightly channel and requires the -Z unstable-options flag to enable (see #14421).

Miscellaneous Options

-j N
--jobs N
    Number of parallel jobs to run. May also be specified with the build.jobs config value. Defaults to the number of logical CPUs. If negative, it sets the maximum number of parallel jobs to the number of logical CPUs plus the provided value. If a string default is provided, it sets the value back to defaults. Should not be 0.

--keep-going
    Build as many crates in the dependency graph as possible, rather than aborting the build on the first one that fails to build.
For example, if the current package depends on dependencies fails and works, one of which fails to build, cargo publish -j1 may or may not build the one that succeeds (depending on which one of the two builds Cargo picked to run first), whereas cargo publish -j1 --keep-going would definitely run both builds, even if the one run first fails.

Display Options

-v
--verbose
Use verbose output. May be specified twice for "very verbose" output which includes extra output such as dependency warnings and build script output. May also be specified with the term.verbose config value.

-q
--quiet
Do not print cargo log messages. May also be specified with the term.quiet config value.

--color when
Control when colored output is used. Valid values:

auto (default): Automatically detect if color support is available on the terminal.
always: Always display colors.
never: Never display colors.

May also be specified with the term.color config value.

Common Options

+toolchain
If Cargo has been installed with rustup, and the first argument to cargo begins with +, it will be interpreted as a rustup toolchain name (such as +stable or +nightly). See the rustup documentation for more information about how toolchain overrides work.

--config KEY=VALUE or PATH
Overrides a Cargo configuration value. The argument should be in TOML syntax of KEY=VALUE, or provided as a path to an extra configuration file. This flag may be specified multiple times. See the command-line overrides section for more information.

-C PATH
Changes the current working directory before executing any specified operations. This affects things like where cargo looks by default for the project manifest (Cargo.toml), as well as the directories searched for discovering .cargo/config.toml, for example. This option must appear before the command name, for example cargo -C path/to/my-project build. This option is only available on the nightly channel and requires the -Z unstable-options flag to enable (see #10098).
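The display options likewise map onto the [term] table of .cargo/config.toml. A sketch with illustrative values:

```toml
# Hypothetical .cargo/config.toml fragment mirroring the display options.
[term]
verbose = false     # same effect as omitting -v
quiet = false       # same effect as omitting -q
color = "auto"      # same effect as --color auto
```

Any one of these can also be overridden for a single run via the --config flag's KEY=VALUE form, e.g. `cargo --config 'term.color="never"' build`.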
-h
--help
Prints help information.

-Z flag
Unstable (nightly-only) flags to Cargo. Run cargo -Z help for details.

ENVIRONMENT

See the reference for details on environment variables that Cargo reads.

EXIT STATUS

0: Cargo succeeded.
101: Cargo failed to complete.

EXAMPLES

Publish the current package:

cargo publish

SEE ALSO

cargo(1), cargo-package(1), cargo-login(1)
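Scripts that wrap cargo can branch on these exit codes. A portable sketch, where `run_cargo` is a hypothetical stand-in that simulates a failing `cargo publish` (exit status 101) rather than invoking the real binary:

```shell
# run_cargo simulates `cargo publish` exiting with 101 ("failed to complete").
run_cargo() { return 101; }

if run_cargo; then
  echo "publish succeeded"
else
  status=$?
  echo "cargo failed with status $status" >&2
fi
```

In a real CI script you would replace `run_cargo` with the actual `cargo publish` invocation; the `if`/`else` structure and `$?` capture stay the same.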
https://git-scm.com/book/id/v2/Memulai-Dasar-dasar-Git
Git - Git Basics

The source of this book is hosted on GitHub. Patches, suggestions and comments are welcome.

1.3 Getting Started - Git Basics

Git Basics

So, what is Git, in a nutshell? This is an important section to absorb, because if you understand what Git is and the fundamentals of how it works, then using Git effectively will probably be much easier for you. As you learn Git, try to clear your mind of the things you may know about other VCSs, such as Subversion and Perforce; doing so will help you avoid subtle confusion when using the tool.
Git stores and thinks about information very differently from these other systems, even though the user interface is fairly similar, and understanding those differences will help keep you from becoming confused while using it.

Snapshots, Not Differences

The major difference between Git and any other VCS (Subversion and friends included) is the way Git thinks about its data. Conceptually, most other systems store information as a list of file-based changes. These systems (CVS, Subversion, Perforce, Bazaar, and so on) think of the information they store as a set of files and the changes made to each file over time.

Figure 4. Storing data as changes to a base version of each file.

Git doesn't think of or store its data this way. Instead, Git thinks of its data more like a series of snapshots of a miniature filesystem. Every time you commit, or save the state of your project in Git, it basically takes a picture of what all your files look like at that moment and stores a reference to that snapshot. To be efficient, if files have not changed, Git doesn't store the file again, just a link to the previous identical file it has already stored. Git thinks about its data more like a stream of snapshots.

Figure 5. Storing data as snapshots of the project over time.

This is an important distinction between Git and nearly all other VCSs. It makes Git reconsider almost every aspect of version control that most other systems copied from the previous generation. This makes Git more like a mini filesystem with some incredibly powerful tools built on top of it, rather than simply a VCS.
We'll explore some of the benefits you gain by thinking of your data this way when we cover Git branching in [_percabangan_git].

Nearly Every Operation Is Local

Most operations in Git need only local files and resources to operate — generally no information is needed from another computer on your network. If you're used to a CVCS where most operations carry a network latency overhead, this aspect of Git will make you think that it has been blessed with unworldly powers, because having the entire history of the project right there on your local disk means most operations seem almost instantaneous.

For example, to browse the history of the project, Git doesn't need to go out to the server to get the history and display it for you — it simply reads it directly from your local database. This means you see the project history almost instantly. If you want to see the changes introduced between the current version of a file and the file a month ago, Git can look up the file a month ago and do a local difference calculation, instead of having to ask a remote server to do it or pull an older version of the file from the remote server to do it locally.

This also means that there is very little you can't do when you're offline or off VPN. If you get on an airplane or a train and want to do a little work, you can commit happily until you get to a network connection to upload. If you go home and can't get your VPN client working properly, you can still work. In many other systems, doing so is either impossible or painful.
In Perforce, for example, you can't do much when you aren't connected to the server; and in Subversion and CVS, you can edit files, but you can't commit changes to your database (because your database is offline). This may not seem like a huge deal, but you may be surprised what a big difference it can make.

Git Has Integrity

Everything in Git is checksummed before it is stored and is then referred to by that checksum. This means it's impossible to change the contents of any file or directory without Git knowing about it. This functionality is built into Git at the lowest levels and is integral to its philosophy. You can't lose information in transit or get file corruption without Git being able to detect it.

The mechanism that Git uses for this checksumming is called a SHA-1 hash. This is a 40-character string composed of hexadecimal characters (0-9 and a-f) and calculated based on the contents of a file or directory structure in Git. A SHA-1 hash looks something like this:

24b9da6552252987aa493b52f8696cd6d3b00373

You will see these hash values all over the place in Git because it uses them so often. In fact, Git stores everything in its database not by file name but by the hash value of its contents.

Git Generally Only Adds Data

When you do actions in Git, nearly all of them only add data to the Git database. It is hard to get the system to do anything that is not undoable or to make it erase data in any way. As with any VCS, you can lose or mess up changes you haven't committed yet; but after you commit a snapshot into Git, it is very difficult to lose, especially if you regularly push your database to another repository.
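A blob's hash can be reproduced outside of Git: Git names a blob by the SHA-1 of the string "blob", the content length, a NUL byte, and the content itself. A minimal sketch using only bash's printf and sha1sum, no Git installation needed (the sample string is arbitrary):

```shell
# Git names a blob by SHA-1("blob <size>\0<content>"), not by file name.
content='what is up, doc?'
printf 'blob %d\0%s' "${#content}" "$content" | sha1sum
# The 40-hex-digit prefix of the output is the object ID Git would use.
```

This should print the same 40-character hash that `git hash-object --stdin` would produce for the same content, which is why changing even one byte of a file is immediately visible to Git.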
This makes using Git a joy, because we know we can experiment without the danger of severely screwing things up. For a more in-depth look at how Git stores its data and how you can recover data that seems lost, see [_mengembalikan_ke_sebelumnya].

The Three States

Now, pay attention. This is the main thing to remember about Git if you want the rest of your learning process to go smoothly. Git has three main states that your files can reside in: committed, modified, and staged. Committed means that the data is safely stored in your local database. Modified means that you have changed the file but have not committed it to your database yet. Staged means that you have marked a modified file in its current version to go into your next commit snapshot.

This leads us to the three main sections of a Git project: the Git directory, the working directory, and the staging area.

Figure 6. Working directory, staging area, and Git directory.

The Git directory is where Git stores the metadata and object database for your project. This is the most important part of Git, and it is what is copied when you clone a repository from another computer. The working directory is a single checkout of one version of the project. These files are pulled out of the compressed database in the Git directory and placed on disk for you to use or modify. The staging area is a file, generally contained in your Git directory, that stores information about what will go into your next commit. It is sometimes referred to as the index, but it is also commonly called the staging area.

The basic Git workflow goes something like this: You modify files in your working directory. You stage the files, adding snapshots of them to your staging area.
You do a commit, which takes the files as they are in the staging area and stores that snapshot permanently to your Git directory.

If a particular version of a file is in the Git directory, it is considered committed. If it has been modified but has been added to the staging area, it is staged. And if it was changed since it was checked out but has not been staged, it is modified. In [_bab_dasar-dasar_git], you will learn more about these states and how you can either take advantage of them or skip the staged part entirely.
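The workflow above can be replayed in a throwaway repository. A minimal sketch, assuming git is installed (the file name, identity, and commit message are illustrative):

```shell
# Walk one file through the three states: modified -> staged -> committed.
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email "you@example.com"    # identity is required for the commit
git config user.name  "Your Name"

echo 'hello' > notes.txt         # modified: the change exists only in the working directory
git add notes.txt                # staged: a snapshot is recorded in the staging area (index)
git commit -q -m 'first commit'  # committed: the snapshot is stored in the .git directory

git status --short               # prints nothing: all three states now agree
```

Running `git status` before the `git add` would instead report notes.txt as untracked, and running it between `git add` and `git commit` would report it as a change to be committed, mirroring the three states described above.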
https://www.linkedin.com/legal/privacy-policy?session_redirect=%2Fservices%2Fproducts%2Fcategories%2Fenterprise-social-networking-software&trk=registration-frontend_join-form-privacy-policy
LinkedIn Privacy Policy

Privacy Policy

Effective November 3, 2025

Your Privacy Matters

LinkedIn's mission is to connect the world's professionals to allow them to be more productive and successful. Central to this mission is our commitment to be transparent about the data we collect about you, how it is used and with whom it is shared. This Privacy Policy applies when you use our Services (described below). We offer our users choices about the data we collect, use and share as described in this Privacy Policy, Cookie Policy, Settings and our Help Center.

Key Terms

Choices: Settings are available to Members of LinkedIn, and Visitors are provided separate controls. Learn More.

Table of Contents

1. Data We Collect
2. How We Use Your Data
3. How We Share Information
4. Your Choices and Obligations
5. Other Important Information

Introduction

We are a social network and online platform for professionals. People use our Services to find and be found for business opportunities, to connect with others and find information. Our Privacy Policy applies to any Member or Visitor to our Services. Our registered users ("Members") share their professional identities, engage with their network, exchange knowledge and professional insights, post and view relevant content, learn and develop skills, and find business and career opportunities. Content and data on some of our Services is viewable to non-Members ("Visitors"). We use the term "Designated Countries" to refer to countries in the European Union (EU), European Economic Area (EEA), and Switzerland.
Members and Visitors located in the Designated Countries or the UK can review additional information in our European Regional Privacy Notice.

Services

This Privacy Policy, including our Cookie Policy, applies to your use of our Services. This Privacy Policy applies to LinkedIn.com, LinkedIn-branded apps, and other LinkedIn-branded sites, apps, communications and services offered by LinkedIn ("Services"), including off-site Services, such as our ad services and the "Apply with LinkedIn" and "Share with LinkedIn" plugins, but excluding services that state that they are offered under a different privacy policy. For California residents, additional disclosures required by California law may be found in our California Privacy Disclosure.

Data Controllers and Contracting Parties

If you are in the "Designated Countries", LinkedIn Ireland Unlimited Company ("LinkedIn Ireland") will be the controller of your personal data provided to, or collected by or for, or processed in connection with our Services. If you are outside of the Designated Countries, LinkedIn Corporation will be the controller of (or business responsible for) your personal data provided to, or collected by or for, or processed in connection with our Services. As a Visitor or Member of our Services, the collection, use and sharing of your personal data is subject to this Privacy Policy and other documents referenced in this Privacy Policy, as well as updates.

Change

Changes to the Privacy Policy apply to your use of our Services after the "effective date." LinkedIn ("we" or "us") can modify this Privacy Policy, and if we make material changes to it, we will provide notice through our Services, or by other means, to provide you the opportunity to review the changes before they become effective. If you object to any changes, you may close your account.
You acknowledge that your continued use of our Services after we publish or send a notice about our changes to this Privacy Policy means that the collection, use and sharing of your personal data is subject to the updated Privacy Policy, as of its effective date.

1. Data We Collect

1.1 Data You Provide To Us

You provide data to create an account with us.

Registration

To create an account you need to provide data including your name, email address and/or mobile number, general location (e.g., city), and a password. If you register for a premium Service, you will need to provide payment (e.g., credit card) and billing information.

You create your LinkedIn profile (a complete profile helps you get the most from our Services).

Profile

You have choices about the information on your profile, such as your education, work experience, skills, photo, city or area, endorsements, and optional verifications of information on your profile (such as verifications of your identity or workplace). You don't have to provide additional information on your profile; however, profile information helps you to get more from our Services, including helping recruiters and business opportunities find you. It's your choice whether to include sensitive information on your profile and to make that sensitive information public. Please do not post or add personal data to your profile that you would not want to be publicly available.

You may give other data to us, such as by syncing your calendar.

Posting and Uploading

We collect personal data from you when you provide, post or upload it to our Services, such as when you fill out a form (e.g., with demographic data or salary), respond to a survey, or submit a resume or fill out a job application on our Services. If you sync your calendars with our Services, we will collect your calendar meeting information to keep growing your network by suggesting connections for you and others, and by providing information about events, e.g.
times, places, attendees and contacts. You don't have to post or upload personal data; though if you don't, it may limit your ability to grow and engage with your network over our Services.

1.2 Data From Others

Others may post or write about you.

Content and News

You and others may post content that includes information about you (as part of articles, posts, comments, videos) on our Services. We also may collect public information about you, such as professional-related news and accomplishments, and make it available as part of our Services, including, as permitted by your settings, in notifications to others of mentions in the news.

Others may sync their calendar with our Services.

Contact and Calendar Information

We receive personal data (including contact information) about you when others import or sync their calendar with our Services, associate their contacts with Member profiles, scan and upload business cards, or send messages using our Services (including invites or connection requests). If you or others opt-in to sync email accounts with our Services, we will also collect "email header" information that we can associate with Member profiles.

Customers and partners may provide data to us.

Partners

We receive personal data (e.g., your job title and work email address) about you when you use the services of our customers and partners, such as employers or prospective employers and applicant tracking systems providing us job application data.

Related Companies and Other Services

We receive data about you when you use some of the other services provided by us or our Affiliates, including Microsoft. For example, you may choose to send us information about your contacts in Microsoft apps and services, such as Outlook, for improved professional networking activities on our Services, or we may receive information from Microsoft about your engagement with their sites and services.

1.3 Service Use

We log your visits and use of our Services, including mobile apps.
We log usage data when you visit or otherwise use our Services, including our sites, app and platform technology, such as when you view or click on content (e.g., learning video) or ads (on or off our sites and apps), perform a search, install or update one of our mobile apps, share articles or apply for jobs. We use log-ins, cookies, device information and internet protocol ("IP") addresses to identify you and log your use.

1.4 Cookies and Similar Technologies

We collect data through cookies and similar technologies. As further described in our Cookie Policy, we use cookies and similar technologies (e.g., pixels and ad tags) to collect data (e.g., device IDs) to recognize you and your device(s) on, off and across different services and devices where you have engaged with our Services. We also allow some others to use cookies as described in our Cookie Policy. If you are outside the Designated Countries, we also collect (or rely on others, including Microsoft, who collect) information about your device where you have not engaged with our Services (e.g., ad ID, IP address, operating system and browser information) so we can provide our Members with relevant ads and better understand their effectiveness. Learn more.

You can opt out from our use of data from cookies and similar technologies that track your behavior on the sites of others for ad targeting and other ad-related purposes. For Visitors, the controls are here.

1.5 Your Device and Location

We receive data through cookies and similar technologies. When you visit or leave our Services (including some plugins and our cookies or similar technology on the sites of others), we receive the URL of both the site you came from and the one you go to, and the time of your visit. We also get information about your network and device (e.g., IP address, proxy server, operating system, web browser and add-ons, device identifier and features, cookie IDs and/or ISP, or your mobile carrier).
If you use our Services from a mobile device, that device will send us data about your location based on your phone settings. We will ask you to opt-in before we use GPS or other tools to identify your precise location.

1.6 Communications

If you communicate through our Services, we learn about that. We collect information about you when you communicate with others through our Services (e.g., when you send, receive, or engage with messages, events, or connection requests, including our marketing communications). This may include information that indicates who you are communicating with and when. We also use automated systems to support and protect our site. For example, we use such systems to suggest possible responses to messages and to manage or block content that violates our User Agreement or Professional Community Policies.

1.7 Workplace and School Provided Information

When your organization (e.g., employer or school) buys a premium Service for you to use, they give us data about you. Others buying our Services for your use, such as your employer or your school, provide us with personal data about you and your eligibility to use the Services that they purchase for use by their workers, students or alumni. For example, we will get contact information for "LinkedIn Page" (formerly Company Page) administrators and for authorizing users of our premium Services, such as our recruiting, sales or learning products.

1.8 Sites and Services of Others

We get data when you visit sites that include our ads, cookies or plugins, or when you log-in to others' services with your LinkedIn account. We receive information about your visits and interaction with services provided by others when you log-in with LinkedIn or visit others' services that include some of our plugins (such as "Apply with LinkedIn") or our ads, cookies or similar technologies.

1.9 Other

We are improving our Services, which means we get new data and create new ways to use data.
Our Services are dynamic, and we often introduce new features, which may require the collection of new information. If we collect materially different personal data or materially change how we collect, use or share your data, we will notify you and may also modify this Privacy Policy.

Key Terms

Affiliates: companies controlling, controlled by or under common control with us, including, for example, LinkedIn Ireland, LinkedIn Corporation, LinkedIn Singapore and Microsoft Corporation or any of its subsidiaries (e.g., GitHub, Inc.).

2. How We Use Your Data

We use your data to provide, support, personalize and develop our Services. How we use your personal data will depend on which Services you use, how you use those Services and the choices you make in your settings. We may use your personal data to improve, develop, and provide products and Services, develop and train artificial intelligence (AI) models, develop, provide, and personalize our Services, and gain insights with the help of AI, automated systems, and inferences, so that our Services can be more relevant and useful to you and others. You can review LinkedIn's Responsible AI principles here and learn more about our approach to generative AI here. Learn more about the inferences we may make, including as to your age and gender, and how we use them.

2.1 Services

Our Services help you connect with others, find and be found for work and business opportunities, stay informed, get training and be more productive. We use your data to authorize access to our Services and honor your settings.

Stay Connected

Our Services allow you to stay in touch and up to date with colleagues, partners, clients, and other professional contacts. To do so, you can "connect" with the professionals who you choose, and who also wish to "connect" with you.
Subject to your and their settings, when you connect with other Members, you will be able to search each others' connections in order to exchange professional opportunities. We use data about you (such as your profile, profiles you have viewed or data provided through address book uploads or partner integrations) to help others find your profile, suggest connections for you and others (e.g., Members who share your contacts or job experiences) and enable you to invite others to become a Member and connect with you. You can also opt-in to allow us to use your precise location or proximity to others for certain tasks (e.g., to suggest other nearby Members for you to connect with, calculate the commute to a new job, or notify your connections that you are at a professional event).

It is your choice whether to invite someone to our Services, send a connection request, or allow another Member to become your connection. When you invite someone to connect with you, your invitation will include your network and basic profile information (e.g., name, profile photo, job title, region). We will send invitation reminders to the person you invited. You can choose whether or not to share your own list of connections with your connections. Visitors have choices about how we use their data.

Stay Informed

Our Services allow you to stay informed about news, events and ideas regarding professional topics you care about, and from professionals you respect. Our Services also allow you to improve your professional skills, or learn new ones. We use the data we have about you (e.g., data you provide, data we collect from your engagement with our Services and inferences we make from the data we have about you) to personalize our Services for you, such as by recommending or ranking relevant content and conversations on our Services. We also use the data we have about you to suggest skills you could add to your profile and skills that you might need to pursue your next opportunity.
So, if you let us know that you are interested in a new skill (e.g., by watching a learning video), we will use this information to personalize content in your feed, suggest that you follow certain Members on our site, or suggest related learning content to help you towards that new skill. We use your content, activity and other data, including your name and photo, to provide notices to your network and others. For example, subject to your settings, we may notify others that you have updated your profile, posted content, took a social action, used a feature, made new connections or were mentioned in the news.

Career

Our Services allow you to explore careers, evaluate educational opportunities, and seek out, and be found for, career opportunities. Your profile can be found by those looking to hire (for a job or a specific task) or be hired by you. We will use your data to recommend jobs and show you and others relevant professional contacts (e.g., who work at a company, in an industry, function or location, or have certain skills and connections). You can signal that you are interested in changing jobs and share information with recruiters. We will use your data to recommend jobs to you and you to recruiters. We may use automated systems to provide content and recommendations to help make our Services more relevant to our Members, Visitors and customers. Keeping your profile accurate and up-to-date may help you better connect to others and to opportunities through our Services.

Productivity

Our Services allow you to collaborate with colleagues, and to search for potential clients, customers, partners and others to do business with. Our Services allow you to communicate with other Members and schedule and prepare meetings with them. If your settings allow, we scan messages to provide "bots" or similar tools that facilitate tasks such as scheduling meetings, drafting responses, summarizing messages or recommending next steps. Learn more.
2.2 Premium Services

Our premium Services help paying users to search for and contact Members through our Services, such as searching for and contacting job candidates, sales leads and co-workers, manage talent and promote content.

We sell premium Services that provide our customers and subscribers with customized-search functionality and tools (including messaging and activity alerts) as part of our talent, marketing and sales solutions. Customers can export limited information from your profile, such as name, headline, current company, current title, and general location (e.g., Dublin), such as to manage sales leads or talent, unless you opt out. We do not provide contact information to customers as part of these premium Services without your consent. Premium Services customers can store information they have about you in our premium Services, such as a resume or contact information or sales history. The data stored about you by these customers is subject to the policies of those customers. Other enterprise Services and features that use your data include TeamLink and LinkedIn Pages (e.g., content analytics and followers).

2.3 Communications

We contact you and enable communications between Members. We offer settings to control what messages you receive and how often you receive some types of messages.

We will contact you through email, mobile phone, notices posted on our websites or apps, messages to your LinkedIn inbox, and other ways through our Services, including text messages and push notifications. We will send you messages about the availability of our Services, security, or other service-related issues. We also send messages about how to use our Services, network updates, reminders, job suggestions and promotional messages from us and our partners. You may change your communication preferences at any time. Please be aware that you cannot opt out of receiving service messages from us, including security and legal notices.
We also enable communications between you and others through our Services, including for example invitations, InMail, groups and messages between connections.

2.4 Advertising

We serve you tailored ads both on and off our Services. We offer you choices regarding personalized ads, but you cannot opt out of seeing non-personalized ads.

We target (and measure the performance of) ads to Members, Visitors and others both on and off our Services directly or through a variety of partners, using the following data, whether separately or combined:

- Data collected by advertising technologies on and off our Services, using pixels, ad tags (e.g., when an advertiser installs a LinkedIn tag on their website), cookies, and other device identifiers;
- Member-provided information (e.g., profile, contact information, title and industry);
- Data from your use of our Services (e.g., search history, feed, content you read, who you follow or who is following you, connections, groups participation, page visits, videos you watch, clicking on an ad, etc.), including as described in Section 1.3;
- Information from advertising partners, vendors and publishers; and
- Information inferred from data described above (e.g., using job titles from a profile to infer industry, seniority, and compensation bracket; using graduation dates to infer age; using first names or pronoun usage to infer gender; using your feed activity to infer your interests; or using device data to recognize you as a Member).

Learn more about the inferences we make and how they may be used for advertising. Learn more about the ad technologies we use and our advertising services and partners. You can learn more about our compliance with laws in the Designated Countries or the UK in our European Regional Privacy Notice.

We will show you ads called sponsored content which look similar to non-sponsored content, except that they are labeled as advertising (e.g., as "ad" or "sponsored").
If you take a social action (such as like, comment or share) on these ads, your action is associated with your name and viewable by others, including the advertiser. Subject to your settings, if you take a social action on the LinkedIn Services, that action may be mentioned with related ads. For example, when you like a company we may include your name and photo when their sponsored content is shown.

Ad Choices

You have choices regarding our uses of certain categories of data to show you more relevant ads. Member settings can be found here. For Visitors, the setting is here.

Info to Ad Providers

We do not share your personal data with any non-Affiliated third-party advertisers or ad networks except for: (i) hashed IDs or device identifiers (to the extent they are personal data in some countries); (ii) with your separate permission (e.g., in a lead generation form); or (iii) data already visible to any users of the Services (e.g., profile). However, if you view or click on an ad on or off our Services, the ad provider will get a signal that someone visited the page that displayed the ad, and they may, through the use of mechanisms such as cookies, determine it is you. Advertising partners can associate personal data collected by the advertiser directly from you with hashed IDs or device identifiers received from us. We seek to contractually require such advertising partners to obtain your explicit, opt-in consent before doing so where legally required, and in such instances we take steps to ensure that consent has been provided before processing data from them.

2.5 Marketing

We promote our Services to you and others. In addition to advertising our Services, we use Members' data and content for invitations and communications promoting membership and network growth, engagement and our Services, such as by showing your connections that you have used a feature on our Services.
2.6 Developing Services and Research

We develop our Services and conduct research.

Service Development

We use data, including public feedback, to conduct research and development for our Services in order to provide you and others with a better, more intuitive and personalized experience, drive membership growth and engagement on our Services, and help connect professionals to each other and to economic opportunity.

Other Research

We seek to create economic opportunity for Members of the global workforce and to help them be more productive and successful. We use the personal data available to us to research social, economic and workplace trends, such as the availability of jobs and the skills needed for these jobs, and policies that help bridge the gap in various industries and geographic areas. In some cases, we work with trusted third parties to perform this research, under controls that are designed to protect your privacy. We may also make public data available to researchers to enable assessment of the safety and legal compliance of our Services. We publish or allow others to publish economic insights, presented as aggregated data rather than personal data.

Surveys

Polls and surveys are conducted by us and others through our Services. You are not obligated to respond to polls or surveys, and you have choices about the information you provide. You may opt out of survey invitations.

2.7 Customer Support

We use data to help you and fix problems. We use data (which can include your communications) to investigate, respond to and resolve complaints and Service issues (e.g., bugs).

2.8 Insights That Do Not Identify You

We use data to generate insights that do not identify you. We use your data to perform analytics to produce and share insights that do not identify you.
For example, we may use your data to generate statistics about our Members, their profession or industry, to calculate ad impressions served or clicked on (e.g., for basic business reporting to support billing and budget management or, subject to your settings, for reports to advertisers who may use them to inform their advertising campaigns), to show Members information about engagement with a post or LinkedIn Page, to publish visitor demographics for a Service or create demographic workforce insights, or to understand usage of our Services.

2.9 Security and Investigations

We use data for security, fraud prevention and investigations. We and our Affiliates, including Microsoft, may use your data (including your communications) for security purposes or to prevent or investigate possible fraud or other violations of the law, our User Agreement and/or attempts to harm our Members, Visitors, company, Affiliates, or others.

Key Terms

Social Action: e.g. like, comment, follow, share
Partners: partners include ad networks, exchanges and others

3. How We Share Information

3.1 Our Services

Any data that you include on your profile, and any content you post or social action (e.g., likes, follows, comments, shares) you take on our Services, will be seen by others, consistent with your settings.

Profile

Your profile is fully visible to all Members and customers of our Services. Subject to your settings, it can also be visible to others on or off of our Services (e.g., Visitors to our Services or users of third-party search tools). As detailed in our Help Center, your settings, degree of connection with the viewing Member, the subscriptions they may have, their usage of our Services, access channels and search types (e.g., by name or by keyword) impact the availability of your profile and whether they can view certain fields in your profile.
Posts, Likes, Follows, Comments, Messages

Our Services allow viewing and sharing information, including through posts, likes, follows and comments.

When you share an article or a post (e.g., an update, image, video or article) publicly, it can be viewed by everyone and re-shared anywhere (subject to your settings). Members, Visitors and others will be able to find and see your publicly-shared content, including your name (and photo if you have provided one).

In a group, posts are visible to others according to group type. For example, posts in private groups are visible to others in the group, and posts in public groups are visible publicly. Your membership in groups is public and part of your profile, but you can change visibility in your settings.

Any information you share through companies' or other organizations' pages on our Services will be viewable by those organizations and others who view those pages' content. When you follow a person or organization, you are visible to others and that "page owner" as a follower.

We let senders know when you act on their message, subject to your settings where applicable. Subject to your settings, we let a Member know when you view their profile. We also give you choices about letting organizations know when you've viewed their Page.

When you like, re-share or comment on another's content (including ads), others will be able to view these "social actions" and associate them with you (e.g., your name, profile and photo if you provided it).

Your employer can see how you use Services they provided for your work (e.g. as a recruiter or sales agent) and related information. We will not show them your job searches or personal messages.

Enterprise Accounts

Your employer may offer you access to our enterprise Services such as Recruiter, Sales Navigator, LinkedIn Learning or our advertising Campaign Manager. Your employer can review and manage your use of such enterprise Services.
Depending on the enterprise Service, before you use it we will ask for permission to share with your employer relevant data from your profile or your use of our non-enterprise Services. For example, users of Sales Navigator will be asked to share their "social selling index", a score calculated in part based on their personal account activity. We understand that certain activities such as job hunting and personal messages are sensitive, and so we do not share those with your employer unless you choose to share them with your employer through our Services (for example, by applying for a new position in the same company or mentioning your job hunting in a message to a co-worker through our Services). Subject to your settings, when you use workplace tools and services (e.g., interactive employee directory tools), certain of your data may also be made available to your employer or be connected with information we receive from your employer to enable these tools and services.

3.2 Communication Archival

Regulated Members may need to store communications outside of our Service. Some Members (or their employers) need, for legal or professional compliance, to archive their communications and social media activity, and will use services of others to provide these archival services. We enable archiving of messages by and to those Members outside of our Services. For example, a financial advisor needs to archive communications with her clients through our Services in order to maintain her professional financial advisor license.

3.3 Others' Services

You may link your account with others' services so that they can look up your contacts' profiles, post your shares on such platforms, or enable you to start conversations with your connections on such platforms. Excerpts from your profile will also appear on the services of others. Subject to your settings, other services may look up your profile.
When you opt to link your account with other services, personal data (e.g., your name, title, and company) will become available to them. The sharing and use of that personal data will be described in, or linked to, a consent screen when you opt to link the accounts. For example, you may link your Twitter or WeChat account to share content from our Services into these other services, or your email provider may give you the option to upload your LinkedIn contacts into its own service. Third-party services have their own privacy policies, and you may be giving them permission to use your data in ways we would not. You may revoke the link with such accounts.

The information you make available to others in our Services (e.g., information from your profile, your posts, your engagement with posts, or messages to Pages) may be available to them on other services. For example, search tools, mail and calendar applications, or talent and lead managers may show a user limited profile data (subject to your settings), and social media management tools or other platforms may display your posts. The information retained on these services may not reflect updates you make on LinkedIn.

3.4 Related Services

We share your data across our different Services and LinkedIn-affiliated entities. We will share your personal data with our Affiliates to provide and develop our Services. For example, we may refer a query to Bing in some instances, such as where you'd benefit from a more up-to-date response in a chat experience. Subject to our European Regional Privacy Notice, we may also share with our Affiliates, including Microsoft, your (1) publicly-shared content (such as your public LinkedIn posts) to provide or develop their services and (2) personal data to improve, provide or develop their advertising services.
Where allowed, we may combine information internally across the different Services covered by this Privacy Policy to help our Services be more relevant and useful to you and others. For example, we may personalize your feed or job recommendations based on your learning history.

3.5 Service Providers

We may use others to help us with our Services. We use others to help us provide our Services (e.g., maintenance, analysis, audit, payments, fraud detection, customer support, marketing and development). They will have access to your information (e.g., the contents of a customer support request) as reasonably necessary to perform these tasks on our behalf and are obligated not to disclose or use it for other purposes. If you purchase a Service from us, we may use a payments service provider who may separately collect information about you (e.g., for fraud prevention or to comply with legal obligations).

3.6 Legal Disclosures

We may need to share your data when we believe it's required by law or to help protect the rights and safety of you, us or others. It is possible that we will need to disclose information about you when required by law, subpoena, or other legal process, or if we have a good faith belief that disclosure is reasonably necessary to (1) investigate, prevent or take action regarding suspected or actual illegal activities or to assist government enforcement agencies; (2) enforce our agreements with you; (3) investigate and defend ourselves against any third-party claims or allegations; (4) protect the security or integrity of our Services or the products or services of our Affiliates (such as by sharing with companies facing similar threats); or (5) exercise or protect the rights and safety of LinkedIn, our Members, personnel or others. We attempt to notify Members about legal demands for their personal data when appropriate in our judgment, unless prohibited by law or court order or when the request is an emergency.
We may dispute such demands when we believe, in our discretion, that the requests are overbroad, vague or lack proper authority, but we do not promise to challenge every demand. To learn more, see our Data Request Guidelines and Transparency Report.

3.7 Change in Control or Sale

We may share your data when our business is sold to others, but it must continue to be used in accordance with this Privacy Policy. We can also share your personal data as part of a sale, merger or change in control, or in preparation for any of these events. Any other entity which buys us or part of our business will have the right to continue to use your data, but only in the manner set out in this Privacy Policy unless you agree otherwise.

4. Your Choices & Obligations

4.1 Data Retention

We keep most of your personal data for as long as your account is open. We generally retain your personal data as long as you keep your account open or as needed to provide you Services. This includes data you or others provided to us and data generated or inferred from your use of our Services. Even if you only use our Services when looking for a new job every few years, we will retain your information and keep your profile open, unless you close your account. In some cases we choose to retain certain information (e.g., insights about Services use) in a depersonalized or aggregated form.

4.2 Rights to Access and Control Your Personal Data

You can access or delete your personal data. You have many choices about how your data is collected, used and shared. We provide many choices about the collection, use and sharing of your data, from deleting or correcting data you include in your profile and controlling the visibility of your posts to advertising opt-outs and communication controls. We offer you settings to control and manage the personal data we have about you.
For personal data that we have about you, you can:

- Delete Data: You can ask us to erase or delete all or some of your personal data (e.g., if it is no longer necessary to provide Services to you).
- Change or Correct Data: You can edit some of your personal data through your account. You can also ask us to change, update or fix your data in certain cases, particularly if it's inaccurate.
- Object to, or Limit or Restrict, Use of Data: You can ask us to stop using all or some of your personal data (e.g., if we have no legal right to keep using it) or to limit our use of it (e.g., if your personal data is inaccurate or unlawfully held).
- Right to Access and/or Take Your Data: You can ask us for a copy of your personal data, and you can ask for a copy of personal data you provided in machine-readable form.

Visitors can learn more about how to make these requests here. You may also contact us using the contact information below, and we will consider your request in accordance with applicable laws. Residents in the Designated Countries and the UK, and other regions, may have additional rights under their laws.

4.3 Account Closure

We keep some of your data even after you close your account. If you choose to close your LinkedIn account, your personal data will generally stop being visible to others on our Services within 24 hours. We generally delete closed account information within 30 days of account closure, except as noted below. We retain your personal data even after you have closed your account if reasonably necessary to comply with our legal obligations (including law enforcement requests), meet regulatory requirements, resolve disputes, maintain security, prevent fraud and abuse (e.g., if we have restricted your account for breach of our Professional Community Policies), enforce our User Agreement, or fulfill your request to "unsubscribe" from further messages from us. We will retain de-personalized information after your account has been closed.
Information you have shared with others (e.g., through InMail, updates or group posts) will remain visible after you close your account or delete the information from your own profile or mailbox, and we do not control data that other Members have copied out of our Services. Groups content and ratings or review content associated with closed accounts will show an unknown user as the source. Your profile may continue to be displayed in the services of others (e.g., search tools) until they refresh their cache.

5. Other Important Information

5.1 Security

We monitor for and try to prevent security breaches. Please use the security features available through our Services. We implement security safeguards designed to protect your data, such as HTTPS. We regularly monitor our systems for possible vulnerabilities and attacks. However, we cannot warrant the security of any information that you send us. There is no guarantee that data may not be accessed, disclosed, altered, or destroyed by breach of any of our physical, technical, or managerial safeguards.

5.2 Cross-Border Data Transfers

We store and use your data outside your country. We process data both inside and outside of the United States and rely on legally-provided mechanisms to lawfully transfer data across borders. Learn more. Countries where we process data may have laws which are different from, and potentially not as protective as, the laws of your own country.

5.3 Lawful Bases for Processing

We have lawful bases to collect, use and share data about you. You have choices about our use of your data. At any time, you can withdraw consent you have provided by going to settings.

We will only collect and process personal data about you where we have lawful bases. Lawful bases include consent (where you have given consent), contract (where processing is necessary for the performance of a contract with you, e.g., to deliver the LinkedIn Services you have requested) and "legitimate interests." Learn more.
Where we rely on your consent to process personal data, you have the right to withdraw or decline your consent at any time, and where we rely on legitimate interests, you have the right to object. Learn more. If you have any questions about the lawful bases upon which we collect and use your personal data, please contact our Data Protection Officer here. If you're located in one of the Designated Countries or the UK, you can learn more about our lawful bases for processing in our European Regional Privacy Notice.

5.4 Direct Marketing and Do Not Track Signals

Our statements regarding direct marketing and "do not track" signals. We currently do not share personal data with third parties for their direct marketing purposes without your permission. Learn more about this and about our response to "do not track" signals.

5.5 Contact Information

You can contact us or use other options to resolve any complaints. If you have questions or complaints regarding this Policy, please first contact LinkedIn online. You can also reach us by physical mail. If contacting us does not resolve your complaint, you have more options. Residents in the Designated Countries and other regions may also have the right to contact our Data Protection Officer here. If this does not resolve your complaint, Residents in the Designated Countries and other regions may have more options under their laws.

Key Terms

Consent: Where we process data based on consent, we will ask for your explicit consent. You may withdraw your consent at any time, but that will not affect the lawfulness of the processing of your personal data prior to such withdrawal. Where we rely on contract, we will ask that you agree to the processing of personal data that is necessary for entering into or performance of your contract with us. We will rely on legitimate interests as a basis for data processing where the processing of your data is not overridden by your interests or fundamental rights and freedoms.
Git - Contributing to a Project

6.2 GitHub - Contributing to a Project

Now that your account is set up, let's walk through the details that can help you contribute to an existing project.

Forking Projects

If you want to contribute to an existing project to which you don't have push access, you can "fork" the project. What that means is that GitHub will make a copy of the project that is entirely yours; it lives in your user's namespace, and you can push to it.
Note: Historically, the term "fork" has had somewhat negative connotations in context, meaning that someone took an open source project in a different direction, sometimes creating a competing project and splitting the contributors. In GitHub, a "fork" is simply the same project in your own namespace, allowing you to make changes to a project publicly as a way to contribute in a more open manner. This way, projects don't have to worry about adding users as collaborators to give them push access. People can fork a project, push to it, and contribute their changes back to the original project by creating what's called a Pull Request, which we'll cover next. This opens up a discussion thread with code review, and the owner and the contributor can then communicate about the change until the owner is happy with it, at which point the owner can merge it in.

To fork a project, visit the project page and click the "Fork" button at the top-right of the page.

Figure 89. The "Fork" button.

After a few seconds, you'll be taken to your new project page, with your own writeable copy of the code.

The GitHub Flow

GitHub is designed around a particular collaboration workflow centered on Pull Requests. This flow works whether you're collaborating with a tightly-knit team in a single shared repository, or a globally-distributed company or network of strangers contributing to a project through many forks. It is centered on the topic branches workflow covered in Branching in Git.

Here's how it generally works:

1. Fork the project.
2. Create a topic branch from master.
3. Make some commits to improve the project.
4. Push this branch to your GitHub project.
5. Open a Pull Request on GitHub.
6. Discuss, and optionally continue committing.
7. The project owner merges or closes the Pull Request.
8. Sync the updated master back to your fork.

This is basically the Integration Manager workflow covered in Integration-Manager Workflow, but instead of using email to communicate and review changes, teams use GitHub's web-based tools. Let's walk through an example of proposing a change to an open source project hosted on GitHub using this flow.

Creating a Pull Request

Tony is looking for code to run on his Arduino programmable microcontroller and has found a great project on GitHub at https://github.com/schacon/blink.

Figure 90. The project we want to contribute to.

The only problem is that the blinking is too fast. We think it's much nicer to wait 3 seconds instead of 1 between each state change. So let's improve the program and submit it back to the project as a proposed change.

First, we click the 'Fork' button as mentioned earlier to get our own copy of the project. Our username here is "tonychacon", so our copy of this project is at https://github.com/tonychacon/blink and that's where we can edit it. We will clone it locally, create a topic branch, make the code change and finally push that change back up to GitHub.

$ git clone https://github.com/tonychacon/blink (1)
Cloning into 'blink'...
$ cd blink
$ git checkout -b slow-blink (2)
Switched to a new branch 'slow-blink'
$ sed -i '' 's/1000/3000/' blink.ino (macOS) (3)
# If you're on a Linux system, do this instead:
# $ sed -i 's/1000/3000/' blink.ino (3)
$ git diff --word-diff (4)
diff --git a/blink.ino b/blink.ino
index 15b9911..a6cc5a5 100644
--- a/blink.ino
+++ b/blink.ino
@@ -18,7 +18,7 @@ void setup() {
// the loop routine runs over and over again forever:
void loop() {
  digitalWrite(led, HIGH);   // turn the LED on (HIGH is the voltage level)
  [-delay(1000);-]{+delay(3000);+}    // wait for a second
  digitalWrite(led, LOW);    // turn the LED off by making the voltage LOW
  [-delay(1000);-]{+delay(3000);+}    // wait for a second
}
$ git commit -a -m 'three seconds is better' (5)
[slow-blink 5ca509d] three seconds is better
 1 file changed, 2 insertions(+), 2 deletions(-)
$ git push origin slow-blink (6)
Username for 'https://github.com': tonychacon
Password for 'https://tonychacon@github.com':
Counting objects: 5, done.
Delta compression using up to 8 threads.
Compressing objects: 100% (3/3), done.
Writing objects: 100% (3/3), 340 bytes | 0 bytes/s, done.
Total 3 (delta 1), reused 0 (delta 0)
To https://github.com/tonychacon/blink
 * [new branch]      slow-blink -> slow-blink

(1) Clone our fork of the project locally
(2) Create a descriptive topic branch
(3) Make our change to the code
(4) Check that the change is good
(5) Commit the change to the topic branch
(6) Push our new topic branch back up to our GitHub fork

If we now go back to our fork on GitHub, we can see that GitHub noticed that we pushed a new topic branch and presents us with a big green button to check out our changes and open a Pull Request to the original project.

You can alternatively go to the "Branches" page at https://github.com/<user>/<project>/branches to locate your branch and open a new Pull Request from there.

Figure 91.
The "Pull Request" button

If we click that green button, we'll see a screen that asks us to give our proposed change a title and description. It is almost always worthwhile to put some effort into writing the best description you can, so that the owner of the original project knows why the change is being suggested, that your change is correct, and why it would be a valuable change to accept.

We also see a list of the commits in our topic branch that are "ahead" of the master branch (in this case, just the one) and a unified diff of all the changes that will be made should this branch get merged by the project owner.

Figure 92. Pull Request creation page

When you hit the "Create pull request" button on this screen, the owner of the project you forked will get a notification that someone is suggesting a change, with a link to a page that has all of this information on it.

Note
Though Pull Requests are commonly used for public projects like this when the contributor has a complete change ready to be made, they are also often used in internal projects at the beginning of the development cycle. Since you can keep pushing to the topic branch even after the Pull Request is opened, it's often opened early and used as a way to iterate on work as a team within a context, rather than opened at the very end of the process.

Iterating on a Pull Request

At this point, the project owner can look at the suggested change and merge it, reject it, or comment on it. Let's say that he likes the idea, but would prefer the light to be off for slightly longer than it is on.

Where this conversation might take place over email in the workflows presented in Distributed Git, on GitHub it happens online.
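The iteration loop just described (commit again, push to the same branch, and the open Pull Request updates itself) can be sketched end to end. This is a minimal sketch under stated assumptions: a local bare repository stands in for the GitHub fork, and the repository and branch names ("blink", "slow-blink") simply follow the running example; in real use you would push to your fork's URL on GitHub.

```shell
set -e
# Assumption: a local bare repository stands in for the GitHub fork;
# names follow the running example.
rm -rf /tmp/pr-demo && mkdir -p /tmp/pr-demo && cd /tmp/pr-demo
git init -q --bare fork.git                    # stand-in for our GitHub fork
git clone -q fork.git blink 2>/dev/null
cd blink
git config user.name 'Tony Chacon'
git config user.email tony@example.com
git symbolic-ref HEAD refs/heads/slow-blink    # start work on the topic branch
echo 'delay(3000);' > blink.ino
git add blink.ino
git commit -qm 'three seconds is better'
git push -q origin slow-blink                  # the Pull Request is opened from here
# Review feedback arrives; fix, commit, and push to the SAME branch:
echo '// keep the LED off longer, as requested' >> blink.ino
git commit -qam 'address review feedback'
git push -q origin slow-blink                  # the open Pull Request picks this up
git rev-list --count origin/slow-blink         # prints 2: both commits are on the fork's branch
```

Because both pushes target the same slow-blink branch, GitHub would show both commits in the one open Pull Request; no new request is needed.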
The project owner can view the unified diff and leave a comment by clicking on one or more of the lines.

Figure 93. Commenting on a specific line of code in a Pull Request

Once the maintainer makes this comment, the person who opened the Pull Request (and indeed, anyone else watching the repository) will get a notification. We'll go over customizing this later, but if he had email notifications turned on, Tony would get an email like this:

Figure 94. Comments sent as email notifications

Anyone can also leave general comments on the Pull Request. In Pull Request discussion page we can see an example of the project owner both commenting on a line of code and then leaving a general comment in the discussion section. You can see that the code comments are brought into the conversation as well.

Figure 95. Pull Request discussion page

Now the contributor can see what they need to do in order to get their change accepted. Luckily this is very straightforward. Where over email you may have to re-roll your series and resubmit it to the mailing list, with GitHub you simply commit to the topic branch again and push. In Final Pull Request you can also see that the old code comment has been collapsed in the updated Pull Request, since it was made on a line that has since been changed.

Adding commits to an existing Pull Request doesn't trigger a notification, so once Tony has pushed his corrections he decides to leave a comment to inform the project owner that he made the requested change.

Figure 96.
Final Pull Request

One interesting thing to notice is that if you click on the "Files Changed" tab on this Pull Request, you'll get the "unified" diff, that is, the total aggregate difference that would be introduced to your main branch if this topic branch were merged in. In git diff terms, it basically automatically shows you git diff master...<branch> for the branch this Pull Request is based on. See Determining What Is Introduced for more about this type of diff.

The other thing you'll notice is that GitHub checks whether the Pull Request merges cleanly and provides a button to do the merge for you on the server. This button only shows up if you have write access to the repository and a trivial merge is possible. If you click it, GitHub will perform a "non-fast-forward" merge, meaning that even if the merge could be a fast-forward, it will still create a merge commit. If you would prefer, you can simply pull the branch down and merge it locally.

If you merge this branch into the master branch and push it to GitHub, the Pull Request will automatically be closed.

This is the basic workflow that most GitHub projects use. Topic branches are created, Pull Requests are opened on them, a discussion ensues, possibly more work is done on the branch, and eventually the request is either closed or merged.

Note
Not Only Forks
It's important to note that you can also open a Pull Request between two branches in the same repository. If you're working on a feature with someone and you both have write access to the project, you can push a topic branch to the repository and open a Pull Request on it to the master branch of that same project to initiate the code review and discussion process. No forking necessary.
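The three-dot diff behind the "Files Changed" tab is easy to reproduce locally. The sketch below uses a throwaway repository with made-up file and branch names; it shows that git diff master...topic reports only what the topic branch introduced since it diverged, ignoring commits that landed on master in the meantime.

```shell
set -e
# Throwaway repository; all file and branch names are made up.
rm -rf /tmp/tripledot-demo && mkdir -p /tmp/tripledot-demo && cd /tmp/tripledot-demo
git init -q
git config user.name Demo
git config user.email demo@example.com
git symbolic-ref HEAD refs/heads/master        # make sure the branch is 'master'
echo base > file.txt
git add file.txt && git commit -qm 'base'
git checkout -qb topic                         # topic branch diverges here
echo topic-change >> file.txt
git commit -qam 'topic work'
git checkout -q master                         # meanwhile, master moves on
echo master-change > other.txt
git add other.txt && git commit -qm 'master moved on'
# Two dots: everything different between the two tips (includes other.txt).
git diff --stat master..topic
# Three dots: only what 'topic' introduced since the merge base; this is
# what the Pull Request's "Files Changed" tab shows.
git diff --stat master...topic                 # mentions file.txt only
```

The two-dot form compares the two branch tips directly, so the unrelated other.txt change on master shows up in it; the three-dot form diffs from the merge base to the topic tip, which is exactly the aggregate change a merge of the Pull Request would introduce.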
Advanced Pull Requests

Now that we've covered the basics of contributing to a project on GitHub, let's look at a few interesting tips and tricks about Pull Requests so that you can be more effective in using them.

Pull Requests as Patches

It's important to understand that many projects don't really think of Pull Requests as queues of perfect patches that should apply cleanly in order, the way most mailing-list-based projects think of patch series contributions. Most GitHub projects think of Pull Request branches as iterative conversations around a proposed change, culminating in a unified diff that is applied by merging.

This is an important distinction, because generally the change is suggested before the code is thought to be perfect, which is far more rare with mailing-list-based patch series contributions. This enables an earlier conversation with the maintainers, so that arriving at the proper solution is more of a community effort. When code is proposed with a Pull Request and the maintainers or community suggest a change, the patch series is generally not re-rolled; instead, the difference is pushed as a new commit to the branch, moving the conversation forward with the context of the previous work intact.

For instance, if you go back and look again at Final Pull Request, you'll notice that the contributor did not rebase his commit and send another Pull Request. Instead, new commits were added and pushed to the existing branch. This way, if you go back and look at this Pull Request in the future, you can easily find all of the context of why decisions were made. Pushing the "Merge" button on the site purposefully creates a merge commit that references the Pull Request, so that it's easy to go back and research the original conversation if necessary.
Keeping up with Upstream

If your Pull Request becomes out of date or otherwise doesn't merge cleanly, you will want to fix it so the maintainer can easily merge it. GitHub will test this for you and let you know at the bottom of every Pull Request whether the merge is trivial or not.

Figure 97. Pull Request does not merge cleanly

If you see something like Pull Request does not merge cleanly, you'll want to fix your branch so that it turns green and the maintainer doesn't have to do extra work.

You have two main options in order to do that. You can either rebase your branch on top of whatever the target branch is (normally the master branch of the repository you forked), or you can merge the target branch into your branch.

Most developers on GitHub will choose the latter, for the same reasons we just went over in the previous section. What matters is the history and the final merge, so rebasing isn't getting you much other than a slightly cleaner history, and in return it is far more difficult and error-prone.

If you want to merge in the target branch to make your Pull Request mergeable, you would add the original repository as a new remote, fetch from it, merge the main branch of that repository into your topic branch, fix any issues, and finally push your topic branch back up to the same branch the Pull Request is opened on.

For example, let's say that in the "tonychacon" example we were using before, the original author made a change that creates a conflict in the Pull Request. Let's go through those steps.

$ git remote add upstream https://github.com/schacon/blink (1)
$ git fetch upstream (2)
remote: Counting objects: 3, done.
remote: Compressing objects: 100% (3/3), done.
Unpacking objects: 100% (3/3), done.
remote: Total 3 (delta 0), reused 0 (delta 0)
From https://github.com/schacon/blink
 * [new branch]      master     -> upstream/master
$ git merge upstream/master (3)
Auto-merging blink.ino
CONFLICT (content): Merge conflict in blink.ino
Automatic merge failed; fix conflicts and then commit the result.
$ vim blink.ino (4)
$ git add blink.ino
$ git commit
[slow-blink 3c8d735] Merge remote-tracking branch 'upstream/master' \
    into slower-blink
$ git push origin slow-blink (5)
Counting objects: 6, done.
Delta compression using up to 8 threads.
Compressing objects: 100% (6/6), done.
Writing objects: 100% (6/6), 682 bytes | 0 bytes/s, done.
Total 6 (delta 2), reused 0 (delta 0)
To https://github.com/tonychacon/blink
   ef4725c..3c8d735  slower-blink -> slow-blink

(1) Add the original repository as a remote named "upstream"
(2) Fetch the newest work from that remote
(3) Merge the main branch of that repository into your topic branch
(4) Fix the conflict that occurred
(5) Push back up to the same topic branch

Once you do that, the Pull Request will automatically be updated and re-checked to see whether it merges cleanly.

Figure 98. Pull Request now merges cleanly

One of the great things about Git is that you can do this continuously. If you have a very long-running project, you can easily merge from the target branch over and over again and only have to deal with the conflicts that have arisen since the last time you merged, which makes the process very manageable.

If you absolutely must rebase the branch to clean it up, you can certainly do so, but you are strongly encouraged not to force push over the branch that the Pull Request is already opened on. If other people have pulled it down and done more work on it, you run into all of the issues outlined in The Perils of Rebasing. Instead, push the rebased branch to a new branch on GitHub and open a brand new Pull Request referencing the old one, then close the original.
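The "push the rebased branch to a new branch" advice can be sketched as follows. This is a hedged, self-contained sketch: a local bare repository stands in for GitHub, and the branch names (slow-blink, slow-blink-rebased) are invented to match the running example. The point to notice is that the original branch is never force-pushed, so anyone who pulled it is unaffected.

```shell
set -e
# Assumption: a local bare repository stands in for GitHub; branch names
# are invented to match the running example.
rm -rf /tmp/rebase-demo && mkdir -p /tmp/rebase-demo && cd /tmp/rebase-demo
git init -q --bare fork.git
git clone -q fork.git work 2>/dev/null
cd work
git config user.name Demo
git config user.email demo@example.com
git symbolic-ref HEAD refs/heads/master
echo base > file.txt && git add file.txt && git commit -qm 'base'
git checkout -qb slow-blink
echo topic > topic.txt && git add topic.txt && git commit -qm 'topic work'
git push -q origin master slow-blink           # the old Pull Request uses slow-blink
git checkout -q master
echo more > master.txt && git add master.txt && git commit -qm 'upstream moved on'
# Rebase on a NEW branch instead of force-pushing the old one:
git checkout -qb slow-blink-rebased slow-blink
git rebase -q master
git push -q origin slow-blink-rebased          # open a fresh Pull Request from here,
                                               # referencing and then closing the old one
```

On GitHub you would now open a new Pull Request from slow-blink-rebased, mention the old request in its description, and close the old one by hand.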
References

Your next question might be, "How do I reference the old Pull Request?" It turns out there are many, many ways to reference other things almost anywhere you can write in GitHub.

Let's start with how to cross-reference another Pull Request or Issue. All Pull Requests and Issues are assigned numbers, and these are unique within the project. For example, you can't have Pull Request #3 and Issue #3. If you want to reference any Pull Request or Issue from any other one, you can simply put #<num> in any comment or description. You can be more specific if the Issue or Pull Request lives elsewhere: write username#<num> if you're referring to an Issue or Pull Request in a fork of the repository you're in, or username/repo#<num> to reference something in another repository.

Let's look at an example. Say we rebased the branch in the previous example, created a new pull request for it, and now we want to reference the old pull request from the new one. We also want to reference an issue in a fork of the repository and an issue in a completely different project. We fill out the description as in Cross-references in a Pull Request.

Figure 99. Cross-references in a Pull Request.

When we submit this pull request, we'll see all of that rendered as in Rendered cross-references in a Pull Request.

Figure 100. Rendered cross-references in a Pull Request.

Notice that the full GitHub URL we put in there was shortened to just the necessary information.

Now if Tony goes back and closes the original Pull Request, we can see that, because we mentioned it in the new one, GitHub has automatically created a trackback event in the Pull Request timeline. This means that anyone who visits this Pull Request and sees that it is closed can easily link back to the one that superseded it. The link will look something like Cross-reference rendered in a Pull Request.

Figure 101.
Cross-reference rendered in a Pull Request.

In addition to issue numbers, you can also reference a specific commit by its SHA-1. You have to specify the full 40-character SHA-1, but if GitHub sees that in a comment, it will link directly to the commit. Again, you can reference commits in forks or other repositories in the same way you did with issues.

GitHub Flavored Markdown

Linking to other Issues is just the beginning of the interesting things you can do with almost any text box on GitHub. In Issue and Pull Request descriptions, comments, code comments and more, you can use what is called "GitHub Flavored Markdown". Markdown is like writing in plain text, but rendered with richer formatting.

See An example of Markdown as written and as rendered for an example of how comments or text can be written and then rendered using Markdown.

Figure 102. An example of Markdown as written and as rendered.

The GitHub flavor of Markdown adds more things you can do beyond the basic Markdown syntax. These can all be really useful when creating useful Pull Request or Issue comments or descriptions.

Task Lists

The first really useful GitHub-specific Markdown feature, especially for use in Pull Requests, is the task list. A task list is a list of checkboxes of things you want to get done. Putting them into an Issue or Pull Request normally indicates things that you want to get done before you consider the item complete.

You can create a task list like this:

- [X] Write the code
- [ ] Write all the tests
- [ ] Document the code

If we include this in the description of our Pull Request or Issue, we'll see it rendered as in Task lists rendered in a Markdown comment.

Figure 103. Task lists rendered in a Markdown comment.
This is often used in Pull Requests to indicate everything you would like to be accomplished on the branch before the Pull Request will be ready to merge. The really cool part is that you can simply click the checkboxes to update the comment; you don't have to edit the Markdown directly to check tasks off.

What's more, GitHub will look for task lists in your Issues and Pull Requests and show them as metadata on the pages that list them. For example, if you have a Pull Request with tasks and you look at the overview page of all Pull Requests, you can see how far done it is. This helps people break Pull Requests down into subtasks and helps other people track the progress of the branch. You can see an example of this in Task list summary in the Pull Request list.

Figure 104. Task list summary in the Pull Request list.

These are incredibly useful when you open a Pull Request early and use it to track your progress through the implementation of the feature.

Code Snippets

You can also add code snippets to comments. This is especially useful if you want to present something that you could try to do before actually implementing it as a commit on your branch. It is also often used to add an example of what is not working or of what this Pull Request could implement.

To add a snippet of code, you have to delimit it with backticks:

```java
for(int i=0 ; i < 5 ; i++)
{
   System.out.println("i is : " + i);
}
```

If you add a language name, like we did here with java, GitHub will also try to syntax-highlight the snippet. The above example would render as in Rendered fenced code example.

Figure 105. Rendered fenced code example.
Quoting

If you're responding to a small part of a long comment, you can selectively quote out of the other comment by preceding the lines with the > character. In fact, this is so common and so useful that there is a keyboard shortcut for it. If you highlight the text in a comment that you want to reply to directly and hit the r key, that text will be quoted in the comment box for you.

The quotes look something like this:

> Whether 'tis Nobler in the mind to suffer
> The Slings and Arrows of outrageous Fortune,

How big are these slings and in particular, these arrows?

Once rendered, the comment will look like Rendered quoting example.

Figure 106. Rendered quoting example.

Emoji

Finally, you can also use emoji in your comments. This is actually used quite extensively in the comments you see on many GitHub Issues and Pull Requests. There is even an emoji helper in GitHub. If you are typing a comment and you start with a : character, an autocompleter will help you find what you're looking for.

Figure 107. Emoji autocompleter in action.

Emojis take the form of :<name>: anywhere in the comment. For instance, you could write something like this:

I :eyes: that :bug: and I :cold_sweat:.
:trophy: for :microscope: it.
:+1: and :sparkles: on this :ship:, it's :fire::poop:!
:clap::tada::panda_face:

When rendered, it would look something like Heavy emoji commenting.

Figure 108. Heavy emoji commenting.

Not that this adds a lot of substance, but it does add an element of fun and emotion to a medium that is otherwise hard to convey emotion in.

Note
There are quite a number of web services that make use of emoji these days. A great cheat sheet for finding emoji that express what you want to say can be found at: http://www.emoji-cheat-sheet.com

Images

This isn't technically GitHub Flavored Markdown, but it is incredibly useful.
In addition to adding Markdown image links to comments, which can be difficult to find and embed URLs for, GitHub allows you to drag and drop images into text areas to embed them.

Figure 109. Drag and drop images to upload them and auto-embed them.

If you look back at Cross-references in a Pull Request, you can see a small "Parsed as Markdown" hint above the text area; clicking on that will show you a full cheat sheet of everything you can do with Markdown on GitHub.

Keep your GitHub public repository up-to-date

Once you've forked a GitHub repository, your repository (your "fork") exists independently from the original. In particular, when the original repository has new commits, GitHub informs you with a message like:

This branch is 5 commits behind progit:master.

But your GitHub repository will never be automatically updated by GitHub; this is something you must do yourself. Fortunately, this is very easy to do.

One possibility requires no configuration. For example, if you forked from https://github.com/progit/progit2.git, you can keep your master branch up-to-date like this:

$ git checkout master (1)
$ git pull https://github.com/progit/progit2.git (2)
$ git push origin master (3)

(1) If you're on another branch, return to master.
(2) Fetch changes from https://github.com/progit/progit2.git and merge them into master.
(3) Push your master branch to origin.

This works, but it is a little tedious having to spell out the fetch URL every time. You can automate this work with a bit of configuration:

$ git remote add progit https://github.com/progit/progit2.git (1)
$ git branch --set-upstream-to=progit/master master (2)
$ git config --local remote.pushDefault origin (3)

(1) Add the source repository and give it a name.
Here, we've chosen to call it progit.
(2) Set your master branch to fetch from the progit remote.
(3) Make origin the default push repository.

Once this is done, the workflow becomes much simpler:

$ git checkout master (1)
$ git pull (2)
$ git push (3)

(1) If you're on another branch, return to master.
(2) Fetch changes from progit and merge them into master.
(3) Push your master branch to origin.

This approach can be useful, but it is not without downsides. Git will happily do all this work for you silently, but it won't warn you if you make a commit to master, pull from progit, and then push to origin; all of those operations are valid with this setup. So you have to take care never to commit directly to master, since that branch effectively belongs to the upstream repository.
2026-01-13T09:29:19
https://www.linkedin.com/products/akamai-technologies-kona-ddos-defender/?trk=products_seo_search
Kona DDoS Defender | LinkedIn Skip to main content LinkedIn Akamai Technologies in Asan Expand search This button displays the currently selected search type. When expanded it provides a list of search options that will switch the search inputs to match the current selection. Jobs People Learning Clear text Clear text Clear text Clear text Clear text Join now Sign in Kona DDoS Defender DDoS Protection Software by Akamai Technologies See who's skilled in this Add as skill Learn more Report this product About Always-on, managed DDoS protection service to stop DDoS for critical websites and web applications. HTTP and HTTPS. Similar products Cloudflare DDoS Protection Cloudflare DDoS Protection DDoS Protection Software Cloudflare Spectrum Cloudflare Spectrum DDoS Protection Software Akamai Prolexic Routed Akamai Prolexic Routed DDoS Protection Software OVHcloud Anti-DDoS Protection OVHcloud Anti-DDoS Protection DDoS Protection Software Kaspersky DDoS Protection Kaspersky DDoS Protection DDoS Protection Software Arbor Cloud DDoS Protection Arbor Cloud DDoS Protection DDoS Protection Software Sign in to see more Show more Show less Akamai Technologies products Akamai Edge DNS Akamai Edge DNS Managed DNS Services Software Akamai Identity Cloud Akamai Identity Cloud Identity & Access Management (IAM) Software Akamai IoT Edge Connect Akamai IoT Edge Connect Internet of Things (IoT) Software Akamai Prolexic Routed Akamai Prolexic Routed DDoS Protection Software Aura Managed CDN Aura Managed CDN Content Delivery Network (CDN) Software BOCC BOCC Live Streaming Software DNSi AuthServe DNSi AuthServe Managed DNS Services Software DNSi Big Data Connector DNSi Big Data Connector Managed DNS Services Software Enterprise Threat Protector Enterprise Threat Protector Secure Web Gateways Media Services Live Media Services Live Live Streaming Software Show more Show less LinkedIn © 2026 About Accessibility User Agreement Privacy Policy Cookie Policy Copyright Policy Brand Policy Guest 
Controls Community Guidelines English (English) Language
2026-01-13T09:29:19
https://git-scm.com/book/bg/v2/%d0%9d%d0%b0%d1%87%d0%b0%d0%bb%d0%be-%d0%9a%d0%be%d0%bd%d0%b7%d0%be%d0%bb%d0%b0%d1%82%d0%b0-%d0%bd%d0%b0-Git
Git - Конзолата на Git About Trademark Learn Book Cheat Sheet Videos External Links Tools Command Line GUIs Hosting Reference Install Community This book is available in English . Full translation available in azərbaycan dili , български език , Deutsch , Español , فارسی , Français , Ελληνικά , 日本語 , 한국어 , Nederlands , Русский , Slovenščina , Tagalog , Українська , 简体中文 , Partial translations available in Čeština , Македонски , Polski , Српски , Ўзбекча , 繁體中文 , Translations started for Беларуская , Indonesian , Italiano , Bahasa Melayu , Português (Brasil) , Português (Portugal) , Svenska , Türkçe . The source of this book is hosted on GitHub. Patches, suggestions and comments are welcome. Chapters ▾ 1. Начало 1.1 За Version Control системите 1.2 Кратка история на Git 1.3 Какво е Git 1.4 Конзолата на Git 1.5 Инсталиране на Git 1.6 Първоначална настройка на Git 1.7 Помощна информация в Git 1.8 Обобщение 2. Основи на Git 2.1 Създаване на Git хранилище 2.2 Запис на промени в хранилището 2.3 Преглед на историята на действията 2.4 Възстановяване на направени действия 2.5 Работа с отдалечени хранилища 2.6 Тагове в Git 2.7 Псевдоними в Git 2.8 Обобщение 3. Клонове в Git 3.1 Накратко за разклоненията 3.2 Основи на клоновете код и сливането 3.3 Управление на клонове 3.4 Стратегии за работа с клонове код 3.5 Отдалечени клонове 3.6 Управление на проект 3.7 Обобщение 4. GitHub 4.1 Създаване и настройка на акаунт 4.2 Как да сътрудничим в проект 4.3 Управление на проект 4.4 Управление на организация 4.5 Автоматизиране с GitHub 4.6 Обобщение 5. Git инструменти 5.1 Избор на къмити 5.2 Интерактивно индексиране 5.3 Stashing и Cleaning 5.4 Подписване на вашата работа 5.5 Търсене 5.6 Манипулация на историята 5.7 Мистерията на командата Reset 5.8 Сливане за напреднали 5.9 Rerere 5.10 Дебъгване с Git 5.11 Подмодули 5.12 Пакети в Git (Bundling) 5.13 Заместване 5.14 Credential Storage система 5.15 Обобщение 6. 
Настройване на Git 6.1 Git конфигурации 6.2 Git атрибути 6.3 Git Hooks 6.4 Примерна Git-Enforced политика 6.5 Обобщение 7. Git и други системи 7.1 Git като клиент 7.2 Миграция към Git 7.3 Обобщение 8. Git на ниско ниво 8.1 Plumbing и Porcelain команди 8.2 Git обекти 8.3 Git референции 8.4 Packfiles 8.5 Refspec спецификации 8.6 Транспортни протоколи 8.7 Поддръжка и възстановяване на данни 8.8 Environment променливи 8.9 Обобщение 9. Приложение A: Git в други среди 9.1 Графични интерфейси 9.2 Git във Visual Studio 9.3 Git във Visual Studio Code 9.4 Git in IntelliJ / PyCharm / WebStorm / PhpStorm / RubyMine 9.5 Git в Sublime Text 9.6 Git в Bash 9.7 Git в Zsh 9.8 Git в PowerShell 9.9 Обобщение 10. Приложение B: Вграждане на Git в приложения 10.1 Git от команден ред 10.2 Libgit2 10.3 JGit 10.4 go-git 10.5 Dulwich A1. Приложение C: Git команди A1.1 Настройки и конфигурация A1.2 Издърпване и създаване на проекти A1.3 Snapshotting A1.4 Клонове и сливане A1.5 Споделяне и обновяване на проекти A1.6 Инспекция и сравнение A1.7 Дебъгване A1.8 Patching A1.9 Email команди A1.10 Външни системи A1.11 Административни команди A1.12 Plumbing команди 2nd Edition 1.4 Начало - Конзолата на Git Конзолата на Git Съществуват различни начини за ползване на Git. Налице са традиционните инструменти от командния ред, а също така и многобройни графични инструменти с различни възможности. За целите на тази книга ще използваме Git от конзола. Командният ред е единственото място, където можете да ползвате всички Git команди - повечето GUI имплементират само част от пълната Git функционалност за по-просто. Ако знаете как да използвате командния ред, вероятно лесно ще разберете как да ползвате и GUI инструмент, докато обратното не е непременно вярно. Също така, докато изборът на графичен клиент е въпрос на персонални предпочитания, то всички потребители ще имат инсталирани и готови за работа командите от конзолата. 
Затова ще приемем, че знаете как да отворите терминал в macOS, Command Prompt или PowerShell под Windows. Ако се чудите за какво разговаряме, по-добре е да спрете с книгата дотук, да си опресните знанията за конзолата на операционната система, която ползвате и след това да продължите четенето. Така ще можете по-лесно да следвате примерите и описанията в следващите глави. prev | next About this site Patches, suggestions, and comments are welcome. Git is a member of Software Freedom Conservancy
2026-01-13T09:29:19
https://id-id.facebook.com/login/?next=https%3A%2F%2Fl.facebook.com%2Fl.php%3Fu%3Dhttps%253A%252F%252Fwww.instagram.com%252F%26amp%253Bh%3DAT2jynv_wZlhhh0Kc6i7-3C09sWVWSFDLQsCv-oL323p5C3FDdb4_-W1Vj3y34PJEC89ywL62_PxmBRaMAr9S-DSfPKk_u5S67oaU3b74ETfMROUGkfvPlPCP9NB_XMRkVyiX1SVZSj_jsq4
Facebook Facebook Email atau telepon Kata Sandi Lupa akun? Buat Akun Baru Anda Diblokir Sementara Anda Diblokir Sementara Sepertinya Anda menyalahgunakan fitur ini dengan menggunakannya terlalu cepat. Anda dilarang menggunakan fitur ini untuk sementara. Back Bahasa Indonesia 한국어 English (US) Tiếng Việt ภาษาไทย Español 中文(简体) 日本語 Português (Brasil) Français (France) Deutsch Daftar Masuk Messenger Facebook Lite Video Meta Pay Meta Store Meta Quest Ray-Ban Meta Meta AI Konten Meta AI lainnya Instagram Threads Pusat Informasi Pemilu Kebijakan Privasi Pusat Privasi Tentang Buat Iklan Buat Halaman Developer Karier Cookie Pilihan Iklan Ketentuan Bantuan Pengunggahan Kontak & Non-Pengguna Pengaturan Log aktivitas Meta © 2026
2026-01-13T09:29:19
https://git-scm.com/book/es/v2/Inicio---Sobre-el-Control-de-Versiones-Instalaci%c3%b3n-de-Git
Git - Installing Git. This book is available in English. Full translations are available in azərbaycan dili, български език, Deutsch, Español, فارسی, Français, Ελληνικά, 日本語, 한국어, Nederlands, Русский, Slovenščina, Tagalog, Українська, 简体中文; partial translations in Čeština, Македонски, Polski, Српски, Ўзбекча, 繁體中文; translations have been started for Беларуская, Indonesian, Italiano, Bahasa Melayu, Português (Brasil), Português (Portugal), Svenska, Türkçe. The source of this book is hosted on GitHub. Patches, suggestions and comments are welcome.

2nd Edition, 1.5 Getting Started - About Version Control - Installing Git

Installing Git

Before you start using Git, you have to make it available on your computer. Even if it's already installed, it's probably a good idea to update it to the latest version. You can install it as a package, via another installer, or by downloading the source code and compiling it yourself.
Note: This book was written using Git version 2.0.0. Although most of the commands we use should work even in older versions of Git, some of them might not work, or might act slightly differently, if you're using an older version. Since Git is quite good at preserving backwards compatibility, any version after 2.0 should work just fine.

Installing on Linux

If you want to install Git on Linux via a binary installer, you can generally do so through the basic package-management tool that comes with your distribution. If you're on Fedora, for example, you can use yum:

$ yum install git

If you're on a Debian-based distribution such as Ubuntu, you can use apt-get:

$ apt-get install git

For more options, the Git website has instructions for installation on several different flavors of Unix. You can find this information at http://git-scm.com/download/linux.

Installing on Mac

There are several ways to install Git on a Mac. The easiest is probably to install the Xcode Command Line Tools. On Mavericks (10.9) or above, you can do this simply by trying to run git from the Terminal for the first time. If you don't have it installed already, it will prompt you to install it.

If you want a more up-to-date version, you can install it from a binary installer. A Git installer for OS X is maintained on the Git website; you can download it at http://git-scm.com/download/mac.

Figure 7. Git OS X installer.

You can also install it as part of the GitHub for Mac installer. Its graphical user interface offers the option to install the command-line tools as well. You can download that tool from the GitHub for Mac website, at http://mac.github.com.

Installing on Windows

There are also several ways to install Git on Windows. The most official build is available for download on the Git website.
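After any of the installs above, a quick check confirms that Git is on your PATH and meets the 2.0 baseline the book assumes. This small script is an addition for illustration, not part of the book; the version parsing is a minimal sketch.

```shell
# Confirm git is installed and report whether it meets the 2.0 baseline.
set -e
ver=$(git --version | awk '{print $3}')   # e.g. "git version 2.39.2" -> "2.39.2"
case $ver in
  0.*|1.*) echo "git $ver predates 2.0; consider upgrading" >&2 ;;
  *)       echo "git $ver meets the 2.0 baseline" ;;
esac
```

If the first line fails with "command not found", Git is not installed (or not on your PATH) and one of the installation methods above applies.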
Just go to http://git-scm.com/download/win and the download will start automatically. Note that this is a project called Git for Windows (also called msysGit), which is separate from Git itself; for more information about this project, visit http://msysgit.github.io/.

Another easy way to get Git is by installing GitHub for Windows. The installer includes a command-line version of Git as well as the GUI. It also works well with PowerShell, and sets up solid credential caching and sane CRLF settings. We'll learn more about those things a little later, but suffice it to say they're things you want. You can download this installer from the GitHub for Windows website, at http://windows.github.com.

Installing from Source

Some people prefer to install Git from source, because that way they get the most recent version. The binary installers tend to lag a bit, though as Git has matured in recent years, this has made less of a difference.

To install Git from source, you need to have the following libraries that Git depends on: curl, zlib, openssl, expat, and libiconv. For example, if you're on a system that has yum (such as Fedora) or apt-get (such as a Debian-based system), you can use one of these commands to install all of the dependencies:

$ yum install curl-devel expat-devel gettext-devel \
  openssl-devel zlib-devel

$ apt-get install libcurl4-gnutls-dev libexpat1-dev gettext \
  libz-dev libssl-dev

When you have all the necessary dependencies, you can download the most recent version of Git from several places. You can get it via the Kernel.org site, at https://www.kernel.org/pub/software/scm/git, or the mirror on the GitHub website, at https://github.com/git/git/releases.
The version on the GitHub page is generally a bit more up to date, but the kernel.org page also has release signatures if you want to verify your download. Then, compile and install:

$ tar -zxf git-2.0.0.tar.gz
$ cd git-2.0.0
$ make configure
$ ./configure --prefix=/usr
$ make all doc info
$ sudo make install install-doc install-html install-info

After this is done, you can also get Git via Git itself for future updates:

$ git clone git://git.kernel.org/pub/scm/git/git.git

Git is a member of Software Freedom Conservancy.
https://git-scm.com/book/nl/v2/Git-op-de-server-GitLab
Git - GitLab

2nd Edition, 4.8 Git on the Server - GitLab

GitLab

GitWeb is pretty simplistic, though. If you're looking for a more modern, fully featured Git server, there are several open source solutions out there that you can install instead. As GitLab is one of the more popular ones, we'll cover installing and using it as an example. This is a bit more complex than the GitWeb option and will likely require more maintenance, but it is a much more fully featured option.
Installation

GitLab is a database-backed web application, so its installation involves a bit more than some other Git servers. Fortunately, this process is well documented and supported. There are a few methods you can pursue to install GitLab. To get something up and running quickly, you can download a virtual machine image or a one-click installer from https://bitnami.com/stack/gitlab, and tweak the configuration to match your particular environment. One nice touch Bitnami has included is the login screen (accessed by typing alt-→); it tells you the IP address and default username and password for the installed GitLab.

Figure 50. The Bitnami GitLab virtual machine login screen.

For anything else, follow the guidance in the GitLab Community Edition readme, which can be found at https://gitlab.com/gitlab-org/gitlab-ce/tree/master. There you'll find assistance for installing GitLab using Chef recipes, a virtual machine on Digital Ocean, and RPM and DEB packages (which, as of this writing, are in beta). There's also "unofficial" guidance on getting GitLab running with non-standard operating systems and databases, a fully manual installation script, and many other topics.

Administration

GitLab's administration interface is accessed over the web. Simply point your browser to the hostname or IP address where GitLab is installed, and log in as an admin user. The default username is admin@local.host, and the default password is 5iveL!fe (you will be prompted to change it as soon as you enter it). Once logged in, click the "Admin area" icon in the menu at the top right.

Figure 51. The "Admin area" item in the GitLab menu.

Users

Users in GitLab are accounts that correspond to people.
User accounts don't have much complexity; they are mainly a collection of personal information attached to login credentials. Each user has a namespace, which is a logical grouping of projects that belong to that user. If the user jane had a project named project, that project's URL would be http://server/jane/project.

Figure 52. The GitLab user administration screen.

Removing a user can be done in two ways. "Blocking" a user prevents them from logging into the GitLab instance, but all of the data under that user's namespace will be preserved, and commits signed with that user's email address will still link back to their profile. "Destroying" a user, on the other hand, completely removes them from the database and filesystem. All projects and data in their namespace are removed, and any groups they own will also be removed. This is obviously a much more permanent and destructive action, and it is rarely used.

Groups

A GitLab group is a collection of projects, along with data about how users can access those projects. Each group has a project namespace (the same way that users do), so if the group training had a project named materials, its URL would be http://server/training/materials.

Figure 53. The GitLab group administration screen.

Each group is associated with a number of users, each of which has a level of permissions for the group's projects and the group itself. These range from "Guest" (issues and chat only) to "Owner" (full control of the group, its members, and its projects). The types of permissions are too numerous to list here, but GitLab has a helpful link on the administration screen.

Projects

A GitLab project roughly corresponds to a single Git repository.
Every project belongs to a single namespace, either a user or a group. If the project belongs to a user, the owner of the project has direct control over who has access to the project; if the project belongs to a group, the group's user-level permissions will also take effect.

Every project also has a visibility level, which controls who has read access to that project's pages and repository. If a project is Private, the project's owner must explicitly grant access to specific users. An Internal project is visible to any logged-in user, and a Public project is visible to everyone. Note that this controls both git fetch access as well as access to the web UI for that project.

Hooks

GitLab includes support for hooks, both at a project and a system level. For either of these, the GitLab server will perform an HTTP POST with some descriptive JSON whenever relevant events occur. This is a great way to connect your Git repositories and GitLab instance to the rest of your development automation, such as CI servers, chat rooms, or deployment tools.

Basic Usage

The first thing you'll want to do with GitLab is create a new project. You do this by clicking the "+" icon on the toolbar. You'll be asked for the project's name, which namespace it should belong to, and what its visibility level should be. Most of what you specify here isn't permanent and can be changed later through the settings interface. Click "Create Project", and you're done.

Once the project exists, you'll probably want to connect it with a local Git repository. Each project is accessible over HTTPS or SSH, either of which can be used to configure a Git remote.
The URLs are visible at the top of the project's home page. For an existing local repository, this command will create a remote named gitlab pointed at the hosted location:

$ git remote add gitlab https://server/namespace/project.git

If you don't have a local copy of the repository, you can simply do this:

$ git clone https://server/namespace/project.git

The web UI provides access to several useful views of the repository itself. Each project's home page shows recent activity, and links along the top will lead you to views of the project's files and its commit log.

Working Together

The simplest way of working together on a GitLab project is by giving another user direct push access to the Git repository. You can add a user to a project by going to the "Members" section of that project's settings and associating the new user with an access level (the different access levels are discussed a bit in Groups). By giving a user an access level of "Developer" or above, that user can push commits and branches directly to the repository with impunity.

Another, more decoupled way of collaborating is by using merge requests. This feature enables any user who can see a project to contribute to it in a controlled way. Users with direct access can simply create a branch, push commits to it, and open a merge request from their branch back into master or any other branch. Users who don't have push permissions for a repository can "fork" it (create their own copy), push commits to that copy, and open a merge request from their fork back to the main project.
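The remote-wiring step above can be tried end to end without a GitLab server: a local bare repository stands in for the hosted project here, so the paths below are stand-ins for the https://server/namespace/project.git URL in the text.

```shell
# Wire an existing local repository to a "hosted" project.
# A local bare repository plays the role of the GitLab server.
set -e
tmp=$(mktemp -d)
git init -q --bare "$tmp/project.git"       # stand-in for the hosted project
git init -q "$tmp/work"
cd "$tmp/work"
git -c user.name=You -c user.email=you@example.com \
    commit -q --allow-empty -m "initial commit"
git remote add gitlab "$tmp/project.git"    # same shape as 'git remote add gitlab <url>'
branch=$(git symbolic-ref --short HEAD)     # master or main, depending on defaults
git push -q gitlab "$branch"
git remote -v                               # shows the gitlab remote (fetch and push)
```

With a real server, only the URL changes; the remote name and the push are the same.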
This model allows the owner to be in full control of what goes into the repository and when, while still allowing contributions from untrusted users.

Merge requests and issues are the main units of long-lived discussion in GitLab. Each merge request allows a line-by-line discussion of the proposed change (which supports a kind of lightweight code review), as well as a general overall discussion thread. Both can be assigned to users or organized into milestones.

This section has focused primarily on the Git-related features of GitLab, but as a mature project it provides many other features to help your team work together, such as project wikis and system maintenance tools. One benefit of GitLab is that, once the server is set up and running, you'll rarely need to tweak a configuration file or access the server via SSH; most administration and general usage can be accomplished through the in-browser interface.
https://git-scm.com/book/ru/v2/%d0%9e%d1%81%d0%bd%d0%be%d0%b2%d1%8b-Git-%d0%a0%d0%b0%d0%b1%d0%be%d1%82%d0%b0-%d1%81-%d1%83%d0%b4%d0%b0%d0%bb%d1%91%d0%bd%d0%bd%d1%8b%d0%bc%d0%b8-%d1%80%d0%b5%d0%bf%d0%be%d0%b7%d0%b8%d1%82%d0%be%d1%80%d0%b8%d1%8f%d0%bc%d0%b8
Git - Working with Remotes

2nd Edition, 2.5 Git Basics - Working with Remotes

Working with Remotes

To be able to collaborate on any Git project, you need to know how to manage your remote repositories. Remote repositories are versions of your project that are hosted on the Internet or somewhere else on a network.
You can have several of them, each of which is generally either read-only or read/write for you. Collaborating with others involves managing these remote repositories and pushing and pulling data to and from them. Managing remote repositories includes knowing how to add new remotes, remove remotes that are no longer valid, manage various remote branches and define them as being tracked or not, and more. In this section, we'll cover some of these remote-management skills.

Note: A remote repository can be on your local machine. It is entirely possible that a "remote" repository is, in fact, on the same host you are working on. The word "remote" does not necessarily imply that the repository is somewhere else on the network or the Internet, only that it is elsewhere. Working with such a remote repository still involves the standard pushing, pulling, and fetching operations, just as with any other remote.

Showing Your Remotes

To see which remote servers you have configured, you can run the git remote command. It lists the shortnames of the available remotes. If you've cloned your repository, you should at least see origin, the default name Git gives to the server you cloned from:

$ git clone https://github.com/schacon/ticgit
Cloning into 'ticgit'...
remote: Reusing existing pack: 1857, done.
remote: Total 1857 (delta 0), reused 0 (delta 0)
Receiving objects: 100% (1857/1857), 374.35 KiB | 268.00 KiB/s, done.
Resolving deltas: 100% (772/772), done.
Checking connectivity... done.
$ cd ticgit
$ git remote
origin

You can also specify the -v option to see the fetch and push URLs stored for each remote:

$ git remote -v
origin  https://github.com/schacon/ticgit (fetch)
origin  https://github.com/schacon/ticgit (push)

If you have more than one remote, the command lists them all.
For example, a repository with multiple remotes configured for working with several collaborators might look something like this:

$ cd grit
$ git remote -v
bakkdoor  https://github.com/bakkdoor/grit (fetch)
bakkdoor  https://github.com/bakkdoor/grit (push)
cho45     https://github.com/cho45/grit (fetch)
cho45     https://github.com/cho45/grit (push)
defunkt   https://github.com/defunkt/grit (fetch)
defunkt   https://github.com/defunkt/grit (push)
koke      git://github.com/koke/grit.git (fetch)
koke      git://github.com/koke/grit.git (push)
origin    git@github.com:mojombo/grit.git (fetch)
origin    git@github.com:mojombo/grit.git (push)

This means we can pull contributions from any of these users pretty easily. Some of these remotes may also be writable and accept our pushes, although the command's output gives no information about access rights. Notice the variety of protocols used in the remote URLs; we'll cover protocols in more detail in the section Getting Git on a Server of Chapter 4.

Adding Remote Repositories

We've mentioned and given some examples of adding remote repositories in previous sections; here is the operation in more detail. To add a new remote and give it a shortname, simply run git remote add <shortname> <url>:

$ git remote
origin
$ git remote add pb https://github.com/paulboone/ticgit
$ git remote -v
origin  https://github.com/schacon/ticgit (fetch)
origin  https://github.com/schacon/ticgit (push)
pb      https://github.com/paulboone/ticgit (fetch)
pb      https://github.com/paulboone/ticgit (push)

Now you can use the string pb on the command line instead of the whole URL. For example, if you want to fetch the changes Paul has that you don't have yet, you can run git fetch pb:

$ git fetch pb
remote: Counting objects: 43, done.
remote: Compressing objects: 100% (36/36), done.
remote: Total 43 (delta 10), reused 31 (delta 5)
Unpacking objects: 100% (43/43), done.
From https://github.com/paulboone/ticgit
 * [new branch]      master     -> pb/master
 * [new branch]      ticgit     -> pb/ticgit

Paul's master branch is now accessible to you as pb/master. You can merge it into one of your branches, or check out a local branch at that point to inspect Paul's work. Working with branches is covered in more detail in the chapter Git Branching.

Fetching and Pulling from Your Remotes

As you have just seen, to get data from your remote projects you can run:

$ git fetch [remote-name]

This command goes out to the specified remote project and pulls down all the data from that project that you don't have yet. Once you've done this, you should have references to all the branches from that remote, which you can merge in or inspect at any time.

When you clone a repository, the clone command automatically adds that remote under the name "origin". So git fetch origin fetches any new work that has been pushed to that server since you cloned it (or since you last fetched). It is important to note that git fetch only downloads the data to your local repository; it doesn't merge it with any of your work or modify what you're currently working on. You have to merge it into your work manually when you're ready.

If a branch is set up to track a remote branch (see the next section and the Git Branching chapter for more information), you can use the git pull command to automatically fetch the remote branch and merge it into your current branch. This may be an easier or more comfortable workflow for you. Also, by default, git clone automatically sets up your local master branch to track the remote master branch on the server you cloned from.
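The pb example above can be reproduced entirely locally. This sketch mirrors it with throwaway local paths standing in for the GitHub URLs: a second remote is added, fetched, and its branches appear under the pb/ remote-tracking prefix.

```shell
# Add a second remote and fetch it; local paths stand in for GitHub URLs.
set -e
tmp=$(mktemp -d)
git init -q "$tmp/paul"
git -C "$tmp/paul" -c user.name=Paul -c user.email=paul@example.com \
    commit -q --allow-empty -m "paul's work"
git init -q "$tmp/mine"
cd "$tmp/mine"
git remote add pb "$tmp/paul"     # like: git remote add pb <url>
git fetch -q pb                   # downloads Paul's refs as pb/<branch>
git branch -r                     # lists pb/master (or pb/main)
```

Nothing in the local repository changes besides the remote-tracking refs, which illustrates the point above: fetch downloads data but never merges it.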
The branch names may differ, depending on the default branch on the server. Running git pull generally fetches data from the server you originally cloned from and automatically tries to merge it into the code you're currently working on.

Note: From Git version 2.27 onward, git pull will give a warning if the pull.rebase variable is not set, and it will keep warning you until the variable is set. If you want the default behavior of Git (fast-forward if possible, otherwise create a merge commit): git config --global pull.rebase "false". If you want to rebase when pulling: git config --global pull.rebase "true".

Pushing to Your Remotes

When you have your project at a point that you want to share, you have to push it upstream. The command for this is simple: git push <remote-name> <branch-name>. If you want to push your master branch to your origin server (again, cloning generally sets up both of those names for you automatically), then you can run this to push any commits you've made back up to the server:

$ git push origin master

This command works only if you cloned from a server to which you have write access and if nobody has pushed in the meantime. If you and someone else clone at the same time and they push upstream and then you try to push, your push will rightly be rejected. You'll have to fetch their work first and incorporate it into yours before you'll be allowed to push. See the Git Branching chapter for more detailed information on how to push to remote servers.

Inspecting a Remote

If you want to see more information about a particular remote, you can use the git remote show <remote> command.
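The pull.rebase setting mentioned in the note can be tried safely in a throwaway repository by setting it at the repository level instead of with --global, so your real configuration is left untouched. This is a sketch for experimentation, not part of the book's text.

```shell
# Set and read back pull.rebase locally, leaving global config untouched.
set -e
tmp=$(mktemp -d)
git init -q "$tmp/r"
cd "$tmp/r"
git config pull.rebase false    # merge behavior: fast-forward, else merge commit
git config pull.rebase          # prints the stored value: false
```

Repository-level settings like this one override the global value for that repository only, which makes them convenient for testing a configuration before committing to it with --global.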
If you run this command with a particular shortname, such as origin, you get something like this:

$ git remote show origin
* remote origin
  Fetch URL: https://github.com/schacon/ticgit
  Push  URL: https://github.com/schacon/ticgit
  HEAD branch: master
  Remote branches:
    master     tracked
    dev-branch tracked
  Local branch configured for 'git pull':
    master merges with remote master
  Local ref configured for 'git push':
    master pushes to master (up to date)

It lists the URL of the remote repository as well as the tracking branch information. The command helpfully tells you that if you're on the master branch and you run git pull, the remote's master branch will automatically be merged into yours after all the necessary data has been fetched. It also lists all the remote references it has pulled down.

That was a simple example of the kind you've probably encountered. When you're using Git more heavily, however, you may see much more information from git remote show:

$ git remote show origin
* remote origin
  URL: https://github.com/my-org/complex-project
  Fetch URL: https://github.com/my-org/complex-project
  Push  URL: https://github.com/my-org/complex-project
  HEAD branch: master
  Remote branches:
    master                       tracked
    dev-branch                   tracked
    markdown-strip               tracked
    issue-43                     new (next fetch will store in remotes/origin)
    issue-45                     new (next fetch will store in remotes/origin)
    refs/remotes/origin/issue-11 stale (use 'git remote prune' to remove)
  Local branches configured for 'git pull':
    dev-branch merges with remote dev-branch
    master     merges with remote master
  Local refs configured for 'git push':
    dev-branch     pushes to dev-branch     (up to date)
    markdown-strip pushes to markdown-strip (up to date)
    master         pushes to master         (up to date)

This command shows which local branch will be pushed to the remote server by default when you run git push.
It also shows you which remote branches you don't yet have, which branches you still have locally but which have already been removed on the server, and, for several branches, which remote branches will be merged into them when you run git pull.

Renaming and Removing Remotes

To rename a remote, you can run git remote rename. For instance, if you want to rename pb to paul, you can do so with git remote rename:

$ git remote rename pb paul
$ git remote
origin
paul

It's worth mentioning that this also changes the names of the remote-tracking branches in your repository. What you used to reference as pb/master is now paul/master.

If you want to remove a remote for some reason (you've moved the server or are no longer using a particular mirror, or perhaps a contributor has stopped contributing), you can use git remote rm:

$ git remote remove paul
$ git remote
origin

When you delete the reference to a remote this way, all remote-tracking branches and configuration settings associated with that remote are also removed.

About this site: patches, suggestions, and comments are welcome. Git is a member of Software Freedom Conservancy.
2026-01-13T09:29:19
https://git-scm.com/book/fr/v2/Les-bases-de-Git-Travailler-avec-des-d%c3%a9p%c3%b4ts-distants
Git - Working with Remotes
2.5 Git Basics - Working with Remotes

Working with Remotes

To be able to collaborate on any Git project, you need to know how to manage your remote repositories. Remote repositories are versions of your project that are hosted on the Internet or a corporate network.
You can have several of them, each of which generally is either read-only or read/write for you. Collaborating with other people involves managing these remote repositories, pushing data to and pulling data from them when you want to share work. Managing remotes includes knowing how to add remote repositories, remove remotes that are no longer valid, manage remote branches and define them as tracked or not, and more. In this section, we'll cover these remote-management commands.

Note: remote repositories can be on your local machine. It is entirely possible to work with a "remote" repository that is, in fact, on the same machine. The word "remote" does not necessarily imply that the repository is somewhere else on the network or the Internet, only that it is elsewhere. Working with such a remote still involves the same pushing, pulling, and fetching operations as with any other remote.

Showing Your Remotes

To see which remote servers you have configured, you can run the git remote command. It lists the shortnames of each remote handle you've specified. If you've cloned a repository, you should at least see origin, the default name Git gives to the server you cloned from:

$ git clone https://github.com/schacon/ticgit
Cloning into 'ticgit'...
remote: Counting objects: 1857, done.
remote: Total 1857 (delta 0), reused 0 (delta 0)
Receiving objects: 100% (1857/1857), 374.35 KiB | 243.00 KiB/s, done.
Resolving deltas: 100% (772/772), done.
Checking connectivity... done.
$ cd ticgit
$ git remote
origin

You can also specify -v, which shows you the URL that Git has stored for each shortname:

$ git remote -v
origin  https://github.com/schacon/ticgit (fetch)
origin  https://github.com/schacon/ticgit (push)

If you have more than one remote, the command lists them all. For example, a repository with multiple remotes for working with several collaborators might look something like this:

$ cd grit
$ git remote -v
bakkdoor  https://github.com/bakkdoor/grit (fetch)
bakkdoor  https://github.com/bakkdoor/grit (push)
cho45     https://github.com/cho45/grit (fetch)
cho45     https://github.com/cho45/grit (push)
defunkt   https://github.com/defunkt/grit (fetch)
defunkt   https://github.com/defunkt/grit (push)
koke      git://github.com/koke/grit.git (fetch)
koke      git://github.com/koke/grit.git (push)
origin    git@github.com:mojombo/grit.git (fetch)
origin    git@github.com:mojombo/grit.git (push)

This means we can pull contributions from any of these users pretty easily. We may additionally have permission to push to some of them, though we can't tell that here. Notice that these remotes use a variety of protocols; we'll cover protocols in Getting Git on a Server.

Adding Remote Repositories

We've shown examples of adding remote repositories in previous chapters, but here is specifically how to do it. To add a new remote Git repository under a shortname you can reference easily, run git remote add [shortname] [url]:

$ git remote
origin
$ git remote add pb https://github.com/paulboone/ticgit
$ git remote -v
origin  https://github.com/schacon/ticgit (fetch)
origin  https://github.com/schacon/ticgit (push)
pb      https://github.com/paulboone/ticgit (fetch)
pb      https://github.com/paulboone/ticgit (push)

Now you can use the shortname pb on the command line instead of the whole URL.
For example, if you want to fetch all the information that Paul has but that you don't yet have in your repository, you can run git fetch pb:

$ git fetch pb
remote: Counting objects: 43, done.
remote: Compressing objects: 100% (36/36), done.
remote: Total 43 (delta 10), reused 31 (delta 5)
Unpacking objects: 100% (43/43), done.
From https://github.com/paulboone/ticgit
 * [new branch]      master     -> pb/master
 * [new branch]      ticgit     -> pb/ticgit

Paul's master branch is now accessible locally as pb/master. You can merge it into one of your own branches, or you can check out a local branch at that point if you want to inspect it. Branches and their use are covered in more detail in Git Branching.

Fetching and Pulling from Your Remotes

As you just saw, to get data from your remote repositories, you can run:

$ git fetch [remote-name]

This command goes out to that remote project and pulls down all the data from that project that you don't have yet. After you do this, you have references to all the branches in that repository, which you can merge in or inspect at any time.

If you clone a repository, the remote repository is automatically added under the name "origin". So git fetch origin fetches anything that has been pushed to that repository since you cloned it or last fetched from it. Note that the fetch command pulls the data into your local repository under its own branch: it doesn't automatically merge it with any of your work or modify your working copy. You have to deliberately merge those remote changes into your work whenever you're ready.

If you have created a branch to track a remote branch (see
the next section and Git Branching for more information), you can use the git pull command, which fetches a remote branch and automatically merges it into your local branch. This may be a more comfortable workflow, given that by default the git clone command sets up your local branch to track the master branch of the repository you cloned (assuming the remote has a master branch). Running git pull generally fetches data from the server you originally cloned from and tries to merge it into your current working branch.

Note: since Git version 2.27, git pull displays a warning if the pull.rebase variable is not set, and it will keep warning you until the variable is set. If you want Git's default behavior (fast-forward if possible, otherwise create a merge commit):

git config --global pull.rebase "false"

If you want to rebase when pulling:

git config --global pull.rebase "true"

Pushing to Your Remotes

When your repository is at a point you want to share, you have to push it upstream. The command for this is simple: git push <remote-name> <branch-name>. If you want to push your master branch to the origin server (again, cloning sets up both of those names for you automatically), you can run this to push your work to the upstream server:

$ git push origin master

This command works only if you cloned from a server to which you have write access and if nobody has pushed in the meantime. If you and someone else clone a repository at the same time, and that other person pushes their changes and then you try to push yours, your push will rightly be rejected.
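The rejected-push scenario just described can be reproduced entirely on one machine. This is a sketch under stated assumptions: a local bare repository plays the shared server, the developer names and paths are invented, and Git 2.28+ is assumed for the --initial-branch option.

```shell
# Demonstration sketch: a push is rejected when someone else pushed
# first, and succeeds after fetching and merging their work.
set -e
export GIT_AUTHOR_NAME=demo GIT_AUTHOR_EMAIL=demo@example.com
export GIT_COMMITTER_NAME=demo GIT_COMMITTER_EMAIL=demo@example.com
work=$(mktemp -d)
git init -q --bare --initial-branch=master "$work/server.git"

# Shared starting point: one commit on the server.
git clone -q "$work/server.git" "$work/alice"
( cd "$work/alice"
  echo base > base.txt && git add base.txt
  git commit -qm 'base commit' && git push -q origin master )

# Both developers start from the same commit.
git clone -q "$work/server.git" "$work/bob"

# Alice pushes first...
( cd "$work/alice"
  echo alice > a.txt && git add a.txt
  git commit -qm 'alice: add a.txt' && git push -q origin master )

# ...so Bob's push of his own commit is rejected (non-fast-forward).
cd "$work/bob"
echo bob > b.txt && git add b.txt
git commit -qm 'bob: add b.txt'
if git push -q origin master 2>/dev/null; then
  echo 'unexpected: push accepted'
else
  echo 'push rejected; fetching and merging first'
  git pull -q --no-rebase --no-edit origin master
  git push -q origin master          # accepted now
fi
```

The failed push leaves the server untouched; after the pull creates a merge commit containing both developers' work, the second push is a fast-forward for the server and is accepted.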
You'll first have to pull the other person's changes and merge them with yours before you'll be able to push. See Git Branching for more information on techniques for pushing to remote servers.

Inspecting a Remote

If you want to see more information about a particular remote, you can use the git remote show [remote-name] command. If you run this command with a particular shortname, such as origin, you get something like this:

$ git remote show origin
* remote origin
  Fetch URL: https://github.com/schacon/ticgit
  Push  URL: https://github.com/schacon/ticgit
  HEAD branch: master
  Remote branches:
    master tracked
    ticgit tracked
  Local branch configured for 'git pull':
    master merges with remote master
  Local ref configured for 'git push':
    master pushes to master (up to date)

This lists the URLs for the remote repository as well as the remote branches being tracked. The command tells you that if you're on the master branch and you run git pull, it will automatically merge the remote's master branch after fetching all the remote references. It also lists the other references it has pulled down. The output above is a simple but realistic example of a remote.
When you're using Git more heavily, however, git remote show will provide much more information:

$ git remote show origin
* remote origin
  URL: https://github.com/my-org/complex-project
  Fetch URL: https://github.com/my-org/complex-project
  Push  URL: https://github.com/my-org/complex-project
  HEAD branch: master
  Remote branches:
    master                       tracked
    dev-branch                   tracked
    markdown-strip               tracked
    issue-43                     new (next fetch will store in remotes/origin)
    issue-45                     new (next fetch will store in remotes/origin)
    refs/remotes/origin/issue-11 stale (use 'git remote prune' to remove)
  Local branches configured for 'git pull':
    dev-branch merges with remote dev-branch
    master     merges with remote master
  Local refs configured for 'git push':
    dev-branch     pushes to dev-branch     (up to date)
    markdown-strip pushes to markdown-strip (up to date)
    master         pushes to master         (up to date)

This command shows which branches are pushed automatically when you run git push on them. It also shows the remote branches you haven't yet fetched, the remote branches present locally but already removed on the server, and the branches that will be merged when you run git pull.

Renaming and Removing Remotes

You can run git remote rename to change a remote's shortname. For instance, if you want to rename pb to paul, you can do so with git remote rename:

$ git remote rename pb paul
$ git remote
origin
paul

It's worth mentioning that this also changes your remote-tracking branch names. What used to be referenced as pb/master is now paul/master.
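That remote-tracking rename can be verified in a scratch clone. This is a sketch only, with throwaway paths and identities, assuming Git 2.28+ for the --initial-branch option.

```shell
# Demonstration sketch: renaming a remote also renames its
# remote-tracking branches.
set -e
export GIT_AUTHOR_NAME=demo GIT_AUTHOR_EMAIL=demo@example.com
export GIT_COMMITTER_NAME=demo GIT_COMMITTER_EMAIL=demo@example.com
work=$(mktemp -d)

# An upstream repository with one commit.
git init -q --initial-branch=master "$work/upstream"
( cd "$work/upstream"
  echo hello > readme.txt && git add readme.txt
  git commit -qm 'initial commit' )

# Clone it: the remote is called origin, tracked as origin/master.
git clone -q "$work/upstream" "$work/clone"
cd "$work/clone"
git branch -r                  # lists origin/master

# After the rename, the same ref appears as paul/master.
git remote rename origin paul
git branch -r                  # lists paul/master
git remote                     # prints: paul
```

No history changes here: the rename only rewrites the remote's configuration and moves the refs under refs/remotes/paul/.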
If you want to remove a remote for some reason (you've changed servers, or you're no longer using a particular mirror, or perhaps a contributor has stopped contributing), you can use git remote rm:

$ git remote rm paul
$ git remote
origin

Once you've deleted the reference to a remote this way, all the remote-tracking branches and configuration settings associated with that remote are also deleted.
https://doc.rust-lang.org/cargo/appendix/glossary.html#crate
Appendix: Glossary - The Cargo Book

Glossary

Artifact An artifact is the file or set of files created as a result of the compilation process. This includes linkable libraries, executable binaries, and generated documentation. Cargo Cargo is the Rust package manager , and the primary topic of this book. Cargo.lock See lock file . Cargo.toml See manifest . Crate A Rust crate is either a library or an executable program, referred to as either a library crate or a binary crate , respectively. Every target defined for a Cargo package is a crate . Loosely, the term crate may refer to either the source code of the target or to the compiled artifact that the target produces. It may also refer to a compressed package fetched from a registry . The source code for a given crate may be subdivided into modules . Edition A Rust edition is a developmental landmark of the Rust language. The edition of a package is specified in the Cargo.toml manifest , and individual targets can specify which edition they use. See the Edition Guide for more information. Feature The meaning of feature depends on the context: A feature is a named flag which allows for conditional compilation. A feature can refer to an optional dependency, or an arbitrary name defined in a Cargo.toml manifest that can be checked within source code. Cargo has unstable feature flags which can be used to enable experimental behavior of Cargo itself. The Rust compiler and Rustdoc have their own unstable feature flags (see The Unstable Book and The Rustdoc Book ). CPU targets have target features which specify capabilities of a CPU. Index The index is the searchable list of crates in a registry . Lock file The Cargo.lock lock file is a file that captures the exact version of every dependency used in a workspace or package .
It is automatically generated by Cargo. See Cargo.toml vs Cargo.lock . Manifest A manifest is a description of a package or a workspace in a file named Cargo.toml . A virtual manifest is a Cargo.toml file that only describes a workspace, and does not include a package. Member A member is a package that belongs to a workspace . Module Rust’s module system is used to organize code into logical units called modules , which provide isolated namespaces within the code. The source code for a given crate may be subdivided into one or more separate modules. This is usually done to organize the code into areas of related functionality or to control the visible scope (public/private) of symbols within the source (structs, functions, and so on). A Cargo.toml file is primarily concerned with the package it defines, its crates, and the packages of the crates on which they depend. Nevertheless, you will see the term “module” often when working with Rust, so you should understand its relationship to a given crate. Package A package is a collection of source files and a Cargo.toml manifest file which describes the package. A package has a name and version which is used for specifying dependencies between packages. A package contains multiple targets , each of which is a crate . The Cargo.toml file describes the type of the crates (binary or library) within the package, along with some metadata about each one — how each is to be built, what their direct dependencies are, etc., as described throughout this book. The package root is the directory where the package’s Cargo.toml manifest is located. (Compare with workspace root .) The package ID specification , or SPEC , is a string used to uniquely reference a specific version of a package from a specific source. Small to medium sized Rust projects will only need a single package, though it is common for them to have multiple crates. 
Larger projects may involve multiple packages, in which case Cargo workspaces can be used to manage common dependencies and other related metadata between the packages. Package manager Broadly speaking, a package manager is a program (or collection of related programs) in a software ecosystem that automates the process of obtaining, installing, and upgrading artifacts. Within a programming language ecosystem, a package manager is a developer-focused tool whose primary functionality is to download library artifacts and their dependencies from some central repository; this capability is often combined with the ability to perform software builds (by invoking the language-specific compiler). Cargo is the package manager within the Rust ecosystem. Cargo downloads your Rust package ’s dependencies ( artifacts known as crates ), compiles your packages, makes distributable packages, and (optionally) uploads them to crates.io , the Rust community’s package registry . Package registry See registry . Project Another name for a package . Registry A registry is a service that contains a collection of downloadable crates that can be installed or used as dependencies for a package . The default registry in the Rust ecosystem is crates.io . The registry has an index which contains a list of all crates, and tells Cargo how to download the crates that are needed. Source A source is a provider that contains crates that may be included as dependencies for a package . There are several kinds of sources: Registry source — See registry . Local registry source — A set of crates stored as compressed files on the filesystem. See Local Registry Sources . Directory source — A set of crates stored as uncompressed files on the filesystem. See Directory Sources . Path source — An individual package located on the filesystem (such as a path dependency ) or a set of multiple packages (such as path overrides ). 
Git source — Packages located in a git repository (such as a git dependency or git source ). See Source Replacement for more information. Spec See package ID specification . Target The meaning of the term target depends on the context: Cargo Target — Cargo packages consist of targets which correspond to artifacts that will be produced. Packages can have library, binary, example, test, and benchmark targets. The list of targets are configured in the Cargo.toml manifest , often inferred automatically by the directory layout of the source files. Target Directory — Cargo places built artifacts in the target directory. By default this is a directory named target at the workspace root, or the package root if not using a workspace. The directory may be changed with the --target-dir command-line option, the CARGO_TARGET_DIR environment variable , or the build.target-dir config option . For more information see the build cache documentation. Target Architecture — The OS and machine architecture for the built artifacts are typically referred to as a target . Target Triple — A triple is a specific format for specifying a target architecture. Triples may be referred to as a target triple which is the architecture for the artifact produced, and the host triple which is the architecture that the compiler is running on. The target triple can be specified with the --target command-line option or the build.target config option . The general format of the triple is <arch><sub>-<vendor>-<sys>-<abi> where: arch = The base CPU architecture, for example x86_64 , i686 , arm , thumb , mips , etc. sub = The CPU sub-architecture, for example arm has v7 , v7s , v5te , etc. vendor = The vendor, for example unknown , apple , pc , nvidia , etc. sys = The system name, for example linux , windows , darwin , etc. none is typically used for bare-metal without an OS. abi = The ABI, for example gnu , android , eabi , etc. Some parameters may be omitted. 
Run rustc --print target-list for a list of supported targets. Test Targets Cargo test targets generate binaries which help verify proper operation and correctness of code. There are two types of test artifacts: Unit test — A unit test is an executable binary compiled directly from a library or a binary target. It contains the entire contents of the library or binary code, and runs #[test] annotated functions, intended to verify individual units of code. Integration test target — An integration test target is an executable binary compiled from a test target which is a distinct crate whose source is located in the tests directory or specified by the [[test]] table in the Cargo.toml manifest . It is intended to only test the public API of a library, or execute a binary to verify its operation. Workspace A workspace is a collection of one or more packages that share common dependency resolution (with a shared Cargo.lock lock file ), output directory, and various settings such as profiles. A virtual workspace is a workspace where the root Cargo.toml manifest does not define a package, and only lists the workspace members . The workspace root is the directory where the workspace’s Cargo.toml manifest is located. (Compare with package root .)
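To tie the workspace-related terms above together, here is a sketch of a minimal virtual workspace laid out on disk. All package names and paths are invented for illustration, and the script only writes files, so Cargo itself is not required to run it.

```shell
# Sketch: a minimal virtual workspace. The root Cargo.toml is a virtual
# manifest ([workspace] only, no [package]); each member is a package
# whose manifest defines one crate. All names here are hypothetical.
set -e
ws=$(mktemp -d)/demo-workspace
mkdir -p "$ws/app/src" "$ws/util/src"

# Workspace root: the virtual manifest listing the members.
cat > "$ws/Cargo.toml" <<'EOF'
[workspace]
members = ["app", "util"]
resolver = "2"
EOF

# Member "app": a package with a binary crate target.
cat > "$ws/app/Cargo.toml" <<'EOF'
[package]
name = "app"
version = "0.1.0"
edition = "2021"

[dependencies]
util = { path = "../util" }
EOF
echo 'fn main() {}' > "$ws/app/src/main.rs"

# Member "util": a package with a library crate target.
cat > "$ws/util/Cargo.toml" <<'EOF'
[package]
name = "util"
version = "0.1.0"
edition = "2021"
EOF
: > "$ws/util/src/lib.rs"

find "$ws" -type f | sort
```

With this layout, the workspace root holds the shared Cargo.lock and target directory, app/src/main.rs is inferred as a binary target, and util/src/lib.rs as a library target, matching the glossary entries for workspace, member, package, and target.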
https://huggingface.co/datasets/MiniMaxAI/VIBE
MiniMaxAI/VIBE · Datasets at Hugging Face

Tasks: Text Generation. Modalities: Text. Format: parquet. Language: English. Size: < 1K rows. License: MIT. Tags: benchmark, web-development, app-development, agent-as-a-verifier, full-stack, vibe-coding.

The dataset has a single subset (default) with one split (train) of 200 rows. Columns: idx (int64, 1 to 200), query (string, lengths 326 to 1.6k), domain (string, 5 classes), difficulty (string, 3 classes). Each row below is rendered as idx, then the query text, then its domain and difficulty values.

1 Design and build a portfolio site for a top‑tier design agency that feels modern, minimal, and premium. We want near‑instant navigation with fluid page transitions and a restrained layer of micro‑interactions on hover and scroll. The work should be front‑and‑center: a responsive masonry gallery that opens into rich case studies with high‑resolution imagery and video, while keeping loading snappy through progressive, on‑demand asset loading. The site also includes a Journal and an About page with team profiles. Deliver this as a fast, interactive experience using a SolidJS front end and Tailwind CSS, and use the browser’s native Transitions API for page‑to‑page motion. Ensure accessibility, responsive behavior, and graceful fallbacks. web easy

2 Reimagine our boutique hotels’ booking experience so it feels premium and trustworthy. Guests should browse immersive, high‑resolution photo galleries, check live room availability on a calendar, pick a room, and complete a smooth, step‑by‑step reservation that ends with a secure Stripe checkout. Use Next.js with server‑side rendering for fast first loads and strong SEO, and use React Query so availability and pricing stay fresh without blocking the UI.
The design should be corporate‑professional with an elegant, high‑end feel, fully responsive and accessible. Include clear validation and error states (e.g., sold‑out nights, minimum‑stay rules) and provide a polished confirmation screen after payment. web easy 3 Create a SvelteKit server health dashboard that defaults to dark mode. It should pull live metrics (CPU, memory, disk) from an existing REST API and show CPU and memory as two separate line charts that update automatically. Include a 'Live Logs' area that streams log lines over a WebSocket. If CPU stays above 90% for more than one minute, display a persistent red alert banner at the top that cannot be dismissed. web easy 4 I’m launching a technical blog and want it to feel fast, clean, and opinionated. Prefer a statically generated site (Astro) with a few small interactive touches via Vue 3—like a simple client-side search or lightweight comments. Design-wise, think minimalist/brutalist: monospace type, high contrast, and no visual fluff. Must-haves: dependable syntax highlighting for multiple languages in articles, an auto-generated RSS feed for subscribers, and a dark/light theme that follows the reader’s system preference. I’m aiming for excellent Lighthouse scores and a snappy experience on mobile and desktop. web easy 5 Design and build the browser UI for a real-time collaborative whiteboard. People should be able to sketch on a canvas, add text, and drop basic shapes at the same time, seeing each other’s cursors and edits live. Include a compact tool palette and a properties panel with a subtle glassmorphism look. Each person must have their own undo/redo history that never rewinds other users’ actions. Users sign in with JWT and remain authenticated while working. Please implement this with React and TypeScript, manage state with Redux Toolkit, and connect to a WebSocket-based service (e.g., Pusher or your own server) for real-time updates. 
[web / easy]

6. Design a marketer-friendly A/B testing control panel for landing pages. Build a React single-page app where users can create campaigns, define URL-based variants (e.g., /promo-A, /promo-B), map each variant to content in a headless CMS (e.g., Strapi), and ensure each visitor consistently sees the same variant via a cookie. From the app, marketers should be able to browse existing Mailchimp templates, duplicate one, edit content for a specific variant, and publish it back to Mailchimp as a new template for email campaigns. Please propose the architecture and implementation plan (key screens, data model, cookie bucketing strategy, and the Mailchimp integration flow—auth, read, edit, create) with performance and caching considerations for high-traffic pages. [web / easy]

7. We run a React-based P2P auction marketplace and want the product page to show live bidding, not a static price. Please build a self-contained bidding widget that displays the current highest bid and a countdown to the auction end, and updates in real time via our Firebase Realtime Database. Buyers should be able to place a bid and see an immediate (optimistic) UI response while the request is confirmed. The widget must reject bids that aren’t higher than the current amount and clearly communicate states like 'placing bid', 'bid accepted', 'error', and 'outbid'. Use React hooks and a lightweight state machine to manage the flow and keep the experience smooth and trustworthy. [web / easy]

8. Build a web soundscape mixer that feels like a physical desktop console with a soft, neumorphic look. I should be able to turn on ambient tracks like Rain, Thunder, and Cafe Chatter by pressing deep, pillowy buttons that pop to a flush "on" state, and adjust each track with a smooth volume slider set into a soft trough. The UI should animate subtly, feel responsive, and reflect state instantly.
Use React for the UI, mix the audio with the Web Audio API so multiple loops blend cleanly, and keep a single source of truth for which sounds are active and their volumes (Zustand is fine). [web / easy]

9. We’re kicking off a fast, content‑first microsite for the DevCon 2024 virtual event. Visitors should be able to scan the agenda, browse speaker bios, and open session detail pages (time, track, and links). The look and feel should be cyber‑retro: dark UI, neon glow accents, and a terminal‑like typeface. Please build it with a lightweight, content‑driven approach (we’re leaning toward Astro) so it’s easy to update. Expect a launch‑day traffic spike—plan a GCP deployment and wire up real‑time performance and debugging using Cloud Trace and Cloud Debugger. [web / easy]

10. We need a self-serve reporting area so managers can build custom reports inside our project management app. Please create a new Reports dashboard as a lazy‑loaded Angular feature module to keep the initial load fast. Managers should be able to pick a date range and select multiple projects and users. Show interactive charts—specifically a burndown and a task completion rate—plus a sortable table of the underlying tasks. Include an Export to CSV action. Use D3.js for the charts to match our stack, and keep the look clean and corporate, consistent with our existing UI kit. [web / easy]

11. Build an ultra-fast, reading-first blog that feels effortless to browse. Use Astro with Tailwind for styling, and keep client-side JavaScript to a bare minimum (progressive enhancement only). The look should reflect Swiss modernism: a precise typographic scale, a consistent grid, and generous negative space. Authors write posts in Markdown; every article page is statically generated; an RSS feed is produced automatically; and the home page allows simple tag-based filtering without a heavy framework. No backend—just a clean, accessible, static site that loads lightning-fast and prioritizes readability.
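Item 11's "tag-based filtering without a heavy framework" can come down to a single pure function plus a few lines of progressive enhancement. A sketch under the assumption that each statically generated card carries a `data-tags` attribute (the names here are illustrative):

```typescript
// Sketch: framework-free tag filtering for a static blog index.
type PostCard = { title: string; tags: string[] };

function filterByTag(posts: PostCard[], tag: string | null): PostCard[] {
  if (tag === null) return posts; // no active filter: show everything
  return posts.filter((p) => p.tags.includes(tag));
}

// In the browser, the same rule would just toggle the `hidden` attribute:
// document.querySelectorAll<HTMLElement>("[data-tags]").forEach((el) => {
//   const tags = (el.dataset.tags ?? "").split(",");
//   el.hidden = active !== null && !tags.includes(active);
// });
```

Because the cards are fully rendered at build time, the page still shows every post when JavaScript is disabled.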
[web / easy]

12. Our About page should tell our story in a way that feels alive—not just paragraphs of text. Please create an interactive, vertical timeline in React where milestones reveal smoothly as you scroll. Each point can be tapped or clicked to open a lightweight modal with more detail, photos, or short video clips. The look should be clean and disciplined—Swiss‑style typography, generous white space, and clear hierarchy—and it must feel intuitive on both desktop and mobile. [web / easy]

13. I want a fast, distraction-free AI & robotics news site with a single-column reading experience and great typography. It should pull articles from a small, curated list of RSS feeds and let me filter posts by source in the browser. Include a simple dark mode toggle. Please ship it as a statically generated site (Astro preferred) and keep it privacy-friendly—no ads, no trackers, and no unnecessary UI. [web / easy]

14. Build the main project dashboard as a Kanban board so users can track work at a glance. Show status columns (e.g., To Do, In Progress, Done). Each task card should display its title, assignee avatar, and due date, and support dragging between columns and reordering within a column. Clicking a card opens a modal with full task details and an activity log. Keep the board state—including column membership and card order—in a single Redux Toolkit slice so changes are reflected immediately in the store. Use React with TypeScript for the UI; mock data is fine and no backend is required. [web / medium]

15. We’re building an internal knowledge base for employees that feels fast, focused, and visually clean with a grid‑based Swiss‑style look. People should be able to browse a nested documentation tree, create and edit pages with a rich editor (including tables and code blocks), and review version history with the ability to compare and restore prior versions. Access to content must respect roles and sign‑in via the company SSO.
Please implement the front end in Angular 17+ with TypeScript, using RxJS to model state and asynchronous flows. All content comes from a single REST API. The UI should clearly communicate loading, empty, and error states and recover gracefully. [web / medium]

16. Build CodeLeads: a members-only web app where developers pay to access a curated feed of software project leads. Users should be able to sign up, manage a profile, choose from tiered monthly/annual plans, and have access controlled by their plan (which leads they can view and how many applications they can send). Provide a clean dashboard to browse, search and filter leads, plus an apply flow that captures a short pitch and notifies the lead contact. Include an account area with invoices and payment methods. Subscriptions and payments must run through Stripe Billing (trials, upgrades/downgrades, proration, SCA). Invoices must be created with Stripe Invoicing and automatically include VAT/GST via Stripe Tax based on the customer’s location. Admins need simple tools to publish leads and see basic metrics. Use Next.js and set up secure webhook handling. Deliver production-ready auth, subscription gating, the leads dashboard and apply flow, and automated invoicing/tax. [web / medium]

17. Our furniture product pages feel flat with static photos. We need a web-based 3D configurator where the product stays front and center: shoppers can spin it 360°, zoom into details, and swap materials—wood species, fabric colors, and metal finishes—with instant visual feedback and price recalculation. Material changes should update the model’s textures in real time. Keep the surrounding UI minimal and intuitive. Track selections in a central state so we can generate a final configuration summary and add it to the cart. Prefer SvelteKit for the app and Three.js for the viewer, but keep the architecture simple and robust. [web / medium]

18. We’re building a modern, modular news site that’s fast, accessible, and search‑friendly.
On the homepage, top stories should automatically stand out with larger cards while lower‑priority items use smaller tiles. Article pages should open quickly with lazy loading, show a reading progress bar, and use clean, highly readable typography. Each author needs a profile page that aggregates all of their articles. Content comes from our REST API. We prefer Angular 17 with SCSS and a predictable state flow (NgRx) to keep things maintainable and scalable, while meeting WCAG 2.1 AA and good SEO practices. [web / medium]

19. As a restaurant manager, I want a dead-simple, tablet-friendly admin page where I can sign into my location, enter today’s soup and special, attach a photo, and tap Publish so it appears on my location’s page on the company website within seconds—no IT help needed. The UI should be mobile-first with large, touchable controls and clear success/error feedback. Each location must have its own secure login and only access its own data. Build the front end with Svelte and use Firebase (Firestore for real-time updates and Storage for images) on the back end. [web / medium]

20. Build a premium brand site for a luxury watchmaker. The hero experience is a real‑time 3D watch configurator with a luxury, sophisticated feel: shoppers can switch case materials, strap styles, dial colors, and add engravings, and see the changes instantly on a photorealistic model with accurate lighting and reflections. Complement this with story pages that use tasteful parallax to tell the brand’s heritage, and product detail pages that fetch pricing, variants, and inventory from a headless commerce backend. We’ll use Next.js and Three.js—please propose the UX/architecture and implement the configurator and data integration. [web / medium]

21. Build a simple, instructor-friendly quiz builder for our online learning platform.
In one guided flow, teachers should add basic quiz details (title, subject, time limit), create questions (multiple choice, true/false, fill-in-the-blank), drag to reorder them, and preview everything before publishing. This should be delivered as a standalone Angular SPA using the Clarity design system, with the steps handled via routed wizard pages. Please keep state unified and reactive with RxJS services so changes in one step instantly reflect elsewhere. [web / medium]

22. Design a calm, high-end booking experience for our wellness spa, paired with an intuitive admin calendar. On the client side, let people browse services, pick a therapist and an open time, and pay upfront—everything should feel effortless and quiet. On the admin side, provide a weekly calendar that makes it easy to see capacity across rooms and therapists, drag-and-drop to move appointments, and trigger reminder messages automatically. Build it as a web app (React + TypeScript) and lean into a luxury_sophisticated look: soft colors, subtle micro-interactions, and generous whitespace. [web / medium]

23. Build a fast, static website for an experimental digital zine. The look should feel raw and anti-grid—brutalist, with intentionally jarring micro-animations and oversized type. All articles live as local Markdown files. Each article can declare a theme in its front matter that changes layout, typography, color, and motion presets so pieces can look radically different. Use Astro for the site and introduce interactive islands only where interaction is truly needed (Solid preferred). Keep pages lightweight and quick on mobile by minimizing JavaScript and assets. No user accounts or backend beyond the static build. Provide a clear content structure, a theme system, and a few sample themes to demonstrate variety without sacrificing performance. [web / medium]

24. We need a clean, responsive internal sales dashboard our team can rely on throughout the day.
Build a single-page React app (Vite) that shows: a live revenue trend for the last 24 hours as a line chart, a transactions table with strong sorting and filtering, and KPI tiles for total sales, average order value, and conversion rate. Fetch and cache data from our REST API with TanStack Query and poll every 30 seconds. Keep UI control state (e.g., filters) in Zustand. Use Chart.js for the chart and AG-Grid for the table. Prioritize speed, clarity, and a utilitarian design over visual flair. [web / medium]

25. Build an internal design system website that serves as the single source of truth for our teams. It should embody Swiss design—tight grid, grotesque typography, and clear visual hierarchy—so everything reads cleanly and consistently. People should be able to browse components, design tokens, and usage guidelines, with interactive previews shown in context. Content is authored in MDX to blend narrative and React examples. Use Next.js so the site can be statically exported. Include a fast, client‑side search that quickly surfaces components, tokens, and guidelines, and make Storybook embeds feel native and seamless within the pages. [web / medium]

26. Build a portfolio that feels like exploring a calm, low‑poly digital world instead of reading a resume. Users glide and orbit with the mouse through an abstract landscape where each major project is an interactive island. Selecting an island opens a modal or smoothly transports the user into a focused scene with the project’s story, tech stack, screenshots, and links to the live site/GitHub. Keep the experience immersive with gentle camera easing, ambient animations, and lightweight interactions. Implement with SvelteKit and three.js, prioritize solid performance on mid‑range devices, and include a simple fallback for browsers without WebGL. [web / medium]

27. Build an internal knowledge base to replace our scattered Google Docs. It should feel instant and be delivered as a static site using Astro.
Authors write in Markdown and contribute via a dedicated Git repo with a PR workflow. Make search the centerpiece: integrate Algolia for instant, typo‑tolerant results across all articles, with clear highlighting and keyboard navigation. The look should be clean and functional, inspired by Swiss‑style typography—strong grid, readable type, and generous whitespace. Keep navigation obvious (global search, categories, breadcrumbs). Set up CI/CD so every push to the main branch triggers a fresh build and deployment. [web / medium]

28. We’re launching a new line of customizable gaming PCs and need a web configurator that feels like a high-end builder, built in React. Guide customers through a clear flow—chassis → motherboard → CPU → GPU → RAM → storage → cooling. As they pick parts, the next choices should automatically filter to what’s compatible; anything not compatible appears disabled with a short reason. Keep a live price total and a performance score meter/chart visible at all times. Visual style is ‘gaming_esports’: dark, angular, techy with subtle RGB glow/animated accents. Make it responsive, fast, and accessible. Provide a simple parts data model and compatibility logic so we can plug in our catalog later. [web / hard]

29. We’re refreshing our corporate website to better express precision and modernity. The experience should feel minimal, grid‑first, and typographically disciplined (one sans‑serif, clear hierarchy, generous whitespace). Marketing must be able to update content without engineering help, so use a static site powered by a headless CMS. Key areas:
- Case Studies: a browsable list with category filters and detail pages; filters must be shareable via URL.
- Services: reusable content blocks that can be cross‑linked across pages.
- About Us: a timeline of company milestones.
Please propose the information architecture, content model, and component library, then build the site with Next.js and Tailwind (or an equivalent static setup) integrated with a CMS like Contentful. Prioritize performance, SEO, accessibility, and responsive behavior. Include image optimization, metadata/OG tags, and an editorial preview mode so the team can review changes before publishing. [web / hard]

30. Our marketing site’s jQuery front end feels slow and is hard to maintain. Please rebuild it on a modern Jamstack using Astro (island architecture) so it’s fast and easier to iterate on. All content—blog posts, case studies, and team profiles—should be sourced from our existing Contentful CMS via its GraphQL API. The new site must be fully responsive, achieve a 95+ Lighthouse score, and faithfully reflect our ‘corporate_professional’ design system implemented with Tailwind CSS. [web / hard]

31. Our marketing team needs a web-based Content Hub to replace the spreadsheets we use for our content calendar. The app should read and write to our Airtable base (our single source of truth) to display upcoming content, statuses, and assigned authors, with simple inline edits. For any record tagged as a “Social Media Post,” a user should be able to click “Create graphic,” choose a predefined Canva template, auto-fill variables (e.g., headline, date, channel), generate the image, and attach it back to that Airtable record. Keep the interface clean and data-focused with filters by status/channel and clear progress/error states. Please build it with React and TypeScript, using Redux Toolkit for state management, and handle authentication/rate limits for Airtable and Canva. [web / hard]

32. Create an internal admin dashboard that lets engineering managers control repository access across our GitHub organization. Build it as a single-page app (Vue.js is fine) that uses GitHub APIs to list repos and manage permissions.
Managers should be able to: browse all org repositories in a dense table with search and sort; add or remove collaborators; change permission levels (read, write, admin) in bulk for a selected repo or project; and review access/audit activity for a chosen repository. Prioritize utility over visual polish—fast filtering, clear bulk actions, and an audit trail view. [web / hard]

33. Sales reps need a quick, spreadsheet‑free way to produce client quotes. Please create a single‑page React app that loads our product catalog and pricing rules from an API, lets reps choose products, adjust quantities, and apply only pre‑approved discounts, shows a clean quote preview, and exports the quote as a downloadable PDF. When a quote is finalized, record it in Salesforce as an Opportunity linked to either a new or existing Account. Keep the workflow simple and fast; assume we’ll provide the product/pricing API and Salesforce sandbox access. [web / hard]

34. Editors need more than page-view counts to understand how stories perform. Please build an internal analytics dashboard for our news site in Angular that pulls article-level data from our data warehouse API. The dashboard should let the team filter by date range and category and provide three core views: a scroll-depth heatmap over an article preview to show where readers drop off; a chart that plots average engagement time against article length to surface patterns and outliers; and a navigation map (network diagram) that visualizes how readers move between articles. It should feel fast and easy to use for editors and analysts. [web / hard]

35. Design a crypto portfolio web dashboard that gives users a single place to track everything they hold. It should connect to multiple wallets across Ethereum and Solana, pull balances and DeFi positions from major protocols like Aave and Compound, and present a unified, easy-to-scan portfolio view.
The app must feel fast and dependable: run background jobs on a schedule (e.g., serverless functions on Google Cloud) to fetch, normalize, and store data in one database that the site reads from, rather than hitting chains directly on every page load. In the UI, include historical performance charts, clear realized/unrealized P&L, and a breakdown of allocation by asset and chain. Build the frontend with Next.js and provide a simple, trustworthy multi-wallet connect flow. Prioritize clarity, accuracy, and a data-rich experience over flashy visuals. [web / hard]

36. We need a real-time logistics dashboard our clients can use to monitor their fleets. The main view should be a live map that shows vehicle locations as they move, and a set of data-dense KPI cards—Packages in Transit, On‑time Delivery Rate, and Average Stop Time—that update instantly. Historical charts and reports should pull from our PlanetScale database via Prisma. The app runs on Next.js, and live positions/metrics come in over a WebSocket. The design should feel clean, professional, and optimized for dark mode. [web / hard]

37. Our team keeps losing code snippets in Slack threads and personal notes. We want a single, no-frills web app that acts as a shared snippet vault—fast, clear, and distraction-free. The look should be brutally minimal: monospaced type, raw HTML feel, high-contrast text. No animations, no rounded corners. Core features:
- Create, edit, and view snippets with syntax highlighting (use CodeMirror).
- Organize with tags and filter by them.
- Blazing-fast full-text search across titles, tags, and content, backed by PostgreSQL with pg_trgm.
- Real-time collaboration so teammates can see live presence and edits via WebSockets.
- Sign in with GitHub OAuth.
Please build it as a single-page app using Next.js and Tailwind CSS, prioritizing speed and utility over aesthetics.
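The pg_trgm search in item 37 typically boils down to one parameterized query. A sketch, assuming a hypothetical `snippets` table with `title` and `content` columns and the `pg_trgm` extension enabled (`CREATE EXTENSION pg_trgm`); the table and column names are illustrative:

```typescript
// Sketch: build a parameterized Postgres query that ranks snippets by
// trigram similarity (pg_trgm's `%` operator filters, similarity() ranks).
function buildSnippetSearch(term: string): { text: string; values: string[] } {
  return {
    text:
      "SELECT id, title, similarity(title || ' ' || content, $1) AS score " +
      "FROM snippets " +
      "WHERE (title || ' ' || content) % $1 " + // true above similarity threshold
      "ORDER BY score DESC LIMIT 20",
    values: [term], // keep the user's term out of the SQL string itself
  };
}
```

A GIN index such as `CREATE INDEX ON snippets USING gin ((title || ' ' || content) gin_trgm_ops)` is what makes the `%` filter fast at scale.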
[web / hard]

38. Reimagine Innovest Financial’s public website to feel corporate and trustworthy, showcasing our services, case studies, and thought leadership. Build it in Angular with a modular component approach and strict TypeScript, and power content via a headless CMS (Strapi or Contentful) so the marketing team can manage updates without engineering. The experience must be mobile‑first and fully responsive, meet WCAG 2.1 AA, and perform strongly on Core Web Vitals. Use SCSS modules for scoped styles. Deliver a clear site structure and reusable components with CMS-backed pages for blog, case studies, and services. [web / hard]

39. Build a self‑service registration site for our annual tech conference, “Innovate 2025.” Visitors should land on a simple intro page, sign in or sign up with Google or GitHub, choose a ticket type (Early Bird, Student, VIP), optionally add workshops, and complete checkout via Stripe. After a successful payment, issue a unique ticket with a QR code that appears on the attendee’s profile and is also emailed to them. Please use Next.js with the App Router; forms should work even if JavaScript is disabled (server-side actions). Use Auth.js for OAuth and connect Stripe in test mode. [web / hard]

40. Design and build an interactive portfolio website for a visual artist where the art takes center stage. Aim for a minimalist, artful look with smooth transitions, fast load times, and excellent behavior on mobile and desktop. Include two galleries: a public showcase and a client-proofs area behind login. Use Google Cloud Identity Platform for authentication. The main gallery should automatically pull high-resolution images and basic metadata from the artist’s Adobe Creative Cloud library so new work appears without manual uploads. Please implement it with SolidJS and prioritize performance and subtle motion. [web / hard]

41. I’d like a small, self-contained browser demo of a rocket launch.
Please scaffold a folder named "rocket-launch" with index.html, style.css, and main.js, and include Three.js 0.160.x plus the needed addons (OrbitControls and the post-processing pipeline: EffectComposer, RenderPass, BokehPass/BokehShader) as local ES modules—no CDNs in the final HTML. In main.js, build a simple procedural rocket and launch pad, add a shader-driven particle system for exhaust/smoke, and simulate a physics-based ascent (thrust vs. gravity/drag) after liftoff. The camera should smoothly track the rocket, and a depth-of-field (Bokeh) effect should keep the rocket in focus as it climbs. Show an on-screen altitude readout. Deliver code that runs offline when the libraries are placed locally and is clean enough to verify by static inspection (correct imports, paths, and module usage). [simulation / easy]

42. I want to create a small browser-based flight simulator prototype with a wireframe, sci‑fi aesthetic: terrain rendered as grid lines. It needs two viewpoints: (1) a third-person camera that follows the aircraft, and (2) a top-down 2.5D instrument panel/map view where the flight trajectory is clearly visible. Please implement it using React Three Fiber, with a custom GLSL grid shader for the terrain. I’d like smooth camera transitions (triggered via a key press or a button), and provide runnable prototype code. [simulation / easy]

43. Build a small animation editor prototype with a scrubbable timeline: the scene is a pure-white, hand-drawn sketch-style room viewed from a fixed 45° overhead angle. A timeline slider at the bottom can be dragged to fast-forward or rewind. Place several spheres in the scene that move along predefined trajectories; when spheres collide with each other or with the walls, play short percussive sound effects and display the “collision count” in real time in a corner of the UI. During arbitrary timeline scrubbing, positions, collisions, and audio must stay synchronized with time (collisions should not be counted more than once).
The prototype should run directly in a web page, preferably using Three.js for rendering and the Web Audio API for playback. [simulation / easy]

44. Create a small web demo that uses the native HTML5 Canvas 2D API to procedurally generate a cyberpunk-style city skyline. The camera should look up from ground level toward skyscrapers, with a wide-angle perspective (near objects appear larger, distant ones smaller; vertical lines converge upward). Overlay a neon-styled HUD that displays “height” and “speed” in real time (simulated data is acceptable). Without relying on WebGL or third-party libraries, hand-implement a simple bloom post-processing pass: threshold the bright neon regions, blur them, then composite them back using additive blending to enhance the glow. The overall experience should maintain a smooth frame rate, clear visual layering, and an easy-to-maintain code structure. [simulation / easy]

45. I want to build a browser-based city flythrough demo: the ground uses satellite imagery as the base layer, with the city’s 3D buildings overlaid on top. Visually, the buildings should have crisp outlines, and the overall lighting should be side-lit to create a dramatic feel. When the user clicks any building, a short cue tone synthesized in real time via the Web Audio API should play. The camera should use a “drone perspective,” automatically locking onto and smoothly following moving vehicles in the scene. Prefer implementing this with React + Three.js (React-Three-Fiber), and please provide the key component breakdown, the setup for post-process outlining and side lighting, and the core logic for vehicle motion and the follow camera. [simulation / easy]

46. I want to build a web-based educational visualization experiment themed “Wind Effects on Vegetation.” The scene should be a cartoon-style forest meadow, with pronounced height-based fog in the distance to enhance depth.
The focus is to make grass blades undulate and sway with the wind; wind speed and direction should be adjustable and displayed in real time on a dashboard. The system will score based on how long ecological balance is maintained. Use a low-angle, wide-angle upward camera view of the grassland to emphasize the wind-blown effect. Implement it with Babylon.js, and keep the interface as simple as possible, providing Start/Pause, a wind-speed slider, Reset, and other controls. [simulation / easy]

47. We’d like to build an immersive aerodynamics demo page: users can view a wing in a 3D scene and Pinch/Zoom the view to inspect details. The visuals should evoke a warm, sunset ambience, with airflow around the wing depicted using heat-haze distortion effects (implemented via custom GLSL post-processing) to illustrate how pressure differentials and flow velocity under Bernoulli’s principle influence the visuals. Data will be presented as floating 3D info panels within the scene rather than a flat overlay UI. Technically, it can be implemented with React-Three-Fiber/WebGL, with emphasis on immersion, visual expressiveness, and a smooth touch-based Pinch/Zoom experience. [simulation / easy]

48. I want to build a small terrain viewer: the camera travels along a predefined route, hugging the ground, and the scene is a grassland that appears boundless. Under soft, overcast diffuse lighting, the grass blades should sway naturally; the system needs to accumulate flight distance in real time based on speed, and the wind sound should vary with speed. Technically, React Three Fiber can be used; grass lighting and sway should be implemented in GLSL, and wind sound synthesized with the Web Audio API. Please design an overall solution according to these goals and provide the core implementation approach.
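For item 48's speed-dependent wind sound, a common trick is to run filtered noise through a low-pass filter and let speed drive both loudness and cutoff. The mapping below is an illustrative assumption (the dataset row does not prescribe constants):

```typescript
// Sketch: map flight speed to wind-sound parameters so faster flight
// sounds both louder and "brighter". Constants are illustrative.
function windParams(speed: number): { gain: number; cutoffHz: number } {
  const s = Math.max(0, Math.min(speed, 100)); // clamp to a sane range
  return {
    gain: 0.1 + 0.9 * (s / 100), // 0.1 .. 1.0
    cutoffHz: 200 + 30 * s,      // 200 Hz .. 3200 Hz
  };
}

// In the browser these values would drive a Web Audio chain, roughly:
//   noiseSource -> BiquadFilterNode("lowpass", cutoffHz) -> GainNode(gain) -> destination
// updating filter.frequency.value and gain.gain.value each animation frame.
```

Keeping the mapping a pure function makes it trivial to tune and test independently of the audio graph.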
[simulation / easy]

49. I want to create an interactive web experience: standing on a low-poly city street and looking up at the night sky, you can see swaths of meteors streaking by and hear the corresponding sounds. Keep the camera as close to the ground as possible and use an ultra-wide angle to produce a strong sense of perspective on the tall buildings. Users can switch meteor density via UI buttons (e.g., sparse/normal/high density). Build the scene with Three.js; implement the meteors’ glowing trails and streaking effect using custom GLSL shaders; synthesize audio in real time with the Web Audio API, and synchronize it with meteor flyby and “impact” events. Please provide complete, runnable code and a simple UI. [simulation / easy]

50. A small browser-based app for free-roaming a cyberpunk district. Users can smoothly zoom and rotate the view with mouse or touch while exploring the streets. Neon signs hanging from building facades should exhibit subtle wind-driven sway with a physical simulation (real-time mass–spring system with parameters for wind force, damping, etc.). Several clickable props are hidden in the scene; when picked up, they increase the “Exploration” score on a quest/progress panel on the left side of the screen and play a synthesizer-style notification sound. The project should be implemented with Three.js, using standard real-time rasterized rendering. Please provide the core code and architecture, ensuring stable animation, smooth camera controls, and easy extensibility for adding more buildings and signs. [simulation / easy]

51. We want to build a small gravitational-orbit experiment for classroom demonstrations: within a fixed two-dimensional plane, multiple particles move in a gravitational field around a central mass. Users can orbit and zoom the view with the mouse, but the camera is always constrained to that plane (2.5D).
Particle positions and velocities should be updated on the GPU in real time (using Three.js together with GLSL for GPGPU, e.g., texture ping‑pong) to ensure smooth performance with large particle counts. The screen should display a “stability score” indicating the continuous duration the orbits remain in a non-chaotic state (resetting to zero and restarting the timer when fluctuations in orbital radius or energy exceed a threshold). Provide basic interactions: start/pause, reset, and parameter controls such as gravitational constant, central mass, time step/damping, and initial velocity perturbations. [simulation / easy]

52. Build a single-file, web-based Rubik’s Cube playable directly in the browser. Do not rely on third-party 3D libraries such as Three.js; render using the Canvas 2D API with custom projection and rasterization. Provide an orbit-camera view with mouse/touch controls for full 360° viewing; support selecting and rotating layers, and include “Scramble” and “Reset” buttons. Aim for a semi-transparent jade aesthetic (green translucency, glossy highlights, and inner-shadow effects). The page should display the move count, starting at scramble and continuing until the cube is solved. [simulation / easy]

53. We need a “Holographic Simulator” to demonstrate the operating state of a robotic arm. The scene should be presented in a 2D orthographic view with a holographic projection aesthetic, featuring scan lines and subtle jitter; the jitter amplitude should respond in real time to the audio spectrum intensity available in the browser (from music playback or microphone input). The system will loop the robotic arm’s motion sequence and calculate an “efficiency score” based on the duration and stability of a complete cycle (e.g., grasp → move → release), displaying it in the UI. Implementation can use Three.js to load the robotic arm model, with custom GLSL shaders to achieve the holographic scan-line and noise flicker effects.
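Item 53 asks for an "efficiency score" from the duration and stability of a cycle but leaves the formula open. One possible scoring function, entirely an assumption for illustration: penalize deviation from a nominal cycle time and penalize jitter across the phases (grasp → move → release):

```typescript
// Sketch (assumed formula): score a robotic-arm cycle from per-phase timings.
// Closer to the nominal total duration and lower phase jitter -> higher score.
function efficiencyScore(phaseDurationsMs: number[], nominalTotalMs: number): number {
  const total = phaseDurationsMs.reduce((a, b) => a + b, 0);
  const mean = total / phaseDurationsMs.length;
  const variance =
    phaseDurationsMs.reduce((a, d) => a + (d - mean) ** 2, 0) / phaseDurationsMs.length;
  const timing = Math.max(0, 1 - Math.abs(total - nominalTotalMs) / nominalTotalMs);
  const stability = 1 / (1 + Math.sqrt(variance) / mean); // 1 when phases are perfectly even
  return Math.round(100 * timing * stability); // 0..100
}
```

The UI would call this once per completed loop and display the latest (or a rolling average of the) score.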
simulation easy 54 Develop an education-oriented 3D interactive demonstration that fully presents the key steps of DNA replication. Users will explore a dark intracellular environment where key molecules are highlighted with fluorescent effects. When the demonstration reaches key checkpoints, users must click the correct molecules/sites to earn points, with feedback displayed in the UI. The camera will traverse along a double-helix path and support an orbit-style viewing mode toggle. Implement with Babylon.js, able to load fairly detailed molecular models (e.g., PDB and glTF), use step-by-step binding/separation animations to recreate pairing and polymerization processes, and provide a stable scoring system and a clean instructional UI. simulation medium 55 I want to create a browser-based crowd simulation demo: the scene is a busy street, with a high-angle camera that slowly cruises along the street, gliding past the crowd. The visuals should have a collage-style look with low-fidelity character silhouettes; realism is not the goal, but it should render hundreds to thousands of pedestrians while remaining smooth. Please implement it with Babylon.js and use optimization strategies suitable for large numbers of objects (e.g., the Solid Particle System (SPS) or mesh instancing) to draw the pedestrians; keep an orbit/cruise camera perspective. Also integrate Web Audio and, based on the distance between the camera and the crowd, implement 3D spatialized ambient noise so the crowd noise varies naturally with distance and position. Provide a directly runnable example and key parameters to help me verify performance and visual results. simulation medium 56 Build an immersive first-person roller coaster web experience: the track floats in a blue sky, and the material has a flowing effect along the spline direction (implemented with a custom GLSL shader). 
As the coaster car accelerates along the spline track, the wind noise and the pitch/frequency of screams will vary with speed, and the UI displays a real-time “Fear Index” score. The camera should be tightly attached to the car’s nose point of view and include motion blur that becomes more pronounced at higher speeds. Implementation should prioritize React Three Fiber and the Web Audio API, and it must run smoothly in common browsers. simulation medium 57 I want to create a small interactive demo: a jelly-like soft-body cube placed on a floating platform, with a glass-like refractive appearance. Users can drag it with the mouse or a finger; upon release, it should oscillate back and forth according to a spring-damper model. When it hits the platform edges, there should be clear elastic collision feedback and a synthesized "impact" sound should play. The camera is fixed at an approximately 45° top-down angle aimed at the cube. It must run smoothly in both desktop and mobile browsers, and audio should trigger properly on mobile. Technically, use Three.js to build the scene and interactions, implement the glass/refraction effect with a custom GLSL material, and synthesize the sound with Web Audio—no external audio files required. simulation medium 58 I want to build a small 3D city visualization demo that runs smoothly in the browser: use satellite imagery as the basemap, and buildings can be simple low-poly blocks; the camera should support WASD + mouse free roaming. The scene needs dynamic road traffic flow, with vehicles represented as particles moving along the streets. I’d also like to play procedurally generated city ambient noise (white noise) based on the current viewpoint position. For interaction, I want to use the mouse to “draw” a temporary magnetic field on the screen; nearby vehicles will be attracted or deflected, and will change color when affected. Implementation should prioritize Babylon.js in combination with the Web Audio API. 
simulation medium 59 Create a 3D maze “navigation demo” application: the maze walls should have a high-gloss, ceramic-like finish, and the walkable path should feature a dynamic ripple distortion effect, like flowing water guiding the direction of travel. Use a third-person camera that smoothly follows a character with automatic pathfinding, and add depth-of-field post-processing that always keeps the focus locked on that character. You may use a custom GLSL shader to generate the ripple effect and integrate it with the scene, pathfinding, and camera systems. Provide a runnable example or an implementation plan. simulation medium 60 I want to create a small interactive piece: the scene shows a soft cloth swaying, rendered with a rough charcoal sketch aesthetic and a visually flattened, two-dimensional look. Include a prominent button to toggle “Wind On/Off”: when Off, the cloth is affected only by gravity and damping and gradually comes to rest; when On, gusts of wind that vary over time should be visible. Use an orthographic camera to emphasize the 2D feel. The tech stack should use Three.js to build the scene and GLSL to implement the sketch/charcoal style (noise, paper texture, edges/brush strokes, etc.), with cloth motion driven by a physics simulation. Please provide a complete page that can run directly in the browser, along with the necessary UI logic. simulation medium 61 I want to build a small in-browser “wind tunnel” experiment with a fixed 2D cross-sectional view, using HTML Canvas. Users can drag and drop different wing cross-sections (e.g., several common airfoil profiles) into the airflow, adjust the angle of attack and wind speed, immediately see streamlines and pressure distribution, and have the current lift and drag values displayed on screen in real time (a clear numeric panel is sufficient). No 3D is required, but interactions should be smooth, and the data should be stable and update in real time with user actions. 
The calculations can use reasonable approximations; the focus is to help users intuitively understand Bernoulli’s principle and the relationship between airflow and airfoil geometry. simulation medium 62 I want to build a browser-based interactive “Underwater Canyon Exploration” tool: the user descends from the entry point into the depths while the camera follows a predefined path, with play/pause and speed controls. The visuals should deliver realistic water and wet rock materials (showing reflection and refraction), within a dark underwater volumetric light-scattering atmosphere. Implementation will use React Three Fiber, with materials and lighting tuned to closely match underwater effects. Additionally, include sonar feedback: emit periodic pulse sounds as the journey progresses, with adjustable frequency and attenuation, synthesized via the Web Audio API. The target is smooth performance on desktop, with a small yet polished scene. simulation medium 63 Please create an instant-play web puzzle mini-level (single-file HTML) with 2.5D visuals. The player controls a caped character to traverse obstacles and solve puzzles to progress. The cape should exhibit noticeable physical dynamics (similar to cloth affected by gravity, inertia, and wind), naturally swaying with the character’s movement. Use the mouse wheel to adjust the camera’s zoom or height for easier viewing. The interface should include a fixed score panel displaying the current level’s score/progress. The background should feature a depth-of-field effect: distant elements slightly blurred, near elements sharp. Implement this directly using the Canvas 2D context, minimizing external libraries; a single HTML file is sufficient. simulation medium 64 I want to build a side-scrolling 2D diving game prototype, with a glowing jellyfish as the protagonist swimming in the deep sea. 
The visuals should feature realistic underwater lighting: visible water caustics and subtle refraction, with overall illumination primarily driven by the jellyfish’s bioluminescence and a faint diffuse fill from above to simulate under-ice light. The jellyfish’s tentacles should exhibit soft-body behavior—flexible and drifting—with lag, sway, and spring-back in response to speed, turning, and currents. The gameplay only needs basic movement, collision, and camera follow; the water lighting effects can be implemented using GLSL shaders. The goal is a runnable prototype with switchable effects, tunable parameters, and stable operation on common hardware. simulation medium 65 I want to create a web-based sci-fi spaceship showcase: open with a wide-angle, low-angle shot from beneath the ship to emphasize its monumental scale; the thrusters should feature realistic emissive glow/heat bloom, with lens flare visible in the frame; as the ship approaches or recedes from the camera, the thruster audio should exhibit a pronounced Doppler shift. For implementation, use Three.js to load the spacecraft model, write a small amount of custom shaders to achieve engine emission and post-processing bloom, and use the Web Audio API to dynamically synthesize audio based on the distance and relative velocity between the ship and the camera. Please provide an example that can run directly in the browser (clear structure, adjustable parameters, and easy model swapping), and explain the camera setup, the post-processing pipeline, and the key points of the Doppler calculation. simulation medium 66 I want to build an interactive fabric showcase: a soft piece of cloth appears on screen, and users can grab any point with the mouse and drag it in real time, with natural deformation and elastic rebound. The scene should feature strong backlighting to reveal the fabric’s translucency and thickness variations. 
The camera must support free rotation, panning, and zooming to inspect details from different angles. The implementation should be based on Web 3D (e.g., Three.js) and integrate physics simulation to ensure smooth performance. simulation medium 67 I want to create a black-hole traversal experience in the browser: the visuals have a “glitch” aesthetic, and the accretion disk is warped and deformed. The player uses the keyboard (WASD) to fine-tune thrust speed and the viewpoint, as if piloting a spacecraft into a wormhole. Use Three.js to build the scene, have the camera move along a path, and render to a Canvas. Use a GLSL post-processing shader to implement an adjustable distortion/screen-shake effect, and provide the key implementation and interaction logic. simulation medium 68 I want to create a small immersive web experience: flying through volumetric clouds during a nighttime meteor shower. Use Babylon.js to build the scene, render the clouds with a ray marching technique, and have the camera smoothly traverse the clouds along a predefined “shuttle path.” Use the Web Audio API to synthesize continuous ambient white noise as the background sound. Add a simple gameplay mechanic: when a meteor streaks through the center of the screen, play a synthesized cue sound and increase the on-screen score by +1. Please provide a complete implementation, including the cloud shader, meteor generation and movement, center-crossing detection logic, scoring UI, and audio triggers, ensuring it runs smoothly in the browser. simulation hard 69 Create a web-based ocean wave physics demonstration: the scene should be a close, water-level, eye-level view with a vintage CRT monitor aesthetic and a subtle chromatic aberration effect. 
The wave undulation and surface turbulence must respond in real time to changes in microphone input: use the Web Audio API to capture audio from the microphone and perform spectral analysis, mapping low frequencies to overall wave height and high frequencies to fine detail/turbulence intensity. Render exclusively with the native Canvas 2D API, without WebGL or any third-party libraries. simulation hard 70 I want to build a small browser-based 3D data visualization app: display dynamic data using a bar chart where each bar looks like it’s carved from transparent ice, set in a warm sunset ambience. The camera should be fixed to a wide-angle view looking upward from a low position to emphasize the monumental scale of the bars. For interaction, users can click lower-value ice bars to “shatter” them; the system automatically calculates and accumulates points based on the volume that is shattered. On the rendering side, I’d like to use a ray marching–based approach to achieve a realistic ice material (e.g., a sense of internal volume, refraction/micro-scattering visual effects) while maintaining usable frame rates. Please provide an implementation in Three.js that includes data-to-bar-height mapping, Golden Hour–style lighting, a fixed wide-angle low-angle camera, click detection and shatter animation/volume estimation, and scoring rules with a basic score panel. simulation hard 71 I want to build an immersive 3D data exploration page: a large transparent cube with an amber-like material, enclosing cloud-like volumetric data that changes over time. Users can enter the cube in first person and freely roam and observe within the data; as the camera traverses the volume, the stratification of density changes and light refraction/absorption effects should be clearly visible. Meanwhile, I want to generate low-frequency ambient sound in
2026-01-13T09:29:19
https://www.linkedin.com/products/categories/chatbot-software
Best Chatbot Software | Products | LinkedIn. Used by: Marketing Manager (13), Customer Service Specialist (10), Owner (9), Sales And Marketing Specialist (9), Marketing Specialist (9). Find top products in the Chatbot Software category: software used to simulate conversation based on natural language input. - Engage users with automated chat for marketing, customer service, or information - Use text or text-to-speech input processing to provide scripted conversation - Respond contextually with a standalone language-recognition solution - Execute pre-built responses on apps, websites, and social media. 273 results. Blip Chatbot Software by Blip: Use Blip's Conversational Intelligence to increase revenue and loyalty! With the Blip platform integrated with the superpower of Artificial Intelligence, you deliver an exceptional experience throughout the customer journey, from marketing and sales through to support. View product. RD Station Conversas Chatbot Software by RD Station: RD Station Conversas is a digital sales and customer-service platform that brings together multiple communication channels and WhatsApp numbers. You can centralize all your channels (WhatsApp, Telegram, Instagram Messenger, and Facebook Messenger) in RD Station Conversas, allowing your entire support team to engage with leads and customers in one place. RD Station Conversas can be used by any type of business whose sales and support teams use digital channels; the greater the brand's digital investment and relevance, the greater the results with RD Station Conversas.
View product. Customer Engagement Chatbot Software by Truora Inc.: Create automated conversations on WhatsApp and provide customer service, marketing, and sales chatbots. View product. Chatbot Chatbot Software by Unifonic: Unifonic Chatbot is a visual tool for building fully functional chatbots using a drag-and-drop creator. Users can build conversational flows with ease and flexibility in minutes, without any coding required. It currently supports WhatsApp, Webchat, Twitter Direct Messenger, and Facebook Messenger. View product. Omnichat Chatbot Software by Omnichat: (1) Omnichannel CRM integration: manage all communication channels in one place, including the official WhatsApp Business API, Facebook, Instagram, WeChat, LINE, and website live chat, with 24/7 chatbot automation to respond to messages without delay. (2) Marketing automation: customer browsing-behaviour tracking, abandoned-cart remarketing through WhatsApp, Facebook, and LINE, plus gamification and coupon marketing. (3) OMO (Online-Merge-Offline) sales: Omnichat's system automatically pairs each customer with a specific salesperson for one-on-one WhatsApp/LINE selling; once the customer completes a purchase, the system attributes the revenue to that salesperson and supports cross-channel revenue tracking. (4) Social CDP: map identities from different social channels to a unique profile, and automatically send messages at specific times along a comprehensive customer journey, e.g., when customers first visit, join the membership program, or make a second purchase. View product. ELX Chatbot Chatbot Software by EchoLogyx Ltd: The ultimate AI-powered assistant for eCommerce businesses. Automate customer support, boost sales, and improve customer engagement.
View product. ChatGuru Chatbot Software by ChatGuru: Our ChatGuru chatbot automation solutions for WhatsApp focus on increasing sales, streamlining customer service, and organizing your team, and will suit your company perfectly, whatever your segment. Some of our features: • Several WhatsApp numbers in a single account #OPTIMIZATION • Use one WhatsApp number with multiple users #CONVENIENCE • Quick replies for faster responses #AGILITY • Agent permission management #CONTROL • CRM integrations #INTEGRATION • Automatic, intelligent sales funnel #FOLLOW-UP • AI-powered library of automatic replies #INTELLIGENCE • Round-robin conversation assignment #ASSIGNMENT • Customizable reports #CONVENIENCE • Personalized campaigns #PERSONALIZATION • Customizable fields #ADAPTATION • Internal notes on conversations and contacts #TAGS • Alerts for new leads generated on WhatsApp #OPTIMIZATION • Organization by status #ORGANIZATION. Request a demo and discover ChatGuru! View product. Manychat for Facebook Messenger Chatbot Software by Manychat: Manychat for Messenger automates conversations to fuel more sales, generate leads, automate FAQs, and run marketing campaigns. View product. Leadster Chatbot Software by Leadster: Leadster is a smart chatbot designed to optimize the conversion of website visitors into qualified leads. Unlike traditional chat tools, our technology acts as an automated "sales consultant": engaging visitors, qualifying opportunities, and routing sales-ready leads to your team. The platform operates as a 24/7 qualification engine, capturing buying intent at the right moment, reducing CAC, and accelerating the sales cycle. Instead of generic forms, we offer an interactive approach that enhances both conversion rates and user experience. With customizable flows and seamless integration with CRMs and marketing tools, Leadster turns your website traffic into a real sales pipeline.
All this with easy setup, a free trial, and a strong focus on performance. Perfect for companies looking to scale acquisition without relying solely on paid media or SDRs. View product. BeyondChats Chatbot Software by BeyondChats: - Automate user inquiries on your website with our advanced AI chatbot - Identify the users who are most likely to buy your product or service - Analytics to help you understand what your users are looking for, why they leave without registering or buying, and what is missing from your website. LinkedIn © 2026 About Accessibility User Agreement Privacy Policy Cookie Policy Copyright Policy Brand Policy Guest Controls Community Guidelines English Language
2026-01-13T09:29:19
https://www.linkedin.com/products/workvivo--the-employee-experience-app/
Workvivo by Zoom | LinkedIn. Workvivo by Zoom: Employee Communications Software by Workvivo by Zoom. About: The employee experience app that brings your entire company together. This product is intended for: Chief Executive Officer, Chief Human Resources Officer, Chief People Officer, Head of Human Resources, Vice President Human Resources, Chief Information Officer, Chief Digital Officer, Chief Communications Officer, Head of Internal Communication, Director Internal Communications. Media: Workvivo - The Employee Experience App. Featured customers of Workvivo by Zoom: Amazon Software Development 36,032,701 followers Exos Wellness and Fitness Services 91,499 followers Evri Transportation, Logistics, Supply Chain and Storage 76,173 followers RSK Group Environmental Services 81,054 followers Virgin Venture Capital and Private Equity Principals 320,544 followers Keyloop Software Development 97,081 followers Madison Square Garden Entertainment Corp.
Entertainment 110,136 followers AIB Banking 135,993 followers Morgan McKinley Staffing and Recruiting 953,602 followers Everton Football Club Spectator Sports 74,196 followers Clipper Logistics plc Warehousing and Storage 28,043 followers Kent Engineering Services 434,815 followers Hoover Institution, Stanford University Think Tanks 43,478 followers Bupa UK Hospitals and Health Care 401,734 followers Dollar General Retail 366,895 followers Hearst Networks EMEA Broadcast Media Production and Distribution 15,095 followers Bus Éireann Ground Passenger Transportation 9,885 followers Kenvue Personal Care Product Manufacturing 454,789 followers Ryanair - Europe's Favourite Airline Airlines and Aviation 866,770 followers. Similar products: Zoho Cliq (Employee Communications Software), HCL Sametime (Employee Communications Software), Flip (Employee Communications Software), Poppulo (Employee Communications Software), Staffbase (Employee Communications Software), Firstup (Employee Communications Software).
2026-01-13T09:29:19
https://doc.rust-lang.org/cargo/commands/cargo-publish.html#publish-options
cargo publish - The Cargo Book. cargo-publish(1) NAME cargo-publish — Upload a package to the registry SYNOPSIS cargo publish [ options ] DESCRIPTION This command will create a distributable, compressed .crate file with the source code of the package in the current directory and upload it to a registry. The default registry is https://crates.io . This performs the following steps: 1. Performs a few checks, including checking the package.publish key in the manifest for restrictions on which registries you are allowed to publish to. 2. Creates a .crate file by following the steps in cargo-package(1) . 3. Uploads the crate to the registry. The server will perform additional checks on the crate. 4. The client will poll waiting for the package to appear in the index, and may time out; in that case, you will need to check for completion manually. This timeout does not affect the upload. This command requires you to be authenticated, either using cargo-login(1) or via the registry.token and registries.<name>.token config fields. See the reference for more details about packaging and publishing. OPTIONS Publish Options --dry-run Perform all checks without uploading. --no-verify Don’t verify the contents by building them. --allow-dirty Allow working directories with uncommitted VCS changes to be packaged. --index index The URL of the registry index to use. --registry registry Name of the registry to publish to. Registry names are defined in Cargo config files . If not specified, and there is a package.publish field in Cargo.toml with a single registry, then it will publish to that registry. Otherwise it will use the default registry, which is defined by the registry.default config key, which defaults to crates-io .
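The package.publish restriction checked in the first step lives in the package's manifest. A minimal sketch of such a manifest (the crate and registry names here are placeholders, not from this page):

```toml
[package]
name = "my-crate"      # hypothetical crate name
version = "0.1.0"
edition = "2021"

# Only allow publishing to this named registry; `publish = false`
# would forbid `cargo publish` entirely for this package.
publish = ["my-registry"]
```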
Package Selection By default, when no package selection options are given, the packages selected depend on the selected manifest file (based on the current working directory if --manifest-path is not given). If the manifest is the root of a workspace then the workspace’s default members are selected, otherwise only the package defined by the manifest will be selected. The default members of a workspace can be set explicitly with the workspace.default-members key in the root manifest. If this is not set, a virtual workspace will include all workspace members (equivalent to passing --workspace ), and a non-virtual workspace will include only the root crate itself. -p spec … --package spec … Publish only the specified packages. See cargo-pkgid(1) for the SPEC format. This flag may be specified multiple times and supports common Unix glob patterns like * , ? and [] . However, to avoid your shell accidentally expanding glob patterns before Cargo handles them, you must use single quotes or double quotes around each pattern. --workspace Publish all members in the workspace. --all Deprecated alias for --workspace . --exclude SPEC … Exclude the specified packages. Must be used in conjunction with the --workspace flag. This flag may be specified multiple times and supports common Unix glob patterns like * , ? and [] . However, to avoid your shell accidentally expanding glob patterns before Cargo handles them, you must use single quotes or double quotes around each pattern. Compilation Options --target triple Publish for the specified target architecture. Flag may be specified multiple times. The default is the host architecture. The general format of the triple is <arch><sub>-<vendor>-<sys>-<abi> . Possible values: Any supported target in rustc --print target-list . "host-tuple" , which will internally be substituted by the host’s target.
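The quoting advice for -p glob patterns can be seen with a plain shell experiment; no cargo is needed, and the file names below are made up purely for illustration:

```shell
# An unquoted glob is expanded by the shell before the command it is
# passed to (cargo, in the doc's case) ever sees it; quoting passes
# the literal pattern through. Demo uses throwaway files in a temp dir.
dir=$(mktemp -d)
cd "$dir"
touch crate-a crate-b

set -- crate-*            # unquoted: the shell expands it to 2 file names
echo "unquoted args: $#"  # prints: unquoted args: 2

set -- 'crate-*'          # quoted: one literal argument survives
echo "quoted args: $#"    # prints: quoted args: 1
```

This is why `cargo publish -p 'my-crate-*'` behaves predictably while the unquoted form depends on what happens to be in the current directory.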
This can be particularly useful if you’re cross-compiling some crates, and don’t want to specify your host’s machine as a target (for instance, an xtask in a shared project that may be worked on by many hosts). A path to a custom target specification. See Custom Target Lookup Path for more information. This may also be specified with the build.target config value . Note that specifying this flag makes Cargo run in a different mode where the target artifacts are placed in a separate directory. See the build cache documentation for more details. --target-dir directory Directory for all generated artifacts and intermediate files. May also be specified with the CARGO_TARGET_DIR environment variable, or the build.target-dir config value . Defaults to target in the root of the workspace. Feature Selection The feature flags allow you to control which features are enabled. When no feature options are given, the default feature is activated for every selected package. See the features documentation for more details. -F features --features features Space or comma separated list of features to activate. Features of workspace members may be enabled with package-name/feature-name syntax. This flag may be specified multiple times, which enables all specified features. --all-features Activate all available features of all selected packages. --no-default-features Do not activate the default feature of the selected packages. Manifest Options --manifest-path path Path to the Cargo.toml file. By default, Cargo searches for the Cargo.toml file in the current directory or any parent directory. --locked Asserts that the exact same dependencies and versions are used as when the existing Cargo.lock file was originally generated. Cargo will exit with an error when either of the following scenarios arises: The lock file is missing. Cargo attempted to change the lock file due to a different dependency resolution. 
It may be used in environments where deterministic builds are desired, such as in CI pipelines. --offline Prevents Cargo from accessing the network for any reason. Without this flag, Cargo will stop with an error if it needs to access the network and the network is not available. With this flag, Cargo will attempt to proceed without the network if possible. Beware that this may result in different dependency resolution than online mode. Cargo will restrict itself to crates that are downloaded locally, even if there might be a newer version as indicated in the local copy of the index. See the cargo-fetch(1) command to download dependencies before going offline. May also be specified with the net.offline config value . --frozen Equivalent to specifying both --locked and --offline . --lockfile-path PATH Changes the path of the lockfile from the default ( <workspace_root>/Cargo.lock ) to PATH . PATH must end with Cargo.lock (e.g. --lockfile-path /tmp/temporary-lockfile/Cargo.lock ). Note that providing --lockfile-path will ignore existing lockfile at the default path, and instead will either use the lockfile from PATH , or write a new lockfile into the provided PATH if it doesn’t exist. This flag can be used to run most commands in read-only directories, writing lockfile into the provided PATH . This option is only available on the nightly channel and requires the -Z unstable-options flag to enable (see #14421 ). Miscellaneous Options -j N --jobs N Number of parallel jobs to run. May also be specified with the build.jobs config value . Defaults to the number of logical CPUs. If negative, it sets the maximum number of parallel jobs to the number of logical CPUs plus provided value. If a string default is provided, it sets the value back to defaults. Should not be 0. --keep-going Build as many crates in the dependency graph as possible, rather than aborting the build on the first one that fails to build. 
For example, if the current package depends on two dependencies named "fails" and "works", one of which fails to build, cargo publish -j1 may or may not build the one that succeeds (depending on which of the two builds Cargo picked to run first), whereas cargo publish -j1 --keep-going would definitely run both builds, even if the one run first fails. Display Options -v --verbose Use verbose output. May be specified twice for “very verbose” output which includes extra output such as dependency warnings and build script output. May also be specified with the term.verbose config value . -q --quiet Do not print cargo log messages. May also be specified with the term.quiet config value . --color when Control when colored output is used. Valid values: auto (default): Automatically detect if color support is available on the terminal. always : Always display colors. never : Never display colors. May also be specified with the term.color config value . Common Options + toolchain If Cargo has been installed with rustup, and the first argument to cargo begins with + , it will be interpreted as a rustup toolchain name (such as +stable or +nightly ). See the rustup documentation for more information about how toolchain overrides work. --config KEY=VALUE or PATH Overrides a Cargo configuration value. The argument should be in TOML syntax of KEY=VALUE , or provided as a path to an extra configuration file. This flag may be specified multiple times. See the command-line overrides section for more information. -C PATH Changes the current working directory before executing any specified operations. This affects things like where cargo looks by default for the project manifest ( Cargo.toml ), as well as the directories searched for discovering .cargo/config.toml , for example. This option must appear before the command name, for example cargo -C path/to/my-project build . This option is only available on the nightly channel and requires the -Z unstable-options flag to enable (see #10098 ).
-h --help Prints help information. -Z flag Unstable (nightly-only) flags to Cargo. Run cargo -Z help for details. ENVIRONMENT See the reference for details on environment variables that Cargo reads. EXIT STATUS 0 : Cargo succeeded. 101 : Cargo failed to complete. EXAMPLES Publish the current package: cargo publish SEE ALSO cargo(1) , cargo-package(1) , cargo-login(1)
2026-01-13T09:29:19