| url | text | ts |
|---|---|---|
https://docs.aws.amazon.com/es_es/AmazonCloudWatch/latest/monitoring/CloudWatch-Agent-configure-related-telemetry.html | Configure CloudWatch agent environment and service names for related entities - Amazon CloudWatch. The CloudWatch agent can send metrics and logs with entity data to support the "Explore related" pane in the CloudWatch console. The service name or the environment name can be configured through the CloudWatch agent's JSON configuration. Note: the agent configuration can be overridden. For more information about how the agent decides which entity data to send, see "Using the CloudWatch agent with related telemetry". For metrics, this can be configured at the agent, metrics, or plugin level. For logs, it can be configured at the agent, logs, or file level. The most specific configuration always wins. For example, if the configuration exists at both the agent level and the metrics level, metrics will use the metric-level configuration, while everything else (logs) will use the agent-level configuration. The following example shows different ways to configure the service name and the environment name.
{ "agent": { "service.name": "agent-level-service", "deployment.environment": "agent-level-environment" }, "metrics": { "service.name": "metric-level-service", "deployment.environment": "metric-level-environment", "metrics_collected": { "statsd": { "service.name": "statsd-level-service", "deployment.environment": "statsd-level-environment", }, "collectd": { "service.name": "collectdd-level-service", "deployment.environment": "collectd-level-environment", } } }, "logs": { "service.name": "log-level-service", "deployment.environment": "log-level-environment", "logs_collected": { "files": { "collect_list": [ { "file_path": "/opt/aws/amazon-cloudwatch-agent/logs/amazon-cloudwatch-agent.log", "log_group_name": "amazon-cloudwatch-agent.log", "log_stream_name": "amazon-cloudwatch-agent.log", "service.name": "file-level-service", "deployment.environment": "file-level-environment" } ] } } } } JavaScript está desactivado o no está disponible en su navegador. Para utilizar la documentación de AWS, debe estar habilitado JavaScript. Para obtener más información, consulte las páginas de ayuda de su navegador. Convenciones del documento Configuración de la recopilación de métricas de Prometheus en instancias de Amazon EC2 Inicie el agente de CloudWatch ¿Le ha servido de ayuda esta página? - Sí Gracias por hacernos saber que estamos haciendo un buen trabajo. Si tiene un momento, díganos qué es lo que le ha gustado para que podamos seguir trabajando en esa línea. ¿Le ha servido de ayuda esta página? - No Gracias por informarnos de que debemos trabajar en esta página. Lamentamos haberle defraudado. Si tiene un momento, díganos cómo podemos mejorar la documentación. | 2026-01-13T09:29:25 |
https://docs.brightdata.com/api-reference/web-scraper-api/social-media-apis/tiktok#overview | TikTok API Scrapers - Bright Data Docs. Overview: The TikTok API Suite offers multiple types of APIs, each designed for a specific data-collection need on TikTok. Below is an overview of how these APIs connect and interact, based on the available features. Profile API: collects profile details from a single input, the profile URL. Discovery functionality: direct URL of the search. Interesting columns: nickname, awg_engagement_rate, followers, likes. Posts API: collects multiple posts from a single input URL. Discovery functionality: direct URL of the TikTok profile, discover by keywords, or direct URL of the discovery page. Interesting columns: url, share_count, description, hashtags. Comments API: collects multiple comments from a post using its URL.
Discovery functionality: N/A. Interesting columns: url, comment_text, commenter_url, num_likes. Profile API, Collect by URL: this API allows users to retrieve detailed TikTok profile information using the provided profile URL. Input parameters: url (string, required), the TikTok profile URL. Output structure includes comprehensive data points. Profile details: account_id, nickname, biography, bio_link, predicted_lang, is_verified, followers, following, likes, videos_count, create_time, id, url, profile_pic_url, profile_pic_url_hd, and more. Engagement metrics: awg_engagement_rate, comment_engagement_rate, like_engagement_rate, like_count, digg_count. Privacy & settings: is_private, relation, open_favorite, comment_setting, duet_setting, stitch_setting, is_ad_virtual, room_id, is_under_age_18. Discovery & top videos: region, top_videos, discovery_input. This API offers insight into user activity and profile data, including engagement metrics, privacy settings, and top videos. Discover by Search URL: this API allows users to discover TikTok profiles based on a specific search URL and country, providing detailed profile information. Input parameters: search_url (string, required), the TikTok search URL; country (string, required), the country from which to perform the search. Output structure includes comprehensive data points. Profile details: account_id, nickname, biography, bio_link, predicted_lang, is_verified, followers, following, likes, videos_count, create_time, id, url, profile_pic_url, profile_pic_url_hd, and more. Engagement metrics: awg_engagement_rate, comment_engagement_rate, like_engagement_rate, like_count, digg_count. Privacy & settings: is_private, relation, open_favorite, comment_setting, duet_setting, stitch_setting, is_ad_virtual, room_id, is_under_age_18.
Discovery & top videos: region, top_videos, discovery_input. This API enables users to discover TikTok profiles based on search criteria, offering insight into user activity, engagement, privacy settings, and top content, and facilitating efficient discovery and analysis of TikTok users. Posts API, Collect by URL: this API enables users to collect detailed data from TikTok posts by providing a post URL. Input parameters: url (string, required), the TikTok post URL. Output structure includes comprehensive data points. Post details: post_id, description, create_time, share_count, collect_count, comment_count, play_count, video_duration, hashtags, original_sound, official_item, original_item, shortcode, video_url, music, cdn_url, width, carousel_images, and more. Profile details: profile_id, profile_username, profile_url, profile_avatar, profile_biography, account_id, profile_followers, is_verified. Tagged users and media: tagged_user, carousel_images. Additional information: tt_chain_token, secu_id. Discover by Profile URL: this API allows users to retrieve posts from a TikTok profile based on a provided profile URL, with filtering options for the number of posts, date range, and post exclusions. Input parameters: url (string, required), the TikTok profile URL; num_of_posts (number), the number of posts to collect (if not provided, there is no limit); posts_to_not_include (array), an array of post IDs to exclude from the collection; start_date (string), start date for filtering posts (format mm-dd-yyyy, must be earlier than end_date); end_date (string), end date for filtering posts (format mm-dd-yyyy, must be later than start_date); what_to_collect (string), the type of posts to collect (e.g., "post" or "reel").
Output structure includes comprehensive data points. Post details: post_id, description, create_time, share_count, collect_count, comment_count, play_count, video_duration, hashtags, original_sound, official_item, original_item, shortcode, video_url, music, cdn_url, width, carousel_images, and more. Profile details: profile_id, profile_username, profile_url, profile_avatar, profile_biography, account_id, profile_followers, is_verified. Tagged users and media: tagged_user, carousel_images. Additional information: tt_chain_token, secu_id. This API allows users to discover and retrieve detailed information about posts from a specific TikTok profile, including post-specific metrics, profile details of the creator, and tagged users; it supports efficient content discovery and post analysis. Discover by Keywords: this API allows users to search for TikTok posts by specific keywords or hashtags, a powerful tool for discovering relevant content across TikTok's platform. Input parameters: search_keyword (string, required), the keyword or hashtag to search for within TikTok posts; num_of_posts (number), the number of posts to collect (if not provided, there is no limit); posts_to_not_include (array), an array of post IDs to exclude from the collection; what_to_collect (string), the type of posts to collect (e.g., "post" or "reel"). Output structure includes comprehensive data points. Post details: post_id, description, create_time, digg_count, share_count, collect_count, comment_count, play_count, video_duration, hashtags, original_sound, post_type, discovery_input, official_item, and more. Profile details: profile_id, profile_username, profile_url, profile_avatar, profile_biography, account_id, profile_followers, is_verified. Tagged users and media: tagged_user, carousel_images. Additional information: tt_chain_token, secu_id.
This API allows users to discover posts on TikTok that match specific keywords or hashtags, providing insight into post details, profile information, and media; it is a great tool for exploring trends, content, and users on TikTok. Discover by Discover URL: this API allows users to collect detailed post data from a specific TikTok discover URL. Input parameters: url (string, required), the TikTok discover URL from which posts will be retrieved. Output structure includes comprehensive data points. Post details: post_id, description, create_time, digg_count, share_count, collect_count, comment_count, play_count, video_duration, hashtags, original_sound, post_type, discovery_input, official_item, original_item, and more. Profile details: profile_id, profile_username, profile_url, profile_avatar, profile_biography, account_id, profile_followers, is_verified. Tagged users and media: tagged_user, carousel_images. Additional information: tt_chain_token, secu_id. This API provides detailed insight into TikTok posts discovered via the discover URL, allowing easy access to trending content, user profiles, and post metadata for analysis and exploration. Comments API, Collect by URL: this API allows users to collect detailed comment data from a specific TikTok post using the provided post URL. Input parameters: url (string, required), the TikTok post URL. Output structure includes comprehensive data points. Post details: post_url, post_id, post_date_created. Comment details: date_created, comment_text, num_likes, num_replies, comment_id, comment_url. Commenter details: commenter_user_name, commenter_id, commenter_url. This API provides detailed insight into TikTok post comments, including comment-specific metrics and information about the commenters, enabling effective comment analysis and interaction tracking.
 | 2026-01-13T09:29:25 |
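As a hedged sketch, a "Discover by Profile URL" run could be triggered through Bright Data's asynchronous Web Scraper API (the POST "Asynchronous Requests" endpoint referenced in the page's navigation). The dataset ID and API token below are placeholder assumptions, as is the exact trigger URL; check your account dashboard and the Asynchronous Requests reference for real values:

```python
# Hedged sketch: build the input rows for a TikTok Posts
# "discover by profile URL" run. DATASET_ID and API_TOKEN are
# placeholder assumptions; real values come from your Bright Data account.
import json
from urllib import request

API_TOKEN = "YOUR_API_TOKEN"    # placeholder
DATASET_ID = "YOUR_DATASET_ID"  # placeholder TikTok Posts dataset ID

def build_input_rows(profile_url, num_of_posts=None,
                     start_date=None, end_date=None):
    """One input row per profile; omitted filters are left out entirely
    (an unset num_of_posts means no limit, per the parameter table above)."""
    row = {"url": profile_url}
    if num_of_posts is not None:
        row["num_of_posts"] = num_of_posts
    if start_date is not None:
        row["start_date"] = start_date  # mm-dd-yyyy, must be earlier than end_date
    if end_date is not None:
        row["end_date"] = end_date      # mm-dd-yyyy
    return [row]

rows = build_input_rows("https://www.tiktok.com/@example",
                        num_of_posts=5,
                        start_date="01-01-2025", end_date="06-30-2025")
body = json.dumps(rows).encode()

# Uncomment to actually trigger the run (needs a valid token and dataset ID):
# req = request.Request(
#     f"https://api.brightdata.com/datasets/v3/trigger?dataset_id={DATASET_ID}",
#     data=body, method="POST",
#     headers={"Authorization": f"Bearer {API_TOKEN}",
#              "Content-Type": "application/json"})
# print(request.urlopen(req).read().decode())
```

Building the rows separately from sending them keeps the filter logic testable without network access.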
https://git-scm.com/book/fa/v2/%d8%b4%d8%b1%d9%88%d8%b9-%d8%a8%d9%87-%da%a9%d8%a7%d8%b1-getting-started-%d9%86%d8%b5%d8%a8-%da%af%db%8c%d8%aa-Installing-Git | Git - Installing Git. This book is available in English, with full and partial translations in many other languages; the source of this book is hosted on GitHub, and patches, suggestions, and comments are welcome. 2nd Edition, 1.4 Getting Started - Installing Git. Before you start using Git, you have to make it available on your computer. Even if it's already installed, it's probably a good idea to update to the latest version.
You can install it as a package, via another installer, or by downloading the source code and compiling it yourself. This book was written using Git version 2. Since Git is quite good at preserving backwards compatibility, any recent version should work just fine. Although most of the commands used here should work even in ancient versions of Git, some of them might not, or might act slightly differently.

Installing on Linux: If you want to install the basic Git tools on Linux via a binary installer, you can generally do so through the package-management tool that comes with your distribution. If you're on Fedora (or any closely related RPM-based distribution, such as RHEL or CentOS), you can use dnf:

$ sudo dnf install git-all

If you're on a Debian-based distribution, such as Ubuntu, try apt:

$ sudo apt install git-all

For more options, there are instructions for installing on several different Unix distributions on the Git website, at https://git-scm.com/download/linux.

Installing on macOS: There are several ways to install Git on macOS. The easiest is probably to install the Xcode Command Line Tools; on Mavericks (10.9) or above, you can do this simply by trying to run git from the Terminal for the very first time:

$ git --version

If you don't have it installed already, it will prompt you to install it. If you want a more up-to-date version, you can also install it via a binary installer. A macOS Git installer is maintained and available for download on the Git website, at https://git-scm.com/download/mac. Figure 7. Git macOS installer.

Installing on Windows: There are also a few ways to install Git on Windows. The most official build is available for download on the Git website; just go to https://git-scm.com/download/win and the download will start automatically. Note that this is a project called Git for Windows, which is separate from Git itself; for more information on it, go to https://gitforwindows.org.
To get an automated installation, you can use the Git Chocolatey package at https://community.chocolatey.org/packages/git. Note that the Chocolatey package is community maintained.

Installing from Source: Some people may instead find it useful to install Git from source, because you'll get the most recent version. The binary installers tend to be a bit behind, though as Git has matured in recent years, this has made less of a difference. If you do want to install Git from source, you need to have the following libraries that Git depends on: autotools, curl, zlib, openssl, expat, and libiconv. For example, if you're on a system that has dnf (such as Fedora) or apt-get (such as a Debian-based system), you can use one of these commands to install the minimal dependencies for compiling and installing the Git binaries:

$ sudo dnf install dh-autoreconf curl-devel expat-devel gettext-devel \ openssl-devel perl-devel zlib-devel
$ sudo apt-get install dh-autoreconf libcurl4-gnutls-dev libexpat1-dev \ gettext libz-dev libssl-dev

In order to be able to add the documentation in various formats (doc, html, info), these additional dependencies are required:

$ sudo dnf install asciidoc xmlto docbook2X
$ sudo apt-get install asciidoc xmlto docbook2x

Note: Users of RHEL and RHEL-derivatives like CentOS and Scientific Linux will have to enable the EPEL repository to download the docbook2X package.

If you're using a Debian-based distribution (Debian/Ubuntu/Ubuntu-derivatives), you also need the install-info package:

$ sudo apt-get install install-info

If you're using an RPM-based distribution (Fedora/RHEL/RHEL-derivatives), you also need the getopt package (which is already installed on Debian-based distros):

$ sudo dnf install getopt

Additionally, if you're using Fedora/RHEL/RHEL-derivatives, you need to do this because of binary-name differences:

$ sudo ln -s /usr/bin/db2x_docbook2texi /usr/bin/docbook2x-texi

When you have all the necessary dependencies, you can go ahead and grab the latest tagged release tarball from several places.
You can get it via the kernel.org site, at https://www.kernel.org/pub/software/scm/git, or the mirror on the GitHub website, at https://github.com/git/git/tags. It's generally a little clearer what the latest version is on the GitHub page, but the kernel.org page also has release signatures if you'd like to verify your download. Then, compile and install:

$ tar -zxf git-2.8.0.tar.gz
$ cd git-2.8.0
$ make configure
$ ./configure --prefix=/usr
$ make all doc info
$ sudo make install install-doc install-html install-info

After you've done this, you can get Git via Git itself for updates:

$ git clone https://git.kernel.org/pub/scm/git/git.git | 2026-01-13T09:29:25 |
https://www.linkedin.com/uas/login?session_redirect=%2Fproducts%2Ftechnarts-numerus%3FviewConnections%3Dtrue&trk=products_details_guest_face-pile-cta | LinkedIn Login, Sign in - LinkedIn | 2026-01-13T09:29:25 |
https://docs.brightdata.com/api-reference/web-scraper-api/social-media-apis/tiktok#discover-by-profile-url | TikTok API Scrapers - Bright Data Docs | 2026-01-13T09:29:25 |
https://docs.aws.amazon.com/ko_kr/AmazonCloudWatch/latest/monitoring/CloudWatch-Agent-common-scenarios.html | Common scenarios with the CloudWatch agent - Amazon CloudWatch This section provides a variety of scenarios that explain how to complete common configuration and customization tasks for the CloudWatch agent. Topics: Running the CloudWatch agent as a different user; How the CloudWatch agent handles sparse log files; Adding custom dimensions to metrics collected by the CloudWatch agent; Aggregating or rolling up metrics collected by the CloudWatch agent; Collecting high-resolution metrics with the CloudWatch agent; Sending metrics, logs, and traces to a different account; Timestamp differences between the CloudWatch agent and the earlier CloudWatch Logs agent; Adding an OpenTelemetry collector configuration file. Running the CloudWatch agent as a different user On Linux servers, the CloudWatch agent runs as the root user by default. To run the agent as a different user, use the run_as_user parameter in the agent section of the CloudWatch agent configuration file. This option is available only on Linux servers. If you are already running the agent as the root user and want to change to a different user, use one of the following procedures. To run the CloudWatch agent as a different user on an EC2 instance running Linux: Download and install a new CloudWatch agent package. Create a new Linux user, or use the default user named cwagent that the RPM or DEB file created. Provide credentials for this user in one of the following ways: If the .aws/credentials file exists in the home directory of the root user, you must create a credentials file for the user that will run the CloudWatch agent. This credentials file is /home/ username /.aws/credentials . Then set the value of the shared_credential_file parameter in common-config.toml to the path of the credentials file. For more information, see Installing the CloudWatch agent using AWS Systems Manager. If the .aws/credentials file doesn't exist in the home directory of the root user, you can do one of the following: Create a credentials file for the user that will run the CloudWatch agent. This credentials file is /home/ username /.aws/credentials . Then set the value of the shared_credential_file parameter in common-config.toml to the path of the credentials file. For more information, see Installing the CloudWatch agent using AWS Systems Manager. Instead of creating a credentials file, attach an IAM role to the instance.
The agent uses that role as its credentials provider. In the CloudWatch agent configuration file, add the following line to the agent section: "run_as_user": " username " Make any other modifications to the configuration file as needed. For more information, see Creating the CloudWatch agent configuration file. Grant the required permissions to the user: the user must have read (r) permission for the log files to be collected and execute (x) permission for every directory in the log file path. Start the agent with the configuration file you just modified: sudo /opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-ctl -a fetch-config -m ec2 -s -c file: configuration-file-path To run the CloudWatch agent as a different user on an on-premises server running Linux: Download and install a new CloudWatch agent package. Create a new Linux user, or use the default user named cwagent that the RPM or DEB file created. Store this user's credentials in a path that the user can access, such as /home/ username /.aws/credentials . Set the value of the shared_credential_file parameter in common-config.toml to the path of the credentials file. For more information, see Installing the CloudWatch agent using AWS Systems Manager. In the CloudWatch agent configuration file, add the following line to the agent section: "run_as_user": " username " Make any other modifications to the configuration file as needed. For more information, see Creating the CloudWatch agent configuration file. Grant the required permissions to the user: the user must have read (r) permission for the log files to be collected and execute (x) permission for every directory in the log file path. Start the agent with the configuration file you just modified: sudo /opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-ctl -a fetch-config -m ec2 -s -c file: configuration-file-path How the CloudWatch agent handles sparse log files A sparse file is a file that contains both empty blocks and real content. A sparse file uses disk space more efficiently by writing brief information that represents an empty block to disk, instead of the actual null bytes that make up the block. This typically makes the actual size of a sparse file much smaller than its apparent size. However, the CloudWatch agent doesn't treat sparse files differently from regular files. When the agent reads a sparse file, the empty blocks are treated as "real" blocks filled with null bytes. Because of this, the CloudWatch agent publishes as many bytes to CloudWatch as the apparent size of the sparse file. Configuring the CloudWatch agent to publish sparse files can result in higher-than-expected CloudWatch charges, so we recommend against doing so. For example, /var/logs/lastlog on Linux is usually a sparse file, and we recommend that you don't publish it to CloudWatch. Adding custom dimensions to metrics collected by the CloudWatch agent To add custom dimensions such as tags to the metrics collected by the agent, add an append_dimensions field to the section of the agent configuration file that lists those metrics. For example, the following example section of the configuration file adds a custom dimension named stackName with a value of Prod to the cpu and disk metrics collected by the agent.
"cpu": { "resources":[ "*" ], "measurement":[ "cpu_usage_guest", "cpu_usage_nice", "cpu_usage_idle" ], "totalcpu":false, "append_dimensions": { "stackName":"Prod" } }, "disk": { "resources":[ "/", "/tmp" ], "measurement":[ "total", "used" ], "append_dimensions": { "stackName":"Prod" } } Every time you change the agent configuration file, you must restart the agent for the changes to take effect. Aggregating or rolling up metrics collected by the CloudWatch agent To aggregate or roll up the metrics collected by the agent, add an aggregation_dimensions field to the section for those metrics in the agent configuration file. For example, the following configuration file snippet rolls up metrics on the AutoScalingGroupName dimension. The metrics from all instances in each Auto Scaling group are aggregated and can be viewed as a whole. "metrics": { "cpu": { ...} "disk": { ...} "aggregation_dimensions" : [["AutoScalingGroupName"]] } To roll up along each combination of the InstanceId and InstanceType dimensions, in addition to rolling up on the Auto Scaling group name, add the following: "metrics": { "cpu": { ...} "disk": { ...} "aggregation_dimensions" : [["AutoScalingGroupName"], ["InstanceId", "InstanceType"]] } To instead roll up the metrics into one collection, use [] : "metrics": { "cpu": { ...} "disk": { ...} "aggregation_dimensions" : [[]] } Every time you change the agent configuration file, you must restart the agent for the changes to take effect. Collecting high-resolution metrics with the CloudWatch agent The metrics_collection_interval field specifies, in seconds, the interval at which metrics are collected. If you specify a value of less than 60 for this field, the metrics are collected as high-resolution metrics. For example, if all of your metrics should be high-resolution metrics collected every 10 seconds, specify 10 for metrics_collection_interval in the agent section as the global metrics collection interval: "agent": { "metrics_collection_interval": 10 } Alternatively, the following example sets the cpu metrics to be collected every second and all other metrics to be collected every minute: "agent": { "metrics_collection_interval": 60 }, "metrics": { "metrics_collected": { "cpu": { "resources":[ "*" ], "measurement":[ "cpu_usage_guest" ], "totalcpu":false, "metrics_collection_interval": 1 }, "disk": { "resources":[ "/", "/tmp" ], "measurement":[ "total", "used" ] } } } Every time you change the agent configuration file, you must restart the agent for the changes to take effect. Sending metrics, logs, and traces to a different account To have the CloudWatch agent send metrics, logs, and traces to a different account, specify the role_arn parameter in the agent configuration file on the sending server. The role_arn value specifies an IAM role in the target account that the agent uses when it sends data to the target account. This role enables the sending account to assume the corresponding role in the target account when delivering metrics or logs to it.
You can also specify separate role_arn strings in the agent configuration file: one to use when sending metrics, one when sending logs, and one when sending traces. The following example of part of the agent section of the configuration file sets the agent to use CrossAccountAgentRole when sending data to a different account: { "agent": { "credentials": { "role_arn": "arn:aws:iam::123456789012:role/CrossAccountAgentRole" } }, ..... } Alternatively, the following example sets different roles for the sending account to use when sending metrics and when sending logs: "metrics": { "credentials": { "role_arn": "RoleToSendMetrics" }, "metrics_collected": { .... "logs": { "credentials": { "role_arn": "RoleToSendLogs" }, .... Policies needed When you specify role_arn in the agent configuration file, you must also make sure that the IAM roles in the sending and target accounts have certain policies. The roles in both the sending and target accounts must have CloudWatchAgentServerPolicy . For more information about assigning this policy to a role, see Prerequisites. The role in the sending account must also include the following policy. You add this policy on the Permissions tab in the IAM console when you edit the role. JSON { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "sts:AssumeRole" ], "Resource": [ "arn:aws:iam:: 111122223333 :role/ agent-role-in-target-account " ] } ] } The role in the target account must include the following policy so that it recognizes the IAM role used by the sending account. You add this policy on the Trust relationships tab in the IAM console when you edit the role. This role is the role specified as agent-role-in-target-account in the policy used by the sending account. JSON { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Principal": { "AWS": [ "arn:aws:iam:: 111122223333 :role/ role-in-sender-account " ] }, "Action": "sts:AssumeRole" } ] } Timestamp differences between the CloudWatch agent and the earlier CloudWatch Logs agent The CloudWatch agent supports a different set of symbols for timestamp formats than the earlier CloudWatch Logs agent. These differences are shown in the following table. Symbols supported by both agents: %A, %a, %b, %B, %d, %f, %H, %l, %m, %M, %p, %S, %y, %Y, %Z, %z Symbols supported only by the CloudWatch agent: %-d, %-l, %-m, %-M, %-S Symbols supported only by the earlier CloudWatch Logs agent: %c, %j, %U, %W, %w For more information about the meanings of the symbols supported by the new CloudWatch agent, see CloudWatch agent configuration file: Logs section in the Amazon CloudWatch User Guide. For information about the symbols supported by the CloudWatch Logs agent, see Agent configuration file in the Amazon CloudWatch Logs User Guide. Adding an OpenTelemetry collector configuration file The CloudWatch agent supports an additional OpenTelemetry collector configuration file alongside its own configuration file.
This capability lets you use CloudWatch agent features (such as CloudWatch Application Signals or Container Insights) through the CloudWatch agent configuration, while bringing an existing OpenTelemetry collector configuration to the same single agent. To avoid merge conflicts with the pipelines that the CloudWatch agent generates automatically, we recommend adding a custom suffix to each component and pipeline in your OpenTelemetry collector configuration. receivers: otlp/custom-suffix: protocols: http: exporters: awscloudwatchlogs/custom-suffix: log_group_name: "test-group" log_stream_name: "test-stream" service: pipelines: logs/custom-suffix: receivers: [otlp/custom-suffix] exporters: [awscloudwatchlogs/custom-suffix] To configure the CloudWatch agent, start it with the fetch-config option and specify the CloudWatch agent configuration file. The CloudWatch agent requires at least one CloudWatch agent configuration file. /opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-ctl -a fetch-config -c file:/tmp/agent.json -s Then specify the OpenTelemetry collector configuration file with the append-config option. /opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-ctl -a append-config -c file:/tmp/otel.yaml -s At startup, the agent merges the two configuration files and logs the resolved configuration. | 2026-01-13T09:29:25 |
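The "most specific setting wins" precedence that these scenarios rely on — for example, a per-plugin metrics_collection_interval overriding the agent-level interval — can be sketched in Python. The helper and the config dict below are illustrative only, not part of the agent:

```python
# Toy illustration of interval precedence in a CloudWatch agent configuration:
# a per-plugin "metrics_collection_interval" overrides the agent-level value,
# which itself defaults to 60 seconds when omitted.

def effective_interval(config: dict, plugin: str) -> int:
    """Return the collection interval (seconds) the agent would use for a plugin."""
    agent_default = config.get("agent", {}).get("metrics_collection_interval", 60)
    plugin_cfg = config.get("metrics", {}).get("metrics_collected", {}).get(plugin, {})
    return plugin_cfg.get("metrics_collection_interval", agent_default)

config = {
    "agent": {"metrics_collection_interval": 60},
    "metrics": {
        "metrics_collected": {
            "cpu": {"measurement": ["cpu_usage_guest"], "metrics_collection_interval": 1},
            "disk": {"measurement": ["total", "used"]},
        }
    },
}

print(effective_interval(config, "cpu"))   # 1  (per-plugin override: high-resolution)
print(effective_interval(config, "disk"))  # 60 (falls back to the agent-level value)
```

The same resolution order applies to the service.name and deployment.environment settings described elsewhere in this guide: the most specific level (file or plugin) wins over the metrics/logs level, which wins over the agent level.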
https://docs.brightdata.com/api-reference/web-scraper-api/social-media-apis/tiktok#discover-by-keywords | TikTok API Scrapers - Bright Data Docs Overview The TikTok API Suite offers multiple types of APIs, each designed for specific data collection needs from TikTok. Below is an overview of how these APIs connect and interact, based on the available features: Profile API This API allows users to collect profile details based on a single input: profile URL. Discovery functionality : Direct URL of the search Interesting Columns : nickname , awg_engagement_rate , followers , likes Posts API This API allows users to collect multiple posts based on a single input URL. Discovery functionality : - Direct URL of the TikTok profile - Discover by keywords - Direct URL of the discovery Interesting Columns : url , share_count , description , hashtags Comments API This API allows users to collect multiple comments from a post using its URL.
Discovery functionality : N/A Interesting Columns : url , comment_text , commenter_url , num_likes Profile API Collect by URL This API allows users to retrieve detailed TikTok profile information using the provided profile URL. Input Parameters : URL string required The TikTok profile URL. Output Structure : Includes comprehensive data points: Profile Details : account_id , nickname , biography , bio_link , predicted_lang , is_verified , followers , following , likes , videos_count , create_time , id , url , profile_pic_url , profile_pic_url_hd , and more. For all data points, click here . Engagement Metrics : awg_engagement_rate , comment_engagement_rate , like_engagement_rate , like_count , digg_count . Privacy & Settings : is_private , relation , open_favorite , comment_setting , duet_setting , stitch_setting , is_ad_virtual , room_id , is_under_age_18 . Discovery & Top Videos : region , top_videos , discovery_input . This API allows users to retrieve detailed TikTok profile information, including engagement metrics, privacy settings, and top videos, offering insights into user activity and profile data. Discover by Search URL This API allows users to discover TikTok profiles based on a specific search URL and country, providing detailed profile information. Input Parameters : search_url string required The TikTok search URL. country string required The country from which to perform the search. Output Structure : Includes comprehensive data points: Profile Details : account_id , nickname , biography , bio_link , predicted_lang , is_verified , followers , following , likes , videos_count , create_time , id , url , profile_pic_url , profile_pic_url_hd , and more. For all data points, click here . Engagement Metrics : awg_engagement_rate , comment_engagement_rate , like_engagement_rate , like_count , digg_count . Privacy & Settings : is_private , relation , open_favorite , comment_setting , duet_setting , stitch_setting , is_ad_virtual , room_id , is_under_age_18 . 
Discovery & Top Videos : region , top_videos , discovery_input . This API enables users to discover TikTok profiles based on search criteria, offering insights into user activity, engagement, privacy settings, and top content. It facilitates efficient discovery and analysis of TikTok users. Posts API Collect by URL This API enables users to collect detailed data from TikTok posts by providing a post URL. Input Parameters : URL string required The TikTok post URL. Output Structure : Includes comprehensive data points: Post Details : post_id , description , create_time , share_count , collect_count , comment_count , play_count , video_duration , hashtags , original_sound , official_item , original_item , shortcode , video_url , music , cdn_url , width , carousel_images , and more. For all data points, click here . Profile Details : profile_id , profile_username , profile_url , profile_avatar , profile_biography , account_id , profile_followers , is_verified . Tagged Users and Media : tagged_user , carousel_images . Additional Information : tt_chain_token , secu_id . Discover by Profile URL This API allows users to retrieve posts from a TikTok profile based on a provided profile URL, with filtering options for the number of posts, date range, and post exclusions. Input Parameters : URL string required The TikTok profile URL. num_of_posts number The number of posts to collect. If not provided, there is no limit. posts_to_not_include array An array of post IDs to exclude from the collection. start_date string Start date for filtering posts (format: mm-dd-yyyy). Must be earlier than end_date . end_date string End date for filtering posts (format: mm-dd-yyyy). Must be later than start_date . what_to_collect string Specify the type of posts to collect (e.g., “post” or “reel”).
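The input parameters above can be assembled into a single request item. The following Python helper is a hypothetical sketch — the parameter names come from the list above, but the helper itself and its validation rules (derived from the mm-dd-yyyy format and the start/end ordering constraint) are this example's own, not part of an official SDK:

```python
# Hypothetical builder for a "Discover by Profile URL" input item.
from datetime import datetime

def build_profile_discover_input(url, num_of_posts=None, posts_to_not_include=None,
                                 start_date=None, end_date=None, what_to_collect=None):
    def parse(d):
        return datetime.strptime(d, "%m-%d-%Y")  # mm-dd-yyyy, per the docs above
    if start_date and end_date and parse(start_date) >= parse(end_date):
        raise ValueError("start_date must be earlier than end_date")
    item = {"url": url}
    if num_of_posts is not None:
        item["num_of_posts"] = num_of_posts
    if posts_to_not_include:
        item["posts_to_not_include"] = posts_to_not_include
    if start_date:
        item["start_date"] = start_date
    if end_date:
        item["end_date"] = end_date
    if what_to_collect:
        item["what_to_collect"] = what_to_collect
    return item

item = build_profile_discover_input(
    "https://www.tiktok.com/@exampleuser",  # hypothetical profile URL
    num_of_posts=50, start_date="01-01-2024", end_date="03-01-2024",
)
```

Optional parameters are simply omitted from the item, matching the "if not provided, there is no limit" behavior described for num_of_posts.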
Output Structure : Includes comprehensive data points: Post Details : post_id , description , create_time , share_count , collect_count , comment_count , play_count , video_duration , hashtags , original_sound , official_item , original_item , shortcode , video_url , music , cdn_url , width , carousel_images , and more. For all data points, click here . Profile Details : profile_id , profile_username , profile_url , profile_avatar , profile_biography , account_id , profile_followers , is_verified . Tagged Users and Media : tagged_user , carousel_images . Additional Information : tt_chain_token , secu_id . This API allows users to discover and retrieve detailed information about posts from a specific TikTok profile, including post-specific metrics, profile details of the creator, and tagged users. It supports efficient content discovery and post analysis. Discover by Keywords This API allows users to search for TikTok posts based on specific keywords or hashtags, offering a powerful tool for discovering relevant content across TikTok’s platform. Input Parameters : search_keyword string required The keyword or hashtag to search for within TikTok posts. num_of_posts number The number of posts to collect. If not provided, there is no limit. posts_to_not_include array An array of post IDs to exclude from the collection. what_to_collect string Specify the type of posts to collect (e.g., “post” or “reel”). Output Structure : Includes comprehensive data points: Post Details : post_id , description , create_time , digg_count , share_count , collect_count , comment_count , play_count , video_duration , hashtags , original_sound , post_type , discovery_input , official_item , and more. For all data points, click here . Profile Details : profile_id , profile_username , profile_url , profile_avatar , profile_biography , account_id , profile_followers , is_verified . Tagged Users and Media : tagged_user , carousel_images . Additional Information : tt_chain_token , secu_id . 
This API allows users to discover posts on TikTok that match specific keywords or hashtags, providing insights into post details, profile information, and media. It’s a great tool for exploring trends, content, and users on TikTok. Discover by Discover URL This API allows users to collect detailed post data from a specific TikTok discover URL. Input Parameters : URL string required The TikTok discover URL from which posts will be retrieved. Output Structure : Includes comprehensive data points: Post Details : post_id , description , create_time , digg_count , share_count , collect_count , comment_count , play_count , video_duration , hashtags , original_sound , post_type , discovery_input , official_item , original_item , and more. For all data points, click here . Profile Details : profile_id , profile_username , profile_url , profile_avatar , profile_biography , account_id , profile_followers , is_verified . Tagged Users and Media : tagged_user , carousel_images . Additional Information : tt_chain_token , secu_id . This API provides detailed insights into TikTok posts discovered via the discover URL, allowing for easy access to trending content, user profiles, and post metadata for analysis and exploration. Comments API Collect by URL This API allows users to collect detailed comment data from a specific TikTok post using the provided post URL. Input Parameters : URL string required The TikTok post URL. Output Structure : Includes comprehensive data points: Post Details : post_url , post_id , post_date_created . For all data points, click here . Comment Details : date_created , comment_text , num_likes , num_replies , comment_id , comment_url . Commenter Details : commenter_user_name , commenter_id , commenter_url . This API provides detailed insights into TikTok post comments, including comment-specific metrics and information about the commenters, enabling effective comment analysis and interaction tracking.
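These scrapers are triggered through the Web Scraper API endpoints covered elsewhere in these docs. The sketch below assembles (but does not send) a request to the asynchronous /datasets/v3/trigger endpoint for the Comments "Collect by URL" scraper; the dataset ID and post URL are placeholders, and sending the request requires a real API key:

```python
# Minimal sketch of a trigger request for a TikTok scraper. Only the request
# components are built here; no network call is made.
import json
from urllib.parse import urlencode

API_BASE = "https://api.brightdata.com/datasets/v3"

def build_trigger_request(api_key, dataset_id, inputs):
    """Return (url, headers, body) for POSTing to the trigger endpoint."""
    url = f"{API_BASE}/trigger?{urlencode({'dataset_id': dataset_id})}"
    headers = {"Authorization": f"Bearer {api_key}",
               "Content-Type": "application/json"}
    body = json.dumps(inputs)
    return url, headers, body

url, headers, body = build_trigger_request(
    "YOUR_API_KEY",
    "gd_xxxxxxxx",                                        # placeholder dataset ID
    [{"url": "https://www.tiktok.com/@user/video/123"}],  # hypothetical post URL
)
# To send it: urllib.request.urlopen(urllib.request.Request(url, body.encode(), headers))
```

The same shape works for the other scrapers on this page — only the dataset ID and the fields of each input object (e.g., search_keyword for "Discover by Keywords") change.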
| 2026-01-13T09:29:25 |
https://docs.brightdata.com/api-reference/web-scraper-api/synchronous-requests#parameter-format | Synchronous Requests - Bright Data Docs This endpoint allows users to fetch data efficiently and ensures seamless integration with their applications or workflows. POST / datasets / v3 / scrape Scrape data and return it directly in the response. cURL curl --request POST \ --url https://api.brightdata.com/datasets/v3/scrape \ --header 'Authorization: Bearer <token>' \ --header 'Content-Type: application/json' \ --data ' { "input": [ { "url": "www.linkedin.com/in/bulentakar" } ], "custom_output_fields": "url|about.updated_on" } ' 200 202 "OK"
How It Works This synchronous API endpoint allows users to send a scraping request and receive the results in real-time directly in the response, at the point of request - such as a terminal or application - without the need for external storage or manual downloads. This approach streamlines the data collection process by eliminating additional steps for retrieving results. You can specify the desired output format using the format parameter. If no format is provided, the response will default to JSON. Timeout Limit Please note that this synchronous request is subject to a 1 minute timeout limit. If the data retrieval process exceeds this limit, the API will return an HTTP 202 response, indicating that the request is still being processed. In such cases, you will receive a snapshot ID to monitor and retrieve the results asynchronously via the Monitor Snapshot and Download Snapshot endpoints. Example response on timeout: 202 { "snapshot_id" : "s_xxx" , "message" : "Your request is still in progress and cannot be retrieved in this call. Use the provided Snapshot ID to track progress via the Monitor Snapshot endpoint and download it once ready via the Download Snapshot endpoint." } Authorizations Authorization string header required Use your Bright Data API Key as a Bearer token in the Authorization header.
How to authenticate: Obtain your API Key from the Bright Data account settings at https://brightdata.com/cp/setting/users Include the API Key in the Authorization header of your requests Format: Authorization: Bearer YOUR_API_KEY Example: Authorization: Bearer b5648e1096c6442f60a6c4bbbe73f8d2234d3d8324554bd6a7ec8f3f251f07df Learn how to get your Bright Data API key: https://docs.brightdata.com/api-reference/authentication Query Parameters dataset_id string required Dataset ID for which data collection is triggered. custom_output_fields string List of output columns, separated by | (e.g., url|about.updated_on ). Filters the response to include only the specified fields. Example : "url|about.updated_on" include_errors boolean Include an error report with the results. format enum<string> default: json Specifies the format of the response (default: json). Available options : ndjson , json , csv Body application/json input object[] required List of input items to scrape. custom_output_fields string List of output columns, separated by | (e.g., url|about.updated_on ). Filters the response to include only the specified fields. Example : "url|about.updated_on" Response 200 text/plain OK The response is of type string . Example : "OK" | 2026-01-13T09:29:25 |
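The timeout behavior described above means a caller must handle two outcomes: a 200 carrying the scraped data, or a 202 carrying a snapshot ID to poll asynchronously. The following helper is this sketch's own, not part of an official SDK:

```python
# Sketch of interpreting a /datasets/v3/scrape response per the timeout behavior:
# 200 -> data is in the body; 202 -> poll later with the returned snapshot_id.
import json

def handle_scrape_response(status_code: int, body: str):
    if status_code == 200:
        return {"done": True, "data": body}
    if status_code == 202:
        snapshot_id = json.loads(body)["snapshot_id"]
        return {"done": False, "snapshot_id": snapshot_id}
    raise RuntimeError(f"unexpected status {status_code}: {body}")

# Completed within the 1-minute limit:
print(handle_scrape_response(200, '"OK"'))
# Timed out; poll the Monitor Snapshot endpoint with the returned ID:
print(handle_scrape_response(202, '{"snapshot_id": "s_xxx", "message": "..."}'))
# -> {'done': False, 'snapshot_id': 's_xxx'}
```

A caller that receives `done: False` would then use the Monitor Snapshot and Download Snapshot endpoints mentioned above to retrieve the results.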
https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/LogsAnomalyDetection.html | Log anomaly detection - Amazon CloudWatch Logs You can detect anomalies in your log data in two ways: by creating a log anomaly detector for continuous monitoring, or by using the anomaly detection command in CloudWatch Logs Insights queries for on-demand analysis. A log anomaly detector scans the log events ingested into a log group and finds anomalies in the log data automatically. Anomaly detection uses machine learning and pattern recognition to establish baselines of typical log content. For on-demand analysis, you can use the anomaly detection command in CloudWatch Logs Insights queries to identify unusual patterns in time-series data. For more information about query-based anomaly detection, see Using anomaly detection in CloudWatch Logs Insights . After you create an anomaly detector for a log group, it trains on the past two weeks of log events in the log group. The training period can take up to 15 minutes. After the training is complete, it begins to analyze incoming logs to identify anomalies, and the anomalies are displayed in the CloudWatch Logs console for you to examine. CloudWatch Logs pattern recognition extracts log patterns by identifying static and dynamic content in your logs. Patterns are useful for analyzing large log sets because a large number of log events can often be compressed into a few patterns. For example, see the following sample of three log events.
2023-01-01 19:00:01 [INFO] Calling DynamoDB to store for ResourceID: 12342342k124-12345 2023-01-01 19:00:02 [INFO] Calling DynamoDB to store for ResourceID: 324892398123-1234R 2023-01-01 19:00:03 [INFO] Calling DynamoDB to store for ResourceID: 3ff231242342-12345 In the previous sample, all three log events follow one pattern: <Date-1> <Time-2> [INFO] Calling DynamoDB to store for resource id <ResourceID-3> Fields within a pattern are called tokens . Fields that vary within a pattern, such as a request ID or timestamp, are referred to as dynamic tokens . Each different value found for a dynamic token is called a token value . If CloudWatch Logs can infer the type of data that a dynamic token represents, it displays the token as < string - number > . The string is a description of the type of data that the token represents. The number shows where in the pattern this token appears, compared to the other dynamic tokens. CloudWatch Logs assigns the string part of the name based on analyzing the content of the log events that contain it. If CloudWatch Logs can't infer the type of data that a dynamic token represents, it displays the token as <Token- number >, where number indicates where in the pattern this token appears, compared to the other dynamic tokens. Common examples of dynamic tokens include error codes, IP addresses, timestamps, and request IDs. Log anomaly detection uses these patterns to find anomalies. After the anomaly detector model training period, logs are evaluated against known trends. The anomaly detector flags significant fluctuations as anomalies. This chapter describes how to enable anomaly detection, view anomalies, create alarms for log anomaly detectors, and the metrics that log anomaly detectors publish. It also describes how to encrypt an anomaly detector and its results with AWS Key Management Service. Creating log anomaly detectors doesn't incur charges.
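The compression of the three sample events into one pattern can be illustrated with a toy masking pass. This is only an illustration of the idea of dynamic tokens, not the algorithm CloudWatch Logs actually uses:

```python
# Toy illustration: mask the dynamic tokens (date, time, ResourceID) so that
# all three sample events above collapse into a single pattern.
import re

def mask_dynamic_tokens(event: str) -> str:
    event = re.sub(r"\d{4}-\d{2}-\d{2}", "<Date-1>", event)
    event = re.sub(r"\d{2}:\d{2}:\d{2}", "<Time-2>", event)
    event = re.sub(r"ResourceID: \S+", "ResourceID: <ResourceID-3>", event)
    return event

events = [
    "2023-01-01 19:00:01 [INFO] Calling DynamoDB to store for ResourceID: 12342342k124-12345",
    "2023-01-01 19:00:02 [INFO] Calling DynamoDB to store for ResourceID: 324892398123-1234R",
    "2023-01-01 19:00:03 [INFO] Calling DynamoDB to store for ResourceID: 3ff231242342-12345",
]
patterns = {mask_dynamic_tokens(e) for e in events}
print(len(patterns))  # 1 -- all three events share a single pattern
```

Once events are grouped this way, anomaly detection can track how often each pattern and each token value occurs, and flag deviations from the baseline.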
Severity and priority of anomalies and patterns Each anomaly found by a log anomaly detector is assigned a priority . Each pattern found is assigned a severity . Priority is automatically computed, and is based on both the severity level of the pattern and the amount of deviation from expected values. For example, if a certain token value suddenly increases by 500%, that anomaly might be designated as HIGH priority even if its severity is NONE . Severity is based only on keywords found in the patterns such as FATAL , ERROR , and WARN . If none of these keywords are found, the severity of a pattern is marked as NONE . Anomaly visibility time When you create an anomaly detector, you specify the maximum anomaly visibility period for it. This is the number of days that the anomaly is displayed in the console and is returned by the ListAnomalies API operation. After this time period has elapsed for an anomaly, if it continues to happen, it's automatically accepted as regular behavior and the anomaly detector model stops flagging it as an anomaly. If you don't adjust the visibility time when you create an anomaly detector, 21 days is used as the default. Suppressing an anomaly After an anomaly has been found, you can choose to suppress it temporarily or permanently. Suppressing an anomaly causes the anomaly detector to stop flagging this occurrence as an anomaly for the amount of time that you specify. When you suppress an anomaly, you can choose to suppress only that specific anomaly, or suppress all anomalies related to the pattern that the anomaly was found in. You can still view suppressed anomalies in the console. You can also choose to stop suppressing them. Frequently asked questions Does AWS use my data to train machine-learning algorithms for AWS use or for other customers? No. The anomaly detection model created by the training is based on the log events in a log group and is used only within that log group and that AWS account. 
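The anomaly visibility time described above is set when the detector is created. The sketch below assembles parameters for the CreateLogAnomalyDetector operation as exposed by boto3 (the AWS SDK for Python); the log group ARN is a placeholder, the evaluation frequency is an assumed example value, and the actual call (commented out) requires AWS credentials:

```python
# Hedged sketch: building create_log_anomaly_detector parameters with a custom
# anomaly visibility window. Parameter names follow the CloudWatch Logs API as
# I understand it; verify against the current boto3 reference before relying on them.
def anomaly_detector_params(log_group_arn: str, visibility_days: int = 21) -> dict:
    return {
        "logGroupArnList": [log_group_arn],
        "anomalyVisibilityTime": visibility_days,  # defaults to 21 days if omitted
        "evaluationFrequency": "FIVE_MIN",         # assumed example frequency
    }

params = anomaly_detector_params(
    "arn:aws:logs:us-east-1:111122223333:log-group:my-app-logs",  # placeholder ARN
    visibility_days=7,
)
# import boto3
# boto3.client("logs").create_log_anomaly_detector(**params)
```

With a 7-day window, an anomaly that keeps recurring past that period would be absorbed into the baseline, as described above.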
What types of log events work well with anomaly detection? Log anomaly detection is well-suited for: Application logs and other types of logs where most log entries fit typical patterns. Log groups with events that contain a log level or severity keywords such as INFO , ERROR , and DEBUG are especially well-suited to log anomaly detection. Log anomaly detection is not suited for: Log events with extremely long JSON structures, such as CloudTrail Logs. Pattern analysis analyzes only up to the first 1500 characters of a log line, so any characters beyond that limit are skipped. Audit or access logs, such as VPC flow logs, will also have less success with anomaly detection. Anomaly detection is meant to find application issues, so it might not be well-suited for network or access anomalies. To help you determine whether an anomaly detector is suited to a certain log group, use CloudWatch Logs pattern analysis to find the number of patterns in the log events in the group. If the number of patterns is no more than about 300, anomaly detection might work well. For more information about pattern analysis, see Pattern analysis . What gets flagged as an anomaly? The following occurrences can cause a log event to be flagged as an anomaly: A log event with a pattern not seen before in the log group. A significant variation to a known pattern. A new value for a dynamic token that has a discrete set of usual values. A large change in the number of occurrences of a value for a dynamic token. While all the preceding items might be flagged as anomalies, they don't all mean that the application is performing poorly. For example, a higher-than-usual number of 200 success values might be flagged as an anomaly. In cases like this, you might consider suppressing these anomalies that don't indicate problems. What happens with sensitive data that is being masked? Any parts of log events that are masked as sensitive data are not scanned for anomalies. 
For more information about masking sensitive data, see Help protect sensitive log data with masking . Javascript is disabled or is unavailable in your browser. To use the Amazon Web Services Documentation, Javascript must be enabled. Please refer to your browser's Help pages for instructions. Document Conventions Troubleshooting scheduled queries Using anomaly detection in CloudWatch Logs Insights Did this page help you? - Yes Thanks for letting us know we're doing a good job! If you've got a moment, please tell us what we did right so we can do more of it. Did this page help you? - No Thanks for letting us know this page needs work. We're sorry we let you down. If you've got a moment, please tell us how we can make the documentation better. | 2026-01-13T09:29:25 |
https://docs.brightdata.com/api-reference/web-scraper-api/social-media-apis/tiktok#param-country | TikTok API Scrapers - Bright Data Docs Overview The TikTok API Suite offers multiple types of APIs, each designed for specific data collection needs from TikTok. Below is an overview of how these APIs connect and interact, based on the available features: Profile API This API allows users to collect profile details based on a single input: the profile URL. Discovery functionality : Direct URL of the search Interesting Columns : nickname , awg_engagement_rate , followers , likes Posts API This API allows users to collect multiple posts based on a single input URL. Discovery functionality : Direct URL of the TikTok profile; Discover by keywords; Direct URL of the discovery Interesting Columns : url , share_count , description , hashtags Comments API This API allows users to collect multiple comments from a post using its URL.
Discovery functionality : N/A Interesting Columns : url , comment_text , commenter_url , num_likes Profile API Collect by URL This API allows users to retrieve detailed TikTok profile information using the provided profile URL. Input Parameters : URL string required The TikTok profile URL. Output Structure : Includes comprehensive data points: Profile Details : account_id , nickname , biography , bio_link , predicted_lang , is_verified , followers , following , likes , videos_count , create_time , id , url , profile_pic_url , profile_pic_url_hd , and more. For all data points, click here . Engagement Metrics : awg_engagement_rate , comment_engagement_rate , like_engagement_rate , like_count , digg_count . Privacy & Settings : is_private , relation , open_favorite , comment_setting , duet_setting , stitch_setting , is_ad_virtual , room_id , is_under_age_18 . Discovery & Top Videos : region , top_videos , discovery_input . This API allows users to retrieve detailed TikTok profile information, including engagement metrics, privacy settings, and top videos, offering insights into user activity and profile data. Discover by Search URL This API allows users to discover TikTok profiles based on a specific search URL and country, providing detailed profile information. Input Parameters : search_url string required The TikTok search URL. country string required The country from which to perform the search. Output Structure : Includes comprehensive data points: Profile Details : account_id , nickname , biography , bio_link , predicted_lang , is_verified , followers , following , likes , videos_count , create_time , id , url , profile_pic_url , profile_pic_url_hd , and more. For all data points, click here . Engagement Metrics : awg_engagement_rate , comment_engagement_rate , like_engagement_rate , like_count , digg_count . Privacy & Settings : is_private , relation , open_favorite , comment_setting , duet_setting , stitch_setting , is_ad_virtual , room_id , is_under_age_18 . 
Discovery & Top Videos : region , top_videos , discovery_input . This API enables users to discover TikTok profiles based on search criteria, offering insights into user activity, engagement, privacy settings, and top content. It facilitates efficient discovery and analysis of TikTok users. Posts API Collect by URL This API enables users to collect detailed data from TikTok posts by providing a post URL. Input Parameters : URL string required The TikTok post URL. Output Structure : Includes comprehensive data points: Post Details : post_id , description , create_time , share_count , collect_count , comment_count , play_count , video_duration , hashtags , original_sound , official_item , original_item , shortcode , video_url , music , cdn_url , width , carousel_images , and more. For all data points, click here . Profile Details : profile_id , profile_username , profile_url , profile_avatar , profile_biography , account_id , profile_followers , is_verified . Tagged Users and Media : tagged_user , carousel_images . Additional Information : tt_chain_token , secu_id . Discover by Profile URL This API allows users to retrieve posts from a TikTok profile based on a provided profile URL, with filtering options for the number of posts, date range, and post exclusions. Input Parameters : URL string required The TikTok profile URL. num_of_posts number The number of posts to collect. If not provided, there is no limit. posts_to_not_include array An array of post IDs to exclude from the collection. start_date string Start date for filtering posts (format: mm-dd-yyyy). Must be earlier than end_date . end_date string End date for filtering posts (format: mm-dd-yyyy). Must be later than start_date . what_to_collect string Specify the type of posts to collect (e.g., “post” or “reel”).
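The filtering parameters for Discover by Profile URL can be assembled into a single input record. A minimal sketch follows; the profile URL and all parameter values are illustrative placeholders, and optional fields are simply omitted when unset:

```python
# Build one input record for the Posts API "Discover by Profile URL" mode.
# The profile URL below is a placeholder, not a real account.
def build_profile_discovery_input(url, num_of_posts=None, posts_to_not_include=None,
                                  start_date=None, end_date=None, what_to_collect=None):
    record = {"url": url}
    if num_of_posts is not None:
        record["num_of_posts"] = num_of_posts
    if posts_to_not_include:
        record["posts_to_not_include"] = posts_to_not_include
    if start_date:
        record["start_date"] = start_date      # mm-dd-yyyy, earlier than end_date
    if end_date:
        record["end_date"] = end_date          # mm-dd-yyyy, later than start_date
    if what_to_collect:
        record["what_to_collect"] = what_to_collect  # e.g. "post" or "reel"
    return record

inputs = [build_profile_discovery_input(
    "https://www.tiktok.com/@example_user",    # placeholder profile URL
    num_of_posts=50,
    start_date="01-01-2025",
    end_date="03-31-2025",
    what_to_collect="post",
)]
print(inputs)
```

The list of such records forms the JSON body submitted to the scraper; multiple profiles can be discovered in one request by appending more records.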
Output Structure : Includes comprehensive data points: Post Details : post_id , description , create_time , share_count , collect_count , comment_count , play_count , video_duration , hashtags , original_sound , official_item , original_item , shortcode , video_url , music , cdn_url , width , carousel_images , and more. For all data points, click here . Profile Details : profile_id , profile_username , profile_url , profile_avatar , profile_biography , account_id , profile_followers , is_verified . Tagged Users and Media : tagged_user , carousel_images . Additional Information : tt_chain_token , secu_id . This API allows users to discover and retrieve detailed information about posts from a specific TikTok profile, including post-specific metrics, profile details of the creator, and tagged users. It supports efficient content discovery and post analysis. Discover by Keywords This API allows users to search for TikTok posts based on specific keywords or hashtags, offering a powerful tool for discovering relevant content across TikTok’s platform. Input Parameters : search_keyword string required The keyword or hashtag to search for within TikTok posts. num_of_posts number The number of posts to collect. If not provided, there is no limit. posts_to_not_include array An array of post IDs to exclude from the collection. what_to_collect string Specify the type of posts to collect (e.g., “post” or “reel”). Output Structure : Includes comprehensive data points: Post Details : post_id , description , create_time , digg_count , share_count , collect_count , comment_count , play_count , video_duration , hashtags , original_sound , post_type , discovery_input , official_item , and more. For all data points, click here . Profile Details : profile_id , profile_username , profile_url , profile_avatar , profile_biography , account_id , profile_followers , is_verified . Tagged Users and Media : tagged_user , carousel_images . Additional Information : tt_chain_token , secu_id . 
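As a sketch of how a keyword search like this might be submitted through the asynchronous Web Scraper API endpoint referenced in this documentation (POST /datasets/v3/trigger): the dataset ID is a placeholder, and the `type`/`discover_by` query parameters are assumptions to verify against the Web Scraper API reference for your account.

```python
# Construct (but do not send) a "Discover by Keywords" trigger request.
# DATASET_ID is a placeholder; the type/discover_new and discover_by/keyword
# query parameters are assumed per the Web Scraper API trigger conventions.
import json
from urllib.parse import urlencode

DATASET_ID = "gd_xxxxxxxxxxxx"   # placeholder: your TikTok Posts dataset ID
query = urlencode({
    "dataset_id": DATASET_ID,
    "type": "discover_new",      # discovery run rather than a plain collect
    "discover_by": "keyword",
})
url = f"https://api.brightdata.com/datasets/v3/trigger?{query}"
body = json.dumps([
    {"search_keyword": "#travel", "num_of_posts": 20, "what_to_collect": "post"},
])
print(url)
print(body)
# Send with any HTTP client, adding headers:
#   Authorization: Bearer <API_TOKEN>
#   Content-Type: application/json
# The asynchronous response contains a snapshot ID used to poll for results.
```

The request is only constructed here, not sent, so the sketch runs without credentials or network access.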
This API allows users to discover posts on TikTok that match specific keywords or hashtags, providing insights into post details, profile information, and media. It’s a great tool for exploring trends, content, and users on TikTok. Discover by Discover URL This API allows users to collect detailed post data from a specific TikTok discover URL. Input Parameters : URL string required The TikTok discover URL from which posts will be retrieved. Output Structure : Includes comprehensive data points: Post Details : post_id , description , create_time , digg_count , share_count , collect_count , comment_count , play_count , video_duration , hashtags , original_sound , post_type , discovery_input , official_item , original_item , and more. For all data points, click here . Profile Details : profile_id , profile_username , profile_url , profile_avatar , profile_biography , account_id , profile_followers , is_verified . Tagged Users and Media : tagged_user , carousel_images . Additional Information : tt_chain_token , secu_id . This API provides detailed insights into TikTok posts discovered via the discover URL, allowing for easy access to trending content, user profiles, and post metadata for analysis and exploration. Comments API Collect by URL This API allows users to collect detailed comment data from a specific TikTok post using the provided post URL. Input Parameters : URL string required The TikTok post URL. Output Structure : Includes comprehensive data points: Post Details : post_url , post_id , post_date_created . For all data points, click here . Comment Details : date_created , comment_text , num_likes , num_replies , comment_id , comment_url . Commenter Details : commenter_user_name , commenter_id , commenter_url . This API provides detailed insights into TikTok post comments, including comment-specific metrics and information about the commenters, enabling effective comment analysis and interaction tracking.
| 2026-01-13T09:29:25
https://docs.brightdata.com/api-reference/web-scraper-api/social-media-apis/vimeo#content-area | Vimeo API Scrapers - Bright Data Docs Overview The Vimeo API Suite offers multiple types of APIs, each designed for specific data collection needs from Vimeo. Below is an overview of how these APIs connect and interact, based on the available features: Posts API This API allows users to collect multiple posts based on a single input URL. Discovery functionality : Discover by profile URL. Discover by Keywords and License. Interesting Columns : title , video_length , views , likes . Posts API Collect by URL This API allows users to collect detailed information about a specific Vimeo video using the provided video URL. Input Parameters URL string required The URL of the Vimeo video. Output Structure : Includes comprehensive data points: Video Details video_id , title , url , video_url , video_length , description , data_posted , transcript . For all data points, click here . Engagement & Metrics views , likes , comments , collections .
Uploader Details uploader , uploader_url , uploader_id , avatar_img_uploader . Video Media & Content preview_image , related_videos , music_track . License & Quality license , license_info , video_quality . Dimensions height , width . This API provides detailed insights into a Vimeo video, including video content, uploader information, media links, engagement metrics, and more, enabling efficient video analysis and content tracking. Discover by URL This API allows users to discover Vimeo videos based on a specific URL and associated keywords, providing detailed video information and insights. Input Parameters URL string required The URL of the Vimeo post. keyword string required The keyword to search for within the video’s content. pages number required The number of pages of results to collect. Output Structure : Includes comprehensive data points: Video Details video_id , title , url , video_url , video_length , description , data_posted , transcript . For all data points, click here . Engagement & Metrics views , likes , comments , collections . Uploader Details uploader , uploader_url , uploader_id , avatar_img_uploader . Video Media & Content preview_image , related_videos , music_track . License & Quality license , license_info , video_quality . Dimensions height , width . This API allows users to discover Vimeo videos by URL and keyword, offering detailed insights into video content, uploader information, and engagement metrics. Discover by Keywords and License This API allows users to discover Vimeo videos based on specific keywords and license types, providing detailed video information and insights. Input Parameters keyword string required The keyword to search for in the video content. license string required The license type to filter the videos by (e.g., Creative Commons, Standard License). pages number required The number of pages of results to collect. 
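The Discover by Keywords and License inputs above can be sketched as a list of input records with a simple required-field check. The keyword values are illustrative, and the license strings are the examples quoted in the parameter description; verify the exact accepted license values in your dataset's schema:

```python
# Illustrative input records for Vimeo "Discover by Keywords and License".
# Keyword and license values are placeholders/examples from the docs above.
import json

inputs = [
    {"keyword": "timelapse", "license": "Creative Commons", "pages": 2},
    {"keyword": "drone footage", "license": "Standard License", "pages": 1},
]

REQUIRED = ("keyword", "license", "pages")   # all three are required fields
for record in inputs:
    missing = [field for field in REQUIRED if field not in record]
    assert not missing, f"missing required fields: {missing}"

print(json.dumps(inputs))   # JSON body for the scraper request
```

Validating inputs locally before triggering a run avoids burning a collection on a request the API would reject.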
Output Structure : Includes comprehensive data points: Video Details video_id , title , url , video_url , video_length , description , data_posted , transcript . For all data points, click here . Engagement & Metrics views , likes , comments , collections . Uploader Details uploader , uploader_url , uploader_id , avatar_img_uploader . Video Media & Content preview_image , related_videos , music_track . License & Quality license , license_info , video_quality . Dimensions height , width . This API allows users to discover Vimeo videos based on specific keywords and license types, offering detailed insights into video content, uploader information, and engagement metrics. | 2026-01-13T09:29:25
https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/LogsAnomalyDetection.html | Log anomaly detection - Amazon CloudWatch Logs Documentation Amazon CloudWatch User Guide Severity and priority of anomalies and patterns Anomaly visibility time Suppressing an anomaly Frequently asked questions Log anomaly detection You can detect anomalies in your log data in two ways: by creating a log anomaly detector for continuous monitoring, or by using the anomaly detection command in CloudWatch Logs Insights queries for on-demand analysis. A log anomaly detector scans the log events ingested into a log group and finds anomalies in the log data automatically. Anomaly detection uses machine learning and pattern recognition to establish baselines of typical log content. For on-demand analysis, you can use the anomaly detection command in CloudWatch Logs Insights queries to identify unusual patterns in time-series data. For more information about query-based anomaly detection, see Using anomaly detection in CloudWatch Logs Insights . After you create an anomaly detector for a log group, it trains on the past two weeks of log events in that log group. The training period can take up to 15 minutes. After training is complete, the detector begins analyzing incoming logs to identify anomalies, and the anomalies are displayed in the CloudWatch Logs console for you to examine. CloudWatch Logs pattern recognition extracts log patterns by identifying static and dynamic content in your logs. Patterns are useful for analyzing large log sets because a large number of log events can often be compressed into a few patterns. For example, see the following sample of three log events.
2023-01-01 19:00:01 [INFO] Calling DynamoDB to store for ResourceID: 12342342k124-12345 2023-01-01 19:00:02 [INFO] Calling DynamoDB to store for ResourceID: 324892398123-1234R 2023-01-01 19:00:03 [INFO] Calling DynamoDB to store for ResourceID: 3ff231242342-12345 In the previous sample, all three log events follow one pattern: <Date-1> <Time-2> [INFO] Calling DynamoDB to store for resource id <ResourceID-3> Fields within a pattern are called tokens . Fields that vary within a pattern, such as a request ID or timestamp, are referred to as dynamic tokens . Each different value found for a dynamic token is called a token value . If CloudWatch Logs can infer the type of data that a dynamic token represents, it displays the token as <string-number> . The string is a description of the type of data that the token represents, and the number shows where in the pattern this token appears relative to the other dynamic tokens. CloudWatch Logs assigns the string part of the name based on analyzing the content of the log events that contain the token. If CloudWatch Logs can't infer the type of data that a dynamic token represents, it displays the token as <Token-number> , where number indicates where in the pattern this token appears relative to the other dynamic tokens. Common examples of dynamic tokens include error codes, IP addresses, timestamps, and request IDs. Log anomaly detection uses these patterns to find anomalies. After the anomaly detector model's training period, logs are evaluated against known trends, and the anomaly detector flags significant fluctuations as anomalies. This chapter describes how to enable anomaly detection, view anomalies, create alarms for log anomaly detectors, and use the metrics that log anomaly detectors publish. It also describes how to encrypt an anomaly detector and its findings with AWS Key Management Service. Creating log anomaly detectors doesn't incur charges.
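The compression of the three sample events into one pattern can be illustrated with a toy sketch. This is not the CloudWatch algorithm, just a regex-based analogy: dynamic tokens (date, time, resource ID) are masked, and events with identical templates collapse into a single pattern:

```python
# Toy illustration (NOT the CloudWatch Logs algorithm) of how pattern
# recognition compresses log events: mask dynamic tokens, then merge
# events whose masked templates are identical.
import re
from collections import Counter

events = [
    "2023-01-01 19:00:01 [INFO] Calling DynamoDB to store for ResourceID: 12342342k124-12345",
    "2023-01-01 19:00:02 [INFO] Calling DynamoDB to store for ResourceID: 324892398123-1234R",
    "2023-01-01 19:00:03 [INFO] Calling DynamoDB to store for ResourceID: 3ff231242342-12345",
]

def to_pattern(line):
    line = re.sub(r"\d{4}-\d{2}-\d{2}", "<Date-1>", line)                     # date token
    line = re.sub(r"\d{2}:\d{2}:\d{2}", "<Time-2>", line)                     # time token
    line = re.sub(r"ResourceID: \S+", "ResourceID: <ResourceID-3>", line)     # id token
    return line

patterns = Counter(to_pattern(e) for e in events)
for pattern, count in patterns.items():
    print(count, pattern)   # the three events collapse into one pattern
```

Three log events become one pattern with three occurrences, which is the property anomaly detection exploits: a deviation in a token's value distribution, or a never-before-seen pattern, stands out against this compact baseline.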
Severity and priority of anomalies and patterns Each anomaly found by a log anomaly detector is assigned a priority . Each pattern found is assigned a severity . Priority is automatically computed, and is based on both the severity level of the pattern and the amount of deviation from expected values. For example, if a certain token value suddenly increases by 500%, that anomaly might be designated as HIGH priority even if its severity is NONE . Severity is based only on keywords found in the patterns such as FATAL , ERROR , and WARN . If none of these keywords are found, the severity of a pattern is marked as NONE . Anomaly visibility time When you create an anomaly detector, you specify the maximum anomaly visibility period for it. This is the number of days that the anomaly is displayed in the console and is returned by the ListAnomalies API operation. After this time period has elapsed for an anomaly, if it continues to happen, it's automatically accepted as regular behavior and the anomaly detector model stops flagging it as an anomaly. If you don't adjust the visibility time when you create an anomaly detector, 21 days is used as the default. Suppressing an anomaly After an anomaly has been found, you can choose to suppress it temporarily or permanently. Suppressing an anomaly causes the anomaly detector to stop flagging this occurrence as an anomaly for the amount of time that you specify. When you suppress an anomaly, you can choose to suppress only that specific anomaly, or suppress all anomalies related to the pattern that the anomaly was found in. You can still view suppressed anomalies in the console. You can also choose to stop suppressing them. Frequently asked questions Does AWS use my data to train machine-learning algorithms for AWS use or for other customers? No. The anomaly detection model created by the training is based on the log events in a log group and is used only within that log group and that AWS account. 
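The visibility setting above maps to a parameter on the CreateLogAnomalyDetector API. A hedged boto3 sketch follows: the log group ARN and detector name are placeholders, and the call itself is shown but left commented out so the sketch runs without AWS credentials:

```python
# Sketch of the request parameters for CloudWatch Logs CreateLogAnomalyDetector.
# The ARN and detector name are placeholders; adjust for your account/region.
params = {
    "logGroupArnList": ["arn:aws:logs:us-east-1:123456789012:log-group:my-app-logs"],
    "detectorName": "my-app-anomaly-detector",
    "anomalyVisibilityTime": 21,       # days; 21 is the documented default
    "evaluationFrequency": "FIVE_MIN", # how often incoming logs are evaluated
}
print(params)

# To actually create the detector:
# import boto3
# logs = boto3.client("logs")
# resp = logs.create_log_anomaly_detector(**params)
# print(resp["anomalyDetectorArn"])
```

Setting anomalyVisibilityTime explicitly documents the window after which a recurring anomaly is absorbed into the baseline, rather than relying on the 21-day default.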
What types of log events work well with anomaly detection?

Log anomaly detection is well suited for:
- Application logs and other types of logs where most log entries fit typical patterns. Log groups whose events contain a log level or severity keyword such as INFO, ERROR, and DEBUG are especially well suited to log anomaly detection.

Log anomaly detection is not suited for:
- Log events with extremely long JSON structures, such as CloudTrail logs. Pattern analysis analyzes only the first 1,500 characters of a log line, so any characters beyond that limit are skipped.
- Audit or access logs, such as VPC flow logs. Anomaly detection is meant to find application issues, so it might not be well suited for network or access anomalies.

To help you determine whether an anomaly detector is suited to a certain log group, use CloudWatch Logs pattern analysis to find the number of patterns in the log events in the group. If the number of patterns is no more than about 300, anomaly detection might work well. For more information about pattern analysis, see Pattern analysis.

What gets flagged as an anomaly?

The following occurrences can cause a log event to be flagged as an anomaly:
- A log event with a pattern not seen before in the log group.
- A significant variation to a known pattern.
- A new value for a dynamic token that has a discrete set of usual values.
- A large change in the number of occurrences of a value for a dynamic token.

While all of the preceding items might be flagged as anomalies, they don't all mean that the application is performing poorly. For example, a higher-than-usual number of 200 success values might be flagged as an anomaly. In cases like this, you might consider suppressing anomalies that don't indicate problems.

What happens with sensitive data that is being masked?

Any parts of log events that are masked as sensitive data are not scanned for anomalies.
For more information about masking sensitive data, see Help protect sensitive log data with masking. | 2026-01-13T09:29:25 |
https://git-scm.com/book/be/v2/%d0%9f%d0%b5%d1%80%d1%88%d1%8b%d1%8f-%d0%ba%d1%80%d0%be%d0%ba%d1%96-The-Command-Line | Git - The Command Line

This book is available in English. Full translation available in azərbaycan dili, български език, Deutsch, Español, فارسی, Français, Ελληνικά, 日本語, 한국어, Nederlands, Русский, Slovenščina, Tagalog, Українська, 简体中文. Partial translations available in Čeština, Македонски, Polski, Српски, Ўзбекча, 繁體中文. Translations started for Беларуская, Indonesian, Italiano, Bahasa Melayu, Português (Brasil), Português (Portugal), Svenska, Türkçe. The source of this book is hosted on GitHub. Patches, suggestions and comments are welcome.

2nd Edition, 1.4 Getting Started - The Command Line

The Command Line

There are a lot of different ways to use Git. There are the original command-line tools, and there are many graphical user interfaces of varying capabilities. For this book, we will be using Git on the command line. For one, the command line is the only place you can run all Git commands — most of the GUIs implement only a partial subset of Git functionality for simplicity.
If you know how to run the command-line version, you can probably also figure out how to run the GUI version, while the opposite is not necessarily true. Also, while your choice of graphical client is a matter of personal taste, all users will have the command-line tools installed and available. So we will expect you to know how to open Terminal in macOS or Command Prompt or PowerShell in Windows. If you don’t know what we’re talking about here, you may need to stop and research that quickly so that you can follow the rest of the examples and descriptions in this book. About this site Patches, suggestions, and comments are welcome. Git is a member of Software Freedom Conservancy | 2026-01-13T09:29:25 |
https://git-scm.com/book/zh-tw/v2/%e4%bc%ba%e6%9c%8d%e5%99%a8%e4%b8%8a%e7%9a%84-Git-GitWeb | Git - GitWeb

2nd Edition, 4.7 Git on the Server - GitWeb

GitWeb

Now that you have basic read/write and read-only access to your project, you may want to set up a simple web-based visualizer. Git comes with a CGI script called GitWeb that is sometimes used for this.

Figure 49. The GitWeb web-based user interface.

If you want to check out what GitWeb would look like for your project, Git comes with a command to fire up a temporary instance if you have a lightweight web server on your system like lighttpd or webrick. On Linux machines, lighttpd is often installed, so you may be able to get it to run by typing git instaweb in your project directory. If you’re running a Mac, Leopard comes preinstalled with Ruby, so webrick may be your best bet. To start instaweb with a non-lighttpd handler, you can run it with the --httpd option.

$ git instaweb --httpd=webrick
[2009-02-21 10:02:21] INFO WEBrick 1.3.1
[2009-02-21 10:02:21] INFO ruby 1.8.6 (2008-03-03) [universal-darwin9.0]

That starts up an HTTPD server on port 1234 and then automatically starts a web browser that opens on that page. It’s pretty easy on your part.
When you’re done and want to shut down the server, you can run the same command with the --stop option:

$ git instaweb --httpd=webrick --stop

If you want to run the web interface on a server all the time for your team or for an open source project you’re hosting, you’ll need to set up the CGI script to be served by your normal web server. Some Linux distributions have a gitweb package that you may be able to install via apt or yum, so you may want to try that first. We’ll walk through installing GitWeb manually very quickly. First, you need to get the Git source code, which GitWeb comes with, and generate the custom CGI script:

$ git clone git://git.kernel.org/pub/scm/git/git.git
$ cd git/
$ make GITWEB_PROJECTROOT="/opt/git" prefix=/usr gitweb
    SUBDIR gitweb
    SUBDIR ../
make[2]: `GIT-VERSION-FILE' is up to date.
    GEN gitweb.cgi
    GEN static/gitweb.js
$ sudo cp -Rf gitweb /var/www/

Notice that you have to tell the command where to find your Git repositories with the GITWEB_PROJECTROOT variable. Now, you need to make Apache use CGI for that script, for which you can add a VirtualHost:

<VirtualHost *:80>
    ServerName gitserver
    DocumentRoot /var/www/gitweb
    <Directory /var/www/gitweb>
        Options +ExecCGI +FollowSymLinks +SymLinksIfOwnerMatch
        AllowOverride All
        Order allow,deny
        Allow from all
        AddHandler cgi-script cgi
        DirectoryIndex gitweb.cgi
    </Directory>
</VirtualHost>

Again, GitWeb can be served with any CGI or Perl capable web server; if you prefer to use something else, it shouldn’t be difficult to set up. At this point, you should be able to visit http://gitserver/ to view your repositories online. | 2026-01-13T09:29:25 |
https://forum.babelfish.money/ | BabelFish - Stablecoin research and development to sustainably grow BabelFish's AUM and community

BabelFish Protocol Governance categories:
General Discussions (6 topics) - a place for discussions on various topics related to the BabelFish protocol, stablecoins, DeFi, etc.
Governance (1 topic) - participate in discussions on ideas and proposals to vote in BabelFish bitocracy.
Learning Center (0 topics) - need help? Use the FAQ and linked materials, search through existing topics, or as a last resort start a new topic.
Forum Feedback (2 topics) - discussion about this forum, its organization, how it works, and how we can improve it.

BabelFish.money A.D. 2021 | 2026-01-13T09:29:25 |
https://git-scm.com/book/tl/v2/Git-sa-Server-Ang-Mga-Protokol | Git - The Protocols

2nd Edition, 4.1 Git on the Server - The Protocols

At this point, you should be able to do most of the day-to-day tasks for which you’ll be using Git. However, in order to do any collaboration in Git, you’ll need to have a remote Git repository.
Although you can technically push changes to and pull changes from individuals’ repositories, doing so is discouraged because it is fairly easy to confuse what they’re working on if you’re not careful. Furthermore, you want your collaborators to be able to access the repository even if your computer is offline — having a more reliable common repository is often useful. Therefore, the preferred method for collaborating with someone is to set up an intermediate repository that you both have access to, and push to and pull from that.

Running a Git server is fairly straightforward. First, you choose which protocols you want your server to communicate with. The first section of this chapter covers the available protocols and the pros and cons of each. The next sections explain some typical setups using those protocols and how to get your server running with them. Last, we’ll go over a few hosted options, if you don’t mind hosting your code on someone else’s server and don’t want to go through the hassle of setting up and maintaining your own server.

If you have no interest in running your own server, you can skip to the last section of the chapter to see some options for setting up a hosted account, and then move on to the next chapter, where we discuss the various ins and outs of working in a distributed source control environment.

A remote repository is generally a bare repository — a Git repository that has no working directory. Because the repository is only used as a collaboration point, there is no reason to have a snapshot checked out on disk; it’s just the Git data.
In other words, a bare repository is the contents of your project’s .git directory and nothing else.

The Protocols

Git can use four distinct protocols to transfer data: Local, HTTP, Secure Shell (SSH), and Git. Here we’ll discuss what they are and in which basic circumstances you would want (or not want) to use them.

Local Protocol

The most basic is the Local protocol, in which the remote repository is in another directory on the same host. This is often used if everyone on your team has access to a shared filesystem such as an NFS mount, or in the less likely case that everyone logs in to the same computer. The latter wouldn’t be ideal, because all your code repository instances would reside on the same computer, making a catastrophic loss much more likely.

If you have a shared mounted filesystem, then you can clone, push to, and pull from a local file-based repository. To clone a repository like this, or to add one as a remote to an existing project, use the path to the repository as the URL. For example, to clone a local repository, you can run something like this:

$ git clone /srv/git/project.git

Or you can do this:

$ git clone file:///srv/git/project.git

Git operates slightly differently if you explicitly specify file:// at the beginning of the URL. If you just specify the path, Git tries to use hardlinks or directly copy the files it needs. If you specify file://, Git fires up the processes that it normally uses to transfer data over a network, which is generally much less efficient.
The main reason to specify the file:// prefix is if you want a clean copy of the repository with extraneous references or objects left out — generally after an import from another VCS or something similar (see Git Internals for maintenance tasks). We’ll use the normal path here because doing so is almost always faster.

To add a local repository to an existing Git project, you can run something like this:

$ git remote add local_proj /srv/git/project.git

Then, you can push to and pull from that remote via your new remote name local_proj, just as though you were doing so over a network.

The Pros

The pros of file-based repositories are that they’re simple and they use existing file permissions and network access. If you already have a shared filesystem to which your whole team has access, setting up a repository is very easy. You stick the bare repository copy somewhere everyone has shared access to and set the read/write permissions as you would for any other shared directory. We’ll discuss how to export a bare repository copy for this purpose in Getting Git on a Server.

This is also a nice option for quickly grabbing work from someone else’s working repository. If you and a co-worker are working on the same project and they want you to check something out, running a command like git pull /home/john/project is often easier than them pushing to a remote server and you subsequently fetching from it.

The Cons

The cons of this method are that shared access is generally more difficult to set up and reach from multiple locations than basic network access.
If you want to push from your laptop when you’re at home, you have to mount the remote disk, which can be difficult and slow compared to network-based access.

It’s important to mention that this isn’t necessarily the fastest option if you’re using a shared mount of some kind. A local repository is fast only if you have fast access to the data. A repository on NFS is often slower than a repository over SSH on the same server, which lets Git run off local disks on each system.

Finally, this protocol does not protect the repository against accidental damage. Every user has full shell access to the “remote” directory, and there is nothing preventing them from changing or removing internal Git files and corrupting the repository.

The HTTP Protocols

Git can communicate over HTTP using two different modes. Prior to Git 1.6.6, there was only one way it could do this, which was very simple and generally read-only. In version 1.6.6, a new, smarter protocol was introduced that involved Git being able to intelligently negotiate data transfer in a manner similar to how it does over SSH. In the last few years, this new HTTP protocol has become very popular, since it’s simpler for the user and smarter about how it communicates. The newer version is often referred to as the Smart HTTP protocol and the older way as Dumb HTTP. We’ll cover the newer Smart HTTP protocol first.
Smart HTTP

Smart HTTP operates very similarly to the SSH or Git protocols, but runs over standard HTTPS ports and can use various HTTP authentication mechanisms, meaning it’s often easier on the user than something like SSH, since you can use things like username/password authentication rather than having to set up SSH keys. It has probably become the most popular way to use Git, since it can be set up to both serve anonymously like the git:// protocol and to push over with authentication and encryption like the SSH protocol. Instead of having to set up different URLs for these things, you can now use a single URL for both. If you try to push and the repository requires authentication (which it normally should), the server can prompt for a username and password. The same goes for read access.

In fact, for services like GitHub, the URL you use to view the repository online (for example, https://github.com/schacon/simplegit) is the same URL you can use to clone and, if you have access, push to.

Dumb HTTP

If the server does not respond with a smart Git HTTP service, the Git client will try to fall back to the simpler Dumb HTTP protocol. The Dumb protocol expects the bare Git repository to be served like normal files from the web server. The beauty of Dumb HTTP is the simplicity of setting it up. Basically, all you have to do is put a bare Git repository under your HTTP document root and set up a specific post-update hook, and you’re done (see Git Hooks).
At that point, anyone who can access the web server under which you put the repository can clone your repository. To allow read access to your repository over HTTP, do something like this:

$ cd /var/www/htdocs/
$ git clone --bare /path/to/git_project gitproject.git
$ cd gitproject.git
$ mv hooks/post-update.sample hooks/post-update
$ chmod a+x hooks/post-update

That’s all. The post-update hook that comes with Git by default runs the appropriate command (git update-server-info) to make HTTP fetching and cloning work properly. This command is run when you push to this repository (over SSH, perhaps); then, other people can clone via something like:

$ git clone https://example.com/gitproject.git

In this particular case, we’re using the /var/www/htdocs path that is common for Apache setups, but you can use any static web server — just put the bare repository in its path. The Git data is served as basic static files (see Git Internals for details about exactly how it’s served).

Generally you would either choose to run a read/write Smart HTTP server or have the files accessible read-only over the Dumb protocol. It’s rare to run a mix of the two services.

The Pros

We’ll concentrate on the pros of the Smart version of the HTTP protocol. The simplicity of having a single URL for all types of access, and having the server prompt only when authentication is needed, makes things very easy for the end user. Being able to authenticate with a username and password is also a big advantage over SSH, since users don’t have to generate SSH keys locally and upload their public key to the server before being able to interact with it.
For less-sophisticated users, or users on systems where SSH is less common, this is a major advantage in usability. It is also a very fast and efficient protocol, similar to SSH.

You can also serve your repositories read-only over HTTPS, which means you can encrypt the content transfer; or you can go so far as to make the clients use specific signed SSL certificates. Another nice thing is that HTTPS is such a commonly used protocol that corporate firewalls are often set up to allow traffic through its port.

The Cons

Git over HTTPS can be a little more tricky to set up compared to SSH on some servers. Other than that, there is very little advantage that other protocols have over Smart HTTP for serving Git content. If you’re using HTTP for authenticated pushing, providing your credentials is sometimes more complicated than using keys over SSH. There are, however, several credential caching tools you can use, including Keychain Access on macOS and Credential Manager on Windows, to make this fairly painless. Read Credential Storage to see how to set up secure HTTP password caching on your system.

The SSH Protocol

A common transport protocol for Git when self-hosting is over SSH. This is because SSH access to servers is already set up in most places — and if it isn’t, it’s easy to do. SSH is also an authenticated network protocol and, because it’s ubiquitous, it’s generally easy to set up and use.
To clone a Git repository over SSH, you can specify an ssh:// URL like this:

$ git clone ssh://[user@]server/project.git

Or you can use the shorter scp-like syntax for the SSH protocol:

$ git clone [user@]server:project.git

In both cases above, if you don’t specify the optional username, Git assumes the user you’re currently logged in as.

The Pros

The pros of using SSH are many. First, SSH is relatively easy to set up — SSH daemons are commonplace, many network admins have experience with them, and many OS distributions are set up with them or have tools to manage them. Next, access over SSH is secure — all data transfer is encrypted and authenticated. Last, like the HTTPS, Git, and Local protocols, SSH is efficient, making the data as compact as possible before transferring it.

The Cons

The negative aspect of SSH is that it doesn’t support anonymous access to your Git repository. If you’re using SSH, people must have SSH access to your machine, even in a read-only capacity, which doesn’t make SSH conducive to open source projects for which people might simply want to clone your repository and examine it. If you’re using it only within your corporate network, SSH may be the only protocol you need to deal with. If you want to allow anonymous read-only access to your projects and also want to use SSH, you’ll have to set up SSH for you to push over, but something else for others to fetch from.

The Git Protocol

Next is the Git protocol.
This is a special daemon that comes packaged with Git; it listens on a dedicated port (9418) and provides a service similar to the SSH protocol, but with absolutely no authentication. In order for a repository to be served over the Git protocol, you must create a git-daemon-export-ok file (the daemon won't serve a repository that doesn't have that file in it), but other than that there is no security. Either the Git repository is available for everyone to clone, or it isn't. This means that there is generally no pushing over this protocol. You can enable push access but, given the lack of authentication, anyone on the internet who finds your project's URL could push to that project. Suffice it to say that this is rare. The Strengths The Git protocol is often the fastest network transfer protocol available. If you're serving a lot of traffic for a public project, or serving a very large project that doesn't require user authentication for read access, it's likely that you'll want to set up a Git daemon to serve your project. It uses the same data-transfer mechanism as the SSH protocol, but without the encryption and authentication overhead. The Weaknesses The downside of the Git protocol is the lack of authentication. It's generally undesirable for the Git protocol to be the only access to your project. Generally, you'll pair it with SSH or HTTPS access for the few developers who have push (write) access, and have everyone else use git:// for read-only access. It's also probably the most difficult protocol to set up. It must run its own daemon, which requires xinetd configuration or the like, which isn't always a walk in the park.
It also requires firewall access to port 9418, which isn't a standard port that corporate firewalls always allow. Behind big corporate firewalls, this obscure port is commonly blocked. | 2026-01-13T09:29:25
https://docs.aws.amazon.com/zh_cn/AmazonCloudWatch/latest/monitoring/CloudWatch-Agent-configure-related-telemetry.html | Configure CloudWatch agent service and environment names for related entities - Amazon CloudWatch Documentation Amazon CloudWatch User Guide Configure CloudWatch agent service and environment names for related entities The CloudWatch agent can send metrics and logs with entity data to support the Explore related pane in the CloudWatch console. The service name or environment name can be configured through the CloudWatch agent JSON configuration. Note The agent configuration may be overridden. For more information about how the agent decides which data to send for related entities, see Use the CloudWatch agent with related telemetry. For metrics, this can be configured at the agent, metric, or plugin level. For logs, it can be configured at the agent, log, or file level. The most specific configuration is always used. For example, if configuration exists at both the agent level and the metric level, metrics will use the metric-level configuration, and everything else (logs) will use the agent-level configuration. The following example shows different ways to configure the service name and environment name. { "agent": { "service.name": "agent-level-service", "deployment.environment": "agent-level-environment" }, "metrics": { "service.name": "metric-level-service", "deployment.environment": "metric-level-environment", "metrics_collected": { "statsd": { "service.name": "statsd-level-service", "deployment.environment": "statsd-level-environment" }, "collectd": { "service.name": "collectd-level-service", "deployment.environment": "collectd-level-environment" } } }, "logs": { "service.name": "log-level-service", "deployment.environment": "log-level-environment", "logs_collected": { "files": { "collect_list": [ { "file_path": "/opt/aws/amazon-cloudwatch-agent/logs/amazon-cloudwatch-agent.log", "log_group_name": "amazon-cloudwatch-agent.log", "log_stream_name": "amazon-cloudwatch-agent.log", "service.name": "file-level-service", "deployment.environment": "file-level-environment" } ] } } } } | 2026-01-13T09:29:25
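The "most specific configuration always wins" rule described above can be sketched in a few lines of Python. The `resolve_entity` helper and the trimmed-down config dict below are our own illustration, not part of the CloudWatch agent:

```python
# Hypothetical sketch of the "most specific configuration wins" rule;
# resolve_entity is NOT part of the CloudWatch agent, just an illustration.

def resolve_entity(key, *levels):
    """Return the value for `key` from the most specific level that sets it.

    `levels` are dicts ordered from most specific (plugin/file level)
    to least specific (agent level); missing levels may be None."""
    for level in levels:
        if level and key in level:
            return level[key]
    return None

config = {
    "agent": {"service.name": "agent-level-service"},
    "metrics": {
        "service.name": "metric-level-service",
        "metrics_collected": {"statsd": {"service.name": "statsd-level-service"}},
    },
}

statsd = config["metrics"]["metrics_collected"]["statsd"]

# statsd metrics: the plugin level is the most specific, so it wins.
print(resolve_entity("service.name", statsd, config["metrics"], config["agent"]))
# Logs define nothing in this config, so they fall back to the agent level.
print(resolve_entity("service.name", None, None, config["agent"]))
```

Running this prints `statsd-level-service` and then `agent-level-service`, matching the precedence the documentation describes.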
https://babelfish.money/comments/feed/ | Babelfish – BabelFish UNIVERSAL MULTI-CHAIN STABLECOIN BabelFish's mind-boggling objective is to aggregate and distribute stablecoins, enhance flow, and push hyperBitcoinization. ABOUT BabelFish DAO Money Protocol is the simplest and most mind-bogglingly useful thing in the DeFi Universe. It absorbs, aggregates and distributes USD-pegged stablecoins across chains; the practical upshot of all this is that if you stick stablecoins to it, you can neatly cross the language divide between any chains. BabelFish's meta-stablecoin, XUSD, is backed by the underlying aggregated stablecoins to leverage and enhance their combined flow and utility across protocols and users. FISH holders can vote on improvement proposals, such as which stablecoins to accept, or what percent of collateral to lend. What It Does: Since the big bang of DeFi in the crypto galaxy, many stablecoin projects have been created to meet the demand for USD. Different stablecoin brands with unique selling points are competing to represent the same dollar, but they do not translate 1:1, and crypto dollar liquidity is fractured between issuers and protocols. As DeFi markets expand beyond Ethereum to multiple chains, stablecoin liquidity is fractured further by the bridges used, which is suboptimal for the industry. BabelFish abstracts away these differences by aggregating stablecoins from multiple isolated liquidity lakes and providing users with access to the combined ocean of crypto-dollars available. Think of it as a translator or a converter: if a user wants to use crypto dollars on another chain, she can stick it on BabelFish and seamlessly get a par-value equivalent on the other side.
What Is The FISH DAO The BabelFish DAO Money Protocol is ultimately directed by the will of FISH token holders participating in governance. From protocol improvement proposals to budget allocation and partnerships, it is the community that will be able to decide on the direction of the protocol. If you are passionate about the stablecoin ecosystem, this is the DAO for you. Help define protocol rules and parameters around the collateral accepted, discuss community incentives, manage risk, and much more. The future of BabelFish depends on active FISH holder participation. Help build, shape and enhance the stablecoin ecosystem. Participate in stablecoin collateral management. Be rewarded for staking and actively participating with FISH in governance. Mind-Bogglingly Useful XUSD is collateralized 1:1 by a hedged basket of accepted stablecoins across chains, which enables users and protocols to tap into the combined liquidity and utility of the underlying collateral and also enhances stablecoin flow across the ecosystem. User deposits an accepted stablecoin on the BabelFish protocol. Protocol issues XUSD, its convertible stablecoin backed 1:1. User can use XUSD on accepted protocols, bridge between chains, and redeem back at any time. What Moves BabelFish The need for a "trustless stablecoin translation device" seems painfully clear. The accelerated growth and velocity of crypto dollars is unstoppable; we expect demand for programmable money to continue accelerating as we onboard the first billion users. But the market remains fragmented, two players dominate >80% of the total USD-stablecoin float, and systemic and idiosyncratic risks abound. In the decentralised economy we ought not to rely on one or two issuers of USD-stablecoins but rather enable a thousand stablecoins to bloom and communicate with each other to bring mass adoption.
For BabelFish DAO Money Protocol it is important that our first product, XUSD, is the safest, easiest and ultimate stablecoin instrument out there in crypto space. JOIN US TO EXPERIENCE BABELFISH | 2026-01-13T09:29:25 |
https://www.linkedin.com/products/categories/ip-address-management-software?trk=products_details_guest_similar_products_section_similar_products_section_product_link_result-card_subtitle-click | Best IP Address Management (IPAM) Software | Products | LinkedIn Used by Network Engineer (2) Information Technology Administrator (2) Network Specialist (2) Information Technology Network Administrator (1) Information Technology Operations Analyst (1) See all products Find top products in IP Address Management (IPAM) Software category Software used to plan and manage the use of IP addresses on a network. - Use unique IP addresses for applications, devices, and related resources - Prevent conflicts and errors with automated audits and network discovery - Use IPv4/IPv6 support and integrate with DNS and DHCP services - View subnet capacity and optimize IP planning space 9 results Next-Gen IPAM IP Address Management (IPAM) Software by IPXO Next-Gen IPAM is designed to simplify and automate public IP resource management. It supports both IPv4 and IPv6, and is built with a focus on automation, transparency, and security. You get a centralized view of IP reputation, WHOIS data, RPKI validation, BGP routing, and geolocation – all in one automated platform. View product AX DHCP | IP Address Management (IPAM) Software IP Address Management (IPAM) Software by Axiros AX DHCP server is a clusterable carrier-grade DHCP / IPAM (IP Address Management) solution that can be seamlessly integrated within given provisioning platforms. AX DHCP copes with FTTH, ONT provisioning, VOIP and IPTV services.
Telecommunications carriers and internet service providers (ISPs) need powerful and robust infrastructure that supports future workloads. DDI (DNS-DHCP-IPAM) is a critical networking technology for every service provider that ensures customer services availability, security and performance. View product Tidal LightMesh IP Address Management (IPAM) Software by Tidal Go beyond IP Address Management (IPAM) with LightMesh from Tidal. Simplify and automate the administration and management of internet protocol networks. LightMesh makes IP visibility and operation scalable, secure and self-controlled with a central feature-rich interface, reducing complexity – and all for free. Currently in Public Beta. View product ManageEngine OpUtils IP Address Management (IPAM) Software by ManageEngine ITOM OpUtils is an IP address and switch port management software that is geared towards helping engineers efficiently monitor, diagnose, and troubleshoot IT resources. OpUtils complements existing management tools by providing troubleshooting and real-time monitoring capabilities. It helps network engineers manage their switches and IP address space with ease. With a comprehensive set of over 20 tools, this switch port management tool helps with network monitoring tasks like detecting a rogue device intrusion, keeping an eye on bandwidth usage, monitoring the availability of critical devices, backing up Cisco configuration files, and more. View product Numerus IP Address Management (IPAM) Software by TechNarts-Nart Bilişim Numerus, a mega-scale enterprise-level IP address management tool, helps simplify and automate several tasks related to IP space management. It can manage IP ranges, pools, and VLANs, monitor the hierarchy, manage utilizations and capacities, perform automated IP address assignments, and report assignments to registries with regular synchronization. It provides extensive reporting capabilities and data for 3rd party systems with various integrations.
For ISPs, it also provides global IP Registry integrations such as RIPE. View product dedicated datacenter proxies IP Address Management (IPAM) Software by Decodo You can now own IP addresses that are solely yours! SOCKS5 and HTTP(S) proxies that no one else can lay their hands on when you’re using them. View product AX DHCP | IP Address Management IP Address Management (IPAM) Software by Axiros LATAM AX DHCP is a DHCP/IPAM solution that can be integrated transparently into given provisioning platforms such as FTTH, ONT, VOIP and IPTV. Telecommunications carriers and internet service providers need powerful and robust infrastructure that supports future workloads. DDI (DNS-DHCP-IPAM) is a critical networking technology for service providers that guarantees the availability, security and performance of customer services. View product Cygna runIP Appliance Platform IP Address Management (IPAM) Software by Cygna Labs Deutschland Maximizing the benefits of your DDI-Solution VitalQIP (Nokia) | DiamondIP (BT DiamondIP) | Micetro (Men & Mice) By providing an efficient solution for roll-out, configuration, patching and upgrades of DNS and DHCP servers, the runIP Management Platform optimizes the efficiency and value of your DDI investment.
To create runIP, N3K combined the experience gained from setting up thousands of DNS & DHCP servers and hundreds of DDI environments into one holistic solution. runIP is suitable both for those companies that want to further reduce the operating costs of their existing DDI installation and for those that want to make their initial installation or further roll-out even more efficient and successful. The runIP solution is completed by its integrated, comprehensive real-time monitoring of the DNS and DHCP services and operating system and extensive long-term statistics. This ensures that you always have an overview of the DNS and DHCP services as well as the operating system. View product | 2026-01-13T09:29:25
https://docs.aws.amazon.com/de_de/AmazonCloudWatch/latest/monitoring/CloudWatch-ServiceLevelObjectives.html | Service level objectives (SLOs) - Amazon CloudWatch Documentation Amazon CloudWatch User Guide SLO concepts Calculate error budget and attainment for period-based SLOs Calculate error budget and attainment for request-based SLOs Calculate burn rates and optionally set burn rate alarms Create an SLO View and investigate SLO status Edit an existing SLO Delete an SLO Service level objectives (SLOs) You can use Application Signals to create service level objectives for the services for your critical business operations or dependencies. By creating SLOs for these services, you can track them on the SLO dashboard, giving you an at-a-glance view of your most important operations. In addition to creating a quick view where your staff can check the current status of critical operations, SLOs also let you track the longer-term performance of your services to make sure they meet your expectations. If you have service level agreements with customers, SLOs are an excellent tool for making sure they are being met. Evaluating the health of your services with SLOs starts with setting clear, measurable goals based on key performance metrics, called service level indicators (SLIs).
An SLO compares SLI performance against the threshold and goal that you set, and reports how far from, or how close to, the threshold your application's performance is. Application Signals helps you set SLOs for your most important performance metrics. Application Signals automatically collects Latency and Availability metrics for every service and operation that it discovers, and these metrics are often ideal to use as SLIs. With the SLO creation wizard, you can use these metrics for your SLOs. You can then track the status of all your SLOs with the Application Signals dashboards. You can set up SLOs for specific operations or dependencies that your service calls or uses. In addition to the Latency and Availability metrics, you can use any CloudWatch metric or metric expression as an SLI. Creating SLOs is very important for getting the most benefit from CloudWatch Application Signals. After you create SLOs, you can view their status in the Application Signals console to quickly see which of your critical services and operations are performing well and which are not. Being able to track SLOs provides the following major benefits: It's easier for your service operators to determine the current operational status of critical services, as measured by the SLI. This helps them quickly investigate and identify unhealthy services and operations. You can track your service performance against measurable business goals over longer periods of time. By deciding what to set SLOs on, you prioritize what is important to you. The Application Signals dashboards automatically include information about what you have prioritized.
When you create an SLO, you can also choose to create CloudWatch alarms at the same time to monitor the SLOs. You can set up alarms that watch for both threshold breaches and warning levels. These alarms can automatically notify you when the SLO metrics breach the threshold you set, or when they approach a warning threshold. For example, as an SLO approaches its warning threshold, you can be alerted that your team may need to slow the rate of change in the application to make sure long-term performance goals are met. Topics SLO concepts Calculate error budget and attainment for period-based SLOs Calculate error budget and attainment for request-based SLOs Calculate burn rates and optionally set burn rate alarms Create an SLO View and investigate SLO status Edit an existing SLO Delete an SLO SLO concepts An SLO includes the following components: A service level indicator (SLI), which is a key performance metric that you specify. The SLI represents the desired level of performance for your application. Application Signals automatically collects the key Latency and Availability metrics for every service and operation that it discovers, and these metrics are often ideal for use with SLOs. You choose the threshold to use for your SLI, for example 200 ms for latency. A goal or attainment goal. This is the percentage of time or requests during which the SLI is expected to meet the threshold in each time interval. Intervals can be as short as hours or as long as a year. Intervals can be either calendar intervals or rolling intervals. Calendar intervals are aligned with the calendar, such as
an SLO that is tracked per month. CloudWatch automatically adjusts the status, budget, and attainment numbers to the number of days in the month. Calendar intervals are better suited for business goals that are measured on a calendar basis. Rolling intervals are calculated continuously. Rolling intervals are better for tracking the recent user experience of your application. The period is shorter, and many periods make up an interval. In each period within the interval, the application's performance is compared against the SLI. For each period, the application is determined to have either achieved the required performance or not. For example, a 99% goal with a one-day calendar interval and 1-minute periods means that the application must meet the success threshold during 99% of the 1-minute periods in the day. If it does, the SLO is met for that day. The next day is a new evaluation interval, and the application must meet the success threshold during 99% of the 1-minute periods on that second day to meet the SLO for the second day. An SLI can be based on one of the new standard application metrics collected by Application Signals. Alternatively, it can be any CloudWatch metric or metric expression. The standard application metrics that you can use for an SLI are Latency and Availability. Availability represents the successful responses divided by the total number of requests. It is calculated as (1 - fault rate)*100, where fault responses are 5xx errors. Success responses are responses without 5xx errors. 4xx responses are treated as successful. Calculate error budget and attainment for period-based SLOs When you view information about an SLO, you see its current health and its error budget.
The error budget is the amount of time within the interval during which the threshold can be breached while the SLO is still met. The total error budget is the total breaching time that can be tolerated during the entire interval. The remaining error budget is the breaching time that can still be tolerated in the current interval, after the breaching time that has already occurred has been subtracted from the total error budget. The following figure illustrates the attainment and error budget concepts for a goal with a 30-day interval, 1-minute periods, and a 99% attainment goal. 30 days contain 43,200 one-minute periods. 99% of 43,200 is 42,768 minutes in the month, so 42,768 minutes in the month must be healthy for the SLO to be met. So far, 130 of the 1-minute periods in the current interval have been unhealthy. Determining success within each period Within each period, the SLI data is aggregated into a single data point based on the statistic used for the SLI. This data point represents the entire length of the period. That single data point is compared against the SLI threshold to determine whether the period is healthy. If you see unhealthy periods for the current time range on the dashboard, your service operators can be alerted that the service needs to be investigated. If a period is determined to be unhealthy, the entire duration of that period is counted as unhealthy against the error budget. Tracking the error budget helps you determine whether the service is achieving the performance you want over a longer period of time.
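The 30-day example above works out as follows. This is a quick arithmetic sketch; the variable names are ours, not CloudWatch's:

```python
# Period-based error budget for the example above:
# 30-day interval, 1-minute periods, 99% attainment goal,
# and 130 unhealthy periods observed so far.
interval_periods = 30 * 24 * 60                                   # 43,200 one-minute periods
attainment_goal_pct = 99

required_healthy = interval_periods * attainment_goal_pct // 100  # healthy minutes needed
total_error_budget = interval_periods - required_healthy          # tolerable breaching minutes

unhealthy_so_far = 130
remaining_budget = total_error_budget - unhealthy_so_far          # budget left this interval

print(interval_periods, required_healthy, total_error_budget, remaining_budget)
# 43200 42768 432 302
```

So with 130 unhealthy minutes already spent, 302 of the 432 budgeted minutes remain in the current interval.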
Exclusion time windows An exclusion time window is a block of time with a defined start and end date. That period is not included in the SLO's performance metrics, and you can set exclusion windows as one-time or recurring exclusions, for example for scheduled maintenance. Note For period-based SLOs, SLI data within the exclusion window is treated as non-breaching. For request-based SLOs, all good and bad requests within the exclusion window are excluded. If an interval for a request-based SLO is excluded entirely, a default attainment metric of 100% is published. You can only specify time windows with a start date in the future. Calculate error budget and attainment for request-based SLOs After you create an SLO, you can retrieve error budget reports for that SLO. An error budget is the number of requests for which your application can fail to meet the SLO goal while the application still achieves the goal. For a request-based SLO, the remaining error budget is dynamic and can increase or decrease depending on the ratio of good requests to total requests. The following table shows the calculation for a request-based SLO with a 5-day interval and an 85% attainment goal. In this example, we assume that there is no traffic before day 1. The SLO fails to meet the goal on day 10.
Time | Total requests | Bad requests | Total requests in last 5 days | Good requests in last 5 days | Request-based attainment | Total budget requests | Remaining budget requests
Day 1 | 10 | 1 | 10 | 9 | 9/10 = 90% | 1.5 | 0.5
Day 2 | 5 | 1 | 15 | 13 | 13/15 = 86% | 2.3 | 0.3
Day 3 | 1 | 1 | 16 | 13 | 13/16 = 81% | 2.4 | -0.6
Day 4 | 24 | 0 | 40 | 37 | 37/40 = 92% | 6.0 | 3.0
Day 5 | 20 | 5 | 60 | 52 | 52/60 = 87% | 9.0 | 1.0
Day 6 | 6 | 2 | 56 | 47 | 47/56 = 84% | 8.4 | -0.6
Day 7 | 10 | 3 | 61 | 50 | 50/61 = 82% | 9.2 | -1.8
Day 8 | 15 | 6 | 75 | 59 | 59/75 = 79% | 11.3 | -4.7
Day 9 | 12 | 1 | 63 | 46 | 46/63 = 73% | 9.5 | -7.5
Day 10 | 14 | 5 | 57 | 40 | 40/57 = 70% | 8.5 | -8.5
Final attainment for the last 5 days: 70% Calculate burn rates and optionally set burn rate alarms You can use Application Signals to calculate burn rates for your service level objectives. A burn rate is a measure of how quickly the service is consuming the error budget relative to the attainment goal of the SLO. It is expressed as a multiple of the baseline error rate. The burn rate is calculated from the baseline error rate, which depends on the attainment goal. The attainment goal specifies the percentage of either healthy periods or successful requests that must be achieved to meet the SLO goal. The baseline error rate is (100% - attainment goal percentage), and an error rate at exactly this level would consume the entire error budget by the end of the SLO time interval. An SLO with a 99% attainment goal therefore has a baseline error rate of 1%. Monitoring the burn rate tells us how far we are from the baseline error rate. Using the 99% attainment goal example again: Burn rate = 1: If the burn rate always stays exactly at the baseline error rate, we exactly meet the SLO goal.
Burn rate < 1: If the burn rate is below the baseline error rate, we are on track to exceed the SLO goal. Burn rate > 1: If the burn rate is higher than the baseline error rate, we are at risk of missing the SLO goal. When you create burn rates for your SLOs, you can also choose to create CloudWatch alarms at the same time to monitor the burn rates. You can set a threshold for the burn rates, and the alarms can automatically notify you when the burn rate metrics exceed the threshold you set. For example, as a burn rate approaches its threshold, you can be alerted that the SLO is consuming its error budget faster than your team can tolerate, and your team may need to slow the rate of change in the application to make sure long-term performance goals are met. Charges apply for creating alarms. For more information about CloudWatch pricing, see Amazon CloudWatch Pricing. Calculating the burn rate To calculate the burn rate, you must specify a look-back window. The look-back window is the length of time over which the error rate is measured. burn rate = error rate over the look-back window / (100% - attainment goal) Note If there is no data for the burn rate period, Application Signals calculates the burn rate based on the attainment value. The error rate is calculated as the ratio of bad events to total events during the burn rate window: For period-based SLOs, the error rate is calculated by dividing bad periods by total periods. Total periods is the total number of periods during the look-back window.
For request-based SLOs, it is a measure of bad requests divided by total requests. Total requests is the number of requests during the look-back window. The look-back window must be a multiple of the SLO period and must be smaller than the SLO interval. Determining the appropriate threshold for a burn rate alarm When you configure a burn rate alarm, you must choose a burn rate value as the alarm threshold. The value of this threshold depends on the length of the SLO interval and the look-back window, and on which method or mental model your team prefers to apply. There are two main methods for determining the threshold. Method 1: Determine the percentage of the estimated total error budget that your team is willing to consume within the look-back window. If you want to be alerted when X% of the estimated error budget is consumed within the last look-back-window hours, the burn rate threshold is as follows: burn rate threshold = X% * SLO interval length / look-back window size For example, spending 5% of a 30-day (720-hour) error budget over one hour requires a burn rate of 5% * 720 / 1 = 36. So if the burn rate look-back window is 1 hour, we set the burn rate threshold to 36. With this method, you can use the CloudWatch console to create burn rate alarms. You specify the number X, and the threshold is determined using the formula above. The SLO interval length is determined by the SLO interval type: For SLOs with a rolling interval, it is the length of the interval in hours.
For SLOs with a calendar-based interval: If the unit is days or weeks, it is the length of the interval in hours. If the unit is a month, we take 30 days as the estimated duration and convert it to hours. Method 2: Determine the time to budget exhaustion for the next interval. To have the alarm notify you when the current error rate over the last look-back window indicates that the time until the budget is exhausted is less than X hours (assuming the remaining budget is currently at 100%), you can use the following formula to determine the burn rate threshold: burn rate threshold = SLO interval length / X Note that the time to budget exhaustion (X) in the formula above assumes that the total remaining budget is currently 100%, and therefore does not account for how much of the budget has already been consumed in this interval. We can also think of this as the time to budget exhaustion for the next interval. Guidance for burn rate alarms As an example, take an SLO with a rolling interval of 28 days. Setting a burn rate alarm for this SLO involves two steps: Set the burn rate and the look-back window. Create a CloudWatch alarm that monitors the burn rate. First, determine how much of the total error budget the service is willing to consume within a given time frame. In other words, phrase your goal using the following sentence: "I want to be notified when X% of my total error budget is consumed within M minutes." For example, you could decide that the goal is to alert when 2% of the total error budget is consumed within 60 minutes.
To set the burn rate, first define the look-back window. The look-back window is M, which in this example is 60 minutes. Next, create the CloudWatch alarm. When you do, you must specify a burn rate threshold. When the burn rate exceeds this threshold, the alarm notifies you. Use the following formula to determine the threshold: burn rate threshold = X% * SLO interval length / look-back window size In this example, X is 2, because we want to be alerted when 2% of the error budget is consumed within 60 minutes. The interval length is 40,320 minutes (28 days), and 60 minutes is the look-back window. So the answer is: burn rate threshold = 2% * 40,320 / 60 = 13.44. In this example, you would set 13.44 as the alarm threshold. Multiple alarms with different windows By setting up alarms over multiple look-back windows, you can quickly detect a sharp increase in the error rate with a short window, while also catching smaller increases in the error rate that would eventually exhaust the error budget if left unnoticed. In addition, you can set up a composite alarm for a long-window burn rate and a short-window burn rate (1/12 of the long window), so that you are notified only when both burn rates exceed a threshold. This way, you can make sure you are alerted only about situations that are still ongoing. For more information about CloudWatch composite alarms, see Combining alarms. Note You can set up a metric alarm for a single burn rate when you create the burn rate. To set up a composite alarm for multiple burn rate alarms, follow the instructions in Creating a composite alarm.
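The threshold arithmetic above can be sketched in Python. The function names are illustrative, not part of any AWS API; the numbers reproduce the two worked examples from this page (5% of a 720-hour budget over 1 hour, and 2% of a 28-day budget over 60 minutes).

```python
def burn_rate(error_rate_pct: float, attainment_goal_pct: float) -> float:
    """burn rate = error rate over the look-back window / (100% - attainment goal)."""
    return error_rate_pct / (100.0 - attainment_goal_pct)

def threshold_from_budget_pct(budget_pct: float, interval_len: float,
                              lookback_len: float) -> float:
    """Method 1: alert when budget_pct% of the error budget burns in one
    look-back window. interval_len and lookback_len must use the same unit."""
    return budget_pct * interval_len / (100.0 * lookback_len)

def threshold_from_time_to_exhaustion(interval_len: float, x: float) -> float:
    """Method 2: alert when the budget would be exhausted in less than x,
    assuming 100% of the budget is currently remaining."""
    return interval_len / x

# 5% of a 30-day (720-hour) budget burned over a 1-hour window -> threshold 36
print(threshold_from_budget_pct(5, 720, 1))       # 36.0
# 2% of a 28-day (40,320-minute) budget burned over 60 minutes -> threshold 13.44
print(threshold_from_budget_pct(2, 40_320, 60))   # 13.44
```

A burn rate alarm using Method 1 would then use the returned value as its alarm threshold, exactly as the 13.44 example above describes.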
A composite alarm strategy recommended in the Google Site Reliability Engineering workbook involves three composite alarms: A composite alarm that monitors two alarms, one with a one-hour window and one with a five-minute window. A second composite alarm that monitors two alarms, one with a six-hour window and one with a 30-minute window. A third composite alarm that monitors two alarms, one with a three-day window and one with a six-hour window. The following are the steps needed for this configuration: Create five burn rates, with windows of five minutes, 30 minutes, one hour, six hours, and three days. Create the following three pairs of CloudWatch alarms. Each pair consists of a long window and a short window that is 1/12 of the long window. The thresholds are determined using the steps in Determining the appropriate threshold for a burn rate alarm. When you calculate the threshold for each alarm in a pair, use the longer look-back window of the pair in your calculation. Alarms for the 1-hour and 5-minute burn rates (threshold determined by 2% of the total budget) Alarms for the 6-hour and 30-minute burn rates (threshold determined by 5% of the total budget) Alarms for the 3-day and 6-hour burn rates (threshold determined by 10% of the total budget) For each of these pairs, create a composite alarm so that you are alerted when both individual alarms go into ALARM state. For more information about creating a composite alarm, see Creating a composite alarm.
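A minimal sketch of wiring up those three pairs, assuming the per-window metric alarms already exist under hypothetical names. The rule string is the CloudWatch-specific part; the `put_composite_alarm` call is shown only as a comment because it requires an AWS account and existing alarms.

```python
# Hypothetical alarm names for the three long-window/short-window pairs.
PAIRS = [
    ("OneHourBurnRate", "FiveMinuteBurnRate"),
    ("SixHourBurnRate", "ThirtyMinuteBurnRate"),
    ("ThreeDayBurnRate", "SixHourBurnRate"),
]

def composite_rule(long_alarm: str, short_alarm: str) -> str:
    """Alert only when BOTH windows are in ALARM, i.e. the burn is still ongoing."""
    return f"ALARM({long_alarm}) AND ALARM({short_alarm})"

for long_alarm, short_alarm in PAIRS:
    rule = composite_rule(long_alarm, short_alarm)
    print(rule)
    # With boto3, this rule would be passed to the CloudWatch API, for example:
    # boto3.client("cloudwatch").put_composite_alarm(
    #     AlarmName=f"{long_alarm}-composite", AlarmRule=rule)
```

The AND in the rule is what suppresses alerts for short spikes that have already subsided: the short window recovers quickly, taking the composite alarm out of ALARM state.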
For example, if your alarms for the first pair (one-hour window and five-minute window) are named OneHourBurnRate and FiveMinuteBurnRate, the CloudWatch composite alarm rule would be ALARM(OneHourBurnRate) AND ALARM(FiveMinuteBurnRate). The preceding strategy is feasible only for SLOs with an interval length of at least three hours. For SLOs with shorter intervals, we recommend starting with one pair of burn rate alarms, where one alarm has a look-back window that is 1/12 of the other alarm's look-back window. Then set up a composite alarm for that pair. Creating an SLO We recommend that you set both latency and availability SLOs for your critical applications. These metrics collected by Application Signals align with common business goals. You can also set SLOs on any CloudWatch metric or metric math expression that results in a single time series. When you create the first SLO in your account, CloudWatch automatically creates the service-linked role AWSServiceRoleForCloudWatchApplicationSignals in your account, if it does not already exist. This service-linked role allows CloudWatch to collect CloudWatch Logs data, X-Ray trace data, CloudWatch metric data, and tagging data from applications in your account. For more information about CloudWatch service-linked roles, see Using service-linked roles for CloudWatch. When you create an SLO, you specify whether it is a period-based SLO or a request-based SLO. Each SLO type uses a different method to evaluate your application's performance against the attainment goal. A period-based SLO uses defined periods within a specified total time interval. For each period, Application Signals determines whether the application met its goal.
The attainment rate is calculated as number of good periods/number of total periods. For example, with a period-based SLO, achieving an attainment goal of 99.9% means that within your interval, your application must meet its performance goal during at least 99.9% of the periods. A request-based SLO does not use predefined periods. Instead, the SLO measures number of good requests/number of total requests during the interval. At any time, you can find the ratio of good requests to total requests for the interval up to the timestamp that you specify, and measure that ratio against the goal set in your SLO. Topics Creating a period-based SLO Creating a request-based SLO Creating a period-based SLO Use the following steps to create a period-based SLO. To create a period-based SLO Open the CloudWatch console at https://console.aws.amazon.com/cloudwatch/ . In the navigation pane, choose Service level objectives (SLO). Choose Create SLO. Enter a name for the SLO. Including the name of a service or operation, along with relevant keywords such as latency or availability, helps you quickly recognize what the SLO status represents when you investigate. For Set Service Level Indicator (SLI), do one of the following: To set the SLO on one of the standard application metrics, Latency or Availability: Choose Service operation. Choose an account for this SLO to monitor. Choose the service for this SLO to monitor. Choose the operation for this SLO to monitor. For Choose a calculation method, choose Periods. The Select service and Select operation drop-down menus are populated with services and operations that have been active in the last 24 hours.
Choose either Availability or Latency, and then set the threshold. To set the SLO on any CloudWatch metric or CloudWatch metric math expression: Choose CloudWatch metric. Choose Select CloudWatch metric. The Select metric screen appears. Use the Browse or Query tabs to find the metric you want, or create a math expression for the metric. After you select the metric you want, choose the Graphed metrics tab and select the statistic and period to use for the SLO. Then choose Select metric. For more information about these screens, see Graphing metrics and Add a math expression to a CloudWatch graph. For Choose a calculation method, choose Periods. Under Set condition, select a comparison operator and a threshold for the SLO to use as the indicator of success. To set the SLO on a service's dependency, using one of the standard application metrics, Latency or Availability: Choose Service dependency. Under Select service, choose the service for this SLO to monitor. Based on the selected service, under Select operation you can choose a specific operation, or choose All operations to use the metrics of all operations of this service that call a dependency. Under Select dependency, you can search for and select the dependency whose reliability you want to measure. After you select the dependency, you can view the updated graph and the historical data based on the dependency.
If you chose Service operation or Service dependency in step 5, set the period length for this SLO. Set the interval and the attainment goal for the SLO. For more information about intervals and attainment goals and how they work together, see SLO concepts. (Optional) For Set SLO burn rates, do the following: Set the duration (in minutes) of the look-back window for the burn rate. For more information about choosing this duration, see Guidance for burn rate alarms. To create more burn rates for this SLO, choose Add more burn rates and set the look-back window for the additional burn rates. (Optional) Create burn rate alarms as follows: Under Set burn rate alarms, select the check box for each burn rate that you want to create an alarm for. For each of these alarms, do the following: Specify the Amazon SNS topic to use for notifications when the alarm goes into ALARM state. Either set a burn rate threshold, or specify the percentage of the total estimated budget that can be consumed within the last look-back window. If you specify the percentage of the total estimated budget consumed, the burn rate threshold is calculated for you and used in the alarm. See Determining the appropriate threshold for a burn rate alarm for how to decide which threshold to set, or to learn how this option is used to calculate the burn rate threshold. (Optional) Set one or more CloudWatch alarms or a warning threshold for the SLO.
CloudWatch alarms can use Amazon SNS to proactively notify you when an application is unhealthy based on its SLI performance. To create an alarm, select one of the alarm check boxes and enter, or create, the Amazon SNS topic to use for notifications when the alarm goes into ALARM state. For more information about CloudWatch alarms, see Using Amazon CloudWatch alarms. Creating alarms incurs charges. For more information about CloudWatch pricing, see Amazon CloudWatch Pricing. If you set a warning threshold, it is displayed on Application Signals screens and helps you identify SLOs that are at risk of being unmet, even if they are currently healthy. To set a warning threshold, enter the threshold in the Warning threshold field. When the SLO's error budget falls below the warning threshold, the SLO is flagged with Warning on several Application Signals screens. Warning thresholds also appear in the error budget graphs. You can also create an SLO warning alarm based on the warning threshold. (Optional) Under Set SLO time window exclusion, do the following: Under Exclude time window, set the time window to exclude from the SLO performance metrics. You can choose Set time window and enter the start window for each hour or month, or you can choose Set time window with CRON and enter the CRON expression. Under Repeat, set whether this time window exclusion recurs. (Optional) Under Add reason, you can choose to enter a reason for the time window exclusion. For example, scheduled maintenance.
Choose Add time window to add up to 10 time exclusion windows. To add tags to this SLO, choose the Tags tab and then choose Add new tag. Tags help you manage, identify, organize, search for, and filter resources. For more information about tagging, see Tagging your AWS resources. Note If the application that this SLO relates to is registered in AWS Service Catalog AppRegistry, you can use the awsApplication tag to associate this SLO with that application in AppRegistry. For more information, see What is AppRegistry? Choose Create SLO. If you also chose to create one or more alarms, the name of the button changes accordingly. Creating a request-based SLO Use the following steps to create a request-based SLO. To create a request-based SLO Open the CloudWatch console at https://console.aws.amazon.com/cloudwatch/ . In the navigation pane, choose Service level objectives (SLO). Choose Create SLO. Enter a name for the SLO. Including the name of a service or operation, along with relevant keywords such as latency or availability, helps you quickly recognize what the SLO status represents when you investigate. For Set Service Level Indicator (SLI), do one of the following: To set the SLO on one of the standard application metrics, Latency or Availability: Choose Service operation. Choose the service for this SLO to monitor. Choose the operation for this SLO to monitor. For Choose a calculation method, choose Requests. The Select service and Select operation drop-down menus are populated with services and operations that have been active in the last 24 hours. Choose either Availability or Latency.
If you choose Latency, set the threshold. To set the SLO on any CloudWatch metric or CloudWatch metric math expression: Choose CloudWatch metric. For Define target requests, do the following: Choose whether to measure good requests or bad requests. Choose Select CloudWatch metric. This metric is the numerator of the ratio of target requests to total requests. If you use a latency metric, use the Trimmed count (TC) statistic. If the threshold is 9 ms and you use the less-than (<) comparison operator, use the threshold TC(:threshold – 1). For more information about TC, see Syntax. The Select metric screen appears. Use the Browse or Query tabs to find the metric you want, or create a math expression for the metric. Under Define total requests, select the CloudWatch metric to use as the source. This metric is the denominator of the ratio of target requests to total requests. The Select metric screen appears. Use the Browse or Query tabs to find the metric you want, or create a math expression for the metric. After you select the metric you want, choose the Graphed metrics tab and select the statistic and period to use for the SLO. Then choose Select metric. If you use a latency metric that emits one data point per request, use the Sample count statistic to count the total number of requests.
For more information about these screens, see Graphing metrics and Add a math expression to a CloudWatch graph. To set the SLO on a service's dependency, using one of the standard application metrics, Latency or Availability: Choose Service dependency. Under Select service, choose the service for this SLO to monitor. Based on the selected service, under Select operation you can choose a specific operation, or choose All operations to use the metrics of all operations of this service that call a dependency. Under Select dependency, you can search for and select the dependency whose reliability you want to measure. After you select the dependency, you can view the updated graph and the historical data based on the dependency. Set the interval and the attainment goal for the SLO. For more information about intervals and attainment goals and how they work together, see SLO concepts. (Optional) For Set SLO burn rates, do the following: Set the duration (in minutes) of the look-back window for the burn rate. For more information about choosing this duration, see Guidance for burn rate alarms. To create more burn rates for this SLO, choose Add more burn rates and set the look-back window for the additional burn rates. (Optional) Create burn rate alarms as follows: Under Set burn rate alarms, select the check box for each burn rate that you want to create an alarm for. For each of these alarms, do the following: Specify the Amazon SNS topic to use for notifications when the alarm goes into ALARM state.
Either set a burn rate threshold, or specify the percentage of the total estimated budget that can be consumed within the last look-back window. If you specify the percentage of the total estimated budget consumed, the burn rate threshold is calculated for you and used in the alarm. See Determining the appropriate threshold for a burn rate alarm for how to decide which threshold to set, or to learn how this option is used to calculate the burn rate threshold. (Optional) Set one or more CloudWatch alarms or a warning threshold for the SLO. CloudWatch alarms can use Amazon SNS to proactively notify you when an application is unhealthy based on its SLI performance. To create an alarm, select one of the alarm check boxes and enter, or create, the Amazon SNS topic to use for notifications when the alarm goes into ALARM state. For more information about CloudWatch alarms, see Using Amazon CloudWatch alarms. Creating alarms incurs charges. For more information about CloudWatch pricing, see Amazon CloudWatch Pricing. If you set a warning threshold, it is displayed on Application Signals screens and helps you identify SLOs that are at risk of being unmet, even if they are currently healthy. To set a warning threshold, enter the threshold in the Warning threshold field. When the SLO's error budget falls below the warning threshold, the SLO is flagged with Warning on several Application Signals screens. Warning thresholds also appear in the error budget graphs.
You can also create an SLO warning alarm based on the warning threshold. (Optional) Under Set SLO time window exclusion, do the following: Under Exclude time window, set the time window to exclude from the SLO performance metrics. You can choose Set time window and enter the start window for each hour or month, or you can choose Set time window with CRON and enter the CRON expression. Under Repeat, set whether this time window exclusion recurs. (Optional) Under Add reason, you can choose to enter a reason for the time window exclusion. For example, scheduled maintenance. Choose Add time window to add up to 10 time exclusion windows. To add tags to this SLO, choose the Tags tab and then choose Add new tag. Tags help you manage, identify, organize, search for, and filter resources. For more information about tagging, see Tagging your AWS resources. Note If the application that this SLO relates to is registered in AWS Service Catalog AppRegistry, you can use the awsApplication tag to associate this SLO with that application in AppRegistry. For more information, see What is AppRegistry? Choose Create SLO. If you also chose to create one or more alarms, the name of the button changes accordingly. Viewing and investigating SLO status You can quickly check the health of your SLOs using the Service level objectives or Services options in the CloudWatch console. The Services view provides an at-a-glance overview of the ratio of unhealthy services, calculated based on the SLOs that you have set.
For more information about using the Services option, see Monitor the operational health of your applications with Application Signals. The Service level objectives view provides a high-level view across your organization. You can see the met and unmet SLOs as a whole. This gives you an overview of how many of your services and operations are meeting your expectations over longer time ranges, according to the SLIs that you have chosen. To view all SLOs in the Service level objectives view Open the CloudWatch console at https://console.aws.amazon.com/cloudwatch/ . In the navigation pane, choose Service level objectives (SLO). The list of service level objectives (SLOs) appears. In the SLI status column, you can quickly see the current status of your SLOs. To sort the SLOs so that all unhealthy SLOs are at the top of the list, choose the SLI status column until all unhealthy SLOs appear at the top. The SLO table has the following default columns. You can customize which columns are displayed by choosing the gear icon above the list. For more information about goals, SLIs, attainment, and intervals, see SLO concepts. The name of the SLO. The Goal column displays the percentage of periods in each interval during which the SLI threshold must be met for the SLO goal to be achieved. It also displays the interval length for the SLO. The SLI status shows whether the current operational status of the application is healthy. If any period within the currently selected time range was unhealthy for the SLO, the SLI status is displayed as Unhealthy. If this SLO is configured to monitor a dependency, the Dependency and Remote operation columns display the details of that dependency relationship.
Ending attainment is the attainment level achieved at the end of the selected time range. Sort by this column to find the SLOs most at risk of being unmet. Attainment delta is the difference in attainment level between the start and the end of the selected time range. A negative delta means the metric is trending downward. Sort by this column to see the most recent trends of the SLOs. Ending error budget (%) is the percentage of the total time in the period that can contain unhealthy periods while the SLO can still be met. For example, if this value is 5% and the SLI is unhealthy in 5% or fewer of the remaining periods of the interval, the SLO can still be met. Error budget delta is the difference in the error budget between the start and the end of the selected time range. A negative delta means the metric is trending downward. Ending error budget (time) is the actual amount of time within the interval that can be unhealthy while the SLO can still be met. For example, if this value is 14 minutes and the SLI is unhealthy for less than 14 minutes during the remainder of the interval, the SLO can still be met. Ending error budget (requests) is the number of requests within the interval that can be unhealthy while the SLO can still be met. For request-based SLOs, this value is dynamic and can fluctuate as the total number of requests changes over time. The Service, Operation, and Type columns display information about which service and operation this SLO is set up for. Select the radio button next to an SLO's name to display the attainment and error budgets for that SLO.
The graphs at the top of the page show the budget status for SLO attainment and Error budget. A graph of the SLI metric associated with this SLO is also displayed. To investigate an SLO that is missing its goal more closely, choose the service, operation, or dependency name associated with that SLO. You are taken to the details page, where you can make further selections. For more information, see Viewing detailed service activity and operational health on the service detail page. To change the time range of the graphs and tables on the page, select a new time range at the top of the screen. Editing an existing SLO Use the following steps to edit an existing SLO. When you edit an SLO, you can change only the threshold, interval, attainment goal, and tags. To change other aspects, such as the service, operation, or metric, create a new SLO instead of editing an existing one. If you change any part of an SLO's core configuration, such as a period or threshold, all previous data points and evaluations of performance and health become invalid. The SLO is effectively deleted and recreated. Note When you edit an SLO, the alarms associated with that SLO are not automatically updated. You might need to update the alarms to keep them in sync with the SLO. To edit an existing SLO Open the CloudWatch console at https://console.aws.amazon.com/cloudwatch/ . In the navigation pane, choose Service level objectives (SLO). Select the radio button next to the SLO that you want to edit, and choose Actions, Edit SLO. Make the changes that you want, and then choose Save changes. Deleting an SLO Use the following steps to delete an existing SLO.
Note When you delete an SLO, the alarms associated with that SLO are not automatically deleted. You must delete them yourself. For more information, see Managing alarms. To delete an SLO Open the CloudWatch console at https://console.aws.amazon.com/cloudwatch/ . In the navigation pane, choose Service level objectives (SLO). Select the radio button next to the SLO that you want to delete, and choose Actions, Delete SLO. Choose Confirm. | 2026-01-13T09:29:25
https://docs.brightdata.com/api-reference/web-scraper-api/social-media-apis/tiktok#param-num-of-posts-1 | TikTok API Scrapers - Bright Data Docs Overview The TikTok API Suite offers multiple types of APIs, each designed for specific data collection needs from TikTok. Below is an overview of how these APIs connect and interact, based on the available features: Profile API This API allows users to collect profile details based on a single input: profile URL. Discovery functionality : Direct URL of the search Interesting Columns : nickname , awg_engagement_rate , followers , likes Posts API This API allows users to collect multiple posts based on a single input URL. Discovery functionality : - Direct URL of the TikTok profile - Discover by keywords - Direct URL of the discovery Interesting Columns : url , share_count , description , hashtags Comments API This API allows users to collect multiple comments from a post using its URL.
Discovery functionality : N/A Interesting Columns : url , comment_text , commenter_url , num_likes Profile API Collect by URL This API allows users to retrieve detailed TikTok profile information using the provided profile URL. Input Parameters : URL string required The TikTok profile URL. Output Structure : Includes comprehensive data points: Profile Details : account_id , nickname , biography , bio_link , predicted_lang , is_verified , followers , following , likes , videos_count , create_time , id , url , profile_pic_url , profile_pic_url_hd , and more. For all data points, click here . Engagement Metrics : awg_engagement_rate , comment_engagement_rate , like_engagement_rate , like_count , digg_count . Privacy & Settings : is_private , relation , open_favorite , comment_setting , duet_setting , stitch_setting , is_ad_virtual , room_id , is_under_age_18 . Discovery & Top Videos : region , top_videos , discovery_input . This API allows users to retrieve detailed TikTok profile information, including engagement metrics, privacy settings, and top videos, offering insights into user activity and profile data. Discover by Search URL This API allows users to discover TikTok profiles based on a specific search URL and country, providing detailed profile information. Input Parameters : search_url string required The TikTok search URL. country string required The country from which to perform the search. Output Structure : Includes comprehensive data points: Profile Details : account_id , nickname , biography , bio_link , predicted_lang , is_verified , followers , following , likes , videos_count , create_time , id , url , profile_pic_url , profile_pic_url_hd , and more. For all data points, click here . Engagement Metrics : awg_engagement_rate , comment_engagement_rate , like_engagement_rate , like_count , digg_count . Privacy & Settings : is_private , relation , open_favorite , comment_setting , duet_setting , stitch_setting , is_ad_virtual , room_id , is_under_age_18 . 
Discovery & Top Videos : region , top_videos , discovery_input . This API enables users to discover TikTok profiles based on search criteria, offering insights into user activity, engagement, privacy settings, and top content. It helps facilitate efficient discovery and analysis of TikTok users. Posts API Collect by URL This API enables users to collect detailed data from TikTok posts by providing a post URL. Input Parameters : URL string required The TikTok post URL. Output Structure : Includes comprehensive data points: Post Details : post_id , description , create_time , share_count , collect_count , comment_count , play_count , video_duration , hashtags , original_sound , official_item , original_item , shortcode , video_url , music , cdn_url , width , carousel_images , and more. For all data points, click here . Profile Details : profile_id , profile_username , profile_url , profile_avatar , profile_biography , account_id , profile_followers , is_verified . Tagged Users and Media : tagged_user , carousel_images . Additional Information : tt_chain_token , secu_id . Discover by Profile URL This API allows users to retrieve posts from a TikTok profile based on a provided profile URL, with filtering options for the number of posts, date range, and post exclusions. Input Parameters : URL string required The TikTok profile URL. num_of_posts number The number of posts to collect. If not provided, there is no limit. posts_to_not_include array An array of post IDs to exclude from the collection. start_date string Start date for filtering posts (format: mm-dd-yyyy). Must be earlier than end_date . end_date string End date for filtering posts (format: mm-dd-yyyy). Must be later than start_date . what_to_collect string Specify the type of posts to collect (e.g., “post” or “reel”).
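The start_date/end_date rule above (mm-dd-yyyy, with the start strictly before the end) is easy to get wrong. A small validation sketch, assuming only the parameter semantics described on this page:

```python
from datetime import datetime

def validate_post_filters(start_date: str, end_date: str) -> bool:
    """Check the Discover-by-Profile-URL date filters: both dates must
    use the mm-dd-yyyy format and start_date must come before end_date,
    as documented above."""
    fmt = "%m-%d-%Y"
    try:
        start = datetime.strptime(start_date, fmt)
        end = datetime.strptime(end_date, fmt)
    except ValueError:
        return False          # wrong format
    return start < end        # start must precede end

print(validate_post_filters("01-01-2025", "03-15-2025"))  # True
print(validate_post_filters("2025-01-01", "03-15-2025"))  # False (ISO, not mm-dd-yyyy)
```

Validating inputs like this client-side can save a collection run that would otherwise fail or silently return nothing.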
Output Structure : Includes comprehensive data points: Post Details : post_id , description , create_time , share_count , collect_count , comment_count , play_count , video_duration , hashtags , original_sound , official_item , original_item , shortcode , video_url , music , cdn_url , width , carousel_images , and more. For all data points, click here . Profile Details : profile_id , profile_username , profile_url , profile_avatar , profile_biography , account_id , profile_followers , is_verified . Tagged Users and Media : tagged_user , carousel_images . Additional Information : tt_chain_token , secu_id . This API allows users to discover and retrieve detailed information about posts from a specific TikTok profile, including post-specific metrics, profile details of the creator, and tagged users. It supports efficient content discovery and post analysis. Discover by Keywords This API allows users to search for TikTok posts based on specific keywords or hashtags, offering a powerful tool for discovering relevant content across TikTok’s platform. Input Parameters : search_keyword string required The keyword or hashtag to search for within TikTok posts. num_of_posts number The number of posts to collect. If not provided, there is no limit. posts_to_not_include array An array of post IDs to exclude from the collection. what_to_collect string Specify the type of posts to collect (e.g., “post” or “reel”). Output Structure : Includes comprehensive data points: Post Details : post_id , description , create_time , digg_count , share_count , collect_count , comment_count , play_count , video_duration , hashtags , original_sound , post_type , discovery_input , official_item , and more. For all data points, click here . Profile Details : profile_id , profile_username , profile_url , profile_avatar , profile_biography , account_id , profile_followers , is_verified . Tagged Users and Media : tagged_user , carousel_images . Additional Information : tt_chain_token , secu_id . 
This API allows users to discover posts on TikTok that match specific keywords or hashtags, providing insights into post details, profile information, and media. It’s a great tool for exploring trends, content, and users on TikTok. Discover by Discover URL This API allows users to collect detailed post data from a specific TikTok discover URL. Input Parameters : URL string required The TikTok discover URL from which posts will be retrieved. Output Structure : Includes comprehensive data points: Post Details : post_id , description , create_time , digg_count , share_count , collect_count , comment_count , play_count , video_duration , hashtags , original_sound , post_type , discovery_input , official_item , original_item , and more. For all data points, click here . Profile Details : profile_id , profile_username , profile_url , profile_avatar , profile_biography , account_id , profile_followers , is_verified . Tagged Users and Media : tagged_user , carousel_images . Additional Information : tt_chain_token , secu_id . This API provides detailed insights into TikTok posts discovered via the discover URL, allowing for easy access to trending content, user profiles, and post metadata for analysis and exploration. Comments API Collect by URL This API allows users to collect detailed comment data from a specific TikTok post using the provided post URL. Input Parameters : URL string required The TikTok post URL. Output Structure : Includes comprehensive data points: Post Details : post_url , post_id , post_date_created . For all data points, click here . Comment Details : date_created , comment_text , num_likes , num_replies , comment_id , comment_url . Commenter Details : commenter_user_name , commenter_id , commenter_url . This API provides detailed insights into TikTok post comments, including comment-specific metrics and information about the commenters, enabling effective comment analysis and interaction tracking.
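Web Scraper API collections are typically triggered by POSTing a JSON array of input objects, one per item to collect. Below is a sketch of building that body for the Comments "Collect by URL" API above; the dataset ID, trigger URL, and post URL are placeholders (hypothetical), and the HTTP call itself is omitted — see the Asynchronous Requests page for the authoritative endpoint and parameters.

```python
import json

# Placeholder values -- a real request needs your API token and the
# dataset ID of the TikTok comments scraper from your account.
DATASET_ID = "gd_XXXXXXXX"   # hypothetical
TRIGGER_URL = "https://api.brightdata.com/datasets/v3/trigger"  # assumed

def build_comments_request(post_urls):
    """One input object per TikTok post URL, matching the single
    required 'URL' parameter documented above."""
    return [{"url": u} for u in post_urls]

body = build_comments_request([
    "https://www.tiktok.com/@someuser/video/1234567890",  # hypothetical
])
print(json.dumps(body))
```

The actual call (for example, `requests.post(TRIGGER_URL, headers={"Authorization": f"Bearer {token}"}, params={"dataset_id": DATASET_ID}, json=body)`) is left out here because authentication and endpoint details belong to the Asynchronous/Synchronous Requests pages, not this one.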
| 2026-01-13T09:29:25 |
https://foundryco.com/privacy-policy/ | Privacy Policy | Foundry FoundryCo, Inc. Privacy Policy Welcome We’re glad you chose to visit a FoundryCo, Inc. (“ Foundry ” or “ we ”) site!
We care about your privacy and the personal data you share with us and want you to understand how we are using and protecting the personal data we collect about you as a data controller. Foundry is respectful of data privacy and adopts best practices in compliance with applicable privacy law and regulations, including the European General Data Protection Regulation (EU 2016/679) (the “ GDPR ”), Directive 2002/58/EC (the “ e-Privacy Directive ”); European national laws implementing derogations, exceptions or other aspects of the e-Privacy Directive and/or the GDPR; the GDPR, as transposed into United Kingdom national law by operation of section 3 of the European Union (Withdrawal) Act 2018 and as amended by the Data Protection, Privacy and Electronic Communications (Amendments etc.) (EU Exit) Regulations 2019 (the “ UK GDPR ”) and the United Kingdom’s Data Protection Act 2018; the Canadian Personal Information Protection and Electronic Documents Act (the “ PIPEDA ”); the California Consumer Privacy Act of 2018, as amended from time to time (the “ CCPA ”) and the California Privacy Rights Act of 2020 (the “ CPRA ”). Who Processes Your Data? The controller of your personal data is Foundry and members of the Foundry group of companies (“ Group Undertakings ”). A full list of Foundry Group Undertakings, including contact details, is available here . Each Group Undertaking acts as a data controller with respect to data collected, processed, used, and stored as a part of services and activities conducted by a Group Undertaking. Certain services and activities may be conducted by several Group Undertakings. Where required, Foundry has designated a representative in the EU: IDG Communications Limited Mezzanine Floor, Millennium House, Great Strand Street Dublin 1, Ireland dataprotection@foundryco.com .
For all matters related to privacy and the collection, processing, use and storage of your personal data, please contact Foundry’s Data Protection Officer at: dataprotection@foundryco.com . What personal data do we collect? Depending on the product and/or service involved, Foundry may collect various categories of data for distinct purposes. This may include, in particular: Data that you provide to us: Business contact information, such as first name, last name, business email address and phone number, company name, business title, business address. In some cases, you may also provide us additional professional information such as the size of the company you work for, and industry type. When you set up an account with Foundry, we may also collect your log-in credentials (i.e., email and password) and billing information, where applicable. With respect to networking events, we may collect certain details pertaining to professional profiles, including photos and feedback or information that participants, speakers, and/or partners have volunteered to provide to us via questionnaires submitted prior to, during or after events. We may also process the information necessary for the provision of ancillary services relating to events organized by us that you requested (e.g., to arrange accommodation for you). We may collect such data when you use or register to receive any of the products, services or content offered by Foundry and its third-party sponsors, as applicable (including authentication through your social media profiles), or when you communicate with us through email, web application forms, chat, our social media, and other forms of communication. Data we collect from other sources: We collect information about your use and interaction with our websites, content and services. The exact scope of personal data collected will depend on the specific features of the website / services then available and used by you.
We use cookies and other tools for collecting such data. Please refer to our Cookie Policy for more information. We may also collect personal data that you have made accessible from publicly available sources, including third-party websites, blogs, articles, or similar publicly available content. This typically includes business contact information as described in Section 2.1 above. Foundry processes your business professional details in the context of your professional capacity. Foundry does not, in any event, knowingly collect or process any personal data that may be classified as special categories of personal data or sensitive personal information under the applicable privacy laws. How we use your personal data To provide and improve our services and content and to develop new offerings We may use your personal data and information about your organization to deliver our services or content to you and/or our clients, e.g., when you access our platforms from the Foundry Communications Network , subscribe to our newsletter, visit our or our clients’ websites, view our or our clients’ content, participate in our networking events, etc. We will use your personal data in particular to identify you when you login to your account, to process your payments, and to fulfil your requests. If you enter a sweepstake, contest, or similar Foundry promotion, we may use your personal data to administer such promotion. Some public Foundry events may be recorded by means of photographs, audio recordings, and/or videos. Such photographs and recordings may subsequently appear on the respective event website, other relevant website, on social media, in the press, or in promotional materials (such as Foundry promotional videos or program guides). All events to be thus documented will be clearly indicated as such. Foundry may separately seek your consent prior to recording if you are to feature predominantly in the planned recording (e.g., speakers and participants in interviews). 
We may also use the personal data about you to enhance, optimize, secure, update, market, and analyse our services or develop new services or products. To communicate with you We may process your personal data to communicate with you, for example, when we assist you with setting up or administering your account, provide customer care, resolve your complaints, and send technical notices and other support messages. Such communication is not affected by your marketing communication preferences. Foundry may also use your personal data when we ask you for your insights about a specific industry or topic (surveys) or to provide you with updates on industry developments and other value-added services or features provided whether for remuneration or free of charge. To inform you about Foundry’s products or services We may contact you about our news, events, services and their features or special offers that we believe may interest you, provided that we have the requisite permission to do so, either on the basis of your consent, or our legitimate interests to provide you with marketing communications where we may do so, within the limits provided by law. You may change your marketing preferences at any time by following the instructions in such communications or by contacting us via the contact details provided in Section 1 of this Privacy Policy. To carry out B2B market research and provide related services Foundry may process your personal data to carry out relevant market research and analysis and to provide services and insights relating to market assessments, business-to-business marketing, sales, business development, and other related purposes. This may include analysis and other processing relying also on third-party sources and algorithmic processing. The purposes are limited to business-related information processed in the context of your professional capacity.
To comply with legal obligations Foundry may also process your personal data to comply with applicable legal obligations. To prevent fraudulent activities and protect our rights We may use the information about you to detect, prevent and address fraud and other illegal activity and to establish, exercise or defend our legal claims and protect our rights. On what legal basis do we process your personal data? We process, use, and store your data primarily to perform our obligations under the contracts we have concluded with you or your organization. In certain instances, we may also process your personal data based on our legitimate interests. Foundry’s legitimate interests include developing, offering and delivering our B2B products and services to customers, enhancing our products and customer base management of the customer relationships, conducting market research and analysis, exchanging professional knowledge about the industry, exercising and defending our legal rights, preventing fraud, illegal activity or imminent harm, and ensuring the security and operability of our network and services. Where permissible under applicable law, we may also contact you about our products and services based on our legitimate interests. In specific cases, we process your data based on your consent, in accordance with the requirements for consent under applicable privacy laws. For example, we may rely on your consent for direct marketing purposes or personalized advertising, where required. We may also process your personal data where such processing is necessary to comply with the laws applicable to Foundry’s business operations. For how long do we store your data? 
We will store your personal data for as long as is necessary to provide you with the products and/or services requested by you or your organization; for as long as is reasonably required to store such information for our legitimate interests, such as exercising our legal rights, or for as long as we are legally obligated to store such information. If you consent to us collecting, processing, using, and storing your personal data, we will do so for the duration of such consent — in other words, until such consent expires or is withdrawn. You have the right to withdraw your consent at any time. For more details on the withdrawal of your consent, please see Section 8 below. Data disclosures and transfers We may disclose your personal data to third parties or Group Undertakings in the following circumstances: Our Group Undertakings, as listed here , working with us or on our behalf for the purposes described in this Privacy Policy may have access to your personal data. We may also share your personal data with non-affiliated third parties such as professional advisors or public authorities when necessary: To comply with legal obligations; To enforce or defend the legal rights of Foundry in connection with corporate restructuring, such as a merger or business acquisition, or in connection with an insolvency situation; To prevent fraud or imminent harm; and/or To ensure the security and operability of our network and services. We share your data with our trusted business partners, who process your data as our vendors and data processors on our behalf and pursuant to our instructions (for the purposes of e.g., IT support, hosting, etc.). We strive to select our vendors carefully and ensure they are able to provide adequate data protection and security safeguards. 
Foundry can share your personal data with its business partners, sponsors, clients or other third parties, in the context of business-to-business services provided by Foundry, or to develop or market certain content (e.g., white papers). Where required, we will seek your consent before sharing your personal data with such third parties. Processing of your personal data by such third parties is governed by their respective privacy policies. Your personal data can be shared through third-party cookies and other related technologies that are used by third parties. Please refer to our Cookie Policy for more information. As required by the applicable laws, any transfer of your personal data outside the EU or the UK only takes place if the requirements of the applicable privacy laws, in particular as laid down in Art. 44 et seq. GDPR, have been fulfilled, e.g., based on the European Commission’s adequacy decision, or the Standard Contractual Clauses, including the UK SCC Addendum, where applicable. Data security We have implemented and will maintain appropriate technical and organizational measures, internal controls, and information security routines in accordance with good industry practice while keeping in mind the state of technological development in order to protect your data against accidental loss, destruction, alteration, unauthorized disclosure or access or unlawful destruction. We employ various security measures to protect the information we collect, as appropriate to the type of information, including encryption, firewalls, and access controls. Foundry further ensures that all individuals who have access to your data and are involved in the collection, processing, use, and/or storage thereof are bound by appropriate confidentiality obligations and have appropriate training. You are responsible for keeping confidential any passwords that we give you (or you choose) that enable you to access certain parts of our website. 
For security reasons, such passwords must not be shared with anyone. What are your rights? If you have any questions or concerns about the handling of your personal data, and/or you wish to exercise your data subject rights, please contact us at any time via the contact details provided in Section 1 of this Privacy Policy. As a data subject, you have the following rights: The Right to Access : You have a right to request access to the personal data about you that we process and to receive a copy of that data. The Right to Receive Information : You may contact us at any time with a request to receive more information regarding the following: the purposes for which we use your personal data; how we categorize your personal data; the recipients of your personal data; the length of time we store your personal data; and your rights as a data subject. The Right to Portability : You have the right to receive a copy of your personal data from us in a structured and commonly used machine-readable format. You may also request us to transfer such data to another data controller. The Right to Erasure : You may request us to erase your personal data from our records. Please note that in some cases we may be legally obliged to retain some of your personal data. We may also retain some of your personal data in order to defend our legal rights, avoid sending you unwanted materials in the future, and to keep a record of your request and our response. The Right to Rectification : If you discover that any of the data we possess about you is incorrect or incomplete, you may ask us to rectify or supplement your data. The Right to the Restriction of Use : You have a right to request restriction of our use of your personal data, in particular if you believe that such processing is unlawful, or your data are inaccurate.
The Right to Object : In cases in which we rely on our legitimate interest to use your personal data, we must consider and acknowledge the interests and rights that you have under data protection law. Your privacy rights are always protected by appropriate safeguards and balanced with your freedoms and other rights. You have the right to submit an objection at any time to our use of your personal data based on our legitimate interest. The Right to Withdraw Consent : You have the right to withdraw your consent at any time to our use of your personal data. Please note that a withdrawal of your consent does not affect the legality of the use of your personal data prior to your consent being withdrawn. To exercise any of the above-mentioned rights, please contact us through the contact information provided in Section 1 of this Privacy Policy. Additionally, you have the right to lodge a complaint with the competent Data Protection Authority if you believe your rights regarding our use of your personal data have been violated. For all processing of personal data carried out by Foundry within the European Economic Area subject to the GDPR, the competent Data Protection Authority is the Irish Data Protection Commission, with registered office at 21 Fitzwilliam Square South, Dublin 2, D02 RD28, Ireland, as the lead supervisory authority of Foundry. More information can be found at: https://www.dataprotection.ie/ . For all processing of personal data carried out by Foundry in the United Kingdom subject to the UK GDPR, the competent Data Protection Authority is the Information Commissioner’s Office, with registered office at Wycliffe House, Water Lane, Wilmslow SK9 5AF, United Kingdom. More information can be found at: https://ico.org.uk . If you are a Californian consumer, please see the Your California Privacy Rights for a complete statement of your rights under the CCPA, including the right to opt-out of the sharing and sale of your personal data. 
ContentPass On many of our websites, in Austria, Germany, Ireland and the United Kingdom, we offer you privacy-friendly and ad-free access with Contentpass. This is an offer from Content Pass GmbH, Wolfswerder 58, 14532 Kleinmachnow, Germany. Once you sign up for the service, Contentpass becomes your contractual partner. In order to display and thus offer this service to you on our websites, Contentpass processes your IP address on our behalf at the beginning of your website visit. Contentpass is the controller within the meaning of the GDPR for registration and contract processing of Contentpass and the associated data processing. We are solely responsible for the processing of your IP address. You can find further information on data protection directly at Contentpass . The basis for the data processing of the IP address, as part of our contract processing with Contentpass, is our legitimate interest in offering you the opportunity to access our website free of advertising and tracking, and your interest in using our website with virtually no advertising and tracking [Art. 6 (1) (f) GDPR]. Click here to log in with Contentpass or to register . Links to Other Sites We may provide references and/or links to other companies, organizations, and/or public institutions on our website that enable you to access their websites directly from ours. The websites of these entities are governed by the entities’ own privacy policies. Please note that we are not responsible for the content of such websites and cannot accept responsibility for any issues arising in connection with such third parties’ use of your personal data. If you have any queries concerning the way your personal data is processed, used, or stored by such entities, we recommend referring to the privacy policies on the relevant websites. Children’s Privacy Our services or content are not directed, or intended for use by, children under the age of 16 years.
Therefore, we do not knowingly collect personal data from children under the age of 16 years. If we become aware that personal data of a child under 16 has been collected, we will take appropriate steps to delete such data. Changes to this Privacy Policy We may periodically modify the provisions of this Privacy Policy and encourage you to review it from time to time in order to stay up to date with the most recent developments in the area of the protection of your personal data. In the event of significant changes, we may also choose to notify you via email should we have your email address in our records. An updated version of this Privacy Policy will be published on our website. This Privacy Policy was last updated in September 2025. | 2026-01-13T09:29:25 |
https://docs.aws.amazon.com/id_id/AmazonCloudWatch/latest/monitoring/CloudWatch-ServiceLevelObjectives.html | Service level objectives (SLOs) - Amazon CloudWatch Amazon CloudWatch Documentation User Guide Service level objectives (SLOs) You can use Application Signals to create service level objectives for the services for your critical business operations and dependencies. By creating SLOs for these services, you can track them on the SLO dashboard, giving you an at-a-glance view of your most important operations. In addition to creating a quick view that your operators can use to see the current status of critical operations, you can use SLOs to track the longer-term performance of your services, to make sure that they are meeting your expectations. If you have service level agreements with customers, SLOs are a great tool for making sure that they are met. Assessing the health of your services with SLOs starts with setting clear, measurable goals based on key performance metrics, called service level indicators (SLIs). An SLO tracks the SLI's performance against the threshold and goal that you set, and reports how far from or how close to the threshold your application's performance is. Application Signals helps you set SLOs on your key performance metrics.
Application Signals automatically collects Latency and Availability metrics for every service and operation that it discovers, and these metrics are often ideal to use as SLIs. With the SLO creation wizard, you can use these metrics for your SLOs. You can then track the status of all of your SLOs with the Application Signals dashboard. You can set SLOs on specific operations or dependencies that are called or used by your services. In addition to using the Latency and Availability metrics, you can use any CloudWatch metric or metric expression as an SLI. Creating SLOs is critical for getting the most benefit from CloudWatch Application Signals. After you create SLOs, you can view their status in the Application Signals console to quickly see which of your critical services and operations are performing well and which are unhealthy. Having SLOs to track provides the following key benefits: It's easier for your service operators to see the current operational health of critical services, as measured against the SLIs. They can then quickly triage and identify which services and operations are unhealthy. You can track your services' performance against measurable business goals over longer time ranges. By choosing what to set SLOs on, you prioritize what is important to you. The Application Signals dashboard automatically surfaces information about what you have prioritized. When you create an SLO, you can also choose to create CloudWatch alarms at the same time to monitor your SLOs. You can set alarms that watch for threshold breaches, and also alarms for warning levels. These alarms can automatically notify you if the SLO metrics breach the thresholds that you set, or if they approach the warning thresholds.
For example, an SLO that is approaching its warning threshold can alert you that your team might need to slow down churn in the application to make sure that long-term performance goals are met. Topics SLO concepts Calculate error budgets and attainment for period-based SLOs Calculate error budgets and attainment for request-based SLOs Calculate burn rate and optionally set burn rate alarms Create an SLO View and triage SLO status Edit an existing SLO Delete an SLO SLO concepts An SLO includes the following components: A service level indicator (SLI), which is a key performance metric that you specify. It represents the desired level of performance for your application. Application Signals automatically collects the key metrics Latency and Availability for the services and operations that it discovers, and these can often be ideal metrics to set SLOs on. You choose the threshold to use for your SLI, such as 200 ms for latency. A goal, or attainment goal, which is the percentage of time or requests within each time interval during which the SLI is expected to meet the threshold. Time intervals can be as short as hours or as long as a year. The interval can be either a calendar interval or a rolling interval. A calendar interval is aligned with the calendar, such as an SLO that is tracked per month. CloudWatch automatically adjusts the health, budget, and attainment numbers based on the number of days in the month. Calendar intervals are better suited for business goals that are measured on a calendar-aligned basis. A rolling interval is calculated on a rolling basis. Rolling intervals are better suited for tracking the recent user experience of your application. Periods are shorter blocks of time, and many periods make up an interval.
The application's performance is compared to the SLI during each period within the interval. For each period, the application is determined either to have achieved the necessary performance or not. For example, a 99% goal with a calendar interval of one day and 1-minute periods means that the application must meet or achieve the success threshold during 99% of the 1-minute periods during the day. If it does, the SLO is met for that day. The next day is a new evaluation interval, and the application must meet or achieve the success threshold during 99% of the 1-minute periods during that second day to meet the SLO for the second day. The SLI can be based on one of the new standard application metrics that Application Signals collects. Alternatively, it can be any CloudWatch metric or metric expression. The standard application metrics that you can use for SLIs are Latency and Availability. Availability represents successful responses divided by total requests. It is calculated as (1 - Fault Rate)*100, where Fault responses are 5xx errors. Success responses are responses without a 5xx error; 4xx responses are considered successful. Calculate error budgets and attainment for period-based SLOs When you view information about an SLO, you see its current health status and its error budget. The error budget is the amount of time within the interval that can breach the threshold while still allowing the SLO to be met. The total error budget is the total amount of breaching time that can be tolerated across the entire interval. The remaining error budget is the amount of breaching time that can still be tolerated during the current interval, after the breaching time that has already occurred is subtracted from the total error budget. The following figure illustrates the attainment and error budget concepts for a goal with a 30-day interval, 1-minute periods, and a 99% attainment goal. 30 days includes 43,200 1-minute periods.
99% of 43,200 is 42,768, so 42,768 minutes during the month must be healthy for the SLO to be met. So far in the current interval, 130 of the 1-minute periods have been unhealthy. Determining success in each period Within each period, the SLI data is aggregated into a single data point, based on the statistic used for the SLI. That data point represents the entire duration of the period. The single data point is compared to the SLI threshold to determine whether the period is healthy or unhealthy. Seeing unhealthy periods during the current time range on the dashboard can alert your service operators that the service needs to be triaged. If a period is determined to be unhealthy, the entire length of that period counts as failed against the error budget. Tracking the error budget lets you know whether the service is achieving the performance that you want over a longer period of time. Time window exclusions A time window exclusion is a block of time with a defined start and end date. These time periods are excluded from SLO performance metrics, and you can schedule one-time or recurring time exclusion windows, for example for scheduled maintenance. Note: For period-based SLOs, SLI data within an exclusion window is treated as not breaching. For request-based SLOs, all good and bad requests within the exclusion window are excluded. When the interval of a request-based SLO is entirely excluded, a default attainment metric of 100% is published. You can only define time windows with a start date in the future. Calculate error budgets and attainment for request-based SLOs After you create an SLO, you can retrieve its error budget report. The error budget is the number of requests for which your application can fail to meet the SLO goal while your application still meets the goal.
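The period-based error budget in the figure described above can be reproduced with a short calculation. This is a sketch; the 130 unhealthy periods are the example value from the figure, not a measured number.

```python
# Period-based SLO error budget, using the figure's example values:
# a 30-day interval, 1-minute periods, and a 99% attainment goal.
total_periods = 30 * 24 * 60                 # 43,200 one-minute periods
required_healthy = total_periods * 99 // 100 # periods that must be healthy
total_error_budget = total_periods - required_healthy

unhealthy_so_far = 130                       # example value from the figure
remaining_budget = total_error_budget - unhealthy_so_far

print(required_healthy)     # -> 42768 healthy minutes needed to meet the SLO
print(total_error_budget)   # -> 432 minutes may breach across the interval
print(remaining_budget)     # -> 302 breaching minutes still tolerable
```

Integer arithmetic is used deliberately so the 42,768 figure comes out exact.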
For a request-based SLO, the remaining error budget is dynamic, and can increase or decrease depending on the ratio of good requests to total requests. The following table illustrates the calculations for a request-based SLO with a 5-day interval and an 85% attainment goal. In this example, we assume there was no traffic before Day 1. The SLO fails to meet its goal on Day 10.

Time   | Total requests | Bad requests | Cumulative total requests in last 5 days | Cumulative good requests in last 5 days | Request-based attainment | Total budget requests | Remaining budget requests
Day 1  | 10 | 1 | 10 | 9  | 9/10 = 90%  | 1.5  | 0.5
Day 2  | 5  | 1 | 15 | 13 | 13/15 = 86% | 2.3  | 0.3
Day 3  | 1  | 1 | 16 | 13 | 13/16 = 81% | 2.4  | -0.6
Day 4  | 24 | 0 | 40 | 37 | 37/40 = 92% | 6.0  | 3.0
Day 5  | 20 | 5 | 60 | 52 | 52/60 = 87% | 9.0  | 1.0
Day 6  | 6  | 2 | 56 | 47 | 47/56 = 84% | 8.4  | -0.6
Day 7  | 10 | 3 | 61 | 50 | 50/61 = 82% | 9.2  | -1.8
Day 8  | 15 | 6 | 75 | 59 | 59/75 = 79% | 11.3 | -4.7
Day 9  | 12 | 1 | 63 | 46 | 46/63 = 73% | 9.5  | -7.5
Day 10 |    | 5 | 57 | 40 | 40/57 = 70% | 8.5  | -8.5
Final attainment over the last 5 days: 70%

Calculate burn rate and optionally set burn rate alarms You can use Application Signals to calculate burn rates for your service level objectives. The burn rate is a metric that indicates how quickly the service is consuming its error budget, relative to the attainment goal of the SLO. It is expressed as a multiple of the baseline error rate. The burn rate is calculated relative to the baseline error rate, which depends on the attainment goal. The attainment goal is the percentage of healthy time periods or successful requests that must be achieved to meet the SLO goal. The baseline error rate is (100% - attainment goal percentage); this is the rate that would consume exactly the complete error budget by the end of the SLO time interval. So an SLO with a 99% attainment goal has a baseline error rate of 1%.
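The arithmetic behind the request-based example table above follows directly from the definitions: the total budget is (100% - goal) x cumulative total requests, and the remaining budget is that number minus the cumulative bad requests. A minimal sketch, checked against the Day 1 and Day 5 rows:

```python
# Request-based SLO error budget, per the 85%-attainment example table above.
GOAL = 0.85

def budget_row(cum_total, cum_good):
    """Return (attainment %, total budget requests, remaining budget requests)."""
    cum_bad = cum_total - cum_good
    attainment = round(100 * cum_good / cum_total)
    total_budget = round((1 - GOAL) * cum_total, 1)   # requests allowed to be bad
    remaining = round((1 - GOAL) * cum_total - cum_bad, 1)
    return attainment, total_budget, remaining

print(budget_row(10, 9))    # Day 1 -> (90, 1.5, 0.5)
print(budget_row(60, 52))   # Day 5 -> (87, 9.0, 1.0)
```

Because the budget is a fraction of the cumulative total, it grows and shrinks with traffic, which is why the remaining budget in the table can recover after going negative.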
Monitoring the burn rate tells you how far you are from the baseline error rate. Again taking the example of a 99% attainment goal, the following are true: Burn rate = 1: If the burn rate stays exactly at the baseline error rate the whole time, you meet the SLO goal exactly. Burn rate < 1: If the burn rate is lower than the baseline error rate, you are on track to exceed the SLO goal. Burn rate > 1: If the burn rate is higher than the baseline error rate, you are at risk of failing the SLO goal. When you create burn rates for your SLOs, you can also choose to create CloudWatch alarms at the same time to monitor the burn rates. You can set a threshold for the burn rate, and an alarm can automatically notify you if the burn rate metric breaches the threshold that you set. For example, a burn rate that is approaching its threshold can alert you that the SLO is burning its error budget faster than your team can tolerate, and that your team might need to slow down churn in the application to make sure that long-term performance goals are met. Creating alarms incurs charges. For more information about CloudWatch pricing, see Amazon CloudWatch Pricing. Calculate the burn rate To calculate the burn rate, you must define a look-back window, which is the duration of time over which to measure the error rate. burn rate = error rate over the look-back window / (100% - attainment goal) Note: When there is no data for a burn rate period, Application Signals calculates the burn rate based on attainment. The error rate is calculated as the ratio of the number of bad events to the total number of events during the burn rate window: For period-based SLOs, the error rate is calculated as bad periods divided by total periods.
Total periods represents the overall number of periods during the look-back window. For request-based SLOs, it is a measure of bad requests divided by total requests. The total number of requests is the number of requests during the look-back window. The look-back window must be a multiple of the SLO's time period, and it must be less than the SLO interval. Determine an appropriate threshold for burn rate alarms When you configure a burn rate alarm, you must choose a burn rate value as the alarm threshold. This threshold value depends on the length of the SLO interval and the look-back window, and on which method or mental model your team wants to adopt. There are two main methods available for determining the threshold. Method 1: Define the percentage of the estimated total error budget that your team is willing to burn within the look-back window. If you want to be concerned when X% of the estimated error budget is spent within the last burn-rate look-back hours, the burn rate threshold is as follows: burn rate threshold = X% * SLO interval length / look-back window size For example, 5% of a 30-day (720-hour) error budget spent over one hour requires a burn rate of 5% * 720 / 1 = 36. Therefore, if the burn rate look-back window is 1 hour, we set the burn rate alarm threshold to 36. You can use the CloudWatch console to create burn rate alarms using this method. You specify the number X, and the threshold is determined using the preceding formula. The SLO interval length is determined based on the SLO interval type: For SLOs with rolling intervals, it is the length of the interval in hours. For SLOs with calendar-based intervals: If the unit is days or weeks, it is the length of the interval in hours. If the unit is a month, we take 30 days as the approximate length and convert it to hours.
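Method 1's formula can be checked with a few lines of arithmetic. This sketch reproduces the 5%-of-a-30-day-budget example from the text:

```python
# Burn rate alarm threshold, Method 1: alert when X% of the error budget
# would be spent within the look-back window.
def burn_rate_threshold(pct_of_budget, slo_interval_hours, lookback_hours):
    return pct_of_budget * slo_interval_hours / (100 * lookback_hours)

# 5% of a 30-day (720-hour) error budget spent within a 1-hour look-back window:
print(burn_rate_threshold(5, 30 * 24, 1))   # -> 36.0
```

A burn rate of 36 means the service is consuming its budget 36 times faster than the baseline error rate that would exactly exhaust the budget at the end of the interval.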
Method 2: Define the time until budget exhaustion for the next interval. For the alarm to notify you when the current error rate in the latest look-back window indicates that the time until budget exhaustion is less than X hours away (assuming the current remaining budget is 100%), you can use the following formula to determine the burn rate threshold: burn rate threshold = SLO interval length / X Note that the time until budget exhaustion (X) in the preceding formula assumes that the current total remaining budget is 100%, and therefore does not account for the amount of budget that has already been burned in this interval. You can also think of it as the time until budget exhaustion for the next interval. Guidance for burn rate alarms As an example, let's take an SLO with a 28-day rolling interval. Setting burn rate alarms for this SLO involves two steps: Set the burn rate and its look-back window. Create a CloudWatch alarm that monitors the burn rate. To start, determine how much of the total error budget you are willing to let the service burn within a given amount of time. In other words, define your goal using this sentence: "I want to be alerted when X% of my total error budget is consumed within M minutes." For example, you might want to set a goal of being alerted when 2% of the total error budget is consumed within 60 minutes. To set the burn rate, you first determine the look-back window. The look-back window is M, which in this example is 60 minutes. Next, you create the CloudWatch alarm. When you do, you must specify a threshold for the burn rate. If the burn rate exceeds this threshold, the alarm notifies you.
To find the threshold, use the following formula: burn rate threshold = X% * SLO interval length / look-back window size In this example, X is 2 because we want to be alerted if 2% of the error budget is consumed within 60 minutes. The interval length is 40,320 minutes (28 days), and 60 minutes is the look-back window, so the answer is: burn rate threshold = 2% * 40,320 / 60 = 13.44. In this example, you would set 13.44 as the alarm threshold. Multiple alarms with different windows By setting alarms over multiple look-back windows, you can quickly detect sharp increases in error rates with a short window, and at the same time detect smaller error rate increases that would eventually exhaust the error budget if left unnoticed. Additionally, you can set a composite alarm on a burn rate with a long window and a burn rate with a short window (1/12 of the long window), and be notified only when both burn rates breach their thresholds. This way, you can make sure that you get alerted only for situations that are still ongoing. For more information about composite alarms in CloudWatch, see Combining alarms. Note: You can set a metric alarm on a burn rate when you create the burn rate. To set a composite alarm on multiple burn rate alarms, you must use the instructions in Create a composite alarm. One recommended composite alarm strategy from the Google Site Reliability Engineering workbook includes three composite alarms: One composite alarm that watches a pair of alarms, one with a one-hour window and one with a five-minute window. A second composite alarm that watches a pair of alarms, one with a six-hour window and one with a 30-minute window.
A third composite alarm that watches a pair of alarms, one with a three-day window and one with a six-hour window. The steps to create this setup are as follows: Create five burn rates, with windows of five minutes, 30 minutes, one hour, six hours, and three days. Create the following three pairs of CloudWatch alarms. Each pair includes one long window and one short window that is 1/12 of the long window, and the thresholds are determined using the steps in Determine an appropriate threshold for burn rate alarms. When you calculate the threshold for each alarm in a pair, use the pair's longer look-back window in your calculation. An alarm on the 1-hour and 5-minute burn rates (threshold determined by 2% of the total budget) An alarm on the 6-hour and 30-minute burn rates (threshold determined by 5% of the total budget) An alarm on the 3-day and 6-hour burn rates (threshold determined by 10% of the total budget) For each of these pairs, create a composite alarm to get alerted when both of the individual alarms go into ALARM state. For more information about creating composite alarms, see Create a composite alarm. For example, if your alarms for the first pair (the one-hour window and the five-minute window) are named OneHourBurnRate and FiveMinuteBurnRate, the CloudWatch composite alarm rule is ALARM(OneHourBurnRate) AND ALARM(FiveMinuteBurnRate) The preceding strategy is only possible for SLOs with an interval length of at least three hours. For SLOs with shorter interval lengths, we recommend that you start with one pair of burn rate alarms where one alarm has a look-back window that is 1/12 of the other alarm's look-back window. Then set a composite alarm on this pair. Create an SLO We recommend that you set Latency and Availability SLOs on your important applications.
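The thresholds for the three composite alarm pairs described above can be derived with Method 1, using each pair's longer window. This sketch assumes the same 28-day (672-hour) interval used in the earlier example:

```python
# Burn rate thresholds for the three recommended composite alarm pairs,
# for an SLO with a 28-day (672-hour) interval. Each threshold uses the
# pair's longer look-back window, per Method 1.
interval_hours = 28 * 24   # 672

pairs = [
    (2, 1),     # 2% of budget, 1-hour long window (paired with 5 minutes)
    (5, 6),     # 5% of budget, 6-hour long window (paired with 30 minutes)
    (10, 72),   # 10% of budget, 3-day long window (paired with 6 hours)
]

for pct, long_window_hours in pairs:
    threshold = pct * interval_hours / (100 * long_window_hours)
    print(pct, long_window_hours, round(threshold, 2))
# -> thresholds of 13.44, 5.6, and 0.93
```

Note that the same threshold is applied to both alarms in a pair; only the look-back windows differ.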
These metrics collected by Application Signals align with common business goals. You can also set SLOs on any CloudWatch metric or metric math expression that produces a single time series. The first time that you create an SLO in your account, CloudWatch automatically creates the AWSServiceRoleForCloudWatchApplicationSignals service-linked role in your account, if it doesn't already exist. This service-linked role allows CloudWatch to collect CloudWatch Logs data, X-Ray trace data, CloudWatch metrics data, and tagging data from the applications in your account. For more information about CloudWatch service-linked roles, see Using service-linked roles for CloudWatch. When you create an SLO, you specify whether it is a period-based SLO or a request-based SLO. Each type of SLO has a different way of evaluating your application's performance against its attainment goal. A period-based SLO uses defined time periods within a defined total time interval. For each time period, Application Signals determines whether the application met its goal. The attainment rate is calculated as number of good periods/number of total periods. For example, for a period-based SLO, meeting a 99.9% attainment goal means that within your interval, your application must meet its performance goal during at least 99.9% of the time periods. A request-based SLO doesn't use predefined time periods. Instead, it measures number of good requests/number of total requests during the interval. At any time, you can find the ratio of good requests to total requests for the interval up to a timestamp that you specify, and measure that ratio against the goal set in your SLO. Topics Create a period-based SLO Create a request-based SLO Create a period-based SLO Use the following procedure to create a period-based SLO.
To create a period-based SLO Open the CloudWatch console at https://console.aws.amazon.com/cloudwatch/ . In the navigation pane, choose Service Level Objectives (SLO). Choose Create SLO. Enter a name for the SLO. Including the name of the service or operation, along with a relevant keyword such as latency or availability, will help you quickly identify what the SLO's status indicates during triage. For Set Service Level Indicator (SLI), do one of the following: To set the SLO on one of the standard application metrics Latency or Availability: Choose Service Operation. Select the account that this SLO will monitor. Select the service that this SLO will monitor. Select the operation that this SLO will monitor. For Select calculation method, choose Periods. The Select Service and Select operation drop-downs are populated with the services and operations that have been active within the past 24 hours. Select Availability or Latency, and then set the threshold. To set the SLO on any CloudWatch metric or CloudWatch metric math expression: Choose CloudWatch Metric. Choose Select CloudWatch metric. The Select metric screen appears. Use the Browse or Query tabs to find the metric that you want, or to create a metric math expression. After you have selected the metric that you want, choose the Graphed metrics tab and select the Statistic and Period to use for the SLO. Then choose Select metric. For more information about these metrics, see Graphing a metric and Add a math expression to a CloudWatch graph. For Select calculation method, choose Periods. For Set condition, select the comparison operator and threshold for the SLO to use as its success indicator. To set the SLO on a service dependency, on one of the standard application metrics Latency or Availability: Choose Service Dependency. Under Select service, select the service that this SLO will monitor.
Based on the selected service, under Select operation, you can select one specific operation or choose All operations to use metrics from all of this service's operations that call the dependency. Under Select dependency, you can search for and select the required dependency whose reliability you want to measure. After you select the dependency, you can see the updated graph and historical data based on the dependency. If you chose Service Operation or Service Dependency in step 5, set the period length for this SLO. Set the interval and attainment goal for the SLO. For more information about intervals and attainment goals and how they work together, see SLO concepts. (Optional) For Set SLO burn rate, do the following: Set the length (in minutes) of the look-back window for the burn rate. For information about how to choose this length, see Guidance for burn rate alarms. To create more burn rates for this SLO, choose Add more burn rates and set the look-back window for each additional burn rate. (Optional) Create burn rate alarms by doing the following: Under Set burn rate alarms, select the check box for each burn rate that you want to create an alarm for. For each of these alarms, do the following: Specify the Amazon SNS topic to use for notifications when the alarm goes into ALARM state. Set the burn rate threshold, or specify the percentage of the estimated total budget burned within the last look-back window that you want to alarm on. If you specify a percentage of the estimated total budget burned, the burn rate threshold is calculated for you and used in the alarm.
To decide what threshold to set, or to understand how this option is used to calculate the burn rate threshold, see Determine an appropriate threshold for burn rate alarms. (Optional) Set one or more CloudWatch alarms or a warning threshold for the SLO. CloudWatch alarms can use Amazon SNS to proactively notify you if the application is unhealthy based on its SLI performance. To create an alarm, select one of the alarm check boxes and enter or create an Amazon SNS topic to use for notifications when the alarm goes into ALARM state. For more information about CloudWatch alarms, see Using Amazon CloudWatch alarms. Creating alarms incurs charges. For more information about CloudWatch pricing, see Amazon CloudWatch Pricing. If you set a warning threshold, it appears in Application Signals screens to help you identify SLOs that are at risk of not being met, even if they are currently healthy. To set a warning threshold, enter the threshold value in Warning threshold. When the SLO's error budget is lower than the warning threshold, the SLO is flagged with Warning in several Application Signals screens. The warning threshold also appears on error budget graphs. You can also create SLO warning alarms that are based on the warning threshold. (Optional) For Set SLO time window exclusions, do the following: Under Exclude time windows, set the time windows to exclude from SLO performance metrics. You can choose Set time windows and enter the start window for each hour or month, or you can choose Set time windows with CRON and enter a CRON expression. Under Repeat, set whether this time window exclusion recurs. (Optional) Under Add reason, you can choose to enter the reason for the time window exclusion.
For example, scheduled maintenance. Choose Add time window to add up to 10 time exclusion windows. To add tags to this SLO, choose the Tags tab and then choose Add new tag. Tags can help you manage, identify, organize, and filter resources. For more information about tagging, see Tagging your AWS resources. Note: If the application related to this SLO is registered in AWS Service Catalog AppRegistry, you can use the awsApplication tag to associate this SLO with that AppRegistry application. For more information, see What is AppRegistry? Choose Create SLO. If you also chose to create one or more alarms, the name of this button changes to reflect that. Create a request-based SLO Use the following procedure to create a request-based SLO. To create a request-based SLO Open the CloudWatch console at https://console.aws.amazon.com/cloudwatch/ . In the navigation pane, choose Service Level Objectives (SLO). Choose Create SLO. Enter a name for the SLO. Including the name of the service or operation, along with a relevant keyword such as latency or availability, will help you quickly identify what the SLO's status indicates during triage. For Set Service Level Indicator (SLI), do one of the following: To set the SLO on one of the standard application metrics Latency or Availability: Choose Service Operation. Select the service that this SLO will monitor. Select the operation that this SLO will monitor. For Select calculation method, choose Requests. The Select Service and Select operation drop-downs are populated with the services and operations that have been active within the past 24 hours. Select Availability or Latency. If you select Latency, set the threshold. To set the SLO on any CloudWatch metric or CloudWatch metric math expression: Choose CloudWatch Metric.
For Specify target requests, do the following: Choose whether you want to measure Good Requests or Bad Requests. Choose Select CloudWatch metric. This metric will be the numerator in the ratio of target requests to total requests. If you are using a latency metric, use the Trimmed count (TC) statistic. If the threshold is 9 ms and you are using the less-than (<) comparison operator, then use TC(:threshold - 1). For more information about TC, see Syntax. The Select metric screen appears. Use the Browse or Query tabs to find the metric that you want, or to create a metric math expression. For Specify total requests, select the CloudWatch metric that you want to use as its source. This metric will be the denominator in the ratio of target requests to total requests. The Select metric screen appears. Use the Browse or Query tabs to find the metric that you want, or to create a metric math expression. After you have selected the metric that you want, choose the Graphed metrics tab and select the Statistic and Period to use for the SLO. Then choose Select metric. If you are using a latency metric that emits one data point per request, use the sample count statistic to count the total number of requests. For more information about these metrics, see Graphing a metric and Add a math expression to a CloudWatch graph. To set the SLO on a service dependency, on one of the standard application metrics Latency or Availability: Choose Service Dependency. Under Select service, select the service that this SLO will monitor. Based on the selected service, under Select operation, you can select one specific operation or choose All operations to use metrics from all of this service's operations that call the dependency.
Under Select dependency, you can search for and select the dependency whose reliability you want to measure. After you select a dependency, you can view the updated graph and historical data based on that dependency.

Set the interval and the attainment goal for the SLO. For more information about intervals and attainment goals, and how they work together, see SLO concepts.

(Optional) For Set SLO burn rate, do the following: Set the length (in minutes) of the look-back window for the burn rate. For information about choosing this length, see Guidance for burn rate alarms. To create more burn rates for this SLO, choose Add more burn rates and set the look-back windows for the additional burn rates.

(Optional) Create burn rate alarms by doing the following: Under Set burn rate alarms, select the check box for each burn rate that you want to create an alarm for. For each of these alarms, do the following: Specify the Amazon SNS topic to use for notifications when the alarm goes into ALARM state. Set the burn rate threshold, or specify the percentage of the estimated total budget burned in the latest look-back window that you want to stay below. If you specify a percentage of the estimated total budget burned, the burn rate threshold is calculated for you and used in the alarm. To decide what threshold to set, or to understand how this option is used to calculate the burn rate threshold, see Determine an appropriate threshold for burn rate alarms.

(Optional) Set one or more CloudWatch alarms or a warning threshold for the SLO. CloudWatch alarms can use Amazon SNS to proactively notify you if the application is unhealthy based on its SLI performance.
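When a burn rate alarm is defined by a percentage of budget burned in the look-back window, the corresponding threshold follows the standard burn-rate relationship (a burn rate of 1 consumes exactly the whole error budget over the full SLO interval). A minimal sketch of that calculation; the function name and parameters are illustrative, not part of any AWS API:

```python
def burn_rate_threshold(budget_pct_burned, interval_hours, lookback_hours):
    """Burn rate that consumes `budget_pct_burned` percent of the error
    budget within `lookback_hours`, for an SLO interval of
    `interval_hours`."""
    return (budget_pct_burned / 100.0) * (interval_hours / lookback_hours)

# Burning 2% of a 30-day (720-hour) budget within a 1-hour look-back
# window corresponds to a burn rate threshold of 14.4.
print(burn_rate_threshold(2, 720, 1))
```

Sorting candidate thresholds this way makes the trade-off explicit: shorter look-back windows demand a higher burn rate before alarming, so they catch only fast, severe budget consumption.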
To create an alarm, select one of the alarm check boxes and enter or create the Amazon SNS topic to use for notifications when the alarm goes into ALARM state. For more information about CloudWatch alarms, see Using Amazon CloudWatch alarms. Creating an alarm incurs charges. For more information about CloudWatch pricing, see Amazon CloudWatch Pricing.

If you set a warning threshold, it appears in Application Signals screens to help you identify SLOs that are in danger of being unmet, even if they are currently healthy. To set a warning threshold, enter the threshold value in Warning threshold. When the SLO's error budget is lower than the warning threshold, the SLO is flagged with Warning in some Application Signals screens. The warning threshold also appears on error budget graphs. You can also create an SLO warning alarm that is based on the warning threshold.

(Optional) For Set SLO time window exclusions, do the following: Under Exclude time windows, set the time windows to exclude from the SLO's performance metrics. You can choose Set time window and enter a Start window for each hour or month, or you can choose Set time window with CRON and enter a CRON expression. Under Repeat, set whether this time window exclusion recurs. (Optional) Under Add reason, you can choose to enter a reason for the time window exclusion. For example, scheduled maintenance. Choose Add time window to add up to 10 exclusion time windows.

To add tags to this SLO, choose the Tags tab and then choose Add new tag. Tags can help you manage, identify, organize, and filter resources. For more information about tagging, see Tagging your AWS resources.
Note: If the application associated with this SLO is registered in AWS Service Catalog AppRegistry, you can use the awsApplication tag to associate this SLO with that application. For more information, see What is AppRegistry?

Choose Create SLO. If you also chose to create one or more alarms, the button name changes to reflect this.

View and triage SLO status

You can quickly see the health of your SLOs by using the Service Level Objectives or Services options in the CloudWatch console. The Services view provides an at-a-glance view of the ratio of unhealthy services, calculated based on the SLOs that you have set. For more information about using the Services option, see Monitor the operational health of your applications with Application Signals.

The Service Level Objectives view provides a macro view of your organization. You can see which SLOs are met and unmet overall. This gives you an idea of how many of your services and operations are performing to your expectations over longer periods of time, according to the SLIs that you chose.

To view all of your SLOs in the Service Level Objectives view

Open the CloudWatch console at https://console.aws.amazon.com/cloudwatch/ . In the navigation pane, choose Service Level Objectives (SLO). The list of Service Level Objectives (SLO) is displayed. You can quickly see the current status of your SLOs in the SLI status column. To sort the SLOs so that the unhealthy ones are at the top of the list, choose the SLI status column until the unhealthy SLOs are all on top.

The SLO table has the following default columns. You can customize which columns are displayed by choosing the gear icon above the list. For more information about SLIs, attainment goals, and intervals, see SLO concepts.

The name of the SLO.
The Goal column displays the percentage of periods during each interval that must meet the SLI threshold for the SLO goal to be met. It also displays the length of the interval for that SLO.

SLI status displays whether the application's current operational status is healthy or unhealthy. If there has been any unhealthy period for the SLO during the currently selected time range, the SLI status displays Unhealthy.

If this SLO is configured to monitor a dependency, the Dependency and Remote Operation columns display details about that dependency relationship.

Latest attainment is the attainment rate achieved at the end of the selected time range. Sort by this column to see the SLOs that are most at risk of being unmet.

Attainment delta is the difference in attainment rate between the beginning and the end of the selected time range. A negative delta means that the metric is trending downward. Sort by this column to see the recent trends of your SLOs.

Latest error budget (%) is the percentage of the total time in the period that can contain unhealthy periods and still have the SLO be met. If you set this to 5%, and the SLI is unhealthy during 5% or less of the remaining periods in the interval, the SLO is still met.

Error budget delta is the difference in error budget between the beginning and the end of the selected time range. A negative delta means that the metric is trending toward failure.

Latest error budget (time) is the actual amount of time in the interval that can be unhealthy and still have the SLO be met. For example, if this is 14 minutes, then as long as the SLI is unhealthy for less than 14 minutes during the remainder of the interval, the SLO will still be met.

Latest error budget (requests) is the number of requests in the interval that can be unhealthy and still have the SLO be met.
For request-based SLOs, this value is dynamic and can fluctuate, because the cumulative number of requests changes over time.

The Service, Operation, and Type columns display information about which service and operation this SLO is set for.

To see attainment and error graphs for an SLO, choose the radio button next to the SLO name. The graphs at the top of the page display the SLO attainment and Error budget status. A graph of the SLI metric associated with this SLO is also displayed.

To further triage an SLO that is not meeting its goal, choose the name of the service, operation, or dependency associated with that SLO. You are taken to a details page where you can triage further. For more information, see View service activity and operational health with the service detail page. To change the time range of the graphs and tables on the page, select a new time range near the top of the screen.

Edit an existing SLO

Follow these steps to edit an existing SLO. When you edit an SLO, you can change only the threshold, interval, attainment goal, and tags. To change other aspects such as the service, operation, or metric, create a new SLO instead of editing an existing one.

Changing any part of an SLO's core configuration, such as the period or threshold, invalidates all previous data points and assessments of attainment and health. It effectively deletes and re-creates the SLO.

Note: If you edit an SLO, the alarms associated with that SLO are not automatically updated. You might need to update those alarms to keep them in sync with the SLO.

To edit an existing SLO

Open the CloudWatch console at https://console.aws.amazon.com/cloudwatch/ . In the navigation pane, choose Service Level Objectives (SLO). Choose the radio button next to the SLO that you want to edit, and choose Actions, Edit SLO.
Make your changes, and then choose Save changes.

Delete an SLO

Follow these steps to delete an existing SLO.

Note: When you delete an SLO, the alarms associated with that SLO are not automatically deleted. You must delete them yourself. For more information, see Managing alarms.

To delete an SLO

Open the CloudWatch console at https://console.aws.amazon.com/cloudwatch/ . In the navigation pane, choose Service Level Objectives (SLO). Choose the radio button next to the SLO that you want to delete, and choose Actions, Delete SLO. Choose Confirm. | 2026-01-13T09:29:25 |
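The goal, interval, and error-budget columns described in the SLO table above are related by simple arithmetic: the error budget is the share of the interval that may be unhealthy while the SLO is still met. A rough sketch of the time-based case; the function name and parameters are illustrative, not part of any AWS API:

```python
def error_budget_minutes(goal_pct, interval_days):
    """Error budget, in minutes, for a period-based SLO: the amount of
    the interval that can be unhealthy and still have the SLO be met."""
    total_minutes = interval_days * 24 * 60
    return total_minutes * (1 - goal_pct / 100.0)

# A 99.9% goal over a 30-day interval leaves roughly a 43.2-minute budget.
print(error_budget_minutes(99.9, 30))
```

For request-based SLOs the same ratio applies to the cumulative request count instead of time, which is why the Latest error budget (requests) column fluctuates as traffic changes.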
https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/MonitoringLogData.html | Creating metrics from log events using filters - Amazon CloudWatch Logs

You can search and filter the log data coming into CloudWatch Logs by creating one or more metric filters. Metric filters define the terms and patterns to look for in log data as it is sent to CloudWatch Logs. CloudWatch Logs uses these metric filters to turn log data into numerical CloudWatch metrics that you can graph or set an alarm on. When you create a metric from a log filter, you can also choose to assign dimensions and a unit to the metric. If you specify a unit, be sure to specify the correct one when you create the filter. Changing the unit for the filter later will have no effect.

If you have configured AWS Organizations and are working with member accounts, you can use log centralization to collect log data from source accounts into a central monitoring account. When working with centralized log groups, you can use these system field dimensions when creating metric filters:

@aws.account - This dimension represents the AWS account ID from which the log event originated.
@aws.region - This dimension represents the AWS Region where the log event was generated.

These dimensions help identify the source of log data, allowing for more granular filtering and analysis of metrics derived from centralized logs. For more information, see Cross-account cross-Region log centralization.

If a log group with a subscription uses log transformation, the filter pattern is applied to the transformed versions of the log events. For more information, see Transform logs during ingestion.

Note: Metric filters are supported only for log groups in the Standard log class. For more information about log classes, see Log classes.
You can use any type of CloudWatch statistic, including percentile statistics, when viewing these metrics or setting alarms.

Note: Percentile statistics are supported for a metric only if none of the metric's values are negative. If you set up your metric filter so that it can report negative numbers, percentile statistics will not be available for that metric when it has negative values. For more information, see Percentiles.

Filters do not retroactively filter data. Filters only publish the metric data points for events that happen after the filter was created. When testing a filter pattern, the Filter results preview shows up to the first 50 matching log lines for validation purposes. If the timestamp on the filtered results is earlier than the metric creation time, no logs are displayed.

Contents: Concepts; Filter pattern syntax for metric filters; Creating metric filters; Listing metric filters; Deleting a metric filter

Concepts

Each metric filter is made up of the following key elements:

default value - The value reported to the metric during a period when logs are ingested but no matching logs are found. By setting this to 0, you ensure that data is reported during every such period, preventing "spotty" metrics with periods of no matching data. If no logs at all are ingested during a one-minute period, then no value is reported. If you assign dimensions to a metric created by a metric filter, you can't assign a default value for that metric.

dimensions - Dimensions are the key-value pairs that further define a metric. You can assign dimensions to the metric created from a metric filter. Because dimensions are part of the unique identifier for a metric, whenever a unique name/value pair is extracted from your logs, you are creating a new variation of that metric.

filter pattern - A symbolic description of how CloudWatch Logs should interpret the data in each log event. For example, a log entry may contain timestamps, IP addresses, strings, and so on.
You use the pattern to specify what to look for in the log file.

metric name - The name of the CloudWatch metric to which the monitored log information should be published. For example, you might publish to a metric named ErrorCount.

metric namespace - The destination namespace of the new CloudWatch metric.

metric value - The numerical value to publish to the metric each time a matching log is found. For example, if you're counting the occurrences of a particular term like "Error", the value is "1" for each occurrence. If you're counting the bytes transferred, you can increment by the actual number of bytes found in the log event. | 2026-01-13T09:29:25 |
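The interplay of metric value and default value described above can be sketched as a small simulation: a period with ingested logs but no matches reports the default value, a period with matches reports the metric value per match, and a period with no ingested logs reports nothing. This is a pure-Python illustration of the documented semantics, not CloudWatch code; all names are hypothetical:

```python
def emit_datapoints(periods, matcher, metric_value=1, default_value=0):
    """Simulate how a metric filter turns log events into data points.

    `periods` is a list of lists of log lines (one inner list per
    aggregation period).  Returns one value per period, or None for a
    period in which no logs were ingested at all.
    """
    points = []
    for logs in periods:
        if not logs:
            points.append(None)  # no logs ingested: no value is reported
            continue
        matches = sum(1 for line in logs if matcher(line))
        points.append(matches * metric_value if matches else default_value)
    return points

periods = [
    ["INFO ok", "ERROR boom"],  # one match  -> metric value 1
    ["INFO ok"],                # no match   -> default value 0
    [],                         # no logs    -> nothing reported
]
print(emit_datapoints(periods, lambda line: "ERROR" in line))
```

Setting `default_value=0` is what prevents the "spotty" metric the documentation warns about: every period that saw traffic produces a data point, so gaps in the graph mean no ingestion rather than no errors.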
https://docs.aws.amazon.com/de_de/AmazonCloudWatch/latest/monitoring/create-cloudwatch-agent-configuration-file.html#CloudWatch-Agent-multiple-config-files | Create the CloudWatch agent configuration file - Amazon CloudWatch

Before you run the CloudWatch agent on any server, you must create one or more CloudWatch agent configuration files. The agent configuration file is a JSON file that specifies the metrics, logs, and traces that the agent is to collect, including custom metrics. You can create it by using the wizard, or create it yourself from scratch. You can also create the configuration file with the wizard and then modify it manually. If you create or edit the file manually, the process is more complicated, but you have more control over the collected metrics and can specify metrics that are not available through the wizard.

Any time you change the agent configuration file, you must restart the agent for the changes to take effect. To restart the agent, follow the instructions in (Optional) Modify the common configuration and named profile for the CloudWatch agent.

You can save the completed configuration file as a JSON file and use it later when you install the agent on your servers.
Alternatively, you can store the file in Systems Manager Parameter Store if you plan to use Systems Manager to install the agent on your servers.

The CloudWatch agent supports using multiple configuration files. For more information, see Create multiple CloudWatch agent configuration files.

Metrics, logs, and traces collected by the CloudWatch agent incur charges. For more information about pricing, see Amazon CloudWatch Pricing.

Contents: Create the CloudWatch agent configuration file with the wizard; Create multiple CloudWatch agent configuration files; Manually create or edit the CloudWatch agent configuration file

Create multiple CloudWatch agent configuration files

On both Linux and Windows servers, you can set up the CloudWatch agent to use multiple configuration files. For example, you can use a common configuration file that collects a set of metrics, logs, and traces that you always want to collect from all servers in your infrastructure. You can then use additional configuration files that collect metrics from certain applications or in certain situations.

To set this up, first create the configuration files that you want to use. Any configuration files that will be used together on the same server must have different file names. You can store the configuration files on servers or in Parameter Store.

Start the CloudWatch agent using the fetch-config option and specify the first configuration file. To append the second configuration file to the running agent, use the same command but with the append-config option.
All metrics, logs, and traces listed in either configuration file are collected. The following example commands illustrate this scenario with configurations stored as files. The first line starts the agent using the infrastructure.json configuration file, and the second line appends the app.json configuration file.

The following example commands are for Linux.

/opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-ctl -a fetch-config -m ec2 -s -c file:/tmp/infrastructure.json
/opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-ctl -a append-config -m ec2 -s -c file:/tmp/app.json

The following example commands are for Windows Server.

& "C:\Program Files\Amazon\AmazonCloudWatchAgent\amazon-cloudwatch-agent-ctl.ps1" -a fetch-config -m ec2 -s -c file:"C:\Program Files\Amazon\AmazonCloudWatchAgent\infrastructure.json"
& "C:\Program Files\Amazon\AmazonCloudWatchAgent\amazon-cloudwatch-agent-ctl.ps1" -a append-config -m ec2 -s -c file:"C:\Program Files\Amazon\AmazonCloudWatchAgent\app.json"

The following example configuration files illustrate one use of this feature. The first configuration file is used for all servers in the infrastructure, and the second collects logs only from a certain application and is appended to servers that run that application.
infrastructure.json

{
  "metrics": {
    "metrics_collected": {
      "cpu": {
        "resources": [ "*" ],
        "measurement": [ "usage_active" ],
        "totalcpu": true
      },
      "mem": {
        "measurement": [ "used_percent" ]
      }
    }
  },
  "logs": {
    "logs_collected": {
      "files": {
        "collect_list": [
          {
            "file_path": "/opt/aws/amazon-cloudwatch-agent/logs/amazon-cloudwatch-agent.log",
            "log_group_name": "amazon-cloudwatch-agent.log"
          },
          {
            "file_path": "/var/log/messages",
            "log_group_name": "/var/log/messages"
          }
        ]
      }
    }
  }
}

app.json

{
  "logs": {
    "logs_collected": {
      "files": {
        "collect_list": [
          {
            "file_path": "/app/app.log*",
            "log_group_name": "/app/app.log"
          }
        ]
      }
    }
  }
}

The file names of all configuration files appended to the configuration must differ from each other and from the name of the initial configuration file. If you use append-config with a configuration file that has the same file name as a configuration file that the agent is already using, the append command overwrites the information from the first configuration file instead of appending to it. This is true even if the two configuration files with the same file name are in different file paths.

The preceding example shows the use of two configuration files, but there is no limit on the number of configuration files that you can append to the agent configuration. You can also mix the use of configuration files stored on servers and configurations stored in Parameter Store.
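The append semantics described above (entries from all files are collected, but appending a file whose name matches an already-used file replaces that file's contents) can be sketched as a small simulation. This is an illustration of the documented behavior, not the agent's actual implementation; all names are hypothetical:

```python
import os


def merge_agent_configs(configs):
    """Simulate config merging: `configs` is a list of
    (path, collect_list) pairs in fetch-config/append-config order.
    Appending a file whose basename matches an earlier one replaces
    that earlier file's entries instead of adding to them."""
    merged = {}  # basename -> collect_list (last write wins)
    for path, collect_list in configs:
        merged[os.path.basename(path)] = collect_list
    # Everything listed in any surviving file is collected.
    return [item for lst in merged.values() for item in lst]


# Different file names: entries from both files are collected.
print(merge_agent_configs([
    ("/tmp/infrastructure.json", ["/var/log/messages"]),
    ("/tmp/app.json", ["/app/app.log*"]),
]))
# Same basename in a different directory: the append overwrites the
# earlier file's entries rather than appending.
print(merge_agent_configs([
    ("/tmp/infrastructure.json", ["/var/log/messages"]),
    ("/etc/infrastructure.json", ["/app/app.log*"]),
]))
```

The second call shows why the documentation warns about identical file names in different paths: only the basename distinguishes configurations, so reuse silently drops the earlier file's collection list.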
| 2026-01-13T09:29:25 |
https://docs.aws.amazon.com/de_de/AmazonCloudWatch/latest/monitoring/create-cloudwatch-agent-configuration-file.html#CloudWatch-Agent-multiple-config-files | Erstellen Sie die CloudWatch Agenten-Konfigurationsdatei - Amazon CloudWatch Erstellen Sie die CloudWatch Agenten-Konfigurationsdatei - Amazon CloudWatch Dokumentation Amazon CloudWatch Benutzer-Leitfaden Erstellen mehrerer CloudWatch Agenten-Konfigurationsdateien Die vorliegende Übersetzung wurde maschinell erstellt. Im Falle eines Konflikts oder eines Widerspruchs zwischen dieser übersetzten Fassung und der englischen Fassung (einschließlich infolge von Verzögerungen bei der Übersetzung) ist die englische Fassung maßgeblich. Erstellen Sie die CloudWatch Agenten-Konfigurationsdatei Bevor Sie den CloudWatch Agenten auf einem beliebigen Server ausführen, müssen Sie eine oder mehrere CloudWatch Agenten-Konfigurationsdateien erstellen. Die Konfigurationsdatei des Agenten ist eine JSON-Datei, in der die Metriken, Protokolle und Ablaufverfolgungen angegeben sind, die der Agent erfassen soll, einschließlich benutzerdefinierter Metriken. Sie können sie mithilfe des Assistenten oder selbst von Grund auf erstellen. Sie können die Konfigurationsdatei auch mit dem Assistenten erstellen und dann manuell anpassen. Wenn Sie die Datei manuell erstellen oder bearbeiten, ist der Prozess komplizierter. Sie haben jedoch mehr Kontrolle über die erfassten Metriken und können Metriken angeben, die im Assistenten nicht verfügbar sind. Bei jeder Änderung der Agent-Konfigurationsdatei müssen Sie den Agent neu starten, damit die Änderungen wirksam werden. Um den Agent neu zu starten, befolgen Sie die Anweisungen in (Optional) Ändern Sie die allgemeine Konfiguration und das benannte Profil für den CloudWatch Agenten . Sie können die erstellte Konfigurationsdatei als JSON-Datei speichern und später für die Installation des Agenten auf Ihren Servern verwenden. 
Alternativ können Sie die Datei in Systems Manager Parameter Store speichern, wenn Sie für die Agenteninstallation auf den Servern Systems Manager verwenden möchten. Der CloudWatch Agent unterstützt die Verwendung mehrerer Konfigurationsdateien. Weitere Informationen finden Sie unter Erstellen mehrerer CloudWatch Agenten-Konfigurationsdateien . Für die vom CloudWatch Agenten gesammelten Metriken, Protokolle und Traces fallen Gebühren an. Weitere Informationen zur Preisgestaltung finden Sie unter CloudWatchAmazon-Preise . Inhalt Erstellen Sie die CloudWatch Agenten-Konfigurationsdatei mit dem Assistenten Erstellen mehrerer CloudWatch Agenten-Konfigurationsdateien Erstellen oder bearbeiten Sie die CloudWatch Agenten-Konfigurationsdatei manuell Erstellen mehrerer CloudWatch Agenten-Konfigurationsdateien Sowohl auf Linux- als auch auf Windows-Servern können Sie den CloudWatch Agenten so einrichten, dass er mehrere Konfigurationsdateien verwendet. Sie können zum Beispiel eine gemeinsame Konfigurationsdatei verwenden, die eine Reihe von Metriken, Protokollen und Ablaufverfolgungen sammelt, die Sie immer von allen Servern in Ihrer Infrastruktur sammeln möchten. Anschließend können Sie zusätzliche Konfigurationsdateien verwenden, die Metriken aus bestimmten Anwendungen oder in bestimmten Situationen erfassen. Um dies einzurichten, erstellen Sie zunächst die Konfigurationsdateien, die Sie verwenden möchten. Alle Konfigurationsdateien, die gemeinsam auf demselben Server benutzt werden, müssen unterschiedliche Dateinamen haben. Sie können die Konfigurationsdateien auf Servern oder in Parameter Store speichern. Starten Sie den CloudWatch Agenten mit der fetch-config Option und geben Sie die erste Konfigurationsdatei an. Um die zweite Konfigurationsdatei an den ausgeführten Agent anzufügen, verwenden Sie denselben Befehl, aber mit der append-config -Option. 
Alle Metriken, Protokolle und Ablaufverfolgungen, die in einer der beiden Konfigurationsdateien aufgeführt sind, werden gesammelt. Die folgenden Beispielbefehle veranschaulichen dieses Szenario mit Konfigurationen, die als Dateien gespeichert sind. Die erste Zeile startet den Agent mithilfe der infrastructure.json -Konfigurationsdatei und die zweite Zeile fügt die app.json -Konfigurationsdatei an. Die folgenden Beispielbefehle sind für Linux. /opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-ctl -a fetch-config -m ec2 -s -c file:/tmp/infrastructure.json /opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-ctl -a append-config -m ec2 -s -c file:/tmp/app.json Die folgenden Beispielbefehle sind für Windows Server. & "C:\Program Files\Amazon\AmazonCloudWatchAgent\amazon-cloudwatch-agent-ctl.ps1" -a fetch-config -m ec2 -s -c file:"C:\Program Files\Amazon\AmazonCloudWatchAgent\infrastructure.json" & "C:\Program Files\Amazon\AmazonCloudWatchAgent\amazon-cloudwatch-agent-ctl.ps1" -a append-config -m ec2 -s -c file:"C:\Program Files\Amazon\AmazonCloudWatchAgent\app.json" Die folgenden Beispiel-Konfigurationsdateien veranschaulichen eine Nutzung für dieses Feature. Die erste Konfigurationsdatei wird für alle Server in der Infrastruktur verwendet, und die zweite erfasst nur Protokolle von einer bestimmten Anwendung und wird an Server angehängt, die die Anwendung ausführen. 
infrastructure.json { "metrics": { "metrics_collected": { "cpu": { "resources": [ "*" ], "measurement": [ "usage_active" ], "totalcpu": true }, "mem": { "measurement": [ "used_percent" ] } } }, "logs": { "logs_collected": { "files": { "collect_list": [ { "file_path": "/opt/aws/amazon-cloudwatch-agent/logs/amazon-cloudwatch-agent.log", "log_group_name": "amazon-cloudwatch-agent.log" }, { "file_path": "/var/log/messages", "log_group_name": "/var/log/messages" } ] } } } } app.json { "logs": { "logs_collected": { "files": { "collect_list": [ { "file_path": "/app/app.log*", "log_group_name": "/app/app.log" } ] } } } } Die Dateinamen aller Konfigurationsdateien, die an die Konfiguration angehängt werden, müssen sich voneinander und vom Namen der anfänglichen Konfigurationsdatei unterscheiden. Wenn Sie append-config mit einer Konfigurationsdatei mit dem selben Dateinamen wie eine Konfigurationsdatei verwenden, die der Agent bereits verwendet, überschreibt der Append-Befehl die Informationen aus der ersten Konfigurationsdatei, anstatt Inhalte anzuhängen. Dies gilt auch, wenn sich zwei Konfigurationsdateien mit demselben Dateinamen in verschiedenen Dateipfaden befinden. Das vorstehende Beispiel zeigt die Verwendung von zwei Konfigurationsdateien. Es gibt jedoch keine Beschränkungen hinsichtlich der Anzahl der Konfigurationsdateien, die Sie an die Agentenkonfiguration anhängen können. Sie können auch die Nutzung von Konfigurationsdateien auf Servern und Konfigurationen in Parameter Store kombinieren. JavaScript ist in Ihrem Browser nicht verfügbar oder deaktiviert. Zur Nutzung der AWS-Dokumentation muss JavaScript aktiviert sein. Weitere Informationen finden auf den Hilfe-Seiten Ihres Browsers. Dokumentkonventionen Richten Sie den CloudWatch Agenten mit Linux mit verbesserter Sicherheit ein () SELinux Erstellen Sie die CloudWatch Agenten-Konfigurationsdatei mit dem Assistenten Hat Ihnen diese Seite geholfen? 
– Ja Vielen Dank, dass Sie uns mitgeteilt haben, dass wir gute Arbeit geleistet haben! Würden Sie sich einen Moment Zeit nehmen, um uns mitzuteilen, was wir richtig gemacht haben, damit wir noch besser werden? Hat Ihnen diese Seite geholfen? – Nein Vielen Dank, dass Sie uns mitgeteilt haben, dass diese Seite überarbeitet werden muss. Es tut uns Leid, dass wir Ihnen nicht weiterhelfen konnten. Würden Sie sich einen Moment Zeit nehmen, um uns mitzuteilen, wie wir die Dokumentation verbessern können? | 2026-01-13T09:29:25 |
https://docs.aws.amazon.com/de_de/AmazonCloudWatch/latest/monitoring/create-cloudwatch-agent-configuration-file.html#CloudWatch-Agent-multiple-config-files | Erstellen Sie die CloudWatch Agenten-Konfigurationsdatei - Amazon CloudWatch Erstellen Sie die CloudWatch Agenten-Konfigurationsdatei - Amazon CloudWatch Dokumentation Amazon CloudWatch Benutzer-Leitfaden Erstellen mehrerer CloudWatch Agenten-Konfigurationsdateien Die vorliegende Übersetzung wurde maschinell erstellt. Im Falle eines Konflikts oder eines Widerspruchs zwischen dieser übersetzten Fassung und der englischen Fassung (einschließlich infolge von Verzögerungen bei der Übersetzung) ist die englische Fassung maßgeblich. Erstellen Sie die CloudWatch Agenten-Konfigurationsdatei Bevor Sie den CloudWatch Agenten auf einem beliebigen Server ausführen, müssen Sie eine oder mehrere CloudWatch Agenten-Konfigurationsdateien erstellen. Die Konfigurationsdatei des Agenten ist eine JSON-Datei, in der die Metriken, Protokolle und Ablaufverfolgungen angegeben sind, die der Agent erfassen soll, einschließlich benutzerdefinierter Metriken. Sie können sie mithilfe des Assistenten oder selbst von Grund auf erstellen. Sie können die Konfigurationsdatei auch mit dem Assistenten erstellen und dann manuell anpassen. Wenn Sie die Datei manuell erstellen oder bearbeiten, ist der Prozess komplizierter. Sie haben jedoch mehr Kontrolle über die erfassten Metriken und können Metriken angeben, die im Assistenten nicht verfügbar sind. Bei jeder Änderung der Agent-Konfigurationsdatei müssen Sie den Agent neu starten, damit die Änderungen wirksam werden. Um den Agent neu zu starten, befolgen Sie die Anweisungen in (Optional) Ändern Sie die allgemeine Konfiguration und das benannte Profil für den CloudWatch Agenten . Sie können die erstellte Konfigurationsdatei als JSON-Datei speichern und später für die Installation des Agenten auf Ihren Servern verwenden. 
Alternatively, you can store the file in Systems Manager Parameter Store if you want to use Systems Manager to install the agent on your servers. The CloudWatch agent supports the use of multiple configuration files; for more information, see Create multiple CloudWatch agent configuration files. You incur charges for the metrics, logs, and traces that the CloudWatch agent collects; for pricing details, see Amazon CloudWatch Pricing. Contents: Create the CloudWatch agent configuration file with the wizard; Create multiple CloudWatch agent configuration files; Manually create or edit the CloudWatch agent configuration file. Create multiple CloudWatch agent configuration files: On both Linux and Windows servers, you can set up the CloudWatch agent to use multiple configuration files. For example, you can use a common configuration file that collects a set of metrics, logs, and traces that you always want to collect from every server in your infrastructure, and then use additional configuration files that collect metrics from certain applications or in certain situations. To set this up, first create the configuration files that you want to use. Any configuration files used together on the same server must have different file names. You can store the configuration files on servers or in Parameter Store. Start the CloudWatch agent with the fetch-config option and specify the first configuration file. To append the second configuration file to the running agent, use the same command but with the append-config option.
All metrics, logs, and traces listed in either configuration file are collected. The following example commands illustrate this scenario with configurations stored as files: the first line starts the agent with the infrastructure.json configuration file, and the second line appends the app.json configuration file. The following example commands are for Linux.
/opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-ctl -a fetch-config -m ec2 -s -c file:/tmp/infrastructure.json
/opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-ctl -a append-config -m ec2 -s -c file:/tmp/app.json
The following example commands are for Windows Server.
& "C:\Program Files\Amazon\AmazonCloudWatchAgent\amazon-cloudwatch-agent-ctl.ps1" -a fetch-config -m ec2 -s -c file:"C:\Program Files\Amazon\AmazonCloudWatchAgent\infrastructure.json"
& "C:\Program Files\Amazon\AmazonCloudWatchAgent\amazon-cloudwatch-agent-ctl.ps1" -a append-config -m ec2 -s -c file:"C:\Program Files\Amazon\AmazonCloudWatchAgent\app.json"
The following example configuration files illustrate one use of this feature: the first configuration file is used for every server in the infrastructure, and the second collects only logs from a particular application and is appended to the servers that run that application.
infrastructure.json { "metrics": { "metrics_collected": { "cpu": { "resources": [ "*" ], "measurement": [ "usage_active" ], "totalcpu": true }, "mem": { "measurement": [ "used_percent" ] } } }, "logs": { "logs_collected": { "files": { "collect_list": [ { "file_path": "/opt/aws/amazon-cloudwatch-agent/logs/amazon-cloudwatch-agent.log", "log_group_name": "amazon-cloudwatch-agent.log" }, { "file_path": "/var/log/messages", "log_group_name": "/var/log/messages" } ] } } } } app.json { "logs": { "logs_collected": { "files": { "collect_list": [ { "file_path": "/app/app.log*", "log_group_name": "/app/app.log" } ] } } } } The file names of all configuration files appended to the configuration must differ from one another and from the name of the initial configuration file. If you use append-config with a configuration file whose file name matches one the agent is already using, the append command overwrites the information from the first configuration file instead of appending to it. This is true even if the two configuration files with the same file name are in different file paths. The preceding example shows the use of two configuration files, but there is no limit to the number of configuration files that you can append to the agent configuration. You can also mix the use of configuration files stored on servers and configurations stored in Parameter Store.
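Because append-config overwrites a configuration that shares a file name with one the agent already loaded, even when the two files live in different directories, it can help to check for duplicate basenames before appending. A minimal shell sketch; the config paths are illustrative, not part of the official procedure:

```shell
# Appended agent configs must have unique file names; a duplicate basename
# overwrites rather than appends. List any basenames that occur more than once.
configs="/tmp/infrastructure.json /tmp/app.json /etc/cwagent/app.json"
dupes=$(for f in $configs; do basename "$f"; done | sort | uniq -d)
if [ -n "$dupes" ]; then
  echo "duplicate config file names: $dupes"
else
  echo "all config file names unique"
fi
```

Run this over the full list of files you plan to pass to fetch-config and append-config; an empty result means the names are safe to combine.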
| 2026-01-13T09:29:25 |
https://docs.aws.amazon.com/zh_tw/AmazonCloudWatch/latest/monitoring/CloudWatch-Agent-common-scenarios.html | Common use cases for the CloudWatch agent - Amazon CloudWatch. This section provides scenarios that outline how to complete common setup and customization tasks for the CloudWatch agent. Topics: Running the CloudWatch agent as a different user; How the CloudWatch agent handles sparse log files; Adding custom dimensions to metrics collected by the CloudWatch agent; Aggregating or rolling up metrics collected by the CloudWatch agent; Collecting high-resolution metrics with the CloudWatch agent; Sending metrics, logs, and traces to different accounts; Timestamp differences between the CloudWatch agent and the earlier CloudWatch Logs agent; Appending an OpenTelemetry collector configuration file.
Running the CloudWatch agent as a different user: By default, the CloudWatch agent runs as the root user on Linux servers. To have the agent run as a different user, use the run_as_user parameter in the agent section of the CloudWatch agent configuration file. This option is available only on Linux servers. If you are already running the agent as the root user and want to change to a different user, use one of the following procedures.
To run the CloudWatch agent as a different user on an EC2 instance running Linux:
1. Download and install a new CloudWatch agent package.
2. Create a new Linux user, or use the default user named cwagent that the RPM or DEB file created.
3. Provide credentials for this user in one of these ways: If the .aws/credentials file exists in the root user's home directory, create a credentials file for the user that will run the CloudWatch agent; that credentials file is /home/username/.aws/credentials. Then set the value of the shared_credential_file parameter in common-config.toml to the path of that credentials file. For more information, see Installing the CloudWatch agent using AWS Systems Manager. If the .aws/credentials file does not exist in the root user's home directory, you can either create such a credentials file for the user (and point shared_credential_file in common-config.toml at it), or skip the credentials file and attach an IAM role to the instance; the agent then uses that role as its credential provider.
4. In the agent section of the CloudWatch agent configuration file, add the following line: "run_as_user": "username"
5. Make any other changes to the configuration file as needed. For more information, see Create the CloudWatch agent configuration file.
6. Give the user the necessary permissions: the user must have read (r) permission on the log files to be collected and execute (x) permission on every directory in the log file paths.
7. Start the agent with the configuration file you just modified: sudo /opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-ctl -a fetch-config -m ec2 -s -c file:configuration-file-path
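The permission requirement for a non-root agent user, read on each log file and execute on every directory in its path, can be sanity-checked from a shell session running as that user. This is an informal sketch that inspects only the current user's access; ACLs and group-membership subtleties are not modeled:

```shell
# Verify that the current user (run this as the agent's run_as_user) can
# read a log file and traverse every directory on its path, which is what
# the CloudWatch agent needs in order to collect the file.
can_collect() {
  f=$1
  [ -r "$f" ] || { echo "no read permission on $f"; return 1; }
  d=$(dirname "$f")
  while [ "$d" != "/" ] && [ "$d" != "." ]; do
    [ -x "$d" ] || { echo "no execute permission on $d"; return 1; }
    d=$(dirname "$d")
  done
  echo "ok: $f"
}

# Demo against a throwaway path
demo=$(mktemp -d)/app.log
: > "$demo"
can_collect "$demo"
```

Running the function against each file_path in collect_list before starting the agent surfaces permission problems that would otherwise only appear in the agent log.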
To run the CloudWatch agent as a different user on an on-premises server running Linux:
1. Download and install a new CloudWatch agent package.
2. Create a new Linux user, or use the default user named cwagent that the RPM or DEB file created.
3. Store this user's credentials in a path the user can access, for example /home/username/.aws/credentials.
4. Set the value of the shared_credential_file parameter in common-config.toml to the path of that credentials file. For more information, see Installing the CloudWatch agent using AWS Systems Manager.
5. In the agent section of the CloudWatch agent configuration file, add the following line: "run_as_user": "username"
6. Make any other changes to the configuration file as needed. For more information, see Create the CloudWatch agent configuration file.
7. Give the user the necessary permissions: the user must have read (r) permission on the log files to be collected and execute (x) permission on every directory in the log file paths.
8. Start the agent with the configuration file you just modified: sudo /opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-ctl -a fetch-config -m ec2 -s -c file:configuration-file-path
How the CloudWatch agent handles sparse log files: A sparse file is a file with both empty blocks and real content. A sparse file uses disk space more efficiently by writing brief metadata representing the empty blocks to disk instead of the actual null bytes that make up those blocks, so the actual size of a sparse file is usually much smaller than its apparent size. However, the CloudWatch agent does not treat sparse files differently from regular files. When the agent reads a sparse file, the empty blocks are treated as "real" blocks filled with null bytes, so the CloudWatch agent publishes as many bytes to CloudWatch as the apparent size of the sparse file. Configuring the CloudWatch agent to publish sparse files can therefore result in higher-than-expected CloudWatch costs, so we recommend that you don't do it. For example, /var/logs/lastlog in Linux is usually a very sparse file, and we recommend that you don't publish it to CloudWatch.
Adding custom dimensions to metrics collected by the CloudWatch agent: To add custom dimensions such as tags to metrics collected by the agent, add the append_dimensions field to the section of the agent configuration file that lists those metrics. For example, the following configuration file section adds a custom dimension named stackName with a value of Prod to the cpu and disk metrics collected by the agent. "cpu": { "resources":[ "*" ], "measurement":[ "cpu_usage_guest", "cpu_usage_nice", "cpu_usage_idle" ], "totalcpu":false, "append_dimensions": { "stackName":"Prod" } }, "disk": { "resources":[ "/", "/tmp" ], "measurement":[ "total", "used" ], "append_dimensions": { "stackName":"Prod" } } Keep in mind that whenever you change the agent configuration file, you must restart the agent for the changes to take effect.
Aggregating or rolling up metrics collected by the CloudWatch agent: To aggregate or roll up metrics collected by the agent, add an aggregation_dimensions field to the section for that metric in the agent configuration file. For example, the following configuration file snippet rolls up metrics on the AutoScalingGroupName dimension; the metrics from all instances in each Auto Scaling group are aggregated and can be viewed as a whole. "metrics": { "cpu": { ...} "disk": { ...} "aggregation_dimensions" : [["AutoScalingGroupName"]] } To also roll up along each combination of the InstanceId and InstanceType dimensions, in addition to rolling up by Auto Scaling group name, add the following: "metrics": { "cpu": { ...} "disk": { ...} "aggregation_dimensions" : [["AutoScalingGroupName"], ["InstanceId", "InstanceType"]] } Alternatively, to roll up all metrics into one collection, use [] : "metrics": { "cpu": { ...} "disk": { ...} "aggregation_dimensions" : [[]] } Keep in mind that whenever you change the agent configuration file, you must restart the agent for the changes to take effect.
Collecting high-resolution metrics with the CloudWatch agent: The metrics_collection_interval field specifies, in seconds, how often metrics are collected. By specifying a value less than 60 for this field, the metrics are collected as high-resolution metrics. For example, if all of your metrics should be high resolution and collected every 10 seconds, specify 10 as the value of metrics_collection_interval under the agent section as a global metrics collection interval. "agent": { "metrics_collection_interval": 10 } Alternatively, the following example sets the cpu metrics to be collected every second and all other metrics every minute. "agent": { "metrics_collection_interval": 60 }, "metrics": { "metrics_collected": { "cpu": { "resources":[ "*" ], "measurement":[ "cpu_usage_guest" ], "totalcpu":false, "metrics_collection_interval": 1 }, "disk": { "resources":[ "/", "/tmp" ], "measurement":[ "total", "used" ] } } } Keep in mind that whenever you change the agent configuration file, you must restart the agent for the changes to take effect.
Sending metrics, logs, and traces to different accounts: To have the CloudWatch agent send metrics, logs, or traces to a different account, specify a role_arn parameter in the agent configuration file on the sending server. The role_arn value specifies an IAM role in the target account that the agent uses when sending data to the target account; this role enables the sending account to assume a corresponding role in the target account when delivering the metrics or logs. You can also specify separate role_arn strings in the agent configuration file: one for sending metrics, one for sending logs, and one for sending traces. The following example of part of the agent section of the configuration file sets the agent to use CrossAccountAgentRole when sending data to a different account. { "agent": { "credentials": { "role_arn": "arn:aws:iam::123456789012:role/CrossAccountAgentRole" } }, ..... } Alternatively, the following example sets different roles for the sending account to use when sending metrics, logs, and traces: "metrics": { "credentials": { "role_arn": "RoleToSendMetrics" }, "metrics_collected": { .... "logs": { "credentials": { "role_arn": "RoleToSendLogs" }, ....
Policies needed: When you specify a role_arn in the agent configuration file, you must also make sure the IAM roles in the sending and target accounts have certain policies. The roles in both the sending and target accounts should have CloudWatchAgentServerPolicy. For more information about assigning this policy to a role, see Prerequisites. The role in the sending account must also include the following policy; you can add it on the Permissions tab in the IAM console when you edit the role. JSON { "Version":"2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "sts:AssumeRole" ], "Resource": [ "arn:aws:iam:: 111122223333 :role/ agent-role-in-target-account " ] } ] } The role in the target account must include the following policy so that it recognizes the IAM role used by the sending account; you can add it on the Trust relationships tab in the IAM console when you edit the role. This role is the one specified as agent-role-in-target-account in the policy used by the sending account. JSON { "Version":"2012-10-17", "Statement": [ { "Effect": "Allow", "Principal": { "AWS": [ "arn:aws:iam:: 111122223333 :role/ role-in-sender-account " ] }, "Action": "sts:AssumeRole" } ] }
Timestamp differences between the CloudWatch agent and the earlier CloudWatch Logs agent: The CloudWatch agent supports a different set of timestamp format symbols than the older CloudWatch Logs agent. These differences are as follows:
- Symbols supported by both agents: %A, %a, %b, %B, %d, %f, %H, %l, %m, %M, %p, %S, %y, %Y, %Z, %z
- Symbols supported only by the CloudWatch agent: %-d, %-l, %-m, %-M, %-S
- Symbols supported only by the older CloudWatch Logs agent: %c, %j, %U, %W, %w
For more information about the meanings of the symbols supported by the new CloudWatch agent, see CloudWatch agent configuration file: Logs section in the Amazon CloudWatch User Guide. For information about the symbols supported by the CloudWatch Logs agent, see Agent configuration file in the Amazon CloudWatch Logs User Guide.
Appending an OpenTelemetry collector configuration file: The CloudWatch agent supports supplementing its own configuration file with an OpenTelemetry collector configuration file. This lets you use CloudWatch agent features such as CloudWatch Application Signals or Container Insights through the CloudWatch agent configuration while onboarding an existing OpenTelemetry collector configuration with a single agent. To avoid merge conflicts with the pipelines that the CloudWatch agent creates automatically, we recommend adding a custom suffix to every component and pipeline in your OpenTelemetry collector configuration.
receivers:
  otlp/custom-suffix:
    protocols:
      http:
exporters:
  awscloudwatchlogs/custom-suffix:
    log_group_name: "test-group"
    log_stream_name: "test-stream"
service:
  pipelines:
    logs/custom-suffix:
      receivers: [otlp/custom-suffix]
      exporters: [awscloudwatchlogs/custom-suffix]
To set up the CloudWatch agent, start it with the fetch-config option and specify the CloudWatch agent's configuration file; the CloudWatch agent requires at least one CloudWatch agent configuration file. /opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-ctl -a fetch-config -c file:/tmp/agent.json -s Then use the append-config option to specify the OpenTelemetry collector configuration file. /opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-ctl -a append-config -c file:/tmp/otel.yaml -s The agent merges the two configuration files at startup and logs the resolved configuration. | 2026-01-13T09:29:25 |
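The non-padded timestamp specifiers noted above (%-d, %-l, %-m, %-M, %-S) follow the same convention as GNU date's padding flags, so date offers an informal way to preview what a format string produces; this is a local sanity check, not the agent's own parser:

```shell
# Preview timestamp formats with GNU date. The %-m, %-d, %-M, %-S specifiers
# drop the zero padding; the legacy CloudWatch Logs agent does not support
# these non-padded forms.
date -d '2026-01-05 09:07:03' '+%Y-%m-%d %H:%M:%S'     # padded form
date -d '2026-01-05 09:07:03' '+%Y-%-m-%-d %H:%-M:%-S' # non-padded form
```

Comparing the two outputs against a sample log line makes it easier to pick the right timestamp_format before restarting the agent.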
https://docs.aws.amazon.com/de_de/AmazonCloudWatch/latest/monitoring/create-cloudwatch-agent-configuration-file-wizard.html | Create the CloudWatch agent configuration file with the wizard - Amazon CloudWatch. The agent configuration file wizard, amazon-cloudwatch-agent-config-wizard, asks a series of questions to help you configure the CloudWatch agent for your needs. This section describes the credentials needed for the configuration file, explains how to run the CloudWatch agent configuration wizard, and describes the metric sets that are predefined in the wizard. Required credentials: The wizard can automatically detect the credentials and AWS Region to use if you set up the AWS credentials and configuration files before starting the wizard. For more information about these files, see Configuration and credential files in the AWS Systems Manager User Guide.
In the AWS credentials file, the wizard looks for default credentials and also looks for an AmazonCloudWatchAgent section such as the following: [AmazonCloudWatchAgent] aws_access_key_id = my_access_key aws_secret_access_key = my_secret_key The wizard displays the default credentials, the credentials from AmazonCloudWatchAgent, and an Others option; you can choose which credentials to use. If you choose Others, you can enter credentials. For my_access_key and my_secret_key, use the keys of the IAM user that has write permissions for Systems Manager Parameter Store. In the AWS configuration file, you can specify the Region that the agent sends metrics to, if it should differ from the [default] section. By default, the metrics are published in the Region where the Amazon EC2 instance is located; if the metrics should be published in a different Region, specify it here. In the following example, the metrics are published in the us-west-1 Region. [AmazonCloudWatchAgent] region = us-west-1 Run the CloudWatch agent configuration wizard: To create the CloudWatch agent configuration file, start the wizard by entering: sudo /opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-config-wizard On a server running Windows Server, run the following commands to start the wizard: cd "C:\Program Files\Amazon\AmazonCloudWatchAgent" .\amazon-cloudwatch-agent-config-wizard.exe Answer the questions to customize the configuration file for your server.
If you store the configuration file locally, the config.json configuration file is stored in /opt/aws/amazon-cloudwatch-agent/bin/ on Linux servers and in C:\Program Files\Amazon\AmazonCloudWatchAgent on servers running Windows Server. You can then copy this file to other servers where you want to install the agent. If you use Systems Manager to install and configure the agent, answer Yes when asked whether to store the file in Systems Manager Parameter Store. You can also choose to store the file in Parameter Store even if you aren't using the SSM Agent to install the CloudWatch agent. To store the file in Parameter Store, you must use an IAM role with sufficient permissions. Predefined metric sets for the CloudWatch agent: The wizard is configured with predefined sets of metrics at different levels of detail, shown in the following tables. For more information about these metrics, see Metrics collected by the CloudWatch agent. Note: Parameter Store supports parameters in Standard and Advanced tiers; these parameter tiers are unrelated to the Basic, Standard, and Advanced levels of metric detail described in these tables. Amazon EC2 instances running Linux, by detail level:
- Basic: Mem: mem_used_percent; Disk: disk_used_percent. The disk metrics such as disk_used_percent have a dimension for Partition, which means that the number of custom metrics generated depends on the number of partitions associated with your instance. The number of disk partitions depends on which AMI you use and how many Amazon EBS volumes you attach to the server.
- Standard: CPU: cpu_usage_idle, cpu_usage_iowait, cpu_usage_user, cpu_usage_system; Disk: disk_used_percent, disk_inodes_free; Diskio: diskio_io_time; Mem: mem_used_percent; Swap: swap_used_percent
- Advanced: CPU: cpu_usage_idle, cpu_usage_iowait, cpu_usage_user, cpu_usage_system; Disk: disk_used_percent, disk_inodes_free; Diskio: diskio_io_time, diskio_write_bytes, diskio_read_bytes, diskio_writes, diskio_reads; Mem: mem_used_percent; Netstat: netstat_tcp_established, netstat_tcp_time_wait; Swap: swap_used_percent
On-premises servers running Linux, by detail level:
- Basic: Disk: disk_used_percent; Diskio: diskio_write_bytes, diskio_read_bytes, diskio_writes, diskio_reads; Mem: mem_used_percent; Net: net_bytes_sent, net_bytes_recv, net_packets_sent, net_packets_recv; Swap: swap_used_percent
- Standard: CPU: cpu_usage_idle, cpu_usage_iowait; Disk: disk_used_percent, disk_inodes_free; Diskio: diskio_io_time, diskio_write_bytes, diskio_read_bytes, diskio_writes, diskio_reads; Mem: mem_used_percent; Net: net_bytes_sent, net_bytes_recv, net_packets_sent, net_packets_recv; Swap: swap_used_percent
- Advanced: CPU: cpu_usage_guest, cpu_usage_idle, cpu_usage_iowait, cpu_usage_steal, cpu_usage_user, cpu_usage_system; Disk: disk_used_percent, disk_inodes_free; Diskio: diskio_io_time, diskio_write_bytes, diskio_read_bytes, diskio_writes, diskio_reads; Mem: mem_used_percent; Net: net_bytes_sent, net_bytes_recv, net_packets_sent, net_packets_recv; Netstat: netstat_tcp_established, netstat_tcp_time_wait; Swap: swap_used_percent
Amazon EC2 instances running Windows Server. Note: The metric names listed here show how the metric appears in the console; the actual name of the metric may not include the first word. For example, the actual metric name for LogicalDisk % Free Space is just % Free Space.
By detail level:
- Basic: Memory: Memory % Committed Bytes In Use; LogicalDisk: LogicalDisk % Free Space
- Standard: Memory: Memory % Committed Bytes In Use; Paging: Paging File % Usage; Processor: Processor % Idle Time, Processor % Interrupt Time, Processor % User Time; PhysicalDisk: PhysicalDisk % Disk Time; LogicalDisk: LogicalDisk % Free Space
- Advanced: Memory: Memory % Committed Bytes In Use; Paging: Paging File % Usage; Processor: Processor % Idle Time, Processor % Interrupt Time, Processor % User Time; LogicalDisk: LogicalDisk % Free Space; PhysicalDisk: PhysicalDisk % Disk Time, PhysicalDisk Disk Write Bytes/sec, PhysicalDisk Disk Read Bytes/sec, PhysicalDisk Disk Writes/sec, PhysicalDisk Disk Reads/sec; TCP: TCPv4 Connections Established, TCPv6 Connections Established
On-premises servers running Windows Server. Note: The metric names listed here show how the metric appears in the console; the actual name of the metric may not include the first word. For example, the actual metric name for LogicalDisk % Free Space is just % Free Space.
By detail level:
- Basic: Paging: Paging File % Usage; Processor: Processor % Processor Time; LogicalDisk: LogicalDisk % Free Space; PhysicalDisk: PhysicalDisk Disk Write Bytes/sec, PhysicalDisk Disk Read Bytes/sec, PhysicalDisk Disk Writes/sec, PhysicalDisk Disk Reads/sec; Memory: Memory % Committed Bytes In Use; Network Interface: Network Interface Bytes Sent/sec, Network Interface Bytes Received/sec, Network Interface Packets Sent/sec, Network Interface Packets Received/sec
- Standard: Paging: Paging File % Usage; Processor: Processor % Processor Time, Processor % Idle Time, Processor % Interrupt Time; LogicalDisk: LogicalDisk % Free Space; PhysicalDisk: PhysicalDisk % Disk Time, PhysicalDisk Disk Write Bytes/sec, PhysicalDisk Disk Read Bytes/sec, PhysicalDisk Disk Writes/sec, PhysicalDisk Disk Reads/sec; Memory: Memory % Committed Bytes In Use; Network Interface: Network Interface Bytes Sent/sec, Network Interface Bytes Received/sec, Network Interface Packets Sent/sec, Network Interface Packets Received/sec
- Advanced: Paging: Paging File % Usage; Processor: Processor % Processor Time, Processor % Idle Time, Processor % Interrupt Time, Processor % User Time; LogicalDisk: LogicalDisk % Free Space; PhysicalDisk: PhysicalDisk % Disk Time, PhysicalDisk Disk Write Bytes/sec, PhysicalDisk Disk Read Bytes/sec, PhysicalDisk Disk Writes/sec, PhysicalDisk Disk Reads/sec; Memory: Memory % Committed Bytes In Use; Network Interface: Network Interface Bytes Sent/sec, Network Interface Bytes Received/sec, Network Interface Packets Sent/sec, Network Interface Packets Received/sec; TCP: TCPv4 Connections Established, TCPv6 Connections Established
| 2026-01-13T09:29:25 |
https://docs.aws.amazon.com/de_de/AmazonCloudWatch/latest/monitoring/create-cloudwatch-agent-configuration-file-wizard.html | Erstellen Sie die CloudWatch Agenten-Konfigurationsdatei mit dem Assistenten - Amazon CloudWatch Erstellen Sie die CloudWatch Agenten-Konfigurationsdatei mit dem Assistenten - Amazon CloudWatch Dokumentation Amazon CloudWatch Benutzer-Leitfaden Erforderliche Anmeldeinformationen Führen Sie den Assistenten zur CloudWatch Agentenkonfiguration aus CloudWatch Vordefinierte Metriksätze für Agenten Die vorliegende Übersetzung wurde maschinell erstellt. Im Falle eines Konflikts oder eines Widerspruchs zwischen dieser übersetzten Fassung und der englischen Fassung (einschließlich infolge von Verzögerungen bei der Übersetzung) ist die englische Fassung maßgeblich. Erstellen Sie die CloudWatch Agenten-Konfigurationsdatei mit dem Assistenten Der Assistent für die Agentenkonfigurationsdatei amazon-cloudwatch-agent-config-wizard ,, stellt eine Reihe von Fragen, um Ihnen bei der Konfiguration des CloudWatch Agenten für Ihre Bedürfnisse zu helfen. In diesem Abschnitt werden die für die Konfigurationsdatei erforderlichen Anmeldeinformationen beschrieben. Es wird beschrieben, wie der Assistent für die CloudWatch Agentenkonfiguration ausgeführt wird. Außerdem werden die Metriken beschrieben, die im Assistenten vordefiniert sind. Erforderliche Anmeldeinformationen Der Assistent kann die zu verwendenden Anmeldeinformationen und die AWS Region automatisch erkennen, wenn Sie die AWS Anmeldeinformationen und die Konfigurationsdateien vor dem Start des Assistenten eingerichtet haben. Weitere Informationen zu diesen Dateien finden Sie unter Konfigurations- und Anmeldeinformationsdateien im AWS Systems Manager -Benutzerhandbuch . 
In der AWS Anmeldeinformationsdatei sucht der Assistent nach Standardanmeldedaten und sucht auch nach einem AmazonCloudWatchAgent Abschnitt wie dem folgenden: [AmazonCloudWatchAgent] aws_access_key_id = my_access_key aws_secret_access_key = my_secret_key Der Assistent zeigt die Standard-Anmeldeinformationen, die Anmeldeinformationen aus AmazonCloudWatchAgent und die Option Others an. Sie können auswählen, welche Anmeldeinformationen verwendet werden sollen. Bei Wahl von Others (Andere), können Sie Anmeldeinformationen eingeben. Verwenden Sie für my_access_key und my_secret_key die Schlüssel des IAM-Benutzers, der über Schreibberechtigungen für den Systems Manager Parameter Store verfügt. In der AWS Konfigurationsdatei können Sie die Region angeben, an die der Agent Metriken sendet, falls es sich um eine andere Region als den [default] Abschnitt handelt. Standardmäßig werden die Metriken in der Region veröffentlicht, in der sich die EC2 Amazon-Instance befindet. Wenn die Metriken in einer anderen Region veröffentlicht werden sollen, geben Sie hier die Region an. Im folgenden Beispiel werden die Metriken in der Region us-west-1 veröffentlicht. [AmazonCloudWatchAgent] region = us-west-1 Führen Sie den Assistenten zur CloudWatch Agentenkonfiguration aus Um die CloudWatch Agenten-Konfigurationsdatei zu erstellen Starten Sie den Assistenten zur CloudWatch Agentenkonfiguration, indem Sie Folgendes eingeben: sudo /opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-config-wizard Führen Sie auf einem Server mit Windows Server die folgenden Befehle aus, um den Assistenten zu starten: cd "C:\Program Files\Amazon\AmazonCloudWatchAgent" .\amazon-cloudwatch-agent-config-wizard.exe Beantworten Sie die Fragen zum Anpassen der Konfigurationsdatei für Ihren Server. 
If you save the configuration file locally, the config.json configuration file is stored in /opt/aws/amazon-cloudwatch-agent/bin/ on Linux servers and in C:\Program Files\Amazon\AmazonCloudWatchAgent on servers running Windows Server. You can then copy this file to other servers where you want to install the agent. If you use Systems Manager to install and configure the agent, answer Yes when the wizard asks whether to store the file in Systems Manager Parameter Store. You can also choose to store the file in Parameter Store even if you are not using the SSM Agent to install the CloudWatch agent. To store the file in Parameter Store, you must use an IAM role with sufficient permissions. Predefined metric sets for the CloudWatch agent The wizard is configured with predefined sets of metrics at different levels of detail. These metric sets are shown in the following tables. For more information about these metrics, see Metrics collected by the CloudWatch agent. Note Parameter Store supports parameters in Standard and Advanced tiers. These parameter tiers are unrelated to the Basic, Standard, and Advanced levels of metric detail described in these tables. Amazon EC2 instances running Linux
Basic - Mem: mem_used_percent Disk: disk_used_percent
The disk metrics such as disk_used_percent have a dimension for Partition, which means that the number of custom metrics generated depends on the number of partitions associated with your instance. The number of disk partitions depends on which AMI you use and how many Amazon EBS volumes you attach to the server. 
Standard - CPU: cpu_usage_idle , cpu_usage_iowait , cpu_usage_user , cpu_usage_system Disk: disk_used_percent , disk_inodes_free Diskio: diskio_io_time Mem: mem_used_percent Swap: swap_used_percent
Advanced - CPU: cpu_usage_idle , cpu_usage_iowait , cpu_usage_user , cpu_usage_system Disk: disk_used_percent , disk_inodes_free Diskio: diskio_io_time , diskio_write_bytes , diskio_read_bytes , diskio_writes , diskio_reads Mem: mem_used_percent Netstat: netstat_tcp_established , netstat_tcp_time_wait Swap: swap_used_percent
On-premises servers running Linux
Basic - Disk: disk_used_percent Diskio: diskio_write_bytes , diskio_read_bytes , diskio_writes , diskio_reads Mem: mem_used_percent Net: net_bytes_sent , net_bytes_recv , net_packets_sent , net_packets_recv Swap: swap_used_percent
Standard - CPU: cpu_usage_idle , cpu_usage_iowait Disk: disk_used_percent , disk_inodes_free Diskio: diskio_io_time , diskio_write_bytes , diskio_read_bytes , diskio_writes , diskio_reads Mem: mem_used_percent Net: net_bytes_sent , net_bytes_recv , net_packets_sent , net_packets_recv Swap: swap_used_percent
Advanced - CPU: cpu_usage_guest , cpu_usage_idle , cpu_usage_iowait , cpu_usage_steal , cpu_usage_user , cpu_usage_system Disk: disk_used_percent , disk_inodes_free Diskio: diskio_io_time , diskio_write_bytes , diskio_read_bytes , diskio_writes , diskio_reads Mem: mem_used_percent Net: net_bytes_sent , net_bytes_recv , net_packets_sent , net_packets_recv Netstat: netstat_tcp_established , netstat_tcp_time_wait Swap: swap_used_percent
Amazon EC2 instances running Windows Server Note The metric names listed in this table show how the metric is displayed in the console. The actual metric name might not include the first word. For example, the actual metric name for LogicalDisk % Free Space is just % Free Space. 
Basic - Memory: Memory % Committed Bytes In Use LogicalDisk: LogicalDisk % Free Space
Standard - Memory: Memory % Committed Bytes In Use Paging: Paging File % Usage Processor: Processor % Idle Time , Processor % Interrupt Time , Processor % User Time PhysicalDisk: PhysicalDisk % Disk Time LogicalDisk: LogicalDisk % Free Space
Advanced - Memory: Memory % Committed Bytes In Use Paging: Paging File % Usage Processor: Processor % Idle Time , Processor % Interrupt Time , Processor % User Time LogicalDisk: LogicalDisk % Free Space PhysicalDisk: PhysicalDisk % Disk Time , PhysicalDisk Disk Write Bytes/sec , PhysicalDisk Disk Read Bytes/sec , PhysicalDisk Disk Writes/sec , PhysicalDisk Disk Reads/sec TCP: TCPv4 Connections Established , TCPv6 Connections Established
On-premises servers running Windows Server Note The metric names listed in this table show how the metric is displayed in the console. The actual metric name might not include the first word. For example, the actual metric name for LogicalDisk % Free Space is just % Free Space. 
Basic - Paging: Paging File % Usage Processor: Processor % Processor Time LogicalDisk: LogicalDisk % Free Space PhysicalDisk: PhysicalDisk Disk Write Bytes/sec , PhysicalDisk Disk Read Bytes/sec , PhysicalDisk Disk Writes/sec , PhysicalDisk Disk Reads/sec Memory: Memory % Committed Bytes In Use Network Interface: Network Interface Bytes Sent/sec , Network Interface Bytes Received/sec , Network Interface Packets Sent/sec , Network Interface Packets Received/sec
Standard - Paging: Paging File % Usage Processor: Processor % Processor Time , Processor % Idle Time , Processor % Interrupt Time LogicalDisk: LogicalDisk % Free Space PhysicalDisk: PhysicalDisk % Disk Time , PhysicalDisk Disk Write Bytes/sec , PhysicalDisk Disk Read Bytes/sec , PhysicalDisk Disk Writes/sec , PhysicalDisk Disk Reads/sec Memory: Memory % Committed Bytes In Use Network Interface: Network Interface Bytes Sent/sec , Network Interface Bytes Received/sec , Network Interface Packets Sent/sec , Network Interface Packets Received/sec
Advanced - Paging: Paging File % Usage Processor: Processor % Processor Time , Processor % Idle Time , Processor % Interrupt Time , Processor % User Time LogicalDisk: LogicalDisk % Free Space PhysicalDisk: PhysicalDisk % Disk Time , PhysicalDisk Disk Write Bytes/sec , PhysicalDisk Disk Read Bytes/sec , PhysicalDisk Disk Writes/sec , PhysicalDisk Disk Reads/sec Memory: Memory % Committed Bytes In Use Network Interface: Network Interface Bytes Sent/sec , Network Interface Bytes Received/sec , Network Interface Packets Sent/sec , Network Interface Packets Received/sec TCP: TCPv4 Connections Established , TCPv6 Connections Established 
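As a concrete illustration, the Basic detail level for Linux shown above corresponds roughly to an agent configuration file like the following. This is a sketch only: the file the wizard actually writes may include additional fields such as collection intervals and append_dimensions.

```json
{
  "metrics": {
    "metrics_collected": {
      "mem": {
        "measurement": ["mem_used_percent"]
      },
      "disk": {
        "measurement": ["disk_used_percent"],
        "resources": ["*"]
      }
    }
  }
}
```

The "resources": ["*"] entry asks the agent to report the disk metric for every mounted partition, which is why the number of custom metrics grows with the number of partitions.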
| 2026-01-13T09:29:25 |
https://docs.aws.amazon.com/de_de/AmazonCloudWatch/latest/monitoring/installing-cloudwatch-agent-ssm.html#CloudWatch-Agent-profile-instance-fleet | Install the CloudWatch agent with AWS Systems Manager - Amazon CloudWatch Using AWS Systems Manager makes it easier to install the CloudWatch agent on a fleet of Amazon EC2 instances. You can download the agent to one server and create your CloudWatch agent configuration file for all of the servers in the fleet. Then you can use Systems Manager to install the agent on the other servers, using the configuration file that you created. Use the following topics to install and run the CloudWatch agent with AWS Systems Manager. Topics Install or update the SSM Agent 
Verify Systems Manager prerequisites Verify internet access Download the CloudWatch agent package on your first instance Create and modify the agent configuration file Install and start the CloudWatch agent on additional EC2 instances using your agent configuration (Optional) Modify the common configuration and named profile for the CloudWatch agent Install or update the SSM Agent On an Amazon EC2 instance, the CloudWatch agent requires that the instance is running version 2.2.93.0 or later of the SSM Agent. Before you install the CloudWatch agent, update or install the SSM Agent on the instance if you haven't already done so. For information about installing or updating the SSM Agent on an instance running Linux, see Installing and configuring SSM Agent on Linux instances in the AWS Systems Manager User Guide. For information about installing or updating the SSM Agent, see Installing and configuring SSM Agent in the AWS Systems Manager User Guide. Verify Systems Manager prerequisites Before you use Systems Manager Run Command to install and configure the CloudWatch agent, verify that your instances meet the minimum Systems Manager requirements. For more information, see Systems Manager prerequisites in the AWS Systems Manager User Guide. Verify internet access Your Amazon EC2 instances must be able to connect to CloudWatch endpoints. This can be through an internet gateway, a NAT gateway, or CloudWatch interface VPC endpoints. 
For more information about configuring internet access, see Internet gateways in the Amazon VPC User Guide. The following endpoints and ports might need to be configured on your proxy: If you are using the agent to collect metrics, you must allow-list the CloudWatch endpoints for the appropriate Regions. These endpoints are listed in Amazon CloudWatch in the Amazon Web Services General Reference. If you are using the agent to collect logs, you must allow-list the CloudWatch Logs endpoints for the appropriate Regions. These endpoints are listed in Amazon CloudWatch Logs in the Amazon Web Services General Reference. If you are installing the agent with Systems Manager or storing the configuration file in Parameter Store, you must allow-list the Systems Manager endpoints for the appropriate Regions. These endpoints are listed in AWS Systems Manager in the Amazon Web Services General Reference. Download the CloudWatch agent package on your first instance Use the following steps to download the CloudWatch agent package with Systems Manager. To download the CloudWatch agent with Systems Manager Open the Systems Manager console at https://console.aws.amazon.com/systems-manager/ . In the navigation pane, choose Run Command. -or- If the AWS Systems Manager home page opens, scroll down and choose Explore Run Command. Choose Run command. In the Command document list, choose AWS-ConfigureAWSPackage. In the Targets area, choose the instance on which to install the CloudWatch agent. If you do not see a specific instance, it might not be configured as a managed instance for use with Systems Manager. 
For more information, see Setting up AWS Systems Manager for hybrid environments in the AWS Systems Manager User Guide. In the Action list, choose Install. In the Name field, enter AmazonCloudWatchAgent. Leave Version set to latest to install the latest version of the agent. Choose Run. Optionally, in the Targets and outputs area, select the button next to an instance name and choose View output. Systems Manager should show that the agent was successfully installed. Create and modify the agent configuration file After you have downloaded the CloudWatch agent, you must create the configuration file before you start the agent on any servers. If you are going to store your agent configuration file in Systems Manager Parameter Store, you must use an EC2 instance to store it in Parameter Store, and you must first attach the CloudWatchAgentAdminRole IAM role to that instance. For more information about attaching roles, see Attach an IAM role to an instance in the Amazon EC2 User Guide. For more information about creating the CloudWatch agent configuration file, see Create the CloudWatch agent configuration file. Install and start the CloudWatch agent on additional EC2 instances using your agent configuration After you have a CloudWatch agent configuration saved in Parameter Store, you can use it when you install the agent on other servers. For each of these servers, follow the steps listed earlier in this section to verify the Systems Manager prerequisites, the SSM Agent version, and internet access. 
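Before reusing one configuration file across a fleet, it can save a failed rollout to confirm that the file is well-formed JSON with a recognized top-level section. The helper below is a hypothetical sketch, not part of the agent tooling; the section names it checks follow the agent's documented configuration schema.

```python
import json

def validate_agent_config(text):
    """Parse a CloudWatch agent config and require at least one known
    top-level section; raise ValueError otherwise."""
    config = json.loads(text)  # raises ValueError/JSONDecodeError on bad JSON
    known = {"agent", "metrics", "logs", "traces"}
    if not known.intersection(config):
        raise ValueError("no recognized top-level section found")
    return config

# Example: a minimal metrics-only configuration passes the check.
sample = '{"metrics": {"metrics_collected": {"mem": {"measurement": ["mem_used_percent"]}}}}'
print(sorted(validate_agent_config(sample)))
```

Running a check like this on the instance that writes the file to Parameter Store catches syntax errors before they reach every server in the fleet.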
Then use the following instructions to install the CloudWatch agent on the additional instances, using the CloudWatch agent configuration file that you created. Step 1: Download and install the CloudWatch agent To be able to send CloudWatch data to a different Region, make sure that the IAM role you attached to this instance has permission to write CloudWatch data in that Region. The following is an example of using the aws configure command to create a named profile for the CloudWatch agent. This example assumes that you are using the default profile name of AmazonCloudWatchAgent. To create the AmazonCloudWatchAgent profile for the CloudWatch agent On Linux servers, enter the following command and follow the prompts:
sudo aws configure --profile AmazonCloudWatchAgent
On Windows Server, open PowerShell as an administrator, enter the following command, and follow the prompts.
aws configure --profile AmazonCloudWatchAgent
You must install the agent on each server on which you want to run the agent. 
The CloudWatch agent is available as a package in Amazon Linux 2023 and Amazon Linux 2. If you are running either of these operating systems, you can install the package with Systems Manager by following these steps. Note You must also make sure that the IAM role attached to the instance has the CloudWatchAgentServerPolicy attached. For more information, see Prerequisites. To use Systems Manager to install the CloudWatch agent package Open the Systems Manager console at https://console.aws.amazon.com/systems-manager/ . In the navigation pane, choose Run Command. -or- If the AWS Systems Manager home page opens, scroll down and choose Explore Run Command. Choose Run command. In the Command document list, choose AWS-RunShellScript. Then paste the following into the command parameters.
sudo yum install amazon-cloudwatch-agent
Choose Run. On all supported operating systems, you can download the CloudWatch agent package either with Systems Manager Run Command or from an Amazon S3 download link. Note When you install or update the CloudWatch agent, only the Uninstall and reinstall option is supported. You can't use the In-place update option. Systems Manager Run Command enables you to manage the configuration of your instances on demand. You specify a Systems Manager document and parameters, and run the command on one or more instances. The SSM Agent on the instance processes the command and configures the instance as specified. To download the CloudWatch agent with Run Command Open the Systems Manager console at https://console.aws.amazon.com/systems-manager/ . In the navigation pane, choose Run Command. 
-or- If the AWS Systems Manager home page opens, scroll down and choose Explore Run Command. Choose Run command. In the Command document list, choose AWS-ConfigureAWSPackage. In the Targets area, choose the instance on which to install the CloudWatch agent. If you do not see a specific instance, it might not be configured for Run Command. For more information, see Setting up AWS Systems Manager for hybrid environments in the AWS Systems Manager User Guide. In the Action list, choose Install. In the Name field, enter AmazonCloudWatchAgent. Leave Version set to latest to install the latest version of the agent. Choose Run. Optionally, in the Targets and outputs area, select the button next to an instance name and choose View output. Systems Manager should show that the agent was successfully installed. Step 2: Start the CloudWatch agent using your agent configuration file Follow these steps to start the agent with Systems Manager Run Command. For information about setting up the agent on a system with Security-Enhanced Linux (SELinux) enabled, see Set up the CloudWatch agent on Security-Enhanced Linux (SELinux). To start the CloudWatch agent with Run Command Open the Systems Manager console at https://console.aws.amazon.com/systems-manager/ . In the navigation pane, choose Run Command. -or- If the AWS Systems Manager home page opens, scroll down and choose Explore Run Command. Choose Run command. In the Command document list, choose AmazonCloudWatch-ManageAgent. 
In the Targets area, choose the instance on which you installed the CloudWatch agent. In the Action list, choose Configure. In the Optional Configuration Source list, choose ssm. In the Optional Configuration Location field, enter the name of the Systems Manager parameter for the agent configuration file that you created and saved in Systems Manager Parameter Store, as explained in Create the CloudWatch agent configuration file. In the Optional Restart list, choose yes to start the agent after you have completed these steps. Choose Run. Optionally, in the Targets and outputs area, select the button next to an instance name and choose View output. Systems Manager should show that the agent was successfully started. (Optional) Modify the common configuration and named profile for the CloudWatch agent The CloudWatch agent includes a configuration file called common-config.toml. You can optionally use this file to specify proxy and Region information. On a server running Linux, this file is in the /opt/aws/amazon-cloudwatch-agent/etc directory. On a server running Windows Server, this file is in the C:\ProgramData\Amazon\AmazonCloudWatchAgent directory. The default common-config.toml is as follows:
# This common-config is used to configure items used for both ssm and cloudwatch access
## Configuration for shared credential.
## Default credential strategy will be used if it is absent here:
## Instance role is used for EC2 case by default.
## AmazonCloudWatchAgent profile is used for onPremise case by default.
# [credentials]
# shared_credential_profile = "{profile_name}"
# shared_credential_file = "{file_name}"
## Configuration for proxy. 
## System-wide environment-variable will be read if it is absent here.
## i.e. HTTP_PROXY/http_proxy; HTTPS_PROXY/https_proxy; NO_PROXY/no_proxy
## Note: system-wide environment-variable is not accessible when using ssm run-command.
## Absent in both here and environment-variable means no proxy will be used.
# [proxy]
# http_proxy = "{http_url}"
# https_proxy = "{https_url}"
# no_proxy = "{domain}"
All lines are initially commented out. To set the credential profile or the proxy settings, remove the # from the line and specify a value. You can edit this file manually, or by using the RunShellScript Run Command in Systems Manager: shared_credential_profile - For on-premises servers, this line specifies the profile with the IAM user credentials to use to send data to CloudWatch. If you leave this line commented out, AmazonCloudWatchAgent is used. On an EC2 instance, you can use this line to have the CloudWatch agent send data from that instance to CloudWatch in a different AWS Region. To do so, specify a named profile that includes a region field specifying the destination Region. If you specify a shared_credential_profile, you must also remove the # from the beginning of the [credentials] line. shared_credential_file - To have the agent look for credentials in a file located in a path other than the default path, specify the complete path and file name here. The default path is /root/.aws on Linux and C:\\Users\\Administrator\\.aws on Windows Server. The first example below shows the syntax of a valid shared_credential_file line for Linux servers, and the second example is valid for Windows Server. On Windows Server, you must escape the \ characters. 
shared_credential_file = "/usr/username/credentials"
shared_credential_file = "C:\\Documents and Settings\\username\\.aws\\credentials"
If you specify a shared_credential_file, you must also remove the # from the beginning of the [credentials] line. Proxy settings - If your servers use HTTP or HTTPS proxies to contact AWS services, specify those proxies in the http_proxy and https_proxy fields. If there are URLs that should be excluded from proxying, specify them in the no_proxy field, separated by commas. | 2026-01-13T09:29:25 |
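For example, a common-config.toml with the comment markers removed might look like the following. This is a sketch: the proxy address is a hypothetical placeholder, and 169.254.169.254 is excluded as an example so that instance metadata requests bypass the proxy.

```toml
[credentials]
   shared_credential_profile = "AmazonCloudWatchAgent"

[proxy]
   http_proxy = "http://proxy.example.com:8080"
   https_proxy = "http://proxy.example.com:8080"
   no_proxy = "169.254.169.254"
```

After editing this file, restart the agent so that the new credential and proxy settings take effect.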
https://docs.brightdata.com/api-reference/web-scraper-api/social-media-apis/tiktok#comments-api | TikTok API Scrapers - Bright Data Docs Social Media APIs TikTok API Scrapers Overview The TikTok API Suite offers multiple types of APIs, each designed for specific data collection needs from TikTok. Below is an overview of how these APIs connect and interact, based on the available features: Profile API This API allows users to collect profile details based on a single input: the profile URL. Discovery functionality : Direct URL of the search Interesting Columns : nickname , awg_engagement_rate , followers , likes Posts API This API allows users to collect multiple posts based on a single input URL. Discovery functionality : - Direct URL of the TikTok profile - Discover by keywords - Direct URL of the discovery Interesting Columns : url , share_count , description , hashtags Comments API This API allows users to collect multiple comments from a post using its URL. 
Discovery functionality : N/A Interesting Columns : url , comment_text , commenter_url , num_likes Profile API Collect by URL This API allows users to retrieve detailed TikTok profile information using the provided profile URL. Input Parameters : URL string required The TikTok profile URL. Output Structure : Includes comprehensive data points: Profile Details : account_id , nickname , biography , bio_link , predicted_lang , is_verified , followers , following , likes , videos_count , create_time , id , url , profile_pic_url , profile_pic_url_hd , and more. For all data points, click here . Engagement Metrics : awg_engagement_rate , comment_engagement_rate , like_engagement_rate , like_count , digg_count . Privacy & Settings : is_private , relation , open_favorite , comment_setting , duet_setting , stitch_setting , is_ad_virtual , room_id , is_under_age_18 . Discovery & Top Videos : region , top_videos , discovery_input . This API allows users to retrieve detailed TikTok profile information, including engagement metrics, privacy settings, and top videos, offering insights into user activity and profile data. Discover by Search URL This API allows users to discover TikTok profiles based on a specific search URL and country, providing detailed profile information. Input Parameters : search_url string required The TikTok search URL. country string required The country from which to perform the search. Output Structure : Includes comprehensive data points: Profile Details : account_id , nickname , biography , bio_link , predicted_lang , is_verified , followers , following , likes , videos_count , create_time , id , url , profile_pic_url , profile_pic_url_hd , and more. For all data points, click here . Engagement Metrics : awg_engagement_rate , comment_engagement_rate , like_engagement_rate , like_count , digg_count . Privacy & Settings : is_private , relation , open_favorite , comment_setting , duet_setting , stitch_setting , is_ad_virtual , room_id , is_under_age_18 . 
Discovery & Top Videos : region , top_videos , discovery_input . This API enables users to discover TikTok profiles based on search criteria, offering insights into user activity, engagement, privacy settings, and top content. It facilitates efficient discovery and analysis of TikTok users. Posts API Collect by URL This API enables users to collect detailed data from TikTok posts by providing a post URL. Input Parameters : URL string required The TikTok post URL. Output Structure : Includes comprehensive data points: Post Details : post_id , description , create_time , share_count , collect_count , comment_count , play_count , video_duration , hashtags , original_sound , official_item , original_item , shortcode , video_url , music , cdn_url , width , carousel_images , and more. For all data points, click here . Profile Details : profile_id , profile_username , profile_url , profile_avatar , profile_biography , account_id , profile_followers , is_verified . Tagged Users and Media : tagged_user , carousel_images . Additional Information : tt_chain_token , secu_id . Discover by Profile URL This API allows users to retrieve posts from a TikTok profile based on a provided profile URL, with filtering options for the number of posts, date range, and post exclusions. Input Parameters : URL string required The TikTok profile URL. num_of_posts number The number of posts to collect. If not provided, there is no limit. posts_to_not_include array An array of post IDs to exclude from the collection. start_date string Start date for filtering posts (format: mm-dd-yyyy). Must be earlier than end_date . end_date string End date for filtering posts (format: mm-dd-yyyy). Must be later than start_date . what_to_collect string Specify the type of posts to collect (e.g., “post” or “reel”). 
Output Structure : Includes comprehensive data points: Post Details : post_id , description , create_time , share_count , collect_count , comment_count , play_count , video_duration , hashtags , original_sound , official_item , original_item , shortcode , video_url , music , cdn_url , width , carousel_images , and more. For all data points, click here . Profile Details : profile_id , profile_username , profile_url , profile_avatar , profile_biography , account_id , profile_followers , is_verified . Tagged Users and Media : tagged_user , carousel_images . Additional Information : tt_chain_token , secu_id . This API allows users to discover and retrieve detailed information about posts from a specific TikTok profile, including post-specific metrics, profile details of the creator, and tagged users. It supports efficient content discovery and post analysis. Discover by Keywords This API allows users to search for TikTok posts based on specific keywords or hashtags, offering a powerful tool for discovering relevant content across TikTok’s platform. Input Parameters : search_keyword string required The keyword or hashtag to search for within TikTok posts. num_of_posts number The number of posts to collect. If not provided, there is no limit. posts_to_not_include array An array of post IDs to exclude from the collection. what_to_collect string Specify the type of posts to collect (e.g., “post” or “reel”). Output Structure : Includes comprehensive data points: Post Details : post_id , description , create_time , digg_count , share_count , collect_count , comment_count , play_count , video_duration , hashtags , original_sound , post_type , discovery_input , official_item , and more. For all data points, click here . Profile Details : profile_id , profile_username , profile_url , profile_avatar , profile_biography , account_id , profile_followers , is_verified . Tagged Users and Media : tagged_user , carousel_images . Additional Information : tt_chain_token , secu_id . 
This API allows users to discover posts on TikTok that match specific keywords or hashtags, providing insights into post details, profile information, and media. It is a useful tool for exploring trends, content, and users on TikTok.

Discover by Discover URL

This API allows users to collect detailed post data from a specific TikTok discover URL.

Input Parameters:
- URL (string, required): The TikTok discover URL from which posts will be retrieved.

Output Structure: Includes comprehensive data points:
- Post Details: post_id, description, create_time, digg_count, share_count, collect_count, comment_count, play_count, video_duration, hashtags, original_sound, post_type, discovery_input, official_item, original_item, and more. For all data points, click here.
- Profile Details: profile_id, profile_username, profile_url, profile_avatar, profile_biography, account_id, profile_followers, is_verified.
- Tagged Users and Media: tagged_user, carousel_images.
- Additional Information: tt_chain_token, secu_id.

This API provides detailed insights into TikTok posts discovered via the discover URL, allowing easy access to trending content, user profiles, and post metadata for analysis and exploration.

Comments API

Collect by URL

This API allows users to collect detailed comment data from a specific TikTok post using the provided post URL.

Input Parameters:
- URL (string, required): The TikTok post URL.

Output Structure: Includes comprehensive data points:
- Post Details: post_url, post_id, post_date_created. For all data points, click here.
- Comment Details: date_created, comment_text, num_likes, num_replies, comment_id, comment_url.
- Commenter Details: commenter_user_name, commenter_id, commenter_url.

This API provides detailed insights into TikTok post comments, including comment-specific metrics and information about the commenters, enabling effective comment analysis and interaction tracking.
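The parameter rules above (a required URL, optional filters, and an mm-dd-yyyy date range where start_date must precede end_date) can be sketched as a small client-side payload builder. This is an illustrative sketch only: the helper name and the example profile URL are made up, and the actual request submission depends on the provider's endpoint, which is not shown here.

```python
from datetime import datetime

def build_profile_discovery_input(url, num_of_posts=None, posts_to_not_include=None,
                                  start_date=None, end_date=None, what_to_collect=None):
    """Assemble the Discover by Profile URL input parameters described above,
    validating the mm-dd-yyyy date-range ordering before submission."""
    if start_date and end_date:
        fmt = "%m-%d-%Y"  # the docs specify mm-dd-yyyy
        if datetime.strptime(start_date, fmt) >= datetime.strptime(end_date, fmt):
            raise ValueError("start_date must be earlier than end_date")
    params = {"url": url}  # URL is the only required parameter
    if num_of_posts is not None:
        params["num_of_posts"] = num_of_posts
    if posts_to_not_include:
        params["posts_to_not_include"] = posts_to_not_include
    if start_date:
        params["start_date"] = start_date
    if end_date:
        params["end_date"] = end_date
    if what_to_collect:
        params["what_to_collect"] = what_to_collect
    return params

# Hypothetical profile URL, for illustration only.
payload = build_profile_discovery_input(
    "https://www.tiktok.com/@example_user",
    num_of_posts=10, start_date="01-01-2025", end_date="03-01-2025")
print(payload)
```

Validating the date ordering locally avoids a round trip that the service would reject anyway.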
https://docs.aws.amazon.com/pt_br/AmazonCloudWatch/latest/monitoring/Solution-NVIDIA-GPU-On-EC2.html#Solution-NVIDIA-GPU-Agent-Config

CloudWatch solution: NVIDIA GPU workload on Amazon EC2

This solution helps you set up out-of-the-box metric collection with CloudWatch agents for NVIDIA GPU workloads running on EC2 instances, and also helps you set up a pre-configured CloudWatch dashboard. For general information about all CloudWatch observability solutions, see CloudWatch observability solutions.

Topics: Requirements, Benefits, CloudWatch agent configuration for this solution, Deploying the agent for your solution, Creating the NVIDIA GPU solution dashboard

Requirements

This solution applies under the following conditions:
- Compute: Amazon EC2
- Supports up to 500 GPUs across all EC2 instances in a given AWS Region
- Latest version of the CloudWatch agent
- SSM Agent installed on the EC2 instance
- The EC2 instance must have an NVIDIA driver installed. NVIDIA drivers are preinstalled on some Amazon Machine Images (AMIs). Otherwise, you can install the driver manually. For more information, see Install NVIDIA drivers on Linux instances.

Note: The AWS Systems Manager agent (SSM Agent) is preinstalled on some Amazon Machine Images (AMIs) provided by AWS and by trusted third parties. If the agent is not installed, you can install it manually using the procedure for your operating system type:
- Manually install and uninstall SSM Agent on EC2 instances for Linux
- Manually install and uninstall SSM Agent on EC2 instances for macOS
- Manually install and uninstall SSM Agent on EC2 instances for Windows Server

Benefits

The solution provides NVIDIA monitoring, delivering valuable insights for the following use cases:
- Analyzing GPU and memory usage to identify performance bottlenecks or the need for additional resources.
- Monitoring temperature and power consumption to ensure the GPUs operate within safe limits.
- Evaluating encoder performance for GPU video workloads.
- Checking PCIe connectivity to make sure it meets the expected generation and width.
- Monitoring GPU clock speeds to identify scaling or throttling issues.

Key advantages of the solution:
- Automates metric collection for NVIDIA using the CloudWatch agent configuration, eliminating the need for manual instrumentation.
- Provides a consolidated, pre-configured CloudWatch dashboard for NVIDIA metrics. The dashboard automatically handles metrics from new EC2 instances configured for NVIDIA with the solution, even if those metrics do not exist when the dashboard is created.

The following image is an example of the dashboard for this solution.

Costs

This solution creates and uses resources in your account. You are charged for standard usage, which includes the following:
- All metrics collected by the CloudWatch agent are billed as custom metrics. The number of metrics used by this solution depends on the number of EC2 hosts.
- Each EC2 host configured for the solution publishes a total of 17 metrics per GPU.
- One custom dashboard.
- The API operations the CloudWatch agent makes to publish the metrics. With the default configuration for this solution, the CloudWatch agent calls the PutMetricData operation once per minute for each EC2 host. This means PutMetricData is called 30*24*60 = 43,200 times in a 30-day month for each EC2 host.

For more information about CloudWatch pricing, see Amazon CloudWatch Pricing. The pricing calculator can help you estimate approximate monthly costs for using this solution.

To use the pricing calculator to estimate the solution's monthly costs:
1. Open the Amazon CloudWatch pricing calculator.
2. For Choose a Region, select the Region where you want to deploy the solution.
3. In the Metrics section, for Number of metrics, enter 17 * average number of GPUs per EC2 host * number of EC2 instances configured for this solution.
4. In the APIs section, for Number of API requests, enter 43200 * number of EC2 instances configured for this solution. By default, the CloudWatch agent performs one PutMetricData operation per minute for each EC2 host.
5. In the Dashboards and alarms section, for Number of dashboards, enter 1.

You can see the estimated monthly costs at the bottom of the pricing calculator.

CloudWatch agent configuration for this solution

The CloudWatch agent is software that runs continuously and autonomously on your servers and in containerized environments. It collects metrics, logs, and traces from your infrastructure and applications and sends them to CloudWatch and to X-Ray. For more information about the CloudWatch agent, see Collect metrics, logs, and traces with the CloudWatch agent.
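The calculator inputs above reduce to two multiplications. The following sketch computes them; the fleet sizes used in the example are illustrative, not prescriptive:

```python
# Estimate the monthly cost drivers for this solution, per the formulas above.
METRICS_PER_GPU = 17                      # metrics published per GPU per host
CALLS_PER_HOST_PER_MONTH = 30 * 24 * 60   # one PutMetricData call per minute

def solution_usage(avg_gpus_per_host, num_hosts):
    """Return (custom metric count, monthly PutMetricData request count)."""
    custom_metrics = METRICS_PER_GPU * avg_gpus_per_host * num_hosts
    api_requests = CALLS_PER_HOST_PER_MONTH * num_hosts
    return custom_metrics, api_requests

# Example fleet: 10 EC2 hosts averaging 4 GPUs each.
metrics, requests = solution_usage(avg_gpus_per_host=4, num_hosts=10)
print(metrics, requests)  # 680 432000
```

Enter the two resulting numbers into the Metrics and APIs sections of the pricing calculator.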
In this solution, the agent configuration collects a set of metrics to help you start monitoring and observing the NVIDIA GPU. The CloudWatch agent can be configured to collect more NVIDIA GPU metrics than the dashboard displays by default. For a list of all the NVIDIA GPU metrics you can collect, see Collect NVIDIA GPU metrics.

Agent configuration for this solution

The metrics the agent collects are defined in the agent configuration. The solution provides agent configurations that collect the recommended metrics, with dimensions suited to the solution dashboard.

Use the following CloudWatch agent configuration on EC2 instances equipped with NVIDIA GPUs. The configuration is stored as a parameter in the SSM Parameter Store, as detailed later in Step 2: Store the recommended CloudWatch agent configuration file in Systems Manager Parameter Store.

{
  "metrics": {
    "namespace": "CWAgent",
    "append_dimensions": {
      "InstanceId": "${aws:InstanceId}"
    },
    "metrics_collected": {
      "nvidia_gpu": {
        "measurement": [
          "utilization_gpu",
          "temperature_gpu",
          "power_draw",
          "utilization_memory",
          "fan_speed",
          "memory_total",
          "memory_used",
          "memory_free",
          "pcie_link_gen_current",
          "pcie_link_width_current",
          "encoder_stats_session_count",
          "encoder_stats_average_fps",
          "encoder_stats_average_latency",
          "clocks_current_graphics",
          "clocks_current_sm",
          "clocks_current_memory",
          "clocks_current_video"
        ],
        "metrics_collection_interval": 60
      }
    }
  },
  "force_flush_interval": 60
}

Deploying the agent for your solution

There are several approaches to installing the CloudWatch agent, depending on the use case. We recommend using Systems Manager for this solution. It provides a console experience and simplifies the management of a fleet of managed servers within a single AWS account.
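A malformed JSON value in the parameter is a common failure mode (see Step 4 below), so a quick local check before pasting the configuration into Parameter Store can help. This is an optional sanity script, not part of the solution itself; it parses the configuration above and confirms the namespace and the 17 measurements the cost section counts on:

```python
import json

# The agent configuration shown above, abbreviated here only in layout.
AGENT_CONFIG = """
{
  "metrics": {
    "namespace": "CWAgent",
    "append_dimensions": {"InstanceId": "${aws:InstanceId}"},
    "metrics_collected": {
      "nvidia_gpu": {
        "measurement": [
          "utilization_gpu", "temperature_gpu", "power_draw",
          "utilization_memory", "fan_speed", "memory_total", "memory_used",
          "memory_free", "pcie_link_gen_current", "pcie_link_width_current",
          "encoder_stats_session_count", "encoder_stats_average_fps",
          "encoder_stats_average_latency", "clocks_current_graphics",
          "clocks_current_sm", "clocks_current_memory", "clocks_current_video"
        ],
        "metrics_collection_interval": 60
      }
    }
  },
  "force_flush_interval": 60
}
"""

config = json.loads(AGENT_CONFIG)  # raises ValueError on malformed JSON
gpu = config["metrics"]["metrics_collected"]["nvidia_gpu"]
assert config["metrics"]["namespace"] == "CWAgent"
assert len(gpu["measurement"]) == 17  # matches the 17-metrics-per-GPU cost note
print("config OK:", len(gpu["measurement"]), "measurements")
```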
The instructions in this section use Systems Manager and are intended for situations where the CloudWatch agent is not already running with existing configurations. You can check whether the CloudWatch agent is running by following the steps in Verify that the CloudWatch agent is running.

If you are already running the CloudWatch agent on the EC2 hosts where the workload is deployed and managing the agent's configurations, you can skip the instructions in this section and use your existing deployment mechanism to update the configuration. Be sure to merge the NVIDIA GPU agent configuration with your existing agent configuration, and then deploy the merged configuration. If you are using Systems Manager to store and manage the CloudWatch agent configuration, you can merge the configuration into the existing parameter value. For more information, see Managing CloudWatch agent configuration files.

Note: When you use Systems Manager to deploy the following CloudWatch agent configurations, any existing CloudWatch agent configuration on your EC2 instances is replaced or overwritten. You can modify this configuration to meet the needs of your specific environment or use case. The metrics defined in the configuration are the minimum required for the dashboard provided by the solution.

The deployment process includes the following steps:
- Step 1: Make sure the target EC2 instances have the required IAM permissions.
- Step 2: Store the recommended agent configuration file in Systems Manager Parameter Store.
- Step 3: Install the CloudWatch agent on one or more EC2 instances using a CloudFormation stack.
- Step 4: Verify that the agent configuration was applied correctly.
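Merging the NVIDIA GPU configuration into an existing agent configuration, as recommended above, amounts to adding the nvidia_gpu collector alongside the collectors already present rather than replacing the whole metrics block. A minimal sketch, with a made-up existing configuration and a shortened measurement list for brevity:

```python
# Existing agent config (example only) that already collects CPU metrics.
existing = {
    "metrics": {
        "namespace": "CWAgent",
        "metrics_collected": {
            "cpu": {"measurement": ["usage_idle"], "metrics_collection_interval": 60}
        }
    }
}

# The nvidia_gpu section from the solution's configuration (abbreviated).
nvidia_section = {
    "nvidia_gpu": {
        "measurement": ["utilization_gpu", "temperature_gpu", "power_draw"],
        "metrics_collection_interval": 60
    }
}

# Add nvidia_gpu next to the collectors already present; the cpu collector
# and the rest of the config are left untouched.
existing["metrics"]["metrics_collected"].update(nvidia_section)
print(sorted(existing["metrics"]["metrics_collected"]))  # ['cpu', 'nvidia_gpu']
```

The merged dict is what you would serialize back into the Systems Manager parameter value.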
Step 1: Make sure the target EC2 instances have the required IAM permissions

You must grant Systems Manager permission to install and configure the CloudWatch agent. You must also grant the CloudWatch agent permission to publish telemetry from the EC2 instance to CloudWatch. Make sure the IAM role attached to the instance has the CloudWatchAgentServerPolicy and AmazonSSMManagedInstanceCore IAM policies attached. After you create the role, attach it to your EC2 instances. To attach a role to an EC2 instance, follow the steps in Attach an IAM role to an instance.

Step 2: Store the recommended CloudWatch agent configuration file in Systems Manager Parameter Store

Parameter Store simplifies installing the CloudWatch agent on an EC2 instance by storing and managing configuration parameters securely, eliminating the need for hard-coded values. It enables centralized management and simplified updates of configurations across multiple instances, making the deployment process more secure and flexible.

Use the following steps to store the recommended CloudWatch agent configuration file as a parameter in Parameter Store.

To create the CloudWatch agent configuration file as a parameter:
1. Open the AWS Systems Manager console at https://console.aws.amazon.com/systems-manager/. Verify that the Region selected in the console matches the Region where the NVIDIA GPU workload is running.
2. In the navigation pane, choose Application Management, and then choose Parameter Store.
3. Follow these steps to create a new parameter for the configuration:
   a. Choose Create parameter.
   b. In the Name box, enter a name that will be used to reference the CloudWatch agent configuration file in later steps, for example, AmazonCloudWatch-NVIDIA-GPU-Configuration.
   c. (Optional) In the Description box, type a description for the parameter.
   d. For Parameter tier, choose Standard.
   e. For Type, choose String.
   f. For Data type, choose text.
   g. In the Value box, paste the JSON block listed in Agent configuration for this solution.
   h. Choose Create parameter.

Step 3: Install the CloudWatch agent and apply the configuration using a CloudFormation template

You can use AWS CloudFormation to install the agent and configure it to use the CloudWatch agent configuration you created in the previous steps.

To install and configure the CloudWatch agent for this solution:
1. Open the CloudFormation quick-create stack wizard using this link: https://console.aws.amazon.com/cloudformation/home?#/stacks/quickcreate?templateURL=https://aws-observability-solutions-prod-us-east-1.s3.us-east-1.amazonaws.com/CloudWatchAgent/CFN/v1.0.0/cw-agent-installation-template-1.0.0.json. Verify that the Region selected in the console matches the Region where the NVIDIA GPU workload is running.
2. For Stack name, enter a name to identify this stack, such as CWAgentInstallationStack.
3. In the Parameters section, specify the following:
   - For CloudWatchAgentConfigSSM, enter the name of the Systems Manager parameter for the agent configuration that you created earlier, such as AmazonCloudWatch-NVIDIA-GPU-Configuration.
   - To select the target instances, you have two options. For InstanceIds, specify a comma-delimited list of instance IDs on which you want to install the CloudWatch agent with this configuration. You can list a single instance or multiple instances. For large-scale deployments, you can instead specify a TagKey and the corresponding TagValue to target all EC2 instances associated with that tag key and value.
If you specify a TagKey, you must specify a corresponding TagValue. (For an Auto Scaling group, specify aws:autoscaling:groupName for the TagKey and the name of the Auto Scaling group for the TagValue to deploy to all instances in the group.)
4. Review the settings, and then choose Create stack.

If you want to edit the template file first to customize it, choose the Upload a template file option in the Create stack wizard to upload the edited template. For more information, see Creating a stack on the CloudFormation console.

Note: After this step completes, this Systems Manager parameter is associated with the CloudWatch agents running on the target instances. This means that:
- If the Systems Manager parameter is deleted, the agent stops.
- If the Systems Manager parameter is edited, the configuration changes are applied to the agent automatically at the scheduled frequency, which is every 30 days by default.
- If you want changes to this Systems Manager parameter applied immediately, run this step again.

For more information about associations, see Working with associations in Systems Manager.

Step 4: Verify that the agent configuration was applied correctly

You can verify that the CloudWatch agent is installed by following the steps in Verify that the CloudWatch agent is running. If the CloudWatch agent is not installed and running, make sure everything was set up correctly:
- Make sure you attached a role with the appropriate permissions to the EC2 instance, as described in Step 1: Make sure the target EC2 instances have the required IAM permissions.
- Make sure the JSON for the Systems Manager parameter is configured correctly.
- Follow the steps in Troubleshooting CloudWatch agent installation with CloudFormation.

If everything is configured correctly, the NVIDIA GPU metrics are published to CloudWatch and are available for viewing. You can check the CloudWatch console to make sure the metrics are being published correctly.

To verify that NVIDIA GPU metrics are being published to CloudWatch:
1. Open the CloudWatch console at https://console.aws.amazon.com/cloudwatch/.
2. Choose Metrics, and then choose All metrics.
3. Make sure you have selected the Region where the solution is deployed, choose Custom namespaces, and then choose CWAgent.
4. Search for the metrics mentioned in Agent configuration for this solution, such as nvidia_smi_utilization_gpu. If you find results for these metrics, they are being published to CloudWatch.

Creating the NVIDIA GPU solution dashboard

The dashboard provided by this solution displays metrics from the NVIDIA GPUs, aggregating and presenting the metrics across all instances. The dashboard shows a breakdown of the top contributors (the top 10 per metric widget) for each metric. This helps you quickly identify outliers or instances that contribute significantly to the observed metrics.

To create the dashboard, you can use the following options:
- Use the CloudWatch console to create the dashboard.
- Use the AWS CloudFormation console to deploy the dashboard.
- Download the AWS CloudFormation infrastructure-as-code and integrate it into your continuous integration (CI) automation.

Using the CloudWatch console to create the dashboard lets you preview it before creating it and incurring costs.

Note: The dashboard created with CloudFormation in this solution displays metrics from the Region where the solution is deployed.
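The console search in Step 4 looks for names like nvidia_smi_utilization_gpu rather than the bare measurement names from the agent configuration. Based on the nvidia_smi_utilization_gpu example above, the published names appear to prepend an nvidia_smi_ prefix to the configured measurements; the prefix behavior is assumed from that example, so verify it against the metrics you actually see in the console. A tiny helper to derive the names to search for:

```python
# Measurements as they appear in the agent configuration (abbreviated list).
measurements = ["utilization_gpu", "temperature_gpu", "power_draw"]

def published_names(measurements, prefix="nvidia_smi_"):
    """Map configured measurement names to the metric names to search for
    in the CWAgent namespace (prefix assumed from the example above)."""
    return [prefix + m for m in measurements]

print(published_names(measurements))
# ['nvidia_smi_utilization_gpu', 'nvidia_smi_temperature_gpu', 'nvidia_smi_power_draw']
```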
Make sure the CloudFormation stack is created in the same Region where the NVIDIA GPU metrics are published. If you specified a custom namespace other than CWAgent in the CloudWatch agent configuration, you must change the CloudFormation template for the dashboard, replacing CWAgent with the custom namespace you are using.

To create the dashboard using the CloudWatch console:
1. Open the CloudWatch console and go to Create dashboard using this link: https://console.aws.amazon.com/cloudwatch/home?#dashboards?dashboardTemplate=NvidiaGpuOnEc2&referrer=os-catalog. Verify that the Region selected in the console matches the Region where the NVIDIA GPU workload is running.
2. Enter the dashboard name, and then choose Create dashboard. To easily distinguish this dashboard from similar dashboards in other Regions, we recommend including the Region name in the dashboard name, for example, NVIDIA-GPU-Dashboard-us-east-1.
3. Preview the dashboard and choose Save to create it.

To create the dashboard using CloudFormation:
1. Open the CloudFormation quick-create stack wizard using this link: https://console.aws.amazon.com/cloudformation/home?#/stacks/quickcreate?templateURL=https://aws-observability-solutions-prod-us-east-1.s3.us-east-1.amazonaws.com/NVIDIA_GPU_EC2/CloudWatch/CFN/v1.0.0/dashboard-template-1.0.0.json. Verify that the Region selected in the console matches the Region where the NVIDIA GPU workload is running.
2. For Stack name, enter a name to identify this stack, such as NVIDIA-GPU-DashboardStack.
3. In the Parameters section, specify the dashboard name in the DashboardName parameter. To easily distinguish this dashboard from similar dashboards in other Regions, we recommend including the Region name in the dashboard name, for example, NVIDIA-GPU-Dashboard-us-east-1.
4. Acknowledge the access capabilities related to transforms in the Capabilities and transforms section. Note that CloudFormation does not add any IAM resources.
5. Review the settings, and then choose Create stack.
6. When the stack status shows CREATE_COMPLETE, choose the Resources tab in the created stack, and then choose the link shown under Physical ID to access the dashboard. Alternatively, you can access the dashboard directly in the CloudWatch console by choosing Dashboards in the left navigation pane and finding the dashboard name in the Custom dashboards section.

If you want to edit the template file to customize it for a specific need, you can use the Upload a template file option in the Create stack wizard to upload the edited template. For more information, see Creating a stack on the CloudFormation console. You can use this link to download the template: https://aws-observability-solutions-prod-us-east-1.s3.us-east-1.amazonaws.com/NVIDIA_GPU_EC2/CloudWatch/CFN/v1.0.0/dashboard-template-1.0.0.json.

Get started with the NVIDIA GPU dashboard

The following are some tasks you can perform to explore the new NVIDIA GPU dashboard. These tasks let you validate that the dashboard is working correctly and give you hands-on experience using it to monitor NVIDIA GPUs. As you work through them, you will become familiar with navigating the dashboard and interpreting the visualized metrics.

Analyze GPU utilization
In the Utilization section, find the GPU Utilization and Memory Utilization widgets. They show, respectively, the percentage of time the GPU is actively being used for computation and the percentage of global memory being read or written. High utilization can indicate performance bottlenecks or the need for additional GPU resources.
Analyze GPU memory usage
In the Memory section, find the Memory Total, Memory Used, and Memory Free widgets. These widgets provide insight into the total memory capacity of the GPUs and how much memory is currently consumed or available. Memory pressure can lead to performance problems or out-of-memory errors, so it is essential to monitor these metrics and ensure the workload has enough memory available.

Monitor temperature and power consumption
In the Temperature/Power section, find the GPU Temperature and Power Draw widgets. These metrics are essential to ensure the GPUs are operating within safe temperature and power limits.

Identify encoder performance
In the Encoder section, find the Encoder Session Count, Average FPS, and Average Latency widgets. These metrics are relevant if you run video encoding workloads on your GPUs. Monitoring them is essential to ensure the encoders operate optimally and to identify potential bottlenecks or performance problems.

Check PCIe link status
In the PCIe section, find the PCIe Link Generation and PCIe Link Width widgets. These metrics provide information about the PCIe link connecting the GPU to the host system. Make sure the link operates at the expected generation and width to avoid performance limitations caused by PCIe bottlenecks.

Analyze GPU clocks
In the Clock section, find the Graphics Clock, SM Clock, Memory Clock, and Video Clock widgets. These metrics show the current operating frequencies of the various GPU components. Monitoring these clocks can help identify potential issues with GPU clock scaling or throttling, which can impact performance.
https://docs.aws.amazon.com/pt_br/AmazonCloudWatch/latest/monitoring/Solution-NVIDIA-GPU-On-EC2.html#Solution-NVIDIA-GPU-On-EC2-Requirements | Solução do CloudWatch: workload da GPU da NVIDIA no Amazon EC2 - Amazon CloudWatch Solução do CloudWatch: workload da GPU da NVIDIA no Amazon EC2 - Amazon CloudWatch Documentação Amazon CloudWatch Guia do usuário Requisitos Benefícios Configuração do agente do CloudWatch para esta solução Implantação do agente para a sua solução Criação do painel da solução com a GPU da NVIDIA Solução do CloudWatch: workload da GPU da NVIDIA no Amazon EC2 Esta solução auxilia na configuração da coleta de métricas prontas para uso com agentes do CloudWatch para workloads da GPU da NVIDIA que estão sendo executadas em instâncias do EC2. Além disso, a solução ajuda na configuração de um painel do CloudWatch configurado previamente. Para obter informações gerais sobre todas as soluções de observabilidade do CloudWatch, consulte Soluções de observabilidade do CloudWatch . Tópicos Requisitos Benefícios Configuração do agente do CloudWatch para esta solução Implantação do agente para a sua solução Criação do painel da solução com a GPU da NVIDIA Requisitos Esta solução é aplicável nas seguintes condições: Computação: Amazon EC2 Fornecimento de suporte para até 500 GPUs em todas as instâncias do EC2 em uma Região da AWS específica Versão mais recente do agente do CloudWatch SSM Agent instalado na instância do EC2 A instância do EC2 deve ter um driver da NVIDIA instalado. Os drivers da NVIDIA são instalados previamente em algumas imagens de máquina da Amazon (AMIs). Caso contrário, é possível instalar o driver manualmente. Para obter mais informações, consulte Instalação de drivers NVIDIA em instâncias Linux . nota O AWS Systems Manager (SSM Agent) está instalado previamente em algumas imagens de máquinas da Amazon (AMIs) fornecidas pela AWS e por entidades externas confiáveis. 
Se o agente não estiver instalado, você poderá instalá-lo manualmente usando o procedimento adequado para o seu tipo de sistema operacional. Instalar e desinstalar o SSM Agent manualmente em instâncias do EC2 para Linux Instalar e desinstalar o SSM Agent manualmente em instâncias do EC2 para macOS Instalar e desinstalar o SSM Agent manualmente em instâncias do EC2 para Windows Server Benefícios A solução disponibiliza monitoramento da NVIDIA, fornecendo insights valiosos para os seguintes casos de uso: Análise do uso da GPU e da memória para identificar gargalos de performance ou a necessidade de obtenção de recursos adicionais. Monitoramento da temperatura e do consumo de energia para garantir que as GPUs operem dentro dos limites seguros. Avaliação da performance do codificador para workloads de vídeo na GPU. Verificação da conectividade PCIe para garantir que atendam à geração e à largura esperadas. Monitoramento das velocidades do relógio da GPU para identificar problemas de ajuste de escala ou de controle de utilização. A seguir, apresentamos as principais vantagens da solução: Automatiza a coleta de métricas para a NVIDIA usando a configuração do agente do CloudWatch, o que elimina a necessidade de instrumentação manual. Fornece um painel do CloudWatch consolidado e configurado previamente para as métricas da NVIDIA. O painel gerenciará automaticamente as métricas das novas instâncias do EC2 para a NVIDIA que foram configuradas usando a solução, mesmo que essas métricas não estejam disponíveis no momento de criação do painel. A imagem apresentada a seguir é um exemplo do painel para esta solução. Custos Esta solução cria e usa recursos em sua conta. A cobrança será realizada com base no uso padrão, que inclui o seguinte: Todas as métricas coletadas pelo agente do CloudWatch são cobradas como métricas personalizadas. O número de métricas usadas por esta solução depende do número de hosts do EC2. 
Cada host do EC2 configurado para a solução publica um total de 17 métricas por GPU. Um painel personalizado. As operações da API solicitadas pelo agente do CloudWatch para publicar as métricas. Com a configuração padrão para esta solução, o agente do CloudWatch chama a operação PutMetricData uma vez por minuto para cada host do EC2. Isso significa que a API PutMetricData será chamada 30*24*60=43,200 em um mês com 30 dias para cada host do EC2. Para obter mais informações sobre os preços do CloudWatch, consulte Preço do Amazon CloudWatch . A calculadora de preços pode ajudar a estimar os custos mensais aproximados para o uso desta solução. Como usar a calculadora de preços para estimar os custos mensais da solução Abra a calculadora de preços do Amazon CloudWatch . Em Escolher uma região , selecione a região em que você gostaria de implantar a solução. Na seção Métricas , em Número de métricas , insira 17 * average number of GPUs per EC2 host * number of EC2 instances configured for this solution . Na seção APIs , em Número de solicitações de API , insira 43200 * number of EC2 instances configured for this solution . Por padrão, o agente do CloudWatch executa uma operação PutMetricData a cada minuto para cada host do EC2. Na seção Painéis e alarmes , em Número de painéis , insira 1 . É possível visualizar os custos mensais estimados na parte inferior da calculadora de preços. Configuração do agente do CloudWatch para esta solução O agente do CloudWatch é um software que opera de maneira contínua e autônoma em seus servidores e em ambientes com contêineres. Ele coleta métricas, logs e rastreamentos da infraestrutura e das aplicações e os envia para o CloudWatch e para o X-Ray. Para obter mais informações sobre o agente do CloudWatch, consulte Coleta de métricas, logs e rastreamentos usando o agente do CloudWatch . 
In this solution, the agent configuration collects a set of metrics to help you get started with monitoring and observability for NVIDIA GPUs. The CloudWatch agent can be configured to collect more NVIDIA GPU metrics than the dashboard displays by default. For a list of all the NVIDIA GPU metrics you can collect, see Collect NVIDIA GPU metrics.

Agent configuration for this solution

The metrics collected by the agent are defined in the agent configuration. The solution provides an agent configuration that collects the recommended metrics with dimensions suitable for the solution dashboard.

Use the following CloudWatch agent configuration on EC2 instances equipped with NVIDIA GPUs. The configuration is stored as a parameter in the SSM Parameter Store, as described later in Step 2: Store the recommended CloudWatch agent configuration file in Systems Manager Parameter Store.

{
  "metrics": {
    "namespace": "CWAgent",
    "append_dimensions": {
      "InstanceId": "${aws:InstanceId}"
    },
    "metrics_collected": {
      "nvidia_gpu": {
        "measurement": [
          "utilization_gpu",
          "temperature_gpu",
          "power_draw",
          "utilization_memory",
          "fan_speed",
          "memory_total",
          "memory_used",
          "memory_free",
          "pcie_link_gen_current",
          "pcie_link_width_current",
          "encoder_stats_session_count",
          "encoder_stats_average_fps",
          "encoder_stats_average_latency",
          "clocks_current_graphics",
          "clocks_current_sm",
          "clocks_current_memory",
          "clocks_current_video"
        ],
        "metrics_collection_interval": 60
      }
    }
  },
  "force_flush_interval": 60
}

Deploying the agent for your solution

There are several approaches to installing the CloudWatch agent, depending on the use case. We recommend using Systems Manager for this solution. It provides a console experience and simplifies managing a fleet of managed servers within a single AWS account.
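If you install the agent manually rather than through the CloudFormation stack described later, you can point a running agent at the configuration stored in Parameter Store with the agent's control script. A sketch, assuming an EC2 Linux instance and the example parameter name used throughout this solution:

```shell
# Fetch the agent configuration from the SSM Parameter Store parameter
# and (re)start the agent with it. Run this on the EC2 instance itself.
# The parameter name below is the example used throughout this solution.
sudo /opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-ctl \
    -a fetch-config -m ec2 \
    -c ssm:AmazonCloudWatch-NVIDIA-GPU-Configuration -s
```

The `-s` flag restarts the agent after the new configuration is fetched, so the NVIDIA GPU metrics begin flowing without a separate restart step.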
The instructions in this section use Systems Manager and are intended for situations where the CloudWatch agent is not already running with existing configurations. You can check whether the CloudWatch agent is running by following the steps in Verify that the CloudWatch agent is running.

If you already run the CloudWatch agent on the EC2 hosts where the workload is deployed and manage its configurations, you can skip the instructions in this section and use your existing deployment mechanism to update the configuration. Be sure to merge the NVIDIA GPU agent configuration with your existing agent configuration, and then deploy the merged configuration. If you use Systems Manager to store and manage the CloudWatch agent configuration, you can merge the configuration into the existing parameter value. For more information, see Managing CloudWatch agent configuration files.

Note: When you use Systems Manager to deploy the following CloudWatch agent configurations, any existing CloudWatch agent configuration on your EC2 instances is replaced or overwritten. You can modify this configuration to meet the needs of your environment or specific use case. The metrics defined in the configuration are the minimum required for the dashboard provided by the solution.

The deployment process includes the following steps:

Step 1: Ensure the target EC2 instances have the required IAM permissions.
Step 2: Store the recommended agent configuration file in Systems Manager Parameter Store.
Step 3: Install the CloudWatch agent on one or more EC2 instances by using a CloudFormation stack.
Step 4: Verify that the agent is configured correctly.
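For scripted or repeatable rollouts, Steps 1 through 3 below also have AWS CLI equivalents. A hedged sketch, assuming AWS CLI v2 with credentials for the target Region; the role name, tag key and value, and local file name are examples, and the template URL is the one given in Step 3 (verify the template's parameter names against the template before relying on this):

```shell
# Step 1: attach the required managed policies to the instance role.
# Replace MyInstanceRole with the IAM role attached to your instances.
aws iam attach-role-policy --role-name MyInstanceRole \
    --policy-arn arn:aws:iam::aws:policy/CloudWatchAgentServerPolicy
aws iam attach-role-policy --role-name MyInstanceRole \
    --policy-arn arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore

# Step 2: store the agent configuration (saved locally as
# nvidia-gpu-config.json, an example file name) in Parameter Store.
aws ssm put-parameter \
    --name AmazonCloudWatch-NVIDIA-GPU-Configuration \
    --type String \
    --value file://nvidia-gpu-config.json

# Step 3: create the agent-installation stack, targeting instances by tag.
aws cloudformation create-stack \
    --stack-name CWAgentInstallationStack \
    --template-url https://aws-observability-solutions-prod-us-east-1.s3.us-east-1.amazonaws.com/CloudWatchAgent/CFN/v1.0.0/cw-agent-installation-template-1.0.0.json \
    --parameters \
        ParameterKey=CloudWatchAgentConfigSSM,ParameterValue=AmazonCloudWatch-NVIDIA-GPU-Configuration \
        ParameterKey=TagKey,ParameterValue=Monitoring \
        ParameterKey=TagValue,ParameterValue=enabled
```

The console procedures that follow accomplish the same result and let you review each setting as you go.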
Step 1: Ensure the target EC2 instances have the required IAM permissions

You must grant Systems Manager permission to install and configure the CloudWatch agent. You must also grant the CloudWatch agent permission to publish telemetry from the EC2 instance to CloudWatch. Make sure the IAM role attached to the instance has the CloudWatchAgentServerPolicy and AmazonSSMManagedInstanceCore IAM policies attached. After you create the role, attach it to your EC2 instances. To attach a role to an EC2 instance, follow the steps in Attach an IAM role to an instance.

Step 2: Store the recommended CloudWatch agent configuration file in Systems Manager Parameter Store

Parameter Store simplifies installing the CloudWatch agent on an EC2 instance by storing and managing configuration parameters securely, eliminating the need for hard-coded values. This enables a more secure and flexible deployment process through centralized management and simplified updates of configurations across multiple instances.

Use the following steps to store the recommended CloudWatch agent configuration file as a parameter in Parameter Store.

To create the CloudWatch agent configuration file as a parameter

Open the AWS Systems Manager console at https://console.aws.amazon.com/systems-manager/.
Verify that the Region selected in the console matches the Region where your NVIDIA GPU workload is running.
In the navigation pane, choose Application Management, then Parameter Store.
Follow these steps to create a new parameter for the configuration:
Choose Create parameter.
In the Name box, enter a name that you will use to reference the CloudWatch agent configuration file in later steps, for example, AmazonCloudWatch-NVIDIA-GPU-Configuration.
(Optional) In the Description box, type a description for the parameter.
For Parameter tier, choose Standard.
For Type, choose String.
For Data type, choose text.
In the Value box, paste the JSON block listed in Agent configuration for this solution.
Choose Create parameter.

Step 3: Install the CloudWatch agent and apply the configuration by using a CloudFormation template

You can use AWS CloudFormation to install the agent and configure it to use the CloudWatch agent configuration you created in the previous steps.

To install and configure the CloudWatch agent for this solution

Open the CloudFormation quick create stack wizard using this link: https://console.aws.amazon.com/cloudformation/home?#/stacks/quickcreate?templateURL=https://aws-observability-solutions-prod-us-east-1.s3.us-east-1.amazonaws.com/CloudWatchAgent/CFN/v1.0.0/cw-agent-installation-template-1.0.0.json.
Verify that the Region selected in the console matches the Region where your NVIDIA GPU workload is running.
For Stack name, enter a name to identify this stack, such as CWAgentInstallationStack.
In the Parameters section, specify the following:
For CloudWatchAgentConfigSSM, enter the name of the Systems Manager parameter for the agent configuration that you created earlier, such as AmazonCloudWatch-NVIDIA-GPU-Configuration.
To select the target instances, you have two options:
For InstanceIds, specify a comma-delimited list of instance IDs on which you want to install the CloudWatch agent with this configuration. You can list a single instance or multiple instances.
For large-scale deployments, you can specify a TagKey and the corresponding TagValue to target all EC2 instances associated with that tag key and value.
If you specify a TagKey, you must specify a corresponding TagValue. (For an Auto Scaling group, specify aws:autoscaling:groupName for the TagKey and the name of the Auto Scaling group for the TagValue to deploy to all instances in the Auto Scaling group.)
Review the settings, then choose Create stack.

If you want to edit the template file first to customize it, choose the Upload a template file option in the Create stack wizard to upload the edited template. For more information, see Creating a stack on the CloudFormation console.

Note: After this step is complete, this Systems Manager parameter is associated with the CloudWatch agents running on the target instances. This means that:
If the Systems Manager parameter is deleted, the agent is stopped.
If the Systems Manager parameter is edited, the configuration changes are applied to the agent automatically at the scheduled frequency, which is 30 days by default.
If you want to apply changes to this Systems Manager parameter immediately, run this step again. For more information about associations, see Working with associations in Systems Manager.

Step 4: Verify that the agent is configured correctly

You can verify that the CloudWatch agent is installed by following the steps in Verify that the CloudWatch agent is running. If the CloudWatch agent is not installed and running, make sure everything is configured correctly:
Make sure you attached a role with the appropriate permissions to the EC2 instance, as described in Step 1: Ensure the target EC2 instances have the required IAM permissions.
Make sure the JSON for the Systems Manager parameter is configured correctly.
Follow the steps in Troubleshooting installation of the CloudWatch agent with CloudFormation.

If everything is configured correctly, the NVIDIA GPU metrics are published to CloudWatch and are available for viewing. You can check the CloudWatch console to confirm that the metrics are being published.

To verify that NVIDIA GPU metrics are being published to CloudWatch

Open the CloudWatch console at https://console.aws.amazon.com/cloudwatch/.
Choose Metrics, then All metrics.
Make sure you selected the Region where the solution was deployed, choose Custom namespaces, then choose CWAgent.
Search for the metrics mentioned in Agent configuration for this solution, such as nvidia_smi_utilization_gpu. If you find results for these metrics, they are being published to CloudWatch.

Creating the solution dashboard for NVIDIA GPUs

The dashboard provided by this solution presents NVIDIA GPU metrics by aggregating and displaying them across all instances. The dashboard shows a breakdown of the top contributors (the top ten per metric widget) for each metric. This helps you quickly identify outliers or instances that contribute significantly to the observed metrics.

To create the dashboard, you can use any of the following options:
Use the CloudWatch console to create the dashboard.
Use the AWS CloudFormation console to deploy the dashboard.
Download the AWS CloudFormation infrastructure-as-code and integrate it into your continuous integration (CI) automation.

Using the CloudWatch console to create the dashboard lets you preview it before creating it and incurring costs.

Note: The dashboard created with CloudFormation in this solution displays metrics from the Region where the solution is deployed.
Make sure the CloudFormation stack is created in the same Region where the NVIDIA GPU metrics are published. If you specified a custom namespace other than CWAgent in the CloudWatch agent configuration, you must change the CloudFormation template for the dashboard, replacing CWAgent with the custom namespace you are using.

To create the dashboard using the CloudWatch console

Open the CloudWatch console and go to Create dashboard using this link: https://console.aws.amazon.com/cloudwatch/home?#dashboards?dashboardTemplate=NvidiaGpuOnEc2&referrer=os-catalog.
Verify that the Region selected in the console matches the Region where your NVIDIA GPU workload is running.
Enter the dashboard name, then choose Create dashboard. To easily distinguish this dashboard from similar dashboards in other Regions, we recommend including the Region name in the dashboard name, for example, NVIDIA-GPU-Dashboard-us-east-1.
Preview the dashboard and choose Save to create it.

To create the dashboard using CloudFormation

Open the CloudFormation quick create stack wizard using this link: https://console.aws.amazon.com/cloudformation/home?#/stacks/quickcreate?templateURL=https://aws-observability-solutions-prod-us-east-1.s3.us-east-1.amazonaws.com/NVIDIA_GPU_EC2/CloudWatch/CFN/v1.0.0/dashboard-template-1.0.0.json.
Verify that the Region selected in the console matches the Region where your NVIDIA GPU workload is running.
For Stack name, enter a name to identify this stack, such as NVIDIA-GPU-DashboardStack.
In the Parameters section, specify the dashboard name in the DashboardName parameter. To easily distinguish this dashboard from similar dashboards in other Regions, we recommend including the Region name in the dashboard name, for example, NVIDIA-GPU-Dashboard-us-east-1.
Acknowledge the access capabilities related to transforms in the Capabilities and transforms section.
Keep in mind that CloudFormation does not add IAM resources here.
Review the settings, then choose Create stack.
When the stack status shows CREATE_COMPLETE, choose the Resources tab on the created stack, then choose the link shown under Physical ID to access the dashboard. Alternatively, you can access the dashboard directly in the CloudWatch console by choosing Dashboards in the left navigation pane and finding the dashboard name in the Custom dashboards section.

If you want to edit the template file to customize it for a specific need, you can use the Upload a template file option in the Create stack wizard to upload the edited template. For more information, see Creating a stack on the CloudFormation console. You can use this link to download the template: https://aws-observability-solutions-prod-us-east-1.s3.us-east-1.amazonaws.com/NVIDIA_GPU_EC2/CloudWatch/CFN/v1.0.0/dashboard-template-1.0.0.json.

Getting started with the NVIDIA GPU dashboard

The following are some tasks you can perform to explore your new NVIDIA GPU dashboard. These tasks let you validate that the dashboard works correctly and give you hands-on experience using it to monitor your NVIDIA GPUs. As you work through them, you become familiar with navigating the dashboard and interpreting the visualized metrics.

Analyze GPU utilization

In the Utilization section, find the GPU Utilization and Memory Utilization widgets. They show, respectively, the percentage of time the GPU is actively used for computation and the percentage of global memory being read or written. High utilization can indicate potential performance bottlenecks or the need for additional GPU resources.
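Beyond the dashboard widgets, the same utilization metric can be inspected directly with the AWS CLI. A sketch, assuming the default CWAgent namespace and the nvidia_smi_ metric-name prefix shown earlier; the instance ID is a placeholder, the exact dimension set may differ in your setup (match what list-metrics reports), and the date invocation assumes GNU date:

```shell
# List the published GPU utilization metric and its dimensions.
aws cloudwatch list-metrics \
    --namespace CWAgent \
    --metric-name nvidia_smi_utilization_gpu

# Average GPU utilization over the last hour for one instance.
# i-0123456789abcdef0 is a placeholder; use the dimensions that
# list-metrics reports for your environment.
aws cloudwatch get-metric-statistics \
    --namespace CWAgent \
    --metric-name nvidia_smi_utilization_gpu \
    --dimensions Name=InstanceId,Value=i-0123456789abcdef0 \
    --statistics Average --period 300 \
    --start-time "$(date -u -d '1 hour ago' +%Y-%m-%dT%H:%M:%SZ)" \
    --end-time "$(date -u +%Y-%m-%dT%H:%M:%SZ)"
```

This can be useful for spot checks or for scripting comparisons across instances outside the console.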
Analyze GPU memory usage

In the Memory section, find the Memory Total, Memory Used, and Memory Free widgets. These widgets provide insight into the total memory capacity of your GPUs and how much memory is currently consumed or available. Memory pressure can lead to performance problems or out-of-memory errors, so it is essential to monitor these metrics and ensure your workload has enough available memory.

Monitor temperature and power draw

In the Temperature/Power section, find the GPU Temperature and Power Draw widgets. These metrics are essential for ensuring the GPUs operate within safe temperature and power limits.

Identify encoder performance

In the Encoder section, find the Encoder Session Count, Average FPS, and Average Latency widgets. These metrics are relevant if you run video encoding workloads on your GPUs. Monitoring them is essential for ensuring the encoders operate optimally and for identifying potential bottlenecks or performance problems.

Check PCIe link status

In the PCIe section, find the PCIe Link Generation and PCIe Link Width widgets. These metrics provide information about the PCIe link connecting the GPU to the host system. Make sure the link operates at the expected generation and width to avoid potential performance limitations caused by PCIe bottlenecks.

Analyze GPU clocks

In the Clock section, find the Graphics Clock, SM Clock, Memory Clock, and Video Clock widgets. These metrics show the current operating frequencies of the various GPU components.
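The temperature and power widgets can be complemented with a CloudWatch alarm. A hedged sketch that alarms when average GPU temperature stays above 85 °C for three consecutive minutes; the threshold, instance ID, and dimension set are illustrative, the metric name assumes the nvidia_smi_ prefix shown earlier, and no alarm action is configured:

```shell
# Alarm on sustained high GPU temperature for one instance.
# Threshold and dimensions are examples; add --alarm-actions (for
# example, an SNS topic ARN) to be notified when the alarm fires.
aws cloudwatch put-metric-alarm \
    --alarm-name nvidia-gpu-temperature-high \
    --namespace CWAgent \
    --metric-name nvidia_smi_temperature_gpu \
    --dimensions Name=InstanceId,Value=i-0123456789abcdef0 \
    --statistic Average --period 60 --evaluation-periods 3 \
    --threshold 85 --comparison-operator GreaterThanThreshold
```

Similar alarms can be defined for power draw or memory usage by swapping the metric name and threshold.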
Monitoring these clocks can help you identify potential GPU clock scaling or throttling issues that can impact performance.
CloudWatch solution: NVIDIA GPU workload on Amazon EC2

This solution helps you configure out-of-the-box metric collection with CloudWatch agents for NVIDIA GPU workloads running on EC2 instances, and helps you set up a preconfigured CloudWatch dashboard. For general information about all CloudWatch observability solutions, see CloudWatch observability solutions.

Requirements

This solution applies to the following conditions:

Compute: Amazon EC2
Supports up to 500 GPUs across all EC2 instances in a given AWS Region
Latest version of the CloudWatch agent
SSM Agent installed on the EC2 instance
The EC2 instance must have an NVIDIA driver installed. NVIDIA drivers are preinstalled on some Amazon Machine Images (AMIs); otherwise, you can install the driver manually. For more information, see Install NVIDIA drivers on Linux instances.

Note: AWS Systems Manager (SSM Agent) is preinstalled on some Amazon Machine Images (AMIs) provided by AWS and by trusted third parties.
Lembre-se de que o CloudFormation não adiciona recursos do IAM. Analise as configurações e, em seguida, escolha Criar pilha . Quando o status da pilha mostrar CREATE_COMPLETE , selecione a guia Recursos na pilha criada e, em seguida, escolha o link exibido em ID físico para acessar o painel. Como alternativa, é possível acessar o painel diretamente no console do CloudWatch ao selecionar Painéis no painel de navegação do console à esquerda e localizar o nome do painel na seção Painéis personalizados . Se você desejar editar o arquivo de modelo para personalizá-lo para atender a uma necessidade específica, é possível usar a opção Fazer upload de um arquivo de modelo no Assistente de criação de pilha para fazer o upload do modelo editado. Para obter mais informações, consulte Criar uma pilha no console do CloudFormation . É possível usar este link para fazer download do modelo: https://aws-observability-solutions-prod-us-east-1.s3.us-east-1.amazonaws.com/NVIDIA_GPU_EC2/CloudWatch/CFN/v1.0.0/dashboard-template-1.0.0.json . Como começar a usar o painel da GPU da NVIDIA A seguir, apresentamos algumas tarefas que você pode realizar para explorar o novo painel da GPU da NVIDIA. Essas tarefas permitem a validação do funcionamento correto do painel e fornecem uma experiência prática ao usá-lo para monitorar as GPUs da NVIDIA. À medida que realiza as tarefas, você se familiarizará com a navegação no painel e com a interpretação das métricas visualizadas. Análise da utilização da GPU Na seção Utilização , localize os widgets de Utilização da GPU e Utilização da memória . Eles mostram, respectivamente, a porcentagem de tempo em que a GPU está sendo ativamente usada para cálculos e a porcentagem de uso da memória global para leitura ou gravação. Uma utilização elevada pode indicar possíveis gargalos de performance ou a necessidade de obtenção de recursos adicionais de GPU. 
Análise do uso de memória da GPU Na seção Memória , localize os widgets Memória total , Memória usada e Memória livre . Esses widgets fornecem insights sobre a capacidade total de memória das GPUs, além de indicar a quantidade de memória que, no momento, está sendo consumida ou disponível. A pressão de memória pode acarretar em problemas de performance ou erros por falta de memória, portanto, é fundamental monitorar essas métricas e garantir que a workload tenha memória suficiente disponível. Monitoramento da temperatura e do consumo de energia Na seção Temperatura/Potência , localize os widgets de Temperatura da CPU e Consumo de energia . Essas métricas são essenciais para garantir que as GPUs estejam operando dentro dos limites seguros de temperatura e consumo de energia. Identificação da performance do codificador Na seção Codificador , localize os widgets de Contagem de sessões do codificador , Média de FPS e Latência média . Essas métricas são relevantes se você estiver executando workloads de codificação de vídeo em suas GPUs. O monitoramento dessas métricas é fundamental para garantir a operação ideal dos codificadores e identificar possíveis gargalos ou problemas de performance. Verificação do status do link do PCIe Na seção PCIe , localize os widgets de Geração do link do PCIe e de Largura do link do PCIe . Essas métricas fornecem informações sobre o link do PCIe que estabelece conexão entre a GPU e o sistema de host. Certifique-se de que o link esteja operando com a geração e a largura esperadas para evitar possíveis limitações de performance causadas por gargalos do PCIe. Análise dos relógios da GPU Na seção Relógio , localize os widgets de Relógio de gráficos , Relógio de SM , Relógio de memória e Relógio de vídeo . Essas métricas apresentam as frequências operacionais atuais dos diversos componentes da GPU. 
O monitoramento desses relógios pode ajudar a identificar possíveis problemas com o ajuste de escala ou com o controle de utilização do relógio da GPU, que podem impactar a performance. O Javascript está desativado ou não está disponível no seu navegador. Para usar a documentação da AWS, o Javascript deve estar ativado. Consulte as páginas de Ajuda do navegador para obter instruções. Convenções do documento Workload do NGINX no EC2 Workload do Kafka no EC2 Essa página foi útil? - Sim Obrigado por nos informar que estamos fazendo um bom trabalho! Se tiver tempo, conte-nos sobre o que você gostou para que possamos melhorar ainda mais. Essa página foi útil? - Não Obrigado por nos informar que precisamos melhorar a página. Lamentamos ter decepcionado você. Se tiver tempo, conte-nos como podemos melhorar a documentação. | 2026-01-13T09:29:25 |
https://docs.aws.amazon.com/pt_br/AmazonCloudWatch/latest/monitoring/Solution-NVIDIA-GPU-On-EC2.html#Solution-NVIDIA-GPU-Dashboard

CloudWatch solution: NVIDIA GPU workload on Amazon EC2

This solution helps you set up out-of-the-box metric collection with CloudWatch agents for NVIDIA GPU workloads running on EC2 instances, and helps you set up a preconfigured CloudWatch dashboard. For general information about all CloudWatch observability solutions, see CloudWatch observability solutions.

Topics: Requirements; Benefits; CloudWatch agent configuration for this solution; Deploying the agent for your solution; Creating the NVIDIA GPU solution dashboard

Requirements

This solution applies under the following conditions:

- Compute: Amazon EC2
- Supports up to 500 GPUs across all EC2 instances in a specific AWS Region
- Latest version of the CloudWatch agent
- SSM Agent installed on the EC2 instance
- The EC2 instance must have an NVIDIA driver installed. NVIDIA drivers come preinstalled on some Amazon Machine Images (AMIs); otherwise, you can install the driver manually. For more information, see Install NVIDIA drivers on Linux instances.

Note: The AWS Systems Manager agent (SSM Agent) comes preinstalled on some AMIs provided by AWS and by trusted third parties.
If the agent is not installed, you can install it manually using the procedure for your operating system type:

- Manually install and uninstall SSM Agent on EC2 instances for Linux
- Manually install and uninstall SSM Agent on EC2 instances for macOS
- Manually install and uninstall SSM Agent on EC2 instances for Windows Server

Benefits

The solution provides NVIDIA monitoring with valuable insights for the following use cases:

- Analyze GPU and memory usage to identify performance bottlenecks or the need for additional GPU resources.
- Monitor temperature and power draw to verify that the GPUs operate within safe limits.
- Evaluate encoder performance for video workloads on the GPU.
- Verify PCIe connectivity to confirm it meets the expected generation and width.
- Monitor GPU clock speeds to identify clock scaling or throttling issues.

Key advantages of the solution:

- It automates metric collection for NVIDIA using the CloudWatch agent configuration, eliminating the need for manual instrumentation.
- It provides a consolidated, preconfigured CloudWatch dashboard for NVIDIA metrics. The dashboard automatically picks up metrics from new EC2 instances configured for NVIDIA through the solution, even if those metrics don't yet exist when the dashboard is created.

The following image is an example of the dashboard for this solution.

Costs

This solution creates and uses resources in your account. You are charged for standard usage, including the following:

- All metrics collected by the CloudWatch agent are billed as custom metrics. The number of metrics used by this solution depends on the number of EC2 hosts.
  Each EC2 host configured for the solution publishes 17 metrics per GPU.
- One custom dashboard.
- The API operations that the CloudWatch agent calls to publish the metrics. With the default configuration for this solution, the CloudWatch agent calls the PutMetricData operation once per minute for each EC2 host. This means the PutMetricData API is called 30*24*60 = 43,200 times in a 30-day month for each EC2 host.

For more information about CloudWatch pricing, see Amazon CloudWatch Pricing. The pricing calculator can help you estimate the approximate monthly cost of using this solution.

To use the pricing calculator to estimate the solution's monthly costs:

1. Open the Amazon CloudWatch pricing calculator.
2. For Choose a Region, select the Region where you want to deploy the solution.
3. In the Metrics section, for Number of Metrics, enter 17 * average number of GPUs per EC2 host * number of EC2 instances configured for this solution.
4. In the APIs section, for Number of API requests, enter 43200 * number of EC2 instances configured for this solution. By default, the CloudWatch agent performs one PutMetricData operation per minute for each EC2 host.
5. In the Dashboards and Alarms section, for Number of Dashboards, enter 1.
6. You can see the estimated monthly costs at the bottom of the pricing calculator.

CloudWatch agent configuration for this solution

The CloudWatch agent is software that runs continuously and autonomously on your servers and in containerized environments. It collects metrics, logs, and traces from your infrastructure and applications and sends them to CloudWatch and to X-Ray. For more information about the CloudWatch agent, see Collect metrics, logs, and traces with the CloudWatch agent.
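The arithmetic above can be sketched as a quick estimate. The instance and GPU counts below are hypothetical examples, not values from the document:

```python
# Rough monthly usage inputs for this solution, following the pricing
# guidance in this section. The fleet sizes here are hypothetical examples.
instances = 4            # EC2 instances configured for the solution (example)
avg_gpus_per_host = 2    # average GPUs per EC2 host (example)

# Each configured host publishes 17 custom metrics per GPU.
custom_metrics = 17 * avg_gpus_per_host * instances

# The agent calls PutMetricData once per minute per host:
# 30 days * 24 hours * 60 minutes = 43,200 calls per host per month.
api_calls_per_host = 30 * 24 * 60
api_calls = api_calls_per_host * instances

print(custom_metrics)      # 136
print(api_calls_per_host)  # 43200
print(api_calls)           # 172800
```

These are the same numbers you would enter in the pricing calculator's Metrics and APIs sections for a fleet of that size.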
For this solution, the agent configuration collects a starter set of metrics for NVIDIA GPU monitoring and observability. The CloudWatch agent can be configured to collect more NVIDIA GPU metrics than the dashboard displays by default. For a list of all the NVIDIA GPU metrics you can collect, see Collect NVIDIA GPU metrics.

Agent configuration for this solution

The metrics collected by the agent are defined in the agent configuration. The solution provides agent configurations that collect the recommended metrics, with dimensions appropriate for the solution's dashboard.

Use the following CloudWatch agent configuration on EC2 instances equipped with NVIDIA GPUs. The configuration is stored as a parameter in the SSM Parameter Store, as detailed later in Step 2: Store the recommended CloudWatch agent configuration file in Systems Manager Parameter Store.

{
  "metrics": {
    "namespace": "CWAgent",
    "append_dimensions": {
      "InstanceId": "${aws:InstanceId}"
    },
    "metrics_collected": {
      "nvidia_gpu": {
        "measurement": [
          "utilization_gpu",
          "temperature_gpu",
          "power_draw",
          "utilization_memory",
          "fan_speed",
          "memory_total",
          "memory_used",
          "memory_free",
          "pcie_link_gen_current",
          "pcie_link_width_current",
          "encoder_stats_session_count",
          "encoder_stats_average_fps",
          "encoder_stats_average_latency",
          "clocks_current_graphics",
          "clocks_current_sm",
          "clocks_current_memory",
          "clocks_current_video"
        ],
        "metrics_collection_interval": 60
      }
    }
  },
  "force_flush_interval": 60
}

Deploying the agent for your solution

There are several approaches to installing the CloudWatch agent, depending on the use case. We recommend using Systems Manager for this solution. It provides a console experience and simplifies managing a fleet of managed servers within a single AWS account.
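Before pasting the configuration into Parameter Store, it can be worth sanity-checking that it is valid JSON and lists the expected measurements. A minimal sketch using only the Python standard library:

```python
import json

# The agent configuration shown above, as a string, so it can be parsed and
# sanity-checked before it is stored in Parameter Store.
config_text = '''
{
  "metrics": {
    "namespace": "CWAgent",
    "append_dimensions": {"InstanceId": "${aws:InstanceId}"},
    "metrics_collected": {
      "nvidia_gpu": {
        "measurement": [
          "utilization_gpu", "temperature_gpu", "power_draw",
          "utilization_memory", "fan_speed", "memory_total", "memory_used",
          "memory_free", "pcie_link_gen_current", "pcie_link_width_current",
          "encoder_stats_session_count", "encoder_stats_average_fps",
          "encoder_stats_average_latency", "clocks_current_graphics",
          "clocks_current_sm", "clocks_current_memory", "clocks_current_video"
        ],
        "metrics_collection_interval": 60
      }
    }
  },
  "force_flush_interval": 60
}
'''

# json.loads raises ValueError on malformed JSON, catching paste errors early.
config = json.loads(config_text)
measurements = config["metrics"]["metrics_collected"]["nvidia_gpu"]["measurement"]
print(len(measurements))  # 17 — matches the "17 metrics per GPU" noted under Costs
```

A parse failure here usually means a stray comma or quote was introduced while copying the configuration.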
The instructions in this section use Systems Manager and are intended for cases where the CloudWatch agent is not already running with existing configurations. You can check whether the CloudWatch agent is running by following the steps in Verify that the CloudWatch agent is running.

If you are already running the CloudWatch agent on the EC2 hosts where the workload is deployed and are managing the agent configurations, you can skip the instructions in this section and use your existing deployment mechanism to update the configuration. Be sure to combine the NVIDIA GPU agent configuration with your existing agent configuration, and then deploy the combined configuration. If you are using Systems Manager to store and manage the CloudWatch agent configuration, you can combine the configuration with the existing parameter value. For more information, see Managing CloudWatch agent configuration files.

Note: When you use Systems Manager to deploy the following CloudWatch agent configurations, any existing CloudWatch agent configuration on your EC2 instances is replaced or overwritten. You can modify this configuration to meet the needs of your specific environment or use case. The metrics defined in the configuration are the minimum required for the dashboard provided by the solution.

The deployment process includes the following steps:

- Step 1: Make sure the target EC2 instances have the required IAM permissions.
- Step 2: Store the recommended agent configuration file in Systems Manager Parameter Store.
- Step 3: Install the CloudWatch agent on one or more EC2 instances using a CloudFormation stack.
- Step 4: Verify that the agent was configured correctly.
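Combining the NVIDIA GPU configuration with an existing agent configuration amounts to a recursive merge of the two JSON documents. The sketch below is illustrative only; it is not an official CloudWatch agent merge tool, and the sample configurations are simplified:

```python
def merge_configs(existing, extra):
    """Recursively merge two CloudWatch agent config dicts.

    Nested dicts are merged key by key; lists (such as "measurement") are
    concatenated without duplicates; conflicting scalars keep the new value.
    Illustrative sketch only, not an official merge procedure.
    """
    merged = dict(existing)
    for key, value in extra.items():
        if key in merged and isinstance(merged[key], dict) and isinstance(value, dict):
            merged[key] = merge_configs(merged[key], value)
        elif key in merged and isinstance(merged[key], list) and isinstance(value, list):
            merged[key] = merged[key] + [v for v in value if v not in merged[key]]
        else:
            merged[key] = value
    return merged

# Simplified example: an existing CPU-metrics config plus the GPU config.
existing = {"metrics": {"namespace": "CWAgent",
                        "metrics_collected": {"cpu": {"measurement": ["usage_idle"]}}}}
gpu = {"metrics": {"namespace": "CWAgent",
                   "metrics_collected": {"nvidia_gpu": {"measurement": ["utilization_gpu"],
                                                        "metrics_collection_interval": 60}}}}
combined = merge_configs(existing, gpu)
print(sorted(combined["metrics"]["metrics_collected"]))  # ['cpu', 'nvidia_gpu']
```

Both collection blocks survive the merge, so deploying the combined document keeps the existing CPU metrics while adding the GPU metrics.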
Step 1: Make sure the target EC2 instances have the required IAM permissions

You must grant Systems Manager permission to install and configure the CloudWatch agent, and grant the CloudWatch agent permission to publish telemetry from the EC2 instance to CloudWatch. Make sure the IAM role attached to the instance has the CloudWatchAgentServerPolicy and AmazonSSMManagedInstanceCore IAM policies attached. After creating the role, attach it to your EC2 instances. To attach a role to an EC2 instance, follow the steps in Attach an IAM role to an instance.

Step 2: Store the recommended CloudWatch agent configuration file in Systems Manager Parameter Store

Parameter Store simplifies installing the CloudWatch agent on an EC2 instance by storing and managing the configuration parameters securely, eliminating the need for hard-coded values. This makes the deployment process more secure and flexible by enabling centralized management and simplified updates of configurations across multiple instances.

Use the following steps to store the recommended CloudWatch agent configuration file as a parameter in Parameter Store.

To create the CloudWatch agent configuration file as a parameter:

1. Open the AWS Systems Manager console at https://console.aws.amazon.com/systems-manager/. Make sure the Region selected in the console matches the Region where the NVIDIA GPU workload is running.
2. In the navigation pane, choose Application Management, then Parameter Store.
3. Follow these steps to create a new parameter for the configuration:
   a. Choose Create parameter.
   b. In the Name box, enter a name that you will use to reference the CloudWatch agent configuration file in later steps. For example:
      AmazonCloudWatch-NVIDIA-GPU-Configuration
   c. (Optional) In the Description box, type a description for the parameter.
   d. For Parameter tier, choose Standard.
   e. For Type, choose String.
   f. For Data type, choose text.
   g. In the Value box, paste the corresponding JSON block listed in Agent configuration for this solution.
   h. Choose Create parameter.

Step 3: Install the CloudWatch agent and apply the configuration using a CloudFormation template

You can use AWS CloudFormation to install the agent and configure it to use the CloudWatch agent configuration created in the previous steps.

To install and configure the CloudWatch agent for this solution:

1. Open the CloudFormation quick-create stack wizard using this link: https://console.aws.amazon.com/cloudformation/home?#/stacks/quickcreate?templateURL=https://aws-observability-solutions-prod-us-east-1.s3.us-east-1.amazonaws.com/CloudWatchAgent/CFN/v1.0.0/cw-agent-installation-template-1.0.0.json. Make sure the Region selected in the console matches the Region where the NVIDIA GPU workload is running.
2. For Stack name, enter a name to identify this stack, such as CWAgentInstallationStack.
3. In the Parameters section, specify the following:
   - For CloudWatchAgentConfigSSM, enter the name of the Systems Manager parameter for the agent configuration that you created earlier, such as AmazonCloudWatch-NVIDIA-GPU-Configuration.
   - To select the target instances, you have two options:
     - For InstanceIds, specify a comma-delimited list of instance IDs on which you want to install the CloudWatch agent with this configuration. You can list a single instance or multiple instances.
     - If you are deploying at scale, you can specify a TagKey and the corresponding TagValue to target all EC2 instances associated with that tag and value.
       If you specify a TagKey, you must specify a corresponding TagValue. (For an Auto Scaling group, specify aws:autoscaling:groupName for the TagKey and the name of the Auto Scaling group for the TagValue to deploy to all instances in the Auto Scaling group.)
4. Review the settings, then choose Create stack.

If you want to edit the template file beforehand to customize it, choose the Upload a template file option in the Create stack wizard to upload the edited template. For more information, see Creating a stack on the CloudFormation console.

Note: After this step completes, this Systems Manager parameter is associated with the CloudWatch agents running on the target instances. This means that:

- If the Systems Manager parameter is deleted, the agent stops.
- If the Systems Manager parameter is edited, the configuration changes are applied to the agent automatically at the scheduled frequency, which by default is every 30 days.
- If you want to apply changes to this Systems Manager parameter immediately, you must run this step again. For more information about associations, see Working with associations in Systems Manager.

Step 4: Verify that the agent was configured correctly

You can verify that the CloudWatch agent is installed by following the steps in Verify that the CloudWatch agent is running. If the CloudWatch agent is not installed and running, make sure everything was set up correctly:

- Make sure you attached a role with the proper permissions to the EC2 instance, as described in Step 1: Make sure the target EC2 instances have the required IAM permissions.
- Make sure you configured the JSON for the Systems Manager parameter correctly.
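CloudFormation quick-create links like the one in step 1 accept the stack name and template parameters as query-string values (stackName, templateURL, and param_<ParameterName>). A sketch of how such a link can be assembled, where the instance ID is a hypothetical example:

```python
from urllib.parse import urlencode

# Build a CloudFormation quick-create link that pre-fills the stack name and
# parameters used in this walkthrough. param_<Name> query values map to the
# template's parameters. The instance ID below is a hypothetical example.
template_url = ("https://aws-observability-solutions-prod-us-east-1.s3.us-east-1"
                ".amazonaws.com/CloudWatchAgent/CFN/v1.0.0/"
                "cw-agent-installation-template-1.0.0.json")
query = urlencode({
    "templateURL": template_url,
    "stackName": "CWAgentInstallationStack",
    "param_CloudWatchAgentConfigSSM": "AmazonCloudWatch-NVIDIA-GPU-Configuration",
    "param_InstanceIds": "i-0123456789abcdef0",  # example instance ID
})
link = "https://console.aws.amazon.com/cloudformation/home#/stacks/quickcreate?" + query
print(link)
```

Opening such a link lands you in the quick-create wizard with the fields already filled in, so you only review and choose Create stack.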
- Follow the steps in Troubleshooting CloudWatch agent installation with CloudFormation.

If everything is set up correctly, the NVIDIA GPU metrics are published to CloudWatch and are available for viewing. You can check the CloudWatch console to confirm that the metrics are being published correctly.

To verify that NVIDIA GPU metrics are being published to CloudWatch:

1. Open the CloudWatch console at https://console.aws.amazon.com/cloudwatch/.
2. Choose Metrics, then All metrics.
3. Make sure you have selected the Region where the solution was deployed, choose Custom namespaces, and then choose CWAgent.
4. Search for the metrics mentioned in Agent configuration for this solution, such as nvidia_smi_utilization_gpu. If you find results for these metrics, they are being published to CloudWatch.

Creating the NVIDIA GPU solution dashboard

The dashboard provided by this solution presents NVIDIA GPU metrics by aggregating and presenting the metrics across all instances. The dashboard shows a breakdown of top contributors (the top ten per metric widget) for each metric. This helps you quickly identify outliers, or instances that contribute significantly to the observed metrics.

To create the dashboard, you can use any of the following options:

- Use the CloudWatch console to create the dashboard.
- Use the AWS CloudFormation console to deploy the dashboard.
- Download the AWS CloudFormation infrastructure-as-code and integrate it into your continuous integration (CI) automation.

Using the CloudWatch console to create the dashboard lets you preview it before creating it and incurring costs.

Note: The dashboard created with CloudFormation in this solution displays metrics from the Region where the solution is deployed.
Make sure the CloudFormation stack is created in the same Region where the NVIDIA GPU metrics are published. If you specified a custom namespace other than CWAgent in the CloudWatch agent configuration, you must modify the CloudFormation template for the dashboard, replacing CWAgent with the custom namespace you are using.

To create the dashboard using the CloudWatch console:

1. Open the CloudWatch console and go to Create dashboard using this link: https://console.aws.amazon.com/cloudwatch/home?#dashboards?dashboardTemplate=NvidiaGpuOnEc2&referrer=os-catalog. Make sure the Region selected in the console matches the Region where the NVIDIA GPU workload is running.
2. Enter the dashboard name, then choose Create dashboard. To easily distinguish this dashboard from similar dashboards in other Regions, we recommend including the Region name in the dashboard name, for example, NVIDIA-GPU-Dashboard-us-east-1.
3. Preview the dashboard and choose Save to create it.

To create the dashboard using CloudFormation:

1. Open the CloudFormation quick-create stack wizard using this link: https://console.aws.amazon.com/cloudformation/home?#/stacks/quickcreate?templateURL=https://aws-observability-solutions-prod-us-east-1.s3.us-east-1.amazonaws.com/NVIDIA_GPU_EC2/CloudWatch/CFN/v1.0.0/dashboard-template-1.0.0.json. Make sure the Region selected in the console matches the Region where the NVIDIA GPU workload is running.
2. For Stack name, enter a name to identify this stack, such as NVIDIA-GPU-DashboardStack.
3. In the Parameters section, specify the dashboard name in the DashboardName parameter. To easily distinguish this dashboard from similar dashboards in other Regions, we recommend including the Region name in the dashboard name, for example, NVIDIA-GPU-Dashboard-us-east-1.
4. Acknowledge the access capabilities related to transforms in the Capabilities and transforms section.
   Note that CloudFormation does not add IAM resources.
5. Review the settings, then choose Create stack.
6. When the stack status shows CREATE_COMPLETE, choose the Resources tab on the created stack, then choose the link shown under Physical ID to access the dashboard. Alternatively, you can access the dashboard directly in the CloudWatch console by choosing Dashboards in the left navigation pane and finding the dashboard name in the Custom Dashboards section.

If you want to edit the template file to customize it for a specific need, you can use the Upload a template file option in the Create stack wizard to upload the edited template. For more information, see Creating a stack on the CloudFormation console. You can use this link to download the template: https://aws-observability-solutions-prod-us-east-1.s3.us-east-1.amazonaws.com/NVIDIA_GPU_EC2/CloudWatch/CFN/v1.0.0/dashboard-template-1.0.0.json.

Get started with the NVIDIA GPU dashboard

The following are some tasks you can perform to explore your new NVIDIA GPU dashboard. These tasks let you validate that the dashboard works correctly and give you hands-on experience using it to monitor NVIDIA GPUs. As you work through them, you will become familiar with navigating the dashboard and interpreting the visualized metrics.

Analyze GPU utilization

In the Utilization section, find the GPU utilization and Memory utilization widgets. They show, respectively, the percentage of time the GPU is actively in use for computation and the percentage of global memory in use for reads or writes. High utilization can indicate potential performance bottlenecks or the need for additional GPU resources.
Analyze GPU memory usage

In the Memory section, find the Total memory, Used memory, and Free memory widgets. These widgets provide insight into the total memory capacity of the GPUs and show how much memory is currently consumed or available. Memory pressure can lead to performance problems or out-of-memory errors, so it is critical to monitor these metrics and make sure the workload has enough memory available.

Monitor temperature and power consumption

In the Temperature/Power section, find the GPU temperature and Power draw widgets. These metrics are essential for verifying that the GPUs operate within safe temperature and power limits.

Identify encoder performance

In the Encoder section, find the Encoder session count, Average FPS, and Average latency widgets. These metrics are relevant if you run video encoding workloads on your GPUs. Monitoring them is key to ensuring the encoders operate optimally and to identifying potential bottlenecks or performance problems.

Check PCIe link status

In the PCIe section, find the PCIe link generation and PCIe link width widgets. These metrics provide information about the PCIe link that connects the GPU to the host system. Make sure the link operates at the expected generation and width to avoid potential performance limits caused by PCIe bottlenecks.

Analyze GPU clocks

In the Clock section, find the Graphics clock, SM clock, Memory clock, and Video clock widgets. These metrics show the current operating frequencies of the various GPU components.
Monitoring these clocks can help identify potential issues with GPU clock scaling or throttling, which can impact performance.
https://git-scm.com/book/tl/v2/GitHub-Pag-aambag-sa-isang-Proyekto | Git - Contributing to a Project

6.2 GitHub - Contributing to a Project

Now that our account is set up, let's walk through some details that can be useful when contributing to an existing project.
Forking Projects

If you want to contribute to an existing project to which you don't have push access, you can "fork" the project. When you fork a project, GitHub makes a copy of the project that is entirely yours; it lives in your namespace, and you can push to it.

Historically, the term "fork" has had somewhat negative connotations, meaning that someone took an open-source project in a different direction, sometimes creating a competing project and splitting the contributors. On GitHub, a "fork" is simply the same project in your own namespace, allowing you to make changes to a project publicly as a way of contributing in a more open manner.

This way, projects don't have to worry about adding users as collaborators to give them push access. People can fork a project, push to their fork, and contribute their changes back to the original repository by creating what's called a Pull Request, which we'll cover next. This opens up a code-review discussion thread, and the owner and the contributor can then communicate about the change until the owner is happy with it, at which point the owner can merge it in.

To fork a project, visit the project page and click the "Fork" button at the top-right of the page.

Figure 89. The "Fork" button.

After a few seconds, you'll be taken to your new project page, with your own writeable copy of the code.

The GitHub Flow

GitHub is designed around a particular collaboration workflow, centered on Pull Requests.
This flow works whether you're collaborating with a tightly-knit team in a single shared repository, or with a globally-distributed company or a network of strangers contributing to a project through dozens of forks. It is centered on the Topic Branches workflow covered in Git Branching.

Here's how it generally works:

1. Fork the project.
2. Create a topic branch from master.
3. Make some commits to improve the project.
4. Push this branch to your GitHub project.
5. Open a Pull Request on GitHub.
6. Discuss, and optionally continue committing.
7. The project owner merges or closes the Pull Request.

This is basically the Integration Manager workflow covered in Integration-Manager Workflow, but instead of using email to communicate and review changes, teams use GitHub's web-based tools.

Let's walk through an example of proposing a change to an open-source project hosted on GitHub using this flow.

Creating a Pull Request

Tony is looking for code to run on his Arduino programmable microcontroller and has found a great program file on GitHub at https://github.com/schacon/blink.

Figure 90. The project we want to contribute to.

The only problem is that the blinking rate is too fast; we think it's much nicer to wait 3 seconds instead of 1 between each state change. So let's improve the program and submit it back to the project as a proposed change.

First, we click the Fork button as mentioned earlier to get our own copy of the project. Our username here is "tonychacon", so our copy of this project is at https://github.com/tonychacon/blink, and that's where we can edit it.
Next, we clone it locally, create a topic branch, make a code change, and finally push that change back up to GitHub.

$ git clone https://github.com/tonychacon/blink   (1)
Cloning into 'blink'...
$ cd blink
$ git checkout -b slow-blink   (2)
Switched to a new branch 'slow-blink'
$ sed -i '' 's/1000/3000/' blink.ino (macOS)   (3)
# If you're on a Linux system, do this instead:
# $ sed -i 's/1000/3000/' blink.ino   (3)
$ git diff --word-diff   (4)
diff --git a/blink.ino b/blink.ino
index 15b9911..a6cc5a5 100644
--- a/blink.ino
+++ b/blink.ino
@@ -18,7 +18,7 @@ void setup() {
 // the loop routine runs over and over again forever:
 void loop() {
   digitalWrite(led, HIGH);   // turn the LED on (HIGH is the voltage level)
   [-delay(1000);-]{+delay(3000);+}               // wait for a second
   digitalWrite(led, LOW);    // turn the LED off by making the voltage LOW
   [-delay(1000);-]{+delay(3000);+}               // wait for a second
}
$ git commit -a -m 'three seconds is better'   (5)
[slow-blink 5ca509d] three seconds is better
 1 file changed, 2 insertions(+), 2 deletions(-)
$ git push origin slow-blink   (6)
Username for 'https://github.com': tonychacon
Password for 'https://tonychacon@github.com':
Counting objects: 5, done.
Delta compression using up to 8 threads.
Compressing objects: 100% (3/3), done.
Writing objects: 100% (3/3), 340 bytes | 0 bytes/s, done.
Total 3 (delta 1), reused 0 (delta 0)
To https://github.com/tonychacon/blink
 * [new branch]      slow-blink -> slow-blink

1. Clone our fork of the project locally.
2. Create a descriptive topic branch.
3. Make our change to the code.
4. Check that the change is good.
5. Commit our change to the topic branch.
6. Push our new topic branch back up to our GitHub fork.

Now if we go back to our fork on GitHub, we can see that GitHub noticed that we pushed a new topic branch and presents us with a big green button to check out our changes and open a Pull Request to the original project.

You can alternatively go to the "Branches" page at https://github.com/<user>/<project>/branches to locate your branch and open a new Pull Request from there.

Figure 91. Pull Request button

If we click that green button, we see a screen that asks us to give our Pull Request a title and description. It is almost always worthwhile to put some effort into this, since a good description helps the owner of the original project determine what you were trying to do, whether your proposed changes are correct, and whether accepting the changes would improve the original project.

We also see a list of the commits in our topic branch that are "ahead" of the master branch (in this case, just the one) and a unified diff of all the changes that will be made should this branch get merged by the project owner.

Figure 92.
The Pull Request creation page

When you hit the Create pull request button, the owner of the project you forked will get a notification that someone is suggesting a change, with a link to a page that has all of this information on it.

Though Pull Requests are commonly used for public projects like this, where the contributor has a complete change ready to be made, they're also often used in internal projects at the beginning of the development cycle. Since you can keep pushing to the topic branch even after the Pull Request is opened, it's often opened early and used as a way to iterate on work as a team within a context, rather than opened at the very end of the process.

Iterating on a Pull Request

At this point, the project owner can look at the suggested change and merge it, reject it, or comment on it. Let's say he likes the idea, but would prefer a slightly longer time for the light to be off than on.

Where this conversation may take place over email in the workflows presented in Distributed Git, on GitHub it happens online. The project owner can review the unified diff and leave a comment by clicking on any of the lines.

Figure 93. Comment on a specific line of code in a Pull Request

Once the maintainer makes this comment, the person who opened the Pull Request (and indeed, anyone else watching the repository) will get a notification. We'll go over customizing this later, but if he had email notifications on, Tony would get an email like this:

Figure 94.
Comments sent as email notifications

Anyone can also leave general comments on the Pull Request. In The Pull Request discussion page we can see an example of the project owner both commenting on a line of code and then leaving a general comment in the discussion section. You can see that the code comments are brought into the conversation as well.

Figure 95. The Pull Request discussion page

Now the contributor can see what they need to do in order to get their change accepted. Luckily this is very straightforward. Where over email you may have to re-roll your patch series and resubmit it to the mailing list, on GitHub you simply commit to the topic branch again and push, and the Pull Request is automatically updated. In The final Pull Request you can also see that the comment on the old code has been collapsed in the updated Pull Request, since it was made on a line that has since been changed.

Adding commits to an existing Pull Request doesn't trigger a notification, so once Tony has pushed his corrections, he decides to leave a comment to inform the project owner that he made the requested change.

Figure 96. The final Pull Request

An interesting thing to notice is that if you click on the "Files Changed" tab on this Pull Request, you'll get the "unified" diff — that is, the total aggregate difference that would be introduced to your main branch if this topic branch was merged in. In git diff terms, it basically automatically shows you git diff master... for the branch this Pull Request is based on. See Determining What Is Introduced for more about this type of diff.
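A minimal sketch of that triple-dot behavior in a throwaway repository (the branch and file names are just for illustration, and `git init -b` assumes Git 2.28 or later):

```shell
set -e
repo=$(mktemp -d) && cd "$repo"
git init -q -b master
git config user.name demo && git config user.email demo@example.com

# Base commit on master.
echo 'delay(1000);' > blink.ino
git add blink.ino && git commit -q -m 'initial blink'

# Topic branch with the proposed change.
git checkout -q -b slow-blink
sed 's/1000/3000/' blink.ino > tmp && mv tmp blink.ino
git add blink.ino && git commit -q -m 'three seconds is better'

# Meanwhile, master gains an unrelated commit.
git checkout -q master
echo 'docs' > README && git add README && git commit -q -m 'add README'

# Triple-dot shows only what the topic branch introduced since it
# diverged from master, ignoring the later README commit.
diffout=$(git diff master...slow-blink)
echo "$diffout"
```

This is why the "Files Changed" tab stays focused on the contributor's work even after the target branch has moved on.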
The other thing you'll notice is that GitHub checks whether the Pull Request merges cleanly and provides a button to perform the merge for you on the server. This button only shows up if you have write access to the repository and a trivial merge is possible. If you click it, GitHub will perform a "non-fast-forward" merge, meaning that even if the merge could be a fast-forward, it will still create a merge commit.

If you prefer, you can simply pull the branch down and merge it locally. If you merge this branch into the master branch and push it to GitHub, the Pull Request will automatically be closed.

This is the basic workflow that most GitHub projects use. Topic branches are created, Pull Requests are opened on them, a discussion ensues, possibly more work is done on the branch, and eventually the request is either closed or merged.

Not Only Forks

It's important to note that you can also open a Pull Request between two branches in the same repository. If you're working on a feature with someone and you both have write access to the project, you can push a topic branch to the repository and open a Pull Request on it to the master branch of that same project to initiate the code review and discussion process. No forking necessary.

Advanced Pull Requests

Now that we've covered the basics of contributing to a project on GitHub, let's look at a few interesting tips and tricks about Pull Requests so you can be more effective in using them.
Pull Requests as Patches

It's important to understand that many projects don't really think of Pull Requests as queues of perfect patches that should apply cleanly in order, as most mailing-list-based projects think of patch-series contributions. Most GitHub projects think about Pull Request branches as iterative conversations around a proposed change, culminating in a unified diff that is applied by merging.

This is an important distinction, because generally the change is suggested before the code is thought to be perfect, which is far more rare with mailing-list-based patch-series contributions. This enables an earlier conversation with the maintainers, so that arriving at the proper solution is more of a community effort. When code is proposed with a Pull Request and the maintainers or community suggest a change, the patch series is generally not re-rolled; instead, the difference is pushed as a new commit to the branch, moving the conversation forward with the context of the previous work intact.

For instance, if you go back and look again at The final Pull Request, you'll notice that the contributor did not rebase his commit and send another Pull Request. Instead, they added new commits and pushed them to the existing branch. This way, if you go back and look at this Pull Request in the future, you can easily find all of the context of why decisions were made.
Pushing the "Merge" button on the site purposefully creates a merge commit that references the Pull Request, so it's easy to go back and research the original conversation if necessary.

Keeping up with Upstream

If your Pull Request becomes out of date or otherwise doesn't merge cleanly, you will want to fix it so the maintainer can merge it easily. GitHub will test this for you and let you know at the bottom of every Pull Request whether the merge is trivial or not.

Figure 97. Pull Request does not merge cleanly

If you see something like Pull Request does not merge cleanly, you'll want to fix your branch so that it turns green and the maintainer doesn't have to do extra work.

You have two main options to do this. You can either rebase your branch on top of whatever the target branch is (normally the master branch of the repository you forked), or you can merge the target branch into your branch.

Most developers on GitHub will choose the latter, for the same reasons we just went over in the previous section: what matters is the history and the final merge, so rebasing doesn't get you much other than a slightly cleaner history, and in return is far more difficult and error-prone.

If you want to merge in the target branch to make your Pull Request mergeable, you would add the original repository as a new remote, fetch from it, merge the main branch of that repository into your topic branch, fix any issues, and finally push it back up to the same branch you opened the Pull Request on.
For example, let's say that in the "tonychacon" example we were using before, the original author made a change that would create a conflict in the Pull Request. Let's go through those steps:

$ git remote add upstream https://github.com/schacon/blink   (1)
$ git fetch upstream   (2)
remote: Counting objects: 3, done.
remote: Compressing objects: 100% (3/3), done.
Unpacking objects: 100% (3/3), done.
remote: Total 3 (delta 0), reused 0 (delta 0)
From https://github.com/schacon/blink
 * [new branch]      master     -> upstream/master
$ git merge upstream/master   (3)
Auto-merging blink.ino
CONFLICT (content): Merge conflict in blink.ino
Automatic merge failed; fix conflicts and then commit the result.
$ vim blink.ino   (4)
$ git add blink.ino
$ git commit
[slow-blink 3c8d735] Merge remote-tracking branch 'upstream/master' \
    into slower-blink
$ git push origin slow-blink   (5)
Counting objects: 6, done.
Delta compression using up to 8 threads.
Compressing objects: 100% (6/6), done.
Writing objects: 100% (6/6), 682 bytes | 0 bytes/s, done.
Total 6 (delta 2), reused 0 (delta 0)
To https://github.com/tonychacon/blink
   ef4725c..3c8d735  slower-blink -> slow-blink

1. Add the original repository as a remote named "upstream".
2. Fetch the newest work from that remote.
3. Merge the main branch of that repository into your topic branch.
4. Fix the conflict that occurred.
5. Push back up to the same topic branch.

Once you do that, the Pull Request will be automatically updated and re-checked to see if it merges cleanly.

Figure 98. Pull Request now merges cleanly

One of the great things about Git is that you can do that continuously.
If you have a very long-running project, you can easily merge from the target branch over and over again and only have to deal with the conflicts that have arisen since the last time you merged, making the process very manageable.

If you absolutely wish to rebase the branch to clean it up, you can certainly do so, but it is highly encouraged not to force-push over the branch that the Pull Request is already opened on. If other people have pulled it down and done more work on it, you run into all of the issues outlined in The Perils of Rebasing. Instead, push the rebased branch to a new branch on GitHub and open a brand new Pull Request referencing the old one, then close the original.

References

Your next question may be, "How do I reference the old Pull Request?" It turns out there are many, many ways to reference other things almost anywhere you can write on GitHub.

Let's start with how to cross-reference another Pull Request or an Issue. All Pull Requests and Issues are assigned numbers, and they are unique within the project. For example, you can't have Pull Request #3 and Issue #3. If you want to reference any Pull Request or Issue from any other one, you can simply put #<num> in any comment or description. You can also be more specific if the Issue or Pull Request lives somewhere else: write username#<num> if you're referring to an Issue or Pull Request in a fork of the repository you're in, or username/repo#<num> to reference something in another repository.

Let's look at an example.
Say we rebased the branch in the previous example, created a new pull request for it, and now we want to reference the old pull request from the new one. We also want to reference an issue in the fork of the repository and an issue in a completely different project. We can fill out the description as in Cross-references in a Pull Request.

Figure 99. Cross-references in a Pull Request.

When we submit this pull request, we'll see it all rendered as in Cross-references rendered in a Pull Request.

Figure 100. Cross-references rendered in a Pull Request.

Notice that the full GitHub URL we put in there was shortened to just the necessary information.

Now if Tony goes back and closes out the original Pull Request, we can see that by mentioning it in the new one, GitHub has automatically created a trackback event in the Pull Request timeline. This means that anyone who visits this Pull Request and sees that it is closed can easily link back to the one that superseded it. The link will look something like Link back to the new Pull Request in the closed Pull Request timeline.

Figure 101. Link back to the new Pull Request in the closed Pull Request timeline.

In addition to issue numbers, you can also reference a specific commit by SHA-1. You have to specify a full 40-character SHA-1, but if GitHub sees it in a comment, it will link directly to the commit. Again, you can reference commits in forks or other repositories in the same way you did with issues.

GitHub Flavored Markdown

Linking to other Issues is just the beginning of interesting things you can do with almost any text box on GitHub.
In Issue and Pull Request descriptions, comments, code comments, and more, you can use what is called "GitHub Flavored Markdown". Markdown is like writing in plain text, but which is rendered richly. See An example of GitHub Flavored Markdown as written and as rendered for an example of how comments or text can be written and then rendered using Markdown.

Figure 102. An example of GitHub Flavored Markdown as written and as rendered.

The GitHub flavor of Markdown adds more things you can do beyond the basic Markdown syntax. These can all be really useful when creating useful Pull Request or Issue comments or descriptions.

Task Lists

The first really useful GitHub-specific Markdown feature, especially for use in Pull Requests, is the Task List. A task list is a list of checkboxes of things you want to get done. Putting them into an Issue or Pull Request normally indicates things that you want to get done before you will consider the item complete.

You can create a task list like this:

- [X] Write the code
- [ ] Write all the tests
- [ ] Document the code

If we include this in the description of our Pull Request or Issue, we'll see it rendered as in Task lists rendered in a Markdown comment.

Figure 103. Task lists rendered in a Markdown comment.

This is often used in Pull Requests to indicate everything you want to get done on the branch before the Pull Request will be ready to merge.
The really cool part is that you can simply click the checkboxes to update the comment — you don't have to edit the Markdown directly to check tasks off.

What's more, GitHub will look for task lists in your Issues and Pull Requests and show them as metadata on the pages that list them out. For example, if you have a Pull Request with tasks and you look at the overview page of all Pull Requests, you can see how far along it is. This helps people break down Pull Requests into subtasks and helps other people track the progress of the branch. You can see an example of this in Task list summary in the Pull Request list.

Figure 104. Task list summary in the Pull Request list.

These are incredibly useful when you open a Pull Request early and use it to track your progress through the implementation of the feature.

Code Snippets

You can also add code snippets to comments. This is especially useful if you want to present something that you could try to do before actually implementing it as a commit on your branch. It's also often used to add example code of what isn't working or what the Pull Request could implement.

To add a snippet of code, you have to "fence" it in backticks:

```java
for(int i=0 ; i < 5 ; i++)
{
   System.out.println("i is : " + i);
}
```

If you add a language name, as we did there with java, GitHub will also try to syntax-highlight the snippet. In the case of the above example, it would end up rendering as in Rendered fenced code example.

Figure 105. Rendered fenced code example.
Quoting

If you're responding to a small part of a long comment, you can selectively quote the other comment by preceding the lines with the > character. In fact, this is so common and so useful that there is a keyboard shortcut for it. If you highlight text in a comment that you want to respond to directly and hit the r key, it will quote that text in the comment box for you.

The quotes look something like this:

> Whether 'tis Nobler in the mind to suffer
> The Slings and Arrows of outrageous Fortune,

How big are these slings and in particular, these arrows?

Once rendered, the comment will look something like Rendered quoting example.

Figure 106. Rendered quoting example.

Emoji

Finally, you can also use emoji in your comments. This is actually used quite extensively in the comments you see on many GitHub Issues and Pull Requests. There is even an emoji helper on GitHub. If you are typing a comment and start with a : character, an autocompleter will help you find what you're looking for.

Figure 107. Emoji autocompleter in action.

Emojis take the form of :<name>: anywhere in the comment. For instance, you could write something like this:

I :eyes: that :bug: and I :cold_sweat:.
:trophy: for :microscope: it.
:+1: and :sparkles: on this :ship:, it's :fire::poop:!
:clap::tada::panda_face:

When rendered, it would look something like Emoji-laden commenting.

Figure 108. Emoji-laden commenting.

Not that this is incredibly useful, but it adds an element of fun and emotion to a medium in which it's otherwise difficult to convey emotion. There are actually quite a number of web services that make use of emoji characters these days.
A great cheat sheet to help find emoji that express what you want to say can be found at: http://www.emoji-cheat-sheet.com

Images

This isn't technically GitHub Flavored Markdown, but it is incredibly useful. In addition to adding Markdown image links to comments, which can be difficult to find and embed URLs for, GitHub lets you drag and drop images into text areas to embed them.

Figure 109. Drag and drop images to upload them and auto-embed them.

If you look at Drag and drop images to upload them and auto-embed them, you can see a small "Parsed as Markdown" hint above the text area. Clicking on that will give you a full cheat sheet of everything you can do with Markdown on GitHub.
https://docs.brightdata.com/api-reference/web-scraper-api/social-media-apis/tiktok#collect-by-url | TikTok API Scrapers - Bright Data Docs Overview The TikTok API Suite offers multiple types of APIs, each designed for specific data collection needs from TikTok. Below is an overview of how these APIs connect and interact, based on the available features: Profile API This API allows users to collect profile details based on a single input: profile URL. Discovery functionality : Direct URL of the search Interesting Columns : nickname , awg_engagement_rate , followers , likes Posts API This API allows users to collect multiple posts based on a single input URL. Discovery functionality : - Direct URL of the TikTok profile - Discover by keywords - Direct URL of the discovery Interesting Columns : url , share_count , description , hashtags Comments API This API allows users to collect multiple comments from a post using its URL.
Discovery functionality : N/A Interesting Columns : url , comment_text , commenter_url , num_likes Profile API Collect by URL This API allows users to retrieve detailed TikTok profile information using the provided profile URL. Input Parameters : URL string required The TikTok profile URL. Output Structure : Includes comprehensive data points: Profile Details : account_id , nickname , biography , bio_link , predicted_lang , is_verified , followers , following , likes , videos_count , create_time , id , url , profile_pic_url , profile_pic_url_hd , and more. For all data points, click here . Engagement Metrics : awg_engagement_rate , comment_engagement_rate , like_engagement_rate , like_count , digg_count . Privacy & Settings : is_private , relation , open_favorite , comment_setting , duet_setting , stitch_setting , is_ad_virtual , room_id , is_under_age_18 . Discovery & Top Videos : region , top_videos , discovery_input . This API allows users to retrieve detailed TikTok profile information, including engagement metrics, privacy settings, and top videos, offering insights into user activity and profile data. Discover by Search URL This API allows users to discover TikTok profiles based on a specific search URL and country, providing detailed profile information. Input Parameters : search_url string required The TikTok search URL. country string required The country from which to perform the search. Output Structure : Includes comprehensive data points: Profile Details : account_id , nickname , biography , bio_link , predicted_lang , is_verified , followers , following , likes , videos_count , create_time , id , url , profile_pic_url , profile_pic_url_hd , and more. For all data points, click here . Engagement Metrics : awg_engagement_rate , comment_engagement_rate , like_engagement_rate , like_count , digg_count . Privacy & Settings : is_private , relation , open_favorite , comment_setting , duet_setting , stitch_setting , is_ad_virtual , room_id , is_under_age_18 . 
Discovery & Top Videos : region , top_videos , discovery_input . This API enables users to discover TikTok profiles based on search criteria, offering insights into user activity, engagement, privacy settings, and top content. It facilitates efficient discovery and analysis of TikTok users. Posts API Collect by URL This API enables users to collect detailed data from TikTok posts by providing a post URL. Input Parameters : URL string required The TikTok post URL. Output Structure : Includes comprehensive data points: Post Details : post_id , description , create_time , share_count , collect_count , comment_count , play_count , video_duration , hashtags , original_sound , official_item , original_item , shortcode , video_url , music , cdn_url , width , carousel_images , and more. For all data points, click here . Profile Details : profile_id , profile_username , profile_url , profile_avatar , profile_biography , account_id , profile_followers , is_verified . Tagged Users and Media : tagged_user , carousel_images . Additional Information : tt_chain_token , secu_id . Discover by Profile URL This API allows users to retrieve posts from a TikTok profile based on a provided profile URL, with filtering options for the number of posts, date range, and post exclusions. Input Parameters : URL string required The TikTok profile URL. num_of_posts number The number of posts to collect. If not provided, there is no limit. posts_to_not_include array An array of post IDs to exclude from the collection. start_date string Start date for filtering posts (format: mm-dd-yyyy). Must be earlier than end_date . end_date string End date for filtering posts (format: mm-dd-yyyy). Must be later than start_date . what_to_collect string Specify the type of posts to collect (e.g., “post” or “reel”).
Output Structure : Includes comprehensive data points: Post Details : post_id , description , create_time , share_count , collect_count , comment_count , play_count , video_duration , hashtags , original_sound , official_item , original_item , shortcode , video_url , music , cdn_url , width , carousel_images , and more. For all data points, click here . Profile Details : profile_id , profile_username , profile_url , profile_avatar , profile_biography , account_id , profile_followers , is_verified . Tagged Users and Media : tagged_user , carousel_images . Additional Information : tt_chain_token , secu_id . This API allows users to discover and retrieve detailed information about posts from a specific TikTok profile, including post-specific metrics, profile details of the creator, and tagged users. It supports efficient content discovery and post analysis. Discover by Keywords This API allows users to search for TikTok posts based on specific keywords or hashtags, offering a powerful tool for discovering relevant content across TikTok’s platform. Input Parameters : search_keyword string required The keyword or hashtag to search for within TikTok posts. num_of_posts number The number of posts to collect. If not provided, there is no limit. posts_to_not_include array An array of post IDs to exclude from the collection. what_to_collect string Specify the type of posts to collect (e.g., “post” or “reel”). Output Structure : Includes comprehensive data points: Post Details : post_id , description , create_time , digg_count , share_count , collect_count , comment_count , play_count , video_duration , hashtags , original_sound , post_type , discovery_input , official_item , and more. For all data points, click here . Profile Details : profile_id , profile_username , profile_url , profile_avatar , profile_biography , account_id , profile_followers , is_verified . Tagged Users and Media : tagged_user , carousel_images . Additional Information : tt_chain_token , secu_id . 
This API allows users to discover posts on TikTok that match specific keywords or hashtags, providing insights into post details, profile information, and media. It’s a great tool for exploring trends, content, and users on TikTok. Discover by Discover URL This API allows users to collect detailed post data from a specific TikTok discover URL. Input Parameters : URL string required The TikTok discover URL from which posts will be retrieved. Output Structure : Includes comprehensive data points: Post Details : post_id , description , create_time , digg_count , share_count , collect_count , comment_count , play_count , video_duration , hashtags , original_sound , post_type , discovery_input , official_item , original_item , and more. For all data points, click here . Profile Details : profile_id , profile_username , profile_url , profile_avatar , profile_biography , account_id , profile_followers , is_verified . Tagged Users and Media : tagged_user , carousel_images . Additional Information : tt_chain_token , secu_id . This API provides detailed insights into TikTok posts discovered via the discover URL, allowing for easy access to trending content, user profiles, and post metadata for analysis and exploration. Comments API Collect by URL This API allows users to collect detailed comment data from a specific TikTok post using the provided post URL. Input Parameters : URL string required The TikTok post URL. Output Structure : Includes comprehensive data points: Post Details : post_url , post_id , post_date_created . For all data points, click here . Comment Details : date_created , comment_text , num_likes , num_replies , comment_id , comment_url . Commenter Details : commenter_user_name , commenter_id , commenter_url . This API provides detailed insights into TikTok post comments, including comment-specific metrics and information about the commenters, enabling effective comment analysis and interaction tracking.
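Each "Collect by URL" input above is a JSON row with a single required url field, and the discover variants add their own fields. A minimal, non-authoritative Python sketch of building these rows (field names are taken from this page; the commented-out trigger endpoint, dataset_id, and token are assumptions to verify against your Bright Data dashboard):

```python
def collect_row(url):
    """'Collect by URL' for the Profile, Posts, and Comments APIs:
    one required field, the TikTok URL."""
    return {"url": url}

def discover_by_search_row(search_url, country):
    """Profile API 'Discover by Search URL': both fields are required."""
    return {"search_url": search_url, "country": country}

def discover_by_keyword_row(search_keyword, num_of_posts=None, what_to_collect=None):
    """Posts API 'Discover by Keywords': optional fields are omitted
    from the row rather than sent as null."""
    row = {"search_keyword": search_keyword}
    if num_of_posts is not None:
        row["num_of_posts"] = num_of_posts
    if what_to_collect is not None:
        row["what_to_collect"] = what_to_collect
    return row

rows = [
    collect_row("https://www.tiktok.com/@example"),
    discover_by_keyword_row("#cooking", num_of_posts=10),
]

# Triggering the scraper would then POST these rows as the JSON body,
# e.g. (endpoint shape assumed; <DATASET_ID>/<API_TOKEN> are placeholders):
#
#   import requests
#   requests.post("https://api.brightdata.com/datasets/v3/trigger",
#                 params={"dataset_id": "<DATASET_ID>", "include_errors": "true"},
#                 headers={"Authorization": "Bearer <API_TOKEN>"},
#                 json=rows)
print(rows[1])  # {'search_keyword': '#cooking', 'num_of_posts': 10}
```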
| 2026-01-13T09:29:25
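The date filters on the Posts API's "Discover by Profile URL" are the easiest inputs to get wrong (mm-dd-yyyy format, start strictly before end). A hedged sketch of assembling one such row with a client-side sanity check — the field names are verbatim from the page above, but the validation is our own illustrative addition, not part of the API:

```python
from datetime import datetime

DATE_FMT = "%m-%d-%Y"  # the mm-dd-yyyy format the docs specify

def discover_by_profile_row(url, num_of_posts=None, posts_to_not_include=None,
                            start_date=None, end_date=None, what_to_collect=None):
    """Build one input row for 'Discover by Profile URL'.
    Optional fields are omitted; start_date must precede end_date."""
    if start_date and end_date:
        if datetime.strptime(start_date, DATE_FMT) >= datetime.strptime(end_date, DATE_FMT):
            raise ValueError("start_date must be earlier than end_date")
    row = {"url": url}
    optional = {
        "num_of_posts": num_of_posts,
        "posts_to_not_include": posts_to_not_include,
        "start_date": start_date,
        "end_date": end_date,
        "what_to_collect": what_to_collect,
    }
    row.update({k: v for k, v in optional.items() if v is not None})
    return row

row = discover_by_profile_row("https://www.tiktok.com/@example",
                              num_of_posts=10,
                              start_date="01-01-2025", end_date="06-30-2025")
print(sorted(row))  # ['end_date', 'num_of_posts', 'start_date', 'url']
```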
https://www.linkedin.com/products/splunk-enterprise/ | Splunk Enterprise | LinkedIn Splunk Enterprise Big Data Analytics Software by Splunk About Search, analysis and visualization for actionable insights from all of your data. Similar products BigQuery (Big Data Analytics Software) Mindtree Decision Moments (Big Data Analytics Software) Momenta+ (Big Data Analytics Software) TEDAx (Big Data Analytics Software) moneo (Big Data Analytics Software) Data Analyst (Big Data Analytics Software) Splunk products Splunk Cloud Platform (Application Performance Monitoring (APM) Software) Splunk Enterprise Security (Security Information & Event Management (SIEM) Software) Splunk IT Service Intelligence (ITSI) (AIOps Platforms) Splunk Mission Control Splunk Security Orchestration, Automation and Response (SOAR) (Security Orchestration, Automation, and Response (SOAR) Software) Splunk User Behavior Analytics (User & Entity Behavior Analytics (UEBA) Software) | 2026-01-13T09:29:25
https://www.linkedin.com/jobs/spotify-jobs-worldwide | 4,000+ Spotify jobs in Worldwide Any time (4,627) Past month (2,561) Past week (820) Past 24 hours (173) Company Paradox EN (1,250) Capgemini (306) Canonical (214) Insider One (172) Spotify (171) Job type Full-time (4,202) Part-time (94) Contract (184) Temporary (23) Volunteer (2) Experience level Internship (198) Entry level (1,265) Associate (141) Mid-Senior level (2,117) Director (371) Remote (2,023) On-site (1,629) Hybrid (965) 4,000+ Spotify Jobs in Worldwide Business Development Representative - Backstage Spotify New York, NY 4 days ago Payroll Specialist Spotify New York, NY 20 hours ago Data Scientist - Business Analytics Spotify New York, NY 2 weeks ago | 2026-01-13T09:29:25
https://docs.brightdata.com/api-reference/web-scraper-api/social-media-apis/tiktok#discover-by-discover-url | TikTok API Scrapers - Bright Data Docs Skip to main content Bright Data Docs home page English Search... ⌘ K Support Sign up Sign up Search... Navigation Social Media APIs TikTok API Scrapers Welcome Proxy Infrastructure Web Access APIs Data Feeds AI API Reference General Integrations Overview Authentication Terminology Postman collection Python SDK JavaScript SDK Products Unlocker API SERP API Marketplace Dataset API Web Scraper API POST Asynchronous Requests POST Synchronous Requests POST Crawl API Delivery APIs Management APIs Social Media APIs Overview Facebook Instagram LinkedIn TikTok Reddit Twitter Pinterest Quora Vimeo YouTube Scraper Studio API Scraping Shield Proxy Networks Proxy Manager Unlocker & SERP API Deep Lookup API (Beta) Administrative API Account Management API On this page Overview Profile API Collect by URL Discover by Search URL Posts API Collect by URL Discover by Profile URL Discover by Keywords Discover by Discover URL Comments API Collect by URL Social Media APIs TikTok API Scrapers Copy page Copy page Overview The TikTok API Suite offers multiple types of APIs, each designed for specific data collection needs from TikTok. Below is an overview of how these APIs connect and interact, based on the available features: Profile API This API allows users to collect profile details based on a single input: profile URL. Discovery functionality : Direct URL of the search Interesting Columns : nickname , awg_engagement_rate , followers , likes Posts API This API allows users to collect multiple posts based on a single input URL. Discovery functionality : - Direct URL of the TikTok profile - Discover by keywords - Direct URL of the discovery Interesting Columns : url , share_count , description , hashtags Comments API This API allows users to collect multiple comments from a post using its URL. 
Discovery functionality : N/A Interesting Columns : url , comment_text , commenter_url , num_likes Profile API Collect by URL This API allows users to retrieve detailed TikTok profile information using the provided profile URL. Input Parameters : URL string required The TikTok profile URL. Output Structure : Includes comprehensive data points: Profile Details : account_id , nickname , biography , bio_link , predicted_lang , is_verified , followers , following , likes , videos_count , create_time , id , url , profile_pic_url , profile_pic_url_hd , and more. For all data points, click here . Engagement Metrics : awg_engagement_rate , comment_engagement_rate , like_engagement_rate , like_count , digg_count . Privacy & Settings : is_private , relation , open_favorite , comment_setting , duet_setting , stitch_setting , is_ad_virtual , room_id , is_under_age_18 . Discovery & Top Videos : region , top_videos , discovery_input . This API allows users to retrieve detailed TikTok profile information, including engagement metrics, privacy settings, and top videos, offering insights into user activity and profile data. Discover by Search URL This API allows users to discover TikTok profiles based on a specific search URL and country, providing detailed profile information. Input Parameters : search_url string required The TikTok search URL. country string required The country from which to perform the search. Output Structure : Includes comprehensive data points: Profile Details : account_id , nickname , biography , bio_link , predicted_lang , is_verified , followers , following , likes , videos_count , create_time , id , url , profile_pic_url , profile_pic_url_hd , and more. For all data points, click here . Engagement Metrics : awg_engagement_rate , comment_engagement_rate , like_engagement_rate , like_count , digg_count . Privacy & Settings : is_private , relation , open_favorite , comment_setting , duet_setting , stitch_setting , is_ad_virtual , room_id , is_under_age_18 . 
Discovery & Top Videos : region , top_videos , discovery_input . This API enables users to discover TikTok profiles based on search criteria, offering insights into user activity, engagement, privacy settings, and top content. It helps facilitate efficient discovery and analysis of TikTok users. Posts API Collect by URL This API enables users to collect detailed data from TikTok posts by providing a post URL. Input Parameters : URL string required The TikTok post URL. Output Structure : Includes comprehensive data points: Post Details : post_id , description , create_time , share_count , collect_count , comment_count , play_count , video_duration , hashtags , original_sound , official_item , original_item , shortcode , video_url , music , cdn_url , width , carousel_images , and more. For all data points, click here . Profile Details : profile_id , profile_username , profile_url , profile_avatar , profile_biography , account_id , profile_followers , is_verified . Tagged Users and Media : tagged_user , carousel_images . Additional Information: : tt_chain_token , secu_id Discover by Profile URL This API allows users to retrieve posts from a TikTok profile based on a provided profile URL, with filtering options for the number of posts, date range, and post exclusions. Input Parameters : URL string required The TikTok profile URL. num_of_posts number The number of posts to collect. If not provided, there is no limit. posts_to_not_include array An array of post IDs to exclude from the collection. start_date string Start date for filtering posts (format: mm-dd-yyyy). Should be lower than end_date . end_date string End date for filtering posts (format: mm-dd-yyyy). Should be greater than start_date . what_to_collect string Specify the type of posts to collect (e.g., “post” or “reel”). 
Output Structure : Includes comprehensive data points: Post Details : post_id , description , create_time , share_count , collect_count , comment_count , play_count , video_duration , hashtags , original_sound , official_item , original_item , shortcode , video_url , music , cdn_url , width , carousel_images , and more. For all data points, click here . Profile Details : profile_id , profile_username , profile_url , profile_avatar , profile_biography , account_id , profile_followers , is_verified . Tagged Users and Media : tagged_user , carousel_images . Additional Information : tt_chain_token , secu_id . This API allows users to discover and retrieve detailed information about posts from a specific TikTok profile, including post-specific metrics, profile details of the creator, and tagged users. It supports efficient content discovery and post analysis. Discover by Keywords This API allows users to search for TikTok posts based on specific keywords or hashtags, offering a powerful tool for discovering relevant content across TikTok’s platform. Input Parameters : search_keyword string required The keyword or hashtag to search for within TikTok posts. num_of_posts number The number of posts to collect. If not provided, there is no limit. posts_to_not_include array An array of post IDs to exclude from the collection. what_to_collect string Specify the type of posts to collect (e.g., “post” or “reel”). Output Structure : Includes comprehensive data points: Post Details : post_id , description , create_time , digg_count , share_count , collect_count , comment_count , play_count , video_duration , hashtags , original_sound , post_type , discovery_input , official_item , and more. For all data points, click here . Profile Details : profile_id , profile_username , profile_url , profile_avatar , profile_biography , account_id , profile_followers , is_verified . Tagged Users and Media : tagged_user , carousel_images . Additional Information : tt_chain_token , secu_id . 
This API allows users to discover posts on TikTok that match specific keywords or hashtags, providing insights into post details, profile information, and media. It’s a great tool for exploring trends, content, and users on TikTok. Discover by Discover URL. This API allows users to collect detailed post data from a specific TikTok discover URL. Input Parameters: URL (string, required): the TikTok discover URL from which posts will be retrieved. Output Structure includes comprehensive data points. Post Details: post_id, description, create_time, digg_count, share_count, collect_count, comment_count, play_count, video_duration, hashtags, original_sound, post_type, discovery_input, official_item, original_item, and more. For all data points, click here. Profile Details: profile_id, profile_username, profile_url, profile_avatar, profile_biography, account_id, profile_followers, is_verified. Tagged Users and Media: tagged_user, carousel_images. Additional Information: tt_chain_token, secu_id. This API provides detailed insights into TikTok posts discovered via the discover URL, allowing for easy access to trending content, user profiles, and post metadata for analysis and exploration. Comments API: Collect by URL. This API allows users to collect detailed comment data from a specific TikTok post using the provided post URL. Input Parameters: URL (string, required): the TikTok post URL. Output Structure includes comprehensive data points. Post Details: post_url, post_id, post_date_created. For all data points, click here. Comment Details: date_created, comment_text, num_likes, num_replies, comment_id, comment_url. Commenter Details: commenter_user_name, commenter_id, commenter_url. This API provides detailed insights into TikTok post comments, including comment-specific metrics and information about the commenters, enabling effective comment analysis and interaction tracking.
| 2026-01-13T09:29:25
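As a sketch of how the “Discover by Profile URL” inputs above fit together, the following builds and validates the input object on the client side. The endpoint and transport are deliberately out of scope; the field names come from the parameter list above, and the validation rule (start_date earlier than end_date, mm-dd-yyyy format) follows the stated constraints. This is an illustrative helper, not part of the API itself.

```python
from datetime import datetime

def build_discover_payload(url, num_of_posts=None, posts_to_not_include=None,
                           start_date=None, end_date=None, what_to_collect=None):
    """Build the input object for a 'Discover by Profile URL' request.

    Dates use the mm-dd-yyyy format described in the docs, and start_date
    must fall strictly before end_date.
    """
    fmt = "%m-%d-%Y"
    if start_date and end_date:
        if datetime.strptime(start_date, fmt) >= datetime.strptime(end_date, fmt):
            raise ValueError("start_date must be earlier than end_date")
    payload = {"url": url}  # URL is the only required parameter
    if num_of_posts is not None:
        payload["num_of_posts"] = num_of_posts
    if posts_to_not_include:
        payload["posts_to_not_include"] = posts_to_not_include
    if start_date:
        payload["start_date"] = start_date
    if end_date:
        payload["end_date"] = end_date
    if what_to_collect:
        payload["what_to_collect"] = what_to_collect
    return payload
```

Optional parameters are simply omitted from the payload, matching the docs' note that an absent num_of_posts means no limit.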
https://docs.aws.amazon.com/ko_kr/AmazonCloudWatch/latest/monitoring/CloudWatch-Agent-configure-related-telemetry.html | Configure CloudWatch agent service and environment names for related entities - Amazon CloudWatch. Documentation, Amazon CloudWatch User Guide. The CloudWatch agent can send metrics and logs with entity data to support the Explore related pane in the CloudWatch console. The service name or environment name can be configured through the CloudWatch agent JSON configuration. Note: the agent configuration can be overridden. For details about how the agent decides which data to send for related entities, see Use the CloudWatch agent with related telemetry. For metrics, the names can be configured at the agent, metric, or plugin level. For logs, they can be configured at the agent, log, or file level. The most specific configuration is always used. For example, if configuration exists at both the agent level and the metric level, metrics use the metric-level configuration and everything else (logs) uses the agent-level configuration. The following example shows different ways to configure the service name and environment name. { "agent": { "service.name": "agent-level-service", "deployment.environment": "agent-level-environment" }, "metrics": { "service.name": "metric-level-service", "deployment.environment": "metric-level-environment", "metrics_collected": { "statsd": { "service.name": "statsd-level-service", "deployment.environment": "statsd-level-environment" }, "collectd": { "service.name": "collectd-level-service", "deployment.environment": "collectd-level-environment" } } }, "logs": { "service.name": "log-level-service", "deployment.environment": "log-level-environment", "logs_collected": { "files": { "collect_list": [ { "file_path": "/opt/aws/amazon-cloudwatch-agent/logs/amazon-cloudwatch-agent.log", "log_group_name": "amazon-cloudwatch-agent.log", "log_stream_name": "amazon-cloudwatch-agent.log", "service.name": "file-level-service", "deployment.environment": "file-level-environment" } ] } } } } | 2026-01-13T09:29:25
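The configuration precedence for these entity attributes (file over log over agent for logs; plugin over metric over agent for metrics) can be sketched as a lookup that walks from the most specific level to the least specific. This is an illustrative model of the documented rule, not agent code.

```python
def resolve_entity_attr(attr, *levels):
    """Return the value of attr (e.g. 'service.name') from the most
    specific configuration level that defines it.

    levels are dicts ordered from most specific to least specific,
    e.g. (file_cfg, logs_cfg, agent_cfg) for a log file, or
    (plugin_cfg, metrics_cfg, agent_cfg) for a metrics plugin.
    """
    for level in levels:
        if level and attr in level:
            return level[attr]
    return None

agent = {"service.name": "agent-level-service"}
metrics = {"service.name": "metric-level-service"}
statsd = {"service.name": "statsd-level-service"}

# statsd metrics: plugin level wins over the metric and agent levels
assert resolve_entity_attr("service.name", statsd, metrics, agent) == "statsd-level-service"
# logs with no log- or file-level setting fall back to the agent level
assert resolve_entity_attr("service.name", {}, {}, agent) == "agent-level-service"
```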
https://docs.aws.amazon.com/de_de/AmazonCloudWatch/latest/monitoring/CloudWatch-Agent-configure-related-telemetry.html | Configure CloudWatch agent service and environment names for related entities - Amazon CloudWatch. Documentation, Amazon CloudWatch User Guide. The CloudWatch agent can send metrics and logs with entity data to support the Explore related pane in the CloudWatch console. The service name or environment name can be configured in the CloudWatch agent JSON configuration. Note: the agent configuration can be overridden. For details about how the agent decides which data to send for related entities, see Use the CloudWatch agent with related telemetry. Metrics can be configured at the agent, metric, or plugin level. Logs can be configured at the agent, log, or file level. The most specific configuration is always used. For example, if configuration exists at both the agent level and the metric level, metrics use the metric-level configuration and everything else (logs) uses the agent-level configuration. The following example shows different ways to configure the service and environment name. { "agent": { "service.name": "agent-level-service", "deployment.environment": "agent-level-environment" }, "metrics": { "service.name": "metric-level-service", "deployment.environment": "metric-level-environment", "metrics_collected": { "statsd": { "service.name": "statsd-level-service", "deployment.environment": "statsd-level-environment" }, "collectd": { "service.name": "collectd-level-service", "deployment.environment": "collectd-level-environment" } } }, "logs": { "service.name": "log-level-service", "deployment.environment": "log-level-environment", "logs_collected": { "files": { "collect_list": [ { "file_path": "/opt/aws/amazon-cloudwatch-agent/logs/amazon-cloudwatch-agent.log", "log_group_name": "amazon-cloudwatch-agent.log", "log_stream_name": "amazon-cloudwatch-agent.log", "service.name": "file-level-service", "deployment.environment": "file-level-environment" } ] } } } } | 2026-01-13T09:29:25
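Because the CloudWatch agent reads its configuration as strict JSON, common slips such as trailing commas make the file unparseable. A quick way to sanity-check a configuration before restarting the agent is to run it through any standard JSON parser; this is a generic suggestion, not an agent feature.

```python
import json

# A trailing comma inside an object is not valid JSON and is rejected:
try:
    json.loads('{ "statsd": { "service.name": "statsd-level-service", } }')
    raise AssertionError("parser unexpectedly accepted a trailing comma")
except json.JSONDecodeError:
    pass

# A well-formed fragment of the configuration parses cleanly:
config = json.loads('''
{
  "agent": {
    "service.name": "agent-level-service",
    "deployment.environment": "agent-level-environment"
  }
}
''')
assert config["agent"]["service.name"] == "agent-level-service"
```

The same check works from a shell with `python -m json.tool config.json`, which prints the parsed document or a parse error with a line number.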
https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/Subscriptions.html | Real-time processing of log data with subscriptions - Amazon CloudWatch Logs. Documentation, Amazon CloudWatch User Guide. You can use subscriptions to get access to a real-time feed of log events from CloudWatch Logs and have it delivered to other services such as an Amazon Kinesis stream, an Amazon Data Firehose stream, or AWS Lambda for custom processing, analysis, or loading to other systems. When log events are sent to the receiving service, they are base64 encoded and compressed with the gzip format. You can also use CloudWatch Logs centralization to replicate log data from multiple accounts and regions into a central location. For more information, see Cross-account cross-Region log centralization. To begin subscribing to log events, create the receiving resource, such as an Amazon Kinesis Data Streams stream, where the events will be delivered. A subscription filter defines the filter pattern to use for filtering which log events get delivered to your AWS resource, as well as information about where to send matching log events. Log events are sent to the receiving resource soon after being ingested, usually in less than three minutes. Note: If a log group with a subscription uses log transformation, the filter pattern is compared to the transformed versions of the log events. For more information, see Transform logs during ingestion. You can create subscriptions at the account level and at the log group level. Each account can have one account-level subscription filter per Region. Each log group can have up to two subscription filters associated with it.
Note: If the destination service returns a retryable error such as a throttling exception or a retryable service exception (HTTP 5xx for example), CloudWatch Logs continues to retry delivery for up to 24 hours. CloudWatch Logs doesn't try to re-deliver if the error is a non-retryable error, such as AccessDeniedException or ResourceNotFoundException. In these cases the subscription filter is disabled for up to 10 minutes, and then CloudWatch Logs retries sending logs to the destination. During this disabled period, logs are skipped. CloudWatch Logs also produces CloudWatch metrics about the forwarding of log events to subscriptions. For more information, see Monitoring with CloudWatch metrics. You can also use a CloudWatch Logs subscription to stream log data in near real time to an Amazon OpenSearch Service cluster. For more information, see Streaming CloudWatch Logs data to Amazon OpenSearch Service. Subscriptions are supported only for log groups in the Standard log class. For more information about log classes, see Log classes. Note: Subscription filters might batch log events to optimize transmission and reduce the number of calls made to the destination. Batching is not guaranteed but is used when possible. For batch processing and analysis of log data on a schedule, consider Automating log analysis with scheduled queries. Scheduled queries run CloudWatch Logs Insights queries automatically and deliver results to destinations such as Amazon S3 buckets or Amazon EventBridge event buses. Contents: Concepts; Log group-level subscription filters; Account-level subscription filters; Cross-account cross-Region subscriptions; Confused deputy prevention; Log recursion prevention. | 2026-01-13T09:29:25
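As the page above notes, log events delivered to a subscription destination arrive base64 encoded and gzip compressed. For a Lambda destination the record arrives under the awslogs.data key, and can be unpacked as follows; the sample payload here is fabricated for illustration.

```python
import base64
import gzip
import json

def decode_subscription_event(event):
    """Unpack a CloudWatch Logs subscription record delivered to Lambda:
    base64-decode, gunzip, then parse the JSON log-events payload."""
    compressed = base64.b64decode(event["awslogs"]["data"])
    return json.loads(gzip.decompress(compressed))

# Round-trip demo with a fabricated payload in the documented shape.
payload = {
    "logGroup": "my-app",
    "logStream": "stream-1",
    "logEvents": [{"id": "1", "timestamp": 1700000000000, "message": "hello"}],
}
blob = base64.b64encode(gzip.compress(json.dumps(payload).encode()))
event = {"awslogs": {"data": blob.decode()}}

decoded = decode_subscription_event(event)
assert decoded["logEvents"][0]["message"] == "hello"
```

The same decode (base64 then gunzip) applies to records read from a Kinesis or Firehose destination, though the surrounding record envelope differs per service.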
https://docs.aws.amazon.com/de_de/AmazonCloudWatch/latest/logs/LogsAnomalyDetection.html | Log anomaly detection - Amazon CloudWatch Logs. Documentation, Amazon CloudWatch User Guide. Sections: Anomaly and pattern severity and priority; Anomaly visibility and time; Suppressing an anomaly; Frequently asked questions. You can detect anomalies in your log data in two ways: by creating a log anomaly detector for continuous monitoring, or by using the anomaly detection command in CloudWatch Logs Insights queries for on-demand analysis. A log anomaly detector scans the log events ingested into a log group and automatically finds anomalies in the log data. Anomaly detection uses machine learning and pattern recognition to establish baselines of typical log content. For on-demand analysis, you can use the anomaly detection command in CloudWatch Logs Insights queries to identify unusual patterns in time-series data. For more information about query-based anomaly detection, see Using anomaly detection in CloudWatch Logs Insights. After you create an anomaly detector for a log group, it trains on the log group's log events from the past two weeks. Training can take up to 15 minutes. After training is complete, incoming logs are analyzed to identify anomalies, and the anomalies are displayed in the CloudWatch Logs console so that you can examine them. CloudWatch Logs pattern recognition extracts log patterns by identifying static and dynamic content in your logs. Patterns are useful for analyzing large sets of logs because a large number of log events can often be compressed into a few patterns. For example, consider the following three log events: 2023-01-01 19:00:01 [INFO] Calling DynamoDB to store for ResourceID: 12342342k124-12345 / 2023-01-01 19:00:02 [INFO] Calling DynamoDB to store for ResourceID: 324892398123-1234R / 2023-01-01 19:00:03 [INFO] Calling DynamoDB to store for ResourceID: 3ff231242342-12345. All three log events follow one pattern: <Date-1> <Time-2> [INFO] Calling DynamoDB to store for resource id <ResourceID-3>. Fields within a pattern are called tokens. Fields that vary within a pattern, such as a request ID or a timestamp, are called dynamic tokens. Each different value found for a dynamic token is called a token value. If CloudWatch Logs can infer the type of data that a dynamic token represents, the token is displayed as <string-number>. The string is a description of the data type the token represents, and the number shows where this token appears in the pattern relative to the other dynamic tokens. CloudWatch Logs assigns the string part of the name based on its analysis of the content of the log events that contain the token. If CloudWatch Logs cannot infer what kind of data a dynamic token represents, it displays the token as <Token-number>, where number again indicates where in the pattern this token occurs relative to the other dynamic tokens. Common examples of dynamic tokens include error codes, IP addresses, timestamps, and request IDs. Log anomaly detection uses these patterns to find anomalies. After the training period for the anomaly detector's model, logs are evaluated against the known trends, and the anomaly detector flags significant fluctuations as anomalies. This chapter describes how to enable anomaly detection, view anomalies, create alarms for log anomaly detectors, and have log anomaly detectors publish metrics. It also describes how the anomaly detector and its findings can be encrypted with AWS Key Management Service. There is no charge for creating log anomaly detectors. Anomaly and pattern severity and priority: Each anomaly found by a log anomaly detector is assigned a priority, and each pattern found is assigned a severity. Priority is computed automatically, based on both the severity of the pattern and the amount of deviation from expected values. For example, if a certain token value suddenly increases by 500%, that anomaly might be designated HIGH priority even if its severity is NONE. Severity is based only on keywords that occur in the patterns, such as FATAL, ERROR, and WARN. If none of these keywords are found, a pattern's severity is flagged as NONE. Anomaly visibility and time: When you create an anomaly detector, you specify the maximum anomaly visibility period. This is the number of days that the anomaly is displayed in the console and returned by the ListAnomalies API operation. If an anomaly continues to occur after this period, it is automatically accepted as regular behavior and the anomaly detector model stops flagging it as an anomaly. If you don't adjust the visibility time when you create an anomaly detector, 21 days is used as the default. Suppressing an anomaly: After an anomaly has been found, you can choose to suppress it temporarily or permanently. Suppressing an anomaly causes the anomaly detector to stop flagging that occurrence as an anomaly for the amount of time you specify. When you suppress an anomaly, you can choose whether to suppress only that specific anomaly or all anomalies related to the pattern in which the anomaly was found. You can still view suppressed anomalies in the console, and you can also choose to stop suppressing them. Frequently asked questions. Does AWS use my data to train machine learning algorithms for AWS use or for other customers? No. The anomaly detection model created by the training is based on the log events in a log group and is used only within that log group and that AWS account. What types of log events work well with anomaly detection? Log anomaly detection works well for application logs and other log types where most log entries fit typical patterns. Log groups with events that contain log-level or severity keywords such as INFO, ERROR, and DEBUG are especially well suited for log anomaly detection. Log anomaly detection is not suitable for log events with extremely long JSON structures, such as CloudTrail logs: pattern analysis evaluates only the first 1,500 characters of a log line, so any characters beyond that limit are skipped. Audit or access logs, such as VPC flow logs, will also have less success with anomaly detection, which is designed to find application issues and so might not be suitable for network or access anomalies. To determine whether an anomaly detector is suitable for a certain log group, use CloudWatch Logs pattern analysis to find the number of patterns in the log events in the group. If the number of patterns is no more than about 300, anomaly detection might work well. For more information about pattern analysis, see Pattern analysis. What is flagged as an anomaly? The following can cause a log event to be flagged as an anomaly: a log event with a pattern never before seen in the log group; a significant variation from a known pattern; a new value for a dynamic token that has a discrete set of usual values; a large change in the number of occurrences of a value for a dynamic token. Although all of the above might be flagged as anomalies, they don't all mean that the application is performing poorly. For example, a higher-than-usual number of 200 success values could be flagged as an anomaly. In cases like this, you might consider suppressing anomalies that don't indicate problems. What happens with sensitive data that is masked? Any parts of log events that are masked as sensitive data are not scanned for anomalies. For more information about masking sensitive data, see Help protect sensitive log data with masking. | 2026-01-13T09:29:25
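The pattern-extraction idea described above (keep static content, replace fields that vary across similar events with numbered dynamic tokens) can be sketched as a toy function. This is an illustration of the concept only, not the CloudWatch algorithm, which additionally infers token types (producing names like <Date-1> or <ResourceID-3>) rather than plain position numbers.

```python
def extract_pattern(events):
    """Given log events that share a structure, keep tokens that are
    identical across all events and replace varying fields with
    numbered dynamic-token placeholders."""
    rows = [e.split() for e in events]
    assert len({len(r) for r in rows}) == 1, "toy version needs equal-length events"
    pattern, n = [], 0
    for column in zip(*rows):
        if len(set(column)) == 1:        # static content
            pattern.append(column[0])
        else:                            # dynamic token
            n += 1
            pattern.append(f"<Token-{n}>")
    return " ".join(pattern)

events = [
    "2023-01-01 19:00:01 [INFO] Calling DynamoDB to store for ResourceID: 12342342k124-12345",
    "2023-01-01 19:00:02 [INFO] Calling DynamoDB to store for ResourceID: 324892398123-1234R",
    "2023-01-01 19:00:03 [INFO] Calling DynamoDB to store for ResourceID: 3ff231242342-12345",
]
print(extract_pattern(events))
# 2023-01-01 <Token-1> [INFO] Calling DynamoDB to store for ResourceID: <Token-2>
```

Note that the date stays literal here because it is identical in all three sample events; with events spanning several days it would become a dynamic token as in the documentation's pattern.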
https://git-scm.com/book/ko/v2/Git-%ec%84%9c%eb%b2%84-%ec%9a%94%ec%95%bd | Git - Summary. Pro Git book, 2nd Edition, 4.10 Git on the Server - Summary. The source of this book is hosted on GitHub; patches, suggestions and comments are welcome. Summary: We have looked at several ways to run a Git server and to collaborate with others. Running a Git server on your own server gives you broad control and lets you operate it behind a firewall, but it takes significant time to set up and maintain. Using a hosting service makes setup and maintenance easy, but it means keeping your code on external servers; check whether your company or organization allows this before using one. Depending on your needs, choose one of the two approaches, or mix both as appropriate. | 2026-01-13T09:29:25
https://www.linkedin.com/legal/privacy-policy?session_redirect=%2Fservices%2Fproducts%2Fsalesforce-lightning-platform%2F&trk=registration-frontend_join-form-privacy-policy | LinkedIn Privacy Policy User Agreement Summary of User Agreement Privacy Policy Professional Community Policies Cookie Policy Copyright Policy Regional Info EU Notice California Privacy Disclosure U.S. State Privacy Laws Privacy Policy Effective November 3, 2025 Your Privacy Matters LinkedIn’s mission is to connect the world’s professionals to allow them to be more productive and successful. Central to this mission is our commitment to be transparent about the data we collect about you, how it is used and with whom it is shared. This Privacy Policy applies when you use our Services (described below). We offer our users choices about the data we collect, use and share as described in this Privacy Policy, Cookie Policy, Settings and our Help Center. Key Terms Choices Settings are available to Members of LinkedIn and Visitors are provided separate controls. Learn More. Table of Contents Data We Collect How We Use Your Data How We Share Information Your Choices and Obligations Other Important Information Introduction We are a social network and online platform for professionals. People use our Services to find and be found for business opportunities, to connect with others and find information. Our Privacy Policy applies to any Member or Visitor to our Services. Our registered users (“Members”) share their professional identities, engage with their network, exchange knowledge and professional insights, post and view relevant content, learn and develop skills, and find business and career opportunities. Content and data on some of our Services is viewable to non-Members (“Visitors”).
We use the term “Designated Countries” to refer to countries in the European Union (EU), European Economic Area (EEA), and Switzerland. Members and Visitors located in the Designated Countries or the UK can review additional information in our European Regional Privacy Notice . Services This Privacy Policy, including our Cookie Policy applies to your use of our Services. This Privacy Policy applies to LinkedIn.com, LinkedIn-branded apps, and other LinkedIn-branded sites, apps, communications and services offered by LinkedIn (“Services”), including off-site Services, such as our ad services and the “Apply with LinkedIn” and “Share with LinkedIn” plugins, but excluding services that state that they are offered under a different privacy policy. For California residents, additional disclosures required by California law may be found in our California Privacy Disclosure . Data Controllers and Contracting Parties If you are in the “Designated Countries”, LinkedIn Ireland Unlimited Company (“LinkedIn Ireland”) will be the controller of your personal data provided to, or collected by or for, or processed in connection with our Services. If you are outside of the Designated Countries, LinkedIn Corporation will be the controller of (or business responsible for) your personal data provided to, or collected by or for, or processed in connection with our Services. As a Visitor or Member of our Services, the collection, use and sharing of your personal data is subject to this Privacy Policy and other documents referenced in this Privacy Policy, as well as updates. Change Changes to the Privacy Policy apply to your use of our Services after the “effective date.” LinkedIn (“we” or “us”) can modify this Privacy Policy, and if we make material changes to it, we will provide notice through our Services, or by other means, to provide you the opportunity to review the changes before they become effective. If you object to any changes, you may close your account. 
You acknowledge that your continued use of our Services after we publish or send a notice about our changes to this Privacy Policy means that the collection, use and sharing of your personal data is subject to the updated Privacy Policy, as of its effective date. 1. Data We Collect 1.1 Data You Provide To Us You provide data to create an account with us. Registration To create an account you need to provide data including your name, email address and/or mobile number, general location (e.g., city), and a password. If you register for a premium Service, you will need to provide payment (e.g., credit card) and billing information. You create your LinkedIn profile (a complete profile helps you get the most from our Services). Profile You have choices about the information on your profile, such as your education, work experience, skills, photo, city or area , endorsements, and optional verifications of information on your profile (such as verifications of your identity or workplace). You don’t have to provide additional information on your profile; however, profile information helps you to get more from our Services, including helping recruiters and business opportunities find you. It’s your choice whether to include sensitive information on your profile and to make that sensitive information public. Please do not post or add personal data to your profile that you would not want to be publicly available. You may give other data to us, such as by syncing your calendar. Posting and Uploading We collect personal data from you when you provide, post or upload it to our Services, such as when you fill out a form, (e.g., with demographic data or salary), respond to a survey, or submit a resume or fill out a job application on our Services. If you sync your calendars with our Services, we will collect your calendar meeting information to keep growing your network by suggesting connections for you and others, and by providing information about events, e.g. 
times, places, attendees and contacts. You don’t have to post or upload personal data; though if you don’t, it may limit your ability to grow and engage with your network over our Services. 1.2 Data From Others Others may post or write about you. Content and News You and others may post content that includes information about you (as part of articles, posts, comments, videos) on our Services. We also may collect public information about you, such as professional-related news and accomplishments, and make it available as part of our Services, including, as permitted by your settings, in notifications to others of mentions in the news. Others may sync their calendar with our Services. Contact and Calendar Information We receive personal data (including contact information) about you when others import or sync their calendar with our Services, associate their contacts with Member profiles, scan and upload business cards, or send messages using our Services (including invites or connection requests). If you or others opt-in to sync email accounts with our Services, we will also collect “email header” information that we can associate with Member profiles. Customers and partners may provide data to us. Partners We receive personal data (e.g., your job title and work email address) about you when you use the services of our customers and partners, such as employers or prospective employers and applicant tracking systems providing us job application data. Related Companies and Other Services We receive data about you when you use some of the other services provided by us or our Affiliates, including Microsoft. For example, you may choose to send us information about your contacts in Microsoft apps and services, such as Outlook, for improved professional networking activities on our Services, or we may receive information from Microsoft about your engagement with their sites and services. 1.3 Service Use We log your visits and use of our Services, including mobile apps.
We log usage data when you visit or otherwise use our Services, including our sites, app and platform technology, such as when you view or click on content (e.g., learning video) or ads (on or off our sites and apps), perform a search, install or update one of our mobile apps, share articles or apply for jobs. We use log-ins, cookies, device information and internet protocol (“IP”) addresses to identify you and log your use.

1.4 Cookies and Similar Technologies

We collect data through cookies and similar technologies.

As further described in our Cookie Policy, we use cookies and similar technologies (e.g., pixels and ad tags) to collect data (e.g., device IDs) to recognize you and your device(s) on, off and across different services and devices where you have engaged with our Services. We also allow some others to use cookies as described in our Cookie Policy. If you are outside the Designated Countries, we also collect (or rely on others, including Microsoft, who collect) information about your device where you have not engaged with our Services (e.g., ad ID, IP address, operating system and browser information) so we can provide our Members with relevant ads and better understand their effectiveness. Learn more.

You can opt out from our use of data from cookies and similar technologies that track your behavior on the sites of others for ad targeting and other ad-related purposes. For Visitors, the controls are here.

1.5 Your Device and Location

We receive data through cookies and similar technologies.

When you visit or leave our Services (including some plugins and our cookies or similar technology on the sites of others), we receive the URL of both the site you came from and the one you go to and the time of your visit. We also get information about your network and device (e.g., IP address, proxy server, operating system, web browser and add-ons, device identifier and features, cookie IDs and/or ISP, or your mobile carrier).
If you use our Services from a mobile device, that device will send us data about your location based on your phone settings. We will ask you to opt-in before we use GPS or other tools to identify your precise location.

1.6 Communications

If you communicate through our Services, we learn about that.

We collect information about you when you communicate with others through our Services (e.g., when you send, receive, or engage with messages, events, or connection requests, including our marketing communications). This may include information that indicates who you are communicating with and when. We also use automated systems to support and protect our site. For example, we use such systems to suggest possible responses to messages and to manage or block content that violates our User Agreement or Professional Community Policies.

1.7 Workplace and School Provided Information

When your organization (e.g., employer or school) buys a premium Service for you to use, they give us data about you.

Others buying our Services for your use, such as your employer or your school, provide us with personal data about you and your eligibility to use the Services that they purchase for use by their workers, students or alumni. For example, we will get contact information for “LinkedIn Page” (formerly Company Page) administrators and for authorizing users of our premium Services, such as our recruiting, sales or learning products.

1.8 Sites and Services of Others

We get data when you visit sites that include our ads, cookies or plugins or when you log-in to others’ services with your LinkedIn account.

We receive information about your visits and interaction with services provided by others when you log-in with LinkedIn or visit others’ services that include some of our plugins (such as “Apply with LinkedIn”) or our ads, cookies or similar technologies.

1.9 Other

We are improving our Services, which means we get new data and create new ways to use data.
Our Services are dynamic, and we often introduce new features, which may require the collection of new information. If we collect materially different personal data or materially change how we collect, use or share your data, we will notify you and may also modify this Privacy Policy.

Key Terms

Affiliates
Affiliates are companies controlling, controlled by or under common control with us, including, for example, LinkedIn Ireland, LinkedIn Corporation, LinkedIn Singapore and Microsoft Corporation or any of its subsidiaries (e.g., GitHub, Inc.).

2. How We Use Your Data

We use your data to provide, support, personalize and develop our Services.

How we use your personal data will depend on which Services you use, how you use those Services and the choices you make in your settings. We may use your personal data to improve, develop, and provide products and Services, develop and train artificial intelligence (AI) models, develop, provide, and personalize our Services, and gain insights with the help of AI, automated systems, and inferences, so that our Services can be more relevant and useful to you and others. You can review LinkedIn's Responsible AI principles here and learn more about our approach to generative AI here. Learn more about the inferences we may make, including as to your age and gender, and how we use them.

2.1 Services

Our Services help you connect with others, find and be found for work and business opportunities, stay informed, get training and be more productive.

We use your data to authorize access to our Services and honor your settings.

Stay Connected
Our Services allow you to stay in touch and up to date with colleagues, partners, clients, and other professional contacts. To do so, you can “connect” with the professionals who you choose, and who also wish to “connect” with you.
Subject to your and their settings, when you connect with other Members, you will be able to search each other’s connections in order to exchange professional opportunities. We use data about you (such as your profile, profiles you have viewed or data provided through address book uploads or partner integrations) to help others find your profile, suggest connections for you and others (e.g., Members who share your contacts or job experiences) and enable you to invite others to become a Member and connect with you. You can also opt-in to allow us to use your precise location or proximity to others for certain tasks (e.g., to suggest other nearby Members for you to connect with, calculate the commute to a new job, or notify your connections that you are at a professional event).

It is your choice whether to invite someone to our Services, send a connection request, or allow another Member to become your connection. When you invite someone to connect with you, your invitation will include your network and basic profile information (e.g., name, profile photo, job title, region). We will send invitation reminders to the person you invited. You can choose whether or not to share your own list of connections with your connections.

Visitors have choices about how we use their data.

Stay Informed
Our Services allow you to stay informed about news, events and ideas regarding professional topics you care about, and from professionals you respect. Our Services also allow you to improve your professional skills, or learn new ones. We use the data we have about you (e.g., data you provide, data we collect from your engagement with our Services and inferences we make from the data we have about you) to personalize our Services for you, such as by recommending or ranking relevant content and conversations on our Services. We also use the data we have about you to suggest skills you could add to your profile and skills that you might need to pursue your next opportunity.
So, if you let us know that you are interested in a new skill (e.g., by watching a learning video), we will use this information to personalize content in your feed, suggest that you follow certain Members on our site, or suggest related learning content to help you towards that new skill. We use your content, activity and other data, including your name and photo, to provide notices to your network and others. For example, subject to your settings, we may notify others that you have updated your profile, posted content, taken a social action, used a feature, made new connections or been mentioned in the news.

Career
Our Services allow you to explore careers, evaluate educational opportunities, and seek out, and be found for, career opportunities. Your profile can be found by those looking to hire (for a job or a specific task) or be hired by you. We will use your data to recommend jobs and show you and others relevant professional contacts (e.g., who work at a company, in an industry, function or location, or have certain skills and connections). You can signal that you are interested in changing jobs and share information with recruiters. We will use your data to recommend jobs to you and you to recruiters. We may use automated systems to provide content and recommendations to help make our Services more relevant to our Members, Visitors and customers. Keeping your profile accurate and up-to-date may help you better connect to others and to opportunities through our Services.

Productivity
Our Services allow you to collaborate with colleagues, search for potential clients, customers, partners and others to do business with. Our Services allow you to communicate with other Members and schedule and prepare meetings with them. If your settings allow, we scan messages to provide “bots” or similar tools that facilitate tasks such as scheduling meetings, drafting responses, summarizing messages or recommending next steps. Learn more.
2.2 Premium Services

Our premium Services help paying users to search for and contact Members through our Services, such as searching for and contacting job candidates, sales leads and co-workers, managing talent and promoting content.

We sell premium Services that provide our customers and subscribers with customized search functionality and tools (including messaging and activity alerts) as part of our talent, marketing and sales solutions. Customers can export limited information from your profile, such as name, headline, current company, current title, and general location (e.g., Dublin), such as to manage sales leads or talent, unless you opt-out. We do not provide contact information to customers as part of these premium Services without your consent. Premium Services customers can store information they have about you in our premium Services, such as a resume or contact information or sales history. The data stored about you by these customers is subject to the policies of those customers. Other enterprise Services and features that use your data include TeamLink and LinkedIn Pages (e.g., content analytics and followers).

2.3 Communications

We contact you and enable communications between Members. We offer settings to control what messages you receive and how often you receive some types of messages.

We will contact you through email, mobile phone, notices posted on our websites or apps, messages to your LinkedIn inbox, and other ways through our Services, including text messages and push notifications. We will send you messages about the availability of our Services, security, or other service-related issues. We also send messages about how to use our Services, network updates, reminders, job suggestions and promotional messages from us and our partners. You may change your communication preferences at any time. Please be aware that you cannot opt out of receiving service messages from us, including security and legal notices.
We also enable communications between you and others through our Services, including for example invitations, InMail, groups and messages between connections.

2.4 Advertising

We serve you tailored ads both on and off our Services. We offer you choices regarding personalized ads, but you cannot opt-out of seeing non-personalized ads.

We target (and measure the performance of) ads to Members, Visitors and others both on and off our Services directly or through a variety of partners, using the following data, whether separately or combined:

Data collected by advertising technologies on and off our Services using pixels, ad tags (e.g., when an advertiser installs a LinkedIn tag on their website), cookies, and other device identifiers;
Member-provided information (e.g., profile, contact information, title and industry);
Data from your use of our Services (e.g., search history, feed, content you read, who you follow or who is following you, connections, groups participation, page visits, videos you watch, clicking on an ad, etc.), including as described in Section 1.3;
Information from advertising partners, vendors and publishers; and
Information inferred from data described above (e.g., using job titles from a profile to infer industry, seniority, and compensation bracket; using graduation dates to infer age or using first names or pronoun usage to infer gender; using your feed activity to infer your interests; or using device data to recognize you as a Member).

Learn more about the inferences we make and how they may be used for advertising. Learn more about the ad technologies we use and our advertising services and partners. You can learn more about our compliance with laws in the Designated Countries or the UK in our European Regional Privacy Notice.

We will show you ads called sponsored content which look similar to non-sponsored content, except that they are labeled as advertising (e.g., as “ad” or “sponsored”).
If you take a social action (such as like, comment or share) on these ads, your action is associated with your name and viewable by others, including the advertiser. Subject to your settings, if you take a social action on the LinkedIn Services, that action may be mentioned with related ads. For example, when you like a company, we may include your name and photo when their sponsored content is shown.

Ad Choices
You have choices regarding our uses of certain categories of data to show you more relevant ads. Member settings can be found here. For Visitors, the setting is here.

Info to Ad Providers
We do not share your personal data with any non-Affiliated third-party advertisers or ad networks except for: (i) hashed IDs or device identifiers (to the extent they are personal data in some countries); (ii) with your separate permission (e.g., in a lead generation form); or (iii) data already visible to any users of the Services (e.g., profile). However, if you view or click on an ad on or off our Services, the ad provider will get a signal that someone visited the page that displayed the ad, and they may, through the use of mechanisms such as cookies, determine it is you. Advertising partners can associate personal data collected by the advertiser directly from you with hashed IDs or device identifiers received from us. We seek to contractually require such advertising partners to obtain your explicit, opt-in consent before doing so where legally required, and in such instances, we take steps to ensure that consent has been provided before processing data from them.

2.5 Marketing

We promote our Services to you and others.

In addition to advertising our Services, we use Members’ data and content for invitations and communications promoting membership and network growth, engagement and our Services, such as by showing your connections that you have used a feature on our Services.
2.6 Developing Services and Research

We develop our Services and conduct research.

Service Development
We use data, including public feedback, to conduct research and development for our Services in order to provide you and others with a better, more intuitive and personalized experience, drive membership growth and engagement on our Services, and help connect professionals to each other and to economic opportunity.

Other Research
We seek to create economic opportunity for Members of the global workforce and to help them be more productive and successful. We use the personal data available to us to research social, economic and workplace trends, such as jobs availability and skills needed for these jobs, and policies that help bridge the gap in various industries and geographic areas. In some cases, we work with trusted third parties to perform this research, under controls that are designed to protect your privacy. We may also make public data available to researchers to enable assessment of the safety and legal compliance of our Services. We publish or allow others to publish economic insights, presented as aggregated data rather than personal data.

Surveys
Polls and surveys are conducted by us and others through our Services. You are not obligated to respond to polls or surveys, and you have choices about the information you provide. You may opt-out of survey invitations.

2.7 Customer Support

We use data to help you and fix problems.

We use data (which can include your communications) to investigate, respond to and resolve complaints and for Service issues (e.g., bugs).

2.8 Insights That Do Not Identify You

We use data to generate insights that do not identify you.

We use your data to perform analytics to produce and share insights that do not identify you.
For example, we may use your data to generate statistics about our Members, their profession or industry, to calculate ad impressions served or clicked on (e.g., for basic business reporting to support billing and budget management or, subject to your settings, for reports to advertisers who may use them to inform their advertising campaigns), to show Members information about engagement with a post or LinkedIn Page, to publish visitor demographics for a Service or create demographic workforce insights, or to understand usage of our services.

2.9 Security and Investigations

We use data for security, fraud prevention and investigations.

We and our Affiliates, including Microsoft, may use your data (including your communications) for security purposes or to prevent or investigate possible fraud or other violations of the law, our User Agreement and/or attempts to harm our Members, Visitors, company, Affiliates, or others.

Key Terms

Social Action
E.g., like, comment, follow, share.

Partners
Partners include ad networks, exchanges and others.

3. How We Share Information

3.1 Our Services

Any data that you include on your profile and any content you post or social action (e.g., likes, follows, comments, shares) you take on our Services will be seen by others, consistent with your settings.

Profile
Your profile is fully visible to all Members and customers of our Services. Subject to your settings, it can also be visible to others on or off of our Services (e.g., Visitors to our Services or users of third-party search tools). As detailed in our Help Center, your settings, degree of connection with the viewing Member, the subscriptions they may have, their usage of our Services, access channels and search types (e.g., by name or by keyword) impact the availability of your profile and whether they can view certain fields in your profile.
Posts, Likes, Follows, Comments, Messages
Our Services allow viewing and sharing information including through posts, likes, follows and comments. When you share an article or a post (e.g., an update, image, video or article) publicly, it can be viewed by everyone and re-shared anywhere (subject to your settings). Members, Visitors and others will be able to find and see your publicly-shared content, including your name (and photo if you have provided one).

In a group, posts are visible to others according to group type. For example, posts in private groups are visible to others in the group and posts in public groups are visible publicly. Your membership in groups is public and part of your profile, but you can change visibility in your settings.

Any information you share through companies’ or other organizations’ pages on our Services will be viewable by those organizations and others who view those pages’ content. When you follow a person or organization, you are visible to others and that “page owner” as a follower.

We let senders know when you act on their message, subject to your settings where applicable. Subject to your settings, we let a Member know when you view their profile. We also give you choices about letting organizations know when you've viewed their Page.

When you like or re-share or comment on another’s content (including ads), others will be able to view these “social actions” and associate them with you (e.g., your name, profile and photo if you provided it).

Your employer can see how you use Services they provided for your work (e.g., as a recruiter or sales agent) and related information. We will not show them your job searches or personal messages.

Enterprise Accounts
Your employer may offer you access to our enterprise Services such as Recruiter, Sales Navigator, LinkedIn Learning or our advertising Campaign Manager. Your employer can review and manage your use of such enterprise Services.
Depending on the enterprise Service, before you use such Service, we will ask for permission to share with your employer relevant data from your profile or use of our non-enterprise Services. For example, users of Sales Navigator will be asked to share their “social selling index”, a score calculated in part based on their personal account activity. We understand that certain activities such as job hunting and personal messages are sensitive, and so we do not share those with your employer unless you choose to share them with your employer through our Services (for example, by applying for a new position in the same company or mentioning your job hunting in a message to a co-worker through our Services). Subject to your settings, when you use workplace tools and services (e.g., interactive employee directory tools), certain of your data may also be made available to your employer or be connected with information we receive from your employer to enable these tools and services.

3.2 Communication Archival

Regulated Members may need to store communications outside of our Service.

Some Members (or their employers) need, for legal or professional compliance, to archive their communications and social media activity, and will use services of others to provide these archival services. We enable archiving of messages by and to those Members outside of our Services. For example, a financial advisor needs to archive communications with her clients through our Services in order to maintain her professional financial advisor license.

3.3 Others’ Services

You may link your account with others’ services so that they can look up your contacts’ profiles, post your shares on such platforms, or enable you to start conversations with your connections on such platforms. Excerpts from your profile will also appear on the services of others.

Subject to your settings, other services may look up your profile.
When you opt to link your account with other services, personal data (e.g., your name, title, and company) will become available to them. The sharing and use of that personal data will be described in, or linked to, a consent screen when you opt to link the accounts. For example, you may link your Twitter or WeChat account to share content from our Services into these other services, or your email provider may give you the option to upload your LinkedIn contacts into its own service. Third-party services have their own privacy policies, and you may be giving them permission to use your data in ways we would not. You may revoke the link with such accounts.

The information you make available to others in our Services (e.g., information from your profile, your posts, your engagement with the posts, or messages to Pages) may be available to them on other services. For example, search tools, mail and calendar applications, or talent and lead managers may show a user limited profile data (subject to your settings), and social media management tools or other platforms may display your posts. The information retained on these services may not reflect updates you make on LinkedIn.

3.4 Related Services

We share your data across our different Services and LinkedIn affiliated entities.

We will share your personal data with our Affiliates to provide and develop our Services. For example, we may refer a query to Bing in some instances, such as where you'd benefit from a more up to date response in a chat experience. Subject to our European Regional Privacy Notice, we may also share with our Affiliates, including Microsoft, your (1) publicly-shared content (such as your public LinkedIn posts) to provide or develop their services and (2) personal data to improve, provide or develop their advertising services.
Where allowed, we may combine information internally across the different Services covered by this Privacy Policy to help our Services be more relevant and useful to you and others. For example, we may personalize your feed or job recommendations based on your learning history.

3.5 Service Providers

We may use others to help us with our Services.

We use others to help us provide our Services (e.g., maintenance, analysis, audit, payments, fraud detection, customer support, marketing and development). They will have access to your information (e.g., the contents of a customer support request) as reasonably necessary to perform these tasks on our behalf and are obligated not to disclose or use it for other purposes. If you purchase a Service from us, we may use a payments service provider who may separately collect information about you (e.g., for fraud prevention or to comply with legal obligations).

3.6 Legal Disclosures

We may need to share your data when we believe it’s required by law or to help protect the rights and safety of you, us or others.

It is possible that we will need to disclose information about you when required by law, subpoena, or other legal process or if we have a good faith belief that disclosure is reasonably necessary to (1) investigate, prevent or take action regarding suspected or actual illegal activities or to assist government enforcement agencies; (2) enforce our agreements with you; (3) investigate and defend ourselves against any third-party claims or allegations; (4) protect the security or integrity of our Services or the products or services of our Affiliates (such as by sharing with companies facing similar threats); or (5) exercise or protect the rights and safety of LinkedIn, our Members, personnel or others. We attempt to notify Members about legal demands for their personal data when appropriate in our judgment, unless prohibited by law or court order or when the request is an emergency.
We may dispute such demands when we believe, in our discretion, that the requests are overbroad, vague or lack proper authority, but we do not promise to challenge every demand. To learn more, see our Data Request Guidelines and Transparency Report.

3.7 Change in Control or Sale

We may share your data when our business is sold to others, but it must continue to be used in accordance with this Privacy Policy.

We can also share your personal data as part of a sale, merger or change in control, or in preparation for any of these events. Any other entity which buys us or part of our business will have the right to continue to use your data, but only in the manner set out in this Privacy Policy unless you agree otherwise.

4. Your Choices & Obligations

4.1 Data Retention

We keep most of your personal data for as long as your account is open.

We generally retain your personal data as long as you keep your account open or as needed to provide you Services. This includes data you or others provided to us and data generated or inferred from your use of our Services. Even if you only use our Services when looking for a new job every few years, we will retain your information and keep your profile open, unless you close your account. In some cases we choose to retain certain information (e.g., insights about Services use) in a depersonalized or aggregated form.

4.2 Rights to Access and Control Your Personal Data

You can access or delete your personal data. You have many choices about how your data is collected, used and shared.

We provide many choices about the collection, use and sharing of your data, from deleting or correcting data you include in your profile and controlling the visibility of your posts to advertising opt-outs and communication controls. We offer you settings to control and manage the personal data we have about you.
For personal data that we have about you, you can:

Delete Data: You can ask us to erase or delete all or some of your personal data (e.g., if it is no longer necessary to provide Services to you).
Change or Correct Data: You can edit some of your personal data through your account. You can also ask us to change, update or fix your data in certain cases, particularly if it’s inaccurate.
Object to, or Limit or Restrict, Use of Data: You can ask us to stop using all or some of your personal data (e.g., if we have no legal right to keep using it) or to limit our use of it (e.g., if your personal data is inaccurate or unlawfully held).
Right to Access and/or Take Your Data: You can ask us for a copy of your personal data and can ask for a copy of personal data you provided in machine readable form.

Visitors can learn more about how to make these requests here. You may also contact us using the contact information below, and we will consider your request in accordance with applicable laws. Residents in the Designated Countries and the UK, and other regions, may have additional rights under their laws.

4.3 Account Closure

We keep some of your data even after you close your account.

If you choose to close your LinkedIn account, your personal data will generally stop being visible to others on our Services within 24 hours. We generally delete closed account information within 30 days of account closure, except as noted below.

We retain your personal data even after you have closed your account if reasonably necessary to comply with our legal obligations (including law enforcement requests), meet regulatory requirements, resolve disputes, maintain security, prevent fraud and abuse (e.g., if we have restricted your account for breach of our Professional Community Policies), enforce our User Agreement, or fulfill your request to "unsubscribe" from further messages from us. We will retain de-personalized information after your account has been closed.
Information you have shared with others (e.g., through InMail, updates or group posts) will remain visible after you close your account or delete the information from your own profile or mailbox, and we do not control data that other Members have copied out of our Services. Groups content and ratings or review content associated with closed accounts will show an unknown user as the source. Your profile may continue to be displayed in the services of others (e.g., search tools) until they refresh their cache.

5. Other Important Information

5.1. Security

We monitor for and try to prevent security breaches. Please use the security features available through our Services.

We implement security safeguards designed to protect your data, such as HTTPS. We regularly monitor our systems for possible vulnerabilities and attacks. However, we cannot warrant the security of any information that you send us. There is no guarantee that data may not be accessed, disclosed, altered, or destroyed by breach of any of our physical, technical, or managerial safeguards.

5.2. Cross-Border Data Transfers

We store and use your data outside your country.

We process data both inside and outside of the United States and rely on legally-provided mechanisms to lawfully transfer data across borders. Learn more. Countries where we process data may have laws which are different from, and potentially not as protective as, the laws of your own country.

5.3 Lawful Bases for Processing

We have lawful bases to collect, use and share data about you. You have choices about our use of your data. At any time, you can withdraw consent you have provided by going to settings.

We will only collect and process personal data about you where we have lawful bases. Lawful bases include consent (where you have given consent), contract (where processing is necessary for the performance of a contract with you (e.g., to deliver the LinkedIn Services you have requested)) and “legitimate interests.” Learn more.
Where we rely on your consent to process personal data, you have the right to withdraw or decline your consent at any time and where we rely on legitimate interests, you have the right to object. Learn More . If you have any questions about the lawful bases upon which we collect and use your personal data, please contact our Data Protection Officer here . If you're located in one of the Designated Countries or the UK, you can learn more about our lawful bases for processing in our European Regional Privacy Notice . 5.4. Direct Marketing and Do Not Track Signals Our statements regarding direct marketing and “do not track” signals. We currently do not share personal data with third parties for their direct marketing purposes without your permission. Learn more about this and about our response to “do not track” signals. 5.5. Contact Information You can contact us or use other options to resolve any complaints. If you have questions or complaints regarding this Policy, please first contact LinkedIn online. You can also reach us by physical mail . If contacting us does not resolve your complaint, you have more options . Residents in the Designated Countries and other regions may also have the right to contact our Data Protection Officer here . If this does not resolve your complaint, Residents in the Designated Countries and other regions may have more options under their laws. Key Terms Consent Where we process data based on consent, we will ask for your explicit consent. You may withdraw your consent at any time, but that will not affect the lawfulness of the processing of your personal data prior to such withdrawal. Where we rely on contract, we will ask that you agree to the processing of personal data that is necessary for entering into or performance of your contract with us. We will rely on legitimate interests as a basis for data processing where the processing of your data is not overridden by your interests or fundamental rights and freedoms. 
| 2026-01-13T09:29:25 |
https://docs.aws.amazon.com/de_de/AmazonCloudWatch/latest/monitoring/installing-cloudwatch-agent-ssm.html#CloudWatch-Agent-profile-instance-fleet | Install the CloudWatch agent with AWS Systems Manager - Amazon CloudWatch Documentation Amazon CloudWatch User Guide Install the CloudWatch agent with AWS Systems Manager Using AWS Systems Manager makes it easier to install the CloudWatch agent on a fleet of Amazon EC2 instances. You can download the agent to one server and create your CloudWatch agent configuration file for all servers in the fleet. You can then use Systems Manager to install the agent on the other servers, using the configuration file that you created. Use the following topics to install and run the CloudWatch agent with AWS Systems Manager. Topics Install or update the SSM Agent Verify Systems Manager prerequisites Verify internet access Download the CloudWatch agent package on your first instance Create and modify the agent configuration file Install and start the CloudWatch agent on additional EC2 instances using your agent configuration (Optional) Modify the common configuration and named profile for the CloudWatch agent Install or update the SSM Agent On an Amazon EC2 instance, the CloudWatch agent requires that the instance is running version 2.2.93.0 or later of the SSM Agent. Before you install the CloudWatch agent, update or install the SSM Agent on the instance if you haven't already done so. For information about installing or updating the SSM Agent on an instance running Linux, see Installing and configuring SSM Agent on Linux instances in the AWS Systems Manager User Guide. For information about installing or updating the SSM Agent, see Installing and configuring SSM Agent in the AWS Systems Manager User Guide. Verify Systems Manager prerequisites Before you use Systems Manager Run Command to install and configure the CloudWatch agent, verify that your instances meet the minimum Systems Manager requirements. For more information, see Systems Manager prerequisites in the AWS Systems Manager User Guide. Verify internet access Your Amazon EC2 instances must be able to connect to CloudWatch endpoints. This can be through an internet gateway, a NAT gateway, or CloudWatch interface VPC endpoints.
For more information about how to configure internet access, see Internet gateways in the Amazon VPC User Guide. The following are the endpoints and ports that you might need to configure on your proxy: If you're using the agent to collect metrics, you must allow list the CloudWatch endpoints for the appropriate Regions. These endpoints are listed in Amazon CloudWatch in the Amazon Web Services General Reference. If you're using the agent to collect logs, you must allow list the CloudWatch Logs endpoints for the appropriate Regions. These endpoints are listed in Amazon CloudWatch Logs in the Amazon Web Services General Reference. If you're installing the agent with Systems Manager or storing the configuration file in Parameter Store, you must allow list the Systems Manager endpoints for the appropriate Regions. These endpoints are listed in AWS Systems Manager in the Amazon Web Services General Reference. Download the CloudWatch agent package on your first instance Follow these steps to download the CloudWatch agent package using Systems Manager. To download the CloudWatch agent using Systems Manager Open the Systems Manager console at https://console.aws.amazon.com/systems-manager/ . In the navigation pane, choose Run Command. -or- If the AWS Systems Manager home page opens, scroll down and choose Explore Run Command. Choose Run command. In the Command document list, choose AWS-ConfigureAWSPackage. In the Targets area, choose the instance on which to install the CloudWatch agent. If you don't see a specific instance, it might not be configured as a managed instance for use with Systems Manager. For more information, see Setting up AWS Systems Manager for hybrid environments in the AWS Systems Manager User Guide. In the Action list, choose Install. In the Name box, enter AmazonCloudWatchAgent. Keep Version set to latest to install the latest version of the agent. Choose Run. Optionally, in the Targets and outputs areas, select the button next to an instance name and choose View output. Systems Manager should show that the agent was successfully installed. Create and modify the agent configuration file After you have downloaded the CloudWatch agent, you must create the configuration file before you start the agent on any servers. If you want to save your agent configuration file in Systems Manager Parameter Store, you must use an EC2 instance to save it in Parameter Store, and you must first attach the CloudWatchAgentAdminRole IAM role to that instance. For more information about attaching roles, see Attach an IAM role to an instance in the Amazon EC2 User Guide. For more information about creating the CloudWatch agent configuration file, see Create the CloudWatch agent configuration file. Install and start the CloudWatch agent on additional EC2 instances using your agent configuration After you have a CloudWatch agent configuration saved in Parameter Store, you can use it when you install the agent on other servers. For each of these servers, follow the steps listed earlier in this section to verify the Systems Manager prerequisites, the SSM Agent version, and internet access.
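The console-based install steps above can also be scripted. The following sketch (stdlib only; the instance ID is a placeholder) builds the Run Command arguments for the AWS-ConfigureAWSPackage document; with the AWS SDK for Python they could be passed to `ssm_client.send_command(**args)`, which is omitted here so the example stays self-contained:

```python
# Sketch: Run Command arguments mirroring the console install steps above.
# The instance ID is a placeholder; boto3's send_command is referenced only
# in comments so this sketch runs without AWS credentials.
import json

def build_install_command(instance_ids):
    """Arguments for the AWS-ConfigureAWSPackage document (Install action)."""
    return {
        "DocumentName": "AWS-ConfigureAWSPackage",
        "Targets": [{"Key": "InstanceIds", "Values": list(instance_ids)}],
        "Parameters": {
            "action": ["Install"],
            "name": ["AmazonCloudWatchAgent"],
            "version": ["latest"],  # "latest" installs the newest agent build
        },
    }

if __name__ == "__main__":
    args = build_install_command(["i-0123456789abcdef0"])  # placeholder ID
    print(json.dumps(args, indent=2))
    # With boto3 (not imported here), this would be:
    # boto3.client("ssm").send_command(**args)
```

Targeting by instance ID keeps the sketch minimal; tag-based Targets work the same way for a whole fleet.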
Then use the following instructions to install the CloudWatch agent on the additional instances, using the CloudWatch agent configuration file that you created. Step 1: Download and install the CloudWatch agent To be able to send CloudWatch data to a different Region, make sure that the IAM role attached to the instance has permission to write CloudWatch data in that Region. The following is an example of using the aws configure command to create a named profile for the CloudWatch agent. This example assumes that you're using the default profile name of AmazonCloudWatchAgent. To create the AmazonCloudWatchAgent profile for the CloudWatch agent On Linux servers, enter the following command and follow the prompts: sudo aws configure --profile AmazonCloudWatchAgent On Windows Server, open PowerShell as an administrator, enter the following command, and follow the prompts. aws configure --profile AmazonCloudWatchAgent You must install the agent on each server on which you will run the agent. The CloudWatch agent is available as a package on Amazon Linux 2023 and Amazon Linux 2. If you're using one of these operating systems, you can install the package with Systems Manager by following these steps. Note You must also make sure that the IAM role attached to the instance has the CloudWatchAgentServerPolicy attached. For more information, see Prerequisites. To use Systems Manager to install the CloudWatch agent package Open the Systems Manager console at https://console.aws.amazon.com/systems-manager/ . In the navigation pane, choose Run Command. -or- If the AWS Systems Manager home page opens, scroll down and choose Explore Run Command. Choose Run command. In the Command document list, choose AWS-RunShellScript. Then paste the following into the command parameters. sudo yum install amazon-cloudwatch-agent Choose Run. On all supported operating systems, you can download the CloudWatch agent package using either Systems Manager Run Command or an Amazon S3 download link. Note When you install or update the CloudWatch agent, only the Uninstall and reinstall option is supported. You can't use the In-place update option. Systems Manager Run Command enables you to manage the configuration of your instances on demand. You specify a Systems Manager document and parameters, and run the command on one or more instances. The SSM Agent on the instance processes the command and configures the instance as specified. To download the CloudWatch agent with Run Command Open the Systems Manager console at https://console.aws.amazon.com/systems-manager/ . In the navigation pane, choose Run Command. -or- If the AWS Systems Manager home page opens, scroll down and choose Explore Run Command. Choose Run command. In the Command document list, choose AWS-ConfigureAWSPackage. In the Targets area, choose the instance on which to install the CloudWatch agent. If you don't see a specific instance, it might not be configured for Run Command. For more information, see Setting up AWS Systems Manager for hybrid environments in the AWS Systems Manager User Guide. In the Action list, choose Install. In the Name box, enter AmazonCloudWatchAgent. Keep Version set to latest to install the latest version of the agent. Choose Run. Optionally, in the Targets and outputs areas, select the button next to an instance name and choose View output. Systems Manager should show that the agent was successfully installed. Step 2: Start the CloudWatch agent using your agent configuration file Follow these steps to start the agent with Systems Manager Run Command. For information about setting up the agent on a system with Security-Enhanced Linux (SELinux) enabled, see Set up the CloudWatch agent with Security-Enhanced Linux (SELinux). To start the CloudWatch agent with Run Command Open the Systems Manager console at https://console.aws.amazon.com/systems-manager/ . In the navigation pane, choose Run Command. -or- If the AWS Systems Manager home page opens, scroll down and choose Explore Run Command. Choose Run command. In the Command document list, choose AmazonCloudWatch-ManageAgent.
In the Targets area, choose the instance where you installed the CloudWatch agent. In the Action list, choose Configure. In the Optional Configuration Source list, choose ssm. In the Optional Configuration Location box, enter the name of the Systems Manager parameter for the agent configuration file that you created and saved in Systems Manager Parameter Store, as explained in Create the CloudWatch agent configuration file. In the Optional Restart list, choose yes to start the agent after you have completed these steps. Choose Run. Optionally, in the Targets and outputs areas, select the button next to an instance name and choose View output. Systems Manager should show that the agent was successfully started. (Optional) Modify the common configuration and named profile for the CloudWatch agent The CloudWatch agent includes a configuration file called common-config.toml . You can use this file to optionally specify proxy and Region information. On a server running Linux, this file is in the /opt/aws/amazon-cloudwatch-agent/etc directory. On a server running Windows Server, this file is in the C:\ProgramData\Amazon\AmazonCloudWatchAgent directory. The default common-config.toml is as follows: # This common-config is used to configure items used for both ssm and cloudwatch access ## Configuration for shared credential. ## Default credential strategy will be used if it is absent here: ## Instance role is used for EC2 case by default. ## AmazonCloudWatchAgent profile is used for onPremise case by default. # [credentials] # shared_credential_profile = "{profile_name}" # shared_credential_file = "{file_name}" ## Configuration for proxy. ## System-wide environment-variable will be read if it is absent here. ## i.e. HTTP_PROXY/http_proxy; HTTPS_PROXY/https_proxy; NO_PROXY/no_proxy ## Note: system-wide environment-variable is not accessible when using ssm run-command. ## Absent in both here and environment-variable means no proxy will be used. # [proxy] # http_proxy = "{http_url}" # https_proxy = "{https_url}" # no_proxy = "{domain}" All lines are initially commented out. To set the credential profile or proxy settings, remove the # from the line and specify a value. You can edit this file manually, or by using RunShellScript with Run Command in Systems Manager: shared_credential_profile - For on-premises servers, this line specifies the IAM user credential profile to use to send data to CloudWatch. If you keep this line commented out, AmazonCloudWatchAgent is used. On an EC2 instance, you can use this line to have the CloudWatch agent send data from that instance to CloudWatch in a different AWS Region. To do so, specify a named profile that includes a region field specifying the destination Region. If you specify a shared_credential_profile , you must also remove the # at the beginning of the [credentials] line. shared_credential_file - To have the agent look for credentials in a file located in a path other than the default path, specify the complete path and file name here. The default path is /root/.aws on Linux and C:\\Users\\Administrator\\.aws on Windows Server. The first example below shows the syntax of a valid shared_credential_file line for Linux servers, and the second example is valid for Windows Server. On Windows Server, you must escape the \ characters. shared_credential_file = "/usr/username/credentials" shared_credential_file = "C:\\Documents and Settings\\username\\.aws\\credentials" If you specify a shared_credential_file , you must also remove the # at the beginning of the [credentials] line. Proxy settings - If your servers use HTTP or HTTPS proxies to contact AWS services, specify those proxies in the http_proxy and https_proxy fields. If there are URLs that should be excluded from proxying, specify them in the no_proxy field, separated by commas. | 2026-01-13T09:29:25 |
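The start-the-agent Run Command steps described above (the AmazonCloudWatch-ManageAgent document with the ssm configuration source) can likewise be scripted. In this sketch the Parameter Store name "my-agent-config" and the instance ID are placeholders; with the AWS SDK for Python the arguments could be passed to `ssm_client.send_command(**args)`:

```python
# Sketch: Run Command arguments matching the "start the CloudWatch agent"
# procedure above. Parameter-store name and instance ID are placeholders;
# the actual boto3 call is referenced only in a comment.
import json

def build_start_command(instance_ids, ssm_parameter_name):
    """Arguments for AmazonCloudWatch-ManageAgent: configure from SSM + restart."""
    return {
        "DocumentName": "AmazonCloudWatch-ManageAgent",
        "Targets": [{"Key": "InstanceIds", "Values": list(instance_ids)}],
        "Parameters": {
            "action": ["configure"],
            "mode": ["ec2"],
            "optionalConfigurationSource": ["ssm"],
            "optionalConfigurationLocation": [ssm_parameter_name],
            "optionalRestart": ["yes"],  # start the agent after configuring
        },
    }

if __name__ == "__main__":
    args = build_start_command(["i-0123456789abcdef0"], "my-agent-config")
    print(json.dumps(args, indent=2))
    # With boto3 (not imported here):
    # boto3.client("ssm").send_command(**args)
```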
https://docs.brightdata.com/api-reference/web-scraper-api/social-media-apis/tiktok#param-url-2 | TikTok API Scrapers - Bright Data Docs Overview The TikTok API Suite offers multiple types of APIs, each designed for specific data collection needs from TikTok. Below is an overview of how these APIs connect and interact, based on the available features: Profile API This API allows users to collect profile details based on a single input: profile URL. Discovery functionality : Direct URL of the search Interesting Columns : nickname , awg_engagement_rate , followers , likes Posts API This API allows users to collect multiple posts based on a single input URL. Discovery functionality : - Direct URL of the TikTok profile - Discover by keywords - Direct URL of the discovery Interesting Columns : url , share_count , description , hashtags Comments API This API allows users to collect multiple comments from a post using its URL.
Discovery functionality : N/A Interesting Columns : url , comment_text , commenter_url , num_likes Profile API Collect by URL This API allows users to retrieve detailed TikTok profile information using the provided profile URL. Input Parameters : URL string required The TikTok profile URL. Output Structure : Includes comprehensive data points: Profile Details : account_id , nickname , biography , bio_link , predicted_lang , is_verified , followers , following , likes , videos_count , create_time , id , url , profile_pic_url , profile_pic_url_hd , and more. For all data points, click here . Engagement Metrics : awg_engagement_rate , comment_engagement_rate , like_engagement_rate , like_count , digg_count . Privacy & Settings : is_private , relation , open_favorite , comment_setting , duet_setting , stitch_setting , is_ad_virtual , room_id , is_under_age_18 . Discovery & Top Videos : region , top_videos , discovery_input . This API allows users to retrieve detailed TikTok profile information, including engagement metrics, privacy settings, and top videos, offering insights into user activity and profile data. Discover by Search URL This API allows users to discover TikTok profiles based on a specific search URL and country, providing detailed profile information. Input Parameters : search_url string required The TikTok search URL. country string required The country from which to perform the search. Output Structure : Includes comprehensive data points: Profile Details : account_id , nickname , biography , bio_link , predicted_lang , is_verified , followers , following , likes , videos_count , create_time , id , url , profile_pic_url , profile_pic_url_hd , and more. For all data points, click here . Engagement Metrics : awg_engagement_rate , comment_engagement_rate , like_engagement_rate , like_count , digg_count . Privacy & Settings : is_private , relation , open_favorite , comment_setting , duet_setting , stitch_setting , is_ad_virtual , room_id , is_under_age_18 . 
Discovery & Top Videos : region , top_videos , discovery_input . This API enables users to discover TikTok profiles based on search criteria, offering insights into user activity, engagement, privacy settings, and top content. It helps facilitate efficient discovery and analysis of TikTok users. Posts API Collect by URL This API enables users to collect detailed data from TikTok posts by providing a post URL. Input Parameters : URL string required The TikTok post URL. Output Structure : Includes comprehensive data points: Post Details : post_id , description , create_time , share_count , collect_count , comment_count , play_count , video_duration , hashtags , original_sound , official_item , original_item , shortcode , video_url , music , cdn_url , width , carousel_images , and more. For all data points, click here . Profile Details : profile_id , profile_username , profile_url , profile_avatar , profile_biography , account_id , profile_followers , is_verified . Tagged Users and Media : tagged_user , carousel_images . Additional Information : tt_chain_token , secu_id . Discover by Profile URL This API allows users to retrieve posts from a TikTok profile based on a provided profile URL, with filtering options for the number of posts, date range, and post exclusions. Input Parameters : URL string required The TikTok profile URL. num_of_posts number The number of posts to collect. If not provided, there is no limit. posts_to_not_include array An array of post IDs to exclude from the collection. start_date string Start date for filtering posts (format: mm-dd-yyyy). Should be lower than end_date . end_date string End date for filtering posts (format: mm-dd-yyyy). Should be greater than start_date . what_to_collect string Specify the type of posts to collect (e.g., “post” or “reel”).
Output Structure : Includes comprehensive data points: Post Details : post_id , description , create_time , share_count , collect_count , comment_count , play_count , video_duration , hashtags , original_sound , official_item , original_item , shortcode , video_url , music , cdn_url , width , carousel_images , and more. For all data points, click here . Profile Details : profile_id , profile_username , profile_url , profile_avatar , profile_biography , account_id , profile_followers , is_verified . Tagged Users and Media : tagged_user , carousel_images . Additional Information : tt_chain_token , secu_id . This API allows users to discover and retrieve detailed information about posts from a specific TikTok profile, including post-specific metrics, profile details of the creator, and tagged users. It supports efficient content discovery and post analysis. Discover by Keywords This API allows users to search for TikTok posts based on specific keywords or hashtags, offering a powerful tool for discovering relevant content across TikTok’s platform. Input Parameters : search_keyword string required The keyword or hashtag to search for within TikTok posts. num_of_posts number The number of posts to collect. If not provided, there is no limit. posts_to_not_include array An array of post IDs to exclude from the collection. what_to_collect string Specify the type of posts to collect (e.g., “post” or “reel”). Output Structure : Includes comprehensive data points: Post Details : post_id , description , create_time , digg_count , share_count , collect_count , comment_count , play_count , video_duration , hashtags , original_sound , post_type , discovery_input , official_item , and more. For all data points, click here . Profile Details : profile_id , profile_username , profile_url , profile_avatar , profile_biography , account_id , profile_followers , is_verified . Tagged Users and Media : tagged_user , carousel_images . Additional Information : tt_chain_token , secu_id . 
This API allows users to discover posts on TikTok that match specific keywords or hashtags, providing insights into post details, profile information, and media. It’s a great tool for exploring trends, content, and users on TikTok. Discover by Discover URL This API allows users to collect detailed post data from a specific TikTok discover URL. Input Parameters : URL string required The TikTok discover URL from which posts will be retrieved. Output Structure : Includes comprehensive data points: Post Details : post_id , description , create_time , digg_count , share_count , collect_count , comment_count , play_count , video_duration , hashtags , original_sound , post_type , discovery_input , official_item , original_item , and more. For all data points, click here . Profile Details : profile_id , profile_username , profile_url , profile_avatar , profile_biography , account_id , profile_followers , is_verified . Tagged Users and Media : tagged_user , carousel_images . Additional Information : tt_chain_token , secu_id . This API provides detailed insights into TikTok posts discovered via the discover URL, allowing for easy access to trending content, user profiles, and post metadata for analysis and exploration. Comments API Collect by URL This API allows users to collect detailed comment data from a specific TikTok post using the provided post URL. Input Parameters : URL string required The TikTok post URL. Output Structure : Includes comprehensive data points: Post Details : post_url , post_id , post_date_created . For all data points, click here . Comment Details : date_created , comment_text , num_likes , num_replies , comment_id , comment_url . Commenter Details : commenter_user_name , commenter_id , commenter_url . This API provides detailed insights into TikTok post comments, including comment-specific metrics and information about the commenters, enabling effective comment analysis and interaction tracking. | 2026-01-13T09:29:25 |
https://docs.brightdata.com/api-reference/web-scraper-api/social-media-apis/tiktok#posts-api | TikTok API Scrapers - Bright Data Docs

Overview

The TikTok API Suite offers multiple types of APIs, each designed for specific data collection needs from TikTok. Below is an overview of how these APIs connect and interact, based on the available features:

Profile API
This API allows users to collect profile details based on a single input: the profile URL.
Discovery functionality: direct URL of the search.
Interesting columns: nickname, awg_engagement_rate, followers, likes.

Posts API
This API allows users to collect multiple posts based on a single input URL.
Discovery functionality: direct URL of the TikTok profile; discover by keywords; direct URL of the discovery page.
Interesting columns: url, share_count, description, hashtags.

Comments API
This API allows users to collect multiple comments from a post using its URL.
Discovery functionality: N/A.
Interesting columns: url, comment_text, commenter_url, num_likes.

Profile API

Collect by URL
This API allows users to retrieve detailed TikTok profile information using the provided profile URL.
Input parameters:
- URL (string, required): The TikTok profile URL.
Output structure includes comprehensive data points:
- Profile details: account_id, nickname, biography, bio_link, predicted_lang, is_verified, followers, following, likes, videos_count, create_time, id, url, profile_pic_url, profile_pic_url_hd, and more.
- Engagement metrics: awg_engagement_rate, comment_engagement_rate, like_engagement_rate, like_count, digg_count.
- Privacy & settings: is_private, relation, open_favorite, comment_setting, duet_setting, stitch_setting, is_ad_virtual, room_id, is_under_age_18.
- Discovery & top videos: region, top_videos, discovery_input.
This API allows users to retrieve detailed TikTok profile information, including engagement metrics, privacy settings, and top videos, offering insights into user activity and profile data.

Discover by Search URL
This API allows users to discover TikTok profiles based on a specific search URL and country, providing detailed profile information.
Input parameters:
- search_url (string, required): The TikTok search URL.
- country (string, required): The country from which to perform the search.
Output structure includes comprehensive data points:
- Profile details: account_id, nickname, biography, bio_link, predicted_lang, is_verified, followers, following, likes, videos_count, create_time, id, url, profile_pic_url, profile_pic_url_hd, and more.
- Engagement metrics: awg_engagement_rate, comment_engagement_rate, like_engagement_rate, like_count, digg_count.
- Privacy & settings: is_private, relation, open_favorite, comment_setting, duet_setting, stitch_setting, is_ad_virtual, room_id, is_under_age_18.
- Discovery & top videos: region, top_videos, discovery_input.
This API enables users to discover TikTok profiles based on search criteria, offering insights into user activity, engagement, privacy settings, and top content. It facilitates efficient discovery and analysis of TikTok users.

Posts API

Collect by URL
This API enables users to collect detailed data from TikTok posts by providing a post URL.
Input parameters:
- URL (string, required): The TikTok post URL.
Output structure includes comprehensive data points:
- Post details: post_id, description, create_time, share_count, collect_count, comment_count, play_count, video_duration, hashtags, original_sound, official_item, original_item, shortcode, video_url, music, cdn_url, width, carousel_images, and more.
- Profile details: profile_id, profile_username, profile_url, profile_avatar, profile_biography, account_id, profile_followers, is_verified.
- Tagged users and media: tagged_user, carousel_images.
- Additional information: tt_chain_token, secu_id.

Discover by Profile URL
This API allows users to retrieve posts from a TikTok profile based on a provided profile URL, with filtering options for the number of posts, date range, and post exclusions.
Input parameters:
- URL (string, required): The TikTok profile URL.
- num_of_posts (number): The number of posts to collect. If not provided, there is no limit.
- posts_to_not_include (array): An array of post IDs to exclude from the collection.
- start_date (string): Start date for filtering posts (format: mm-dd-yyyy). Must be earlier than end_date.
- end_date (string): End date for filtering posts (format: mm-dd-yyyy). Must be later than start_date.
- what_to_collect (string): The type of posts to collect (e.g., “post” or “reel”).
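As a rough illustration of how the Discover by Profile URL inputs above might be assembled client-side, here is a minimal sketch that validates the mm-dd-yyyy date window before submitting. The `build_discover_input` helper is an assumption for illustration, not part of the official API; only the parameter names come from the documentation.

```python
from datetime import datetime

def build_discover_input(url, num_of_posts=None, start_date=None, end_date=None,
                         posts_to_not_include=None, what_to_collect=None):
    """Assemble one input record for a 'Discover by Profile URL' request.

    Dates use the documented mm-dd-yyyy format; start_date must be
    earlier than end_date.
    """
    record = {"url": url}
    if start_date and end_date:
        fmt = "%m-%d-%Y"
        s, e = datetime.strptime(start_date, fmt), datetime.strptime(end_date, fmt)
        if s >= e:
            raise ValueError("start_date must be earlier than end_date")
        record["start_date"], record["end_date"] = start_date, end_date
    if num_of_posts is not None:
        record["num_of_posts"] = num_of_posts
    if posts_to_not_include:
        record["posts_to_not_include"] = list(posts_to_not_include)
    if what_to_collect:
        record["what_to_collect"] = what_to_collect
    return record

payload = build_discover_input(
    "https://www.tiktok.com/@example", num_of_posts=50,
    start_date="01-01-2026", end_date="01-13-2026", what_to_collect="post")
```

Validating the window locally avoids submitting a collection job that would return nothing due to an inverted date range.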
Output structure includes comprehensive data points:
- Post details: post_id, description, create_time, share_count, collect_count, comment_count, play_count, video_duration, hashtags, original_sound, official_item, original_item, shortcode, video_url, music, cdn_url, width, carousel_images, and more.
- Profile details: profile_id, profile_username, profile_url, profile_avatar, profile_biography, account_id, profile_followers, is_verified.
- Tagged users and media: tagged_user, carousel_images.
- Additional information: tt_chain_token, secu_id.
This API allows users to discover and retrieve detailed information about posts from a specific TikTok profile, including post-specific metrics, profile details of the creator, and tagged users. It supports efficient content discovery and post analysis.

Discover by Keywords
This API allows users to search for TikTok posts based on specific keywords or hashtags, offering a powerful tool for discovering relevant content across TikTok’s platform.
Input parameters:
- search_keyword (string, required): The keyword or hashtag to search for within TikTok posts.
- num_of_posts (number): The number of posts to collect. If not provided, there is no limit.
- posts_to_not_include (array): An array of post IDs to exclude from the collection.
- what_to_collect (string): The type of posts to collect (e.g., “post” or “reel”).
Output structure includes comprehensive data points:
- Post details: post_id, description, create_time, digg_count, share_count, collect_count, comment_count, play_count, video_duration, hashtags, original_sound, post_type, discovery_input, official_item, and more.
- Profile details: profile_id, profile_username, profile_url, profile_avatar, profile_biography, account_id, profile_followers, is_verified.
- Tagged users and media: tagged_user, carousel_images.
- Additional information: tt_chain_token, secu_id.
This API allows users to discover posts on TikTok that match specific keywords or hashtags, providing insights into post details, profile information, and media. It is a great tool for exploring trends, content, and users on TikTok.

Discover by Discover URL
This API allows users to collect detailed post data from a specific TikTok discover URL.
Input parameters:
- URL (string, required): The TikTok discover URL from which posts will be retrieved.
Output structure includes comprehensive data points:
- Post details: post_id, description, create_time, digg_count, share_count, collect_count, comment_count, play_count, video_duration, hashtags, original_sound, post_type, discovery_input, official_item, original_item, and more.
- Profile details: profile_id, profile_username, profile_url, profile_avatar, profile_biography, account_id, profile_followers, is_verified.
- Tagged users and media: tagged_user, carousel_images.
- Additional information: tt_chain_token, secu_id.
This API provides detailed insights into TikTok posts discovered via the discover URL, allowing for easy access to trending content, user profiles, and post metadata for analysis and exploration.

Comments API

Collect by URL
This API allows users to collect detailed comment data from a specific TikTok post using the provided post URL.
Input parameters:
- URL (string, required): The TikTok post URL.
Output structure includes comprehensive data points:
- Post details: post_url, post_id, post_date_created.
- Comment details: date_created, comment_text, num_likes, num_replies, comment_id, comment_url.
- Commenter details: commenter_user_name, commenter_id, commenter_url.
This API provides detailed insights into TikTok post comments, including comment-specific metrics and information about the commenters, enabling effective comment analysis and interaction tracking. | 2026-01-13T09:29:25 |
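Tying the keyword-discovery inputs together: a hedged sketch of preparing a batch request with one input record per keyword. The endpoint URL, dataset ID, and token below are placeholders invented for illustration; only the input field names come from the documentation above.

```python
import json

API_TOKEN = "YOUR_API_TOKEN"                              # placeholder
DATASET_ID = "gd_example_tiktok"                          # placeholder dataset ID
TRIGGER_URL = "https://api.example.com/datasets/trigger"  # placeholder endpoint

def prepare_keyword_discovery(keywords, num_of_posts=None, what_to_collect=None):
    """Build the headers and JSON body for a keyword-discovery trigger.

    One input record per keyword, using the documented parameter names.
    """
    inputs = []
    for kw in keywords:
        record = {"search_keyword": kw}
        if num_of_posts is not None:
            record["num_of_posts"] = num_of_posts
        if what_to_collect:
            record["what_to_collect"] = what_to_collect
        inputs.append(record)
    headers = {"Authorization": f"Bearer {API_TOKEN}",
               "Content-Type": "application/json"}
    return TRIGGER_URL, headers, json.dumps(inputs)

url, headers, body = prepare_keyword_discovery(["#python", "datascience"],
                                               num_of_posts=20,
                                               what_to_collect="post")
# To actually submit, one would POST this, e.g. with the requests library:
# requests.post(url, headers=headers, data=body, params={"dataset_id": DATASET_ID})
```

Batching several keywords into one trigger keeps the number of collection jobs low when exploring multiple hashtags at once.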
https://www.infoworld.com/cloud-computing/ | Cloud Computing | InfoWorld

Cloud Computing | News, how-tos, features, reviews, and videos
Explore related topics: Cloud Architecture, Cloud Management, Cloud Storage, Cloud-Native, Hybrid Cloud, IaaS, Managed Cloud Services, Multicloud,
PaaS, Private Cloud, SaaS.

Latest from today:
- Analysis: Which development platforms and tools should you learn now? For software developers, choosing which technologies and skills to master next has never been more difficult. Experts offer their recommendations. By Isaac Sacolick, Jan 13, 2026, 8 mins.
- Analysis: Why hybrid cloud is the future of enterprise platforms. By David Linthicum, Jan 13, 2026, 4 mins.
- News: Oracle unveils Java development plans for 2026. By Paul Krill, Jan 12, 2026, 3 mins.
- News: AI is causing developers to abandon Stack Overflow. By Mikael Markander, Jan 12, 2026, 2 mins.
- Opinion: Stack thinking: Why a single AI platform won’t cut it. By Tom Popomaronis, Jan 12, 2026, 8 mins.
- News: Postman snaps up Fern to reduce developer friction around API documentation and SDKs. By Anirban Ghoshal, Jan 12, 2026, 3 mins.
- Opinion: Why ‘boring’ VS Code keeps winning. By Matt Asay, Jan 12, 2026, 7 mins.
- Feature: How to succeed with AI-powered, low-code and no-code development tools. By Bob Violino, Jan 12, 2026, 9 mins.
- News: Visual Studio Code adds support for agent skills. By Paul Krill, Jan 9, 2026, 3 mins.

Articles:
- News analysis: Snowflake: Latest news and insights. Stay up-to-date on how Snowflake and its underlying architecture have changed how cloud developers, data managers, and data scientists approach cloud data management and analytics. By Dan Muse, Jan 9, 2026, 5 mins.
- News: Snowflake to acquire Observe to boost observability in AIops. The acquisition could position Snowflake as a control plane for production AI, giving CIOs visibility across data, models, and infrastructure without the pricing shock of traditional observability stacks, analysts say. By Anirban Ghoshal, Jan 9, 2026, 3 mins.
- Feature: Python starts 2026 with a bang. The world’s most popular programming language kicks off the new year with a wicked-fast type checker, a C code generator, and a second chance for the tail-calling interpreter. By Serdar Yegulalp, Jan 9, 2026, 2 mins.
- News: Microsoft open-sources XAML Studio. Forthcoming update of the rapid prototyping tool for WinUI developers, now available on GitHub, adds a new Fluent UI design, folder support, and a live properties panel. By Paul Krill, Jan 8, 2026, 1 min.
- Analysis: What drives your cloud security strategy? As cloud breaches increase, organizations should prioritize skills and training over the latest tech to address the actual root problems. By David Linthicum, Jan 6, 2026, 5 mins.
- News: Databricks says its Instructed Retriever offers better AI answers than RAG in the enterprise. Databricks says Instructed Retriever outperforms RAG and could move AI pilots to production faster, but analysts warn it could expose data, governance, and budget gaps that CIOs can’t ignore. By Anirban Ghoshal, Jan 8, 2026, 5 mins.
- Opinion: The hidden devops crisis that AI workloads are about to expose. Devops teams that cling to component-level testing and basic monitoring will struggle to keep pace with the data demands of AI. By Joseph Morais, Jan 8, 2026, 6 mins.
- News: AI-built Rue language pairs Rust memory safety with ease of use. Developed using Anthropic’s Claude AI model, the new language is intended to provide memory safety without garbage collection while being easier to use than Rust and Zig. By Paul Krill, Jan 7, 2026, 2 mins.
- News: Microsoft acquires Osmos to ease data engineering bottlenecks in Fabric. The acquisition could help enterprises push analytics and AI projects into production faster while acting as the missing autonomy layer that connects Fabric’s recent enhancements into a coherent system. By Anirban Ghoshal, Jan 7, 2026, 4 mins.
- Opinion: What the loom tells us about AI and coding. Like the loom, AI may turn the job market upside down and enable new technologies and jobs that we simply can’t predict. By Nick Hodges, Jan 7, 2026, 4 mins.
- Analysis: Generative UI: The AI agent is the front end. In a new model for user interfaces, agents paint the screen with interactive UI components on demand. By Matthew Tyson, Jan 7, 2026, 8 mins.
- News: AI won’t replace human devs for at least 5 years. Progress towards full AI-driven coding automation continues, but in steps rather than leaps, giving organizations time to prepare, according to a new study. By Taryn Plumb, Jan 7, 2026, 7 mins.
- News: Automated data poisoning proposed as a solution for AI theft threat. For hackers, the stolen data would be useless, but authorized users would have a secret key that filters out the fake information. By Howard Solomon, Jan 7, 2026, 6 mins.

Video on demand:
- How to generate C-like programs with Python. You might be familiar with how Python and C can work together, by way of projects like Cython. The new PythoC project has a unique twist on working with both languages: it lets you write type-decorated Python that can generate entire standalone C programs, not just importable Python libraries written in C. This video shows a few basic PythoC functions, from generating a whole program to using some of PythoC’s typing features to provide better memory management than C alone could. Dec 16, 2025, 5 mins.
- Zed Editor Review: The Rust-Powered IDE That Might Replace VS Code. Dec 3, 2025, 5 mins.
- Python vs. Kotlin. Nov 13, 2025, 5 mins.
- Hands-on with the new sampling profiler in Python 3.15. Nov 6, 2025, 6 mins.

More articles and videos:
- News: Ruby 4.0.0 introduces ZJIT compiler, Ruby Box isolation. By Paul Krill, Jan 6, 2026, 3 mins.
- News: Open WebUI bug turns the ‘free model’ into an enterprise backdoor. By Shweta Sharma, Jan 6, 2026, 3 mins.
- Interview: Generative AI and the future of databases. By Martin Heller, Jan 6, 2026, 14 mins.
- Video: How to make local packages universal across Python venvs. Nov 4, 2025, 4 mins.
- Video: X-ray vision for your async activity in Python 3.14. Oct 21, 2025, 4 mins.
- Video: Why it's so hard to redistribute standalone Python apps. Oct 17, 2025, 5 mins.

© 2026 FoundryCo, Inc. All Rights Reserved. | 2026-01-13T09:29:25 |
https://docs.aws.amazon.com/pt_br/AmazonCloudWatch/latest/monitoring/Solution-NVIDIA-GPU-On-EC2.html#Solution-NVIDIA-GPU-On-EC2-Benefits | CloudWatch solution: NVIDIA GPU workload on Amazon EC2 - Amazon CloudWatch

CloudWatch solution: NVIDIA GPU workload on Amazon EC2

This solution helps you set up out-of-the-box metric collection with CloudWatch agents for NVIDIA GPU workloads running on EC2 instances. It also helps you set up a preconfigured CloudWatch dashboard. For general information about all CloudWatch observability solutions, see CloudWatch observability solutions.

Topics: Requirements; Benefits; Configuring the CloudWatch agent for this solution; Deploying the agent for your solution; Creating the NVIDIA GPU solution dashboard.

Requirements

This solution applies under the following conditions:
- Compute: Amazon EC2.
- Supports up to 500 GPUs across all EC2 instances in a given AWS Region.
- The latest version of the CloudWatch agent.
- SSM Agent installed on the EC2 instance.
- The EC2 instance must have an NVIDIA driver installed. NVIDIA drivers are preinstalled on some Amazon Machine Images (AMIs); otherwise, you can install the driver manually. For more information, see Installing NVIDIA drivers on Linux instances.

Note: AWS Systems Manager (SSM Agent) is preinstalled on some AMIs provided by AWS and by trusted third parties. If the agent is not installed, you can install it manually using the procedure appropriate for your operating system type:
- Manually installing and uninstalling SSM Agent on EC2 instances for Linux
- Manually installing and uninstalling SSM Agent on EC2 instances for macOS
- Manually installing and uninstalling SSM Agent on EC2 instances for Windows Server

Benefits

The solution provides NVIDIA monitoring, delivering valuable insights for the following use cases:
- Analyzing GPU and memory utilization to identify performance bottlenecks or the need for additional resources.
- Monitoring temperature and power draw to ensure GPUs operate within safe limits.
- Evaluating encoder performance for GPU video workloads.
- Verifying PCIe connectivity to confirm it matches the expected generation and width.
- Monitoring GPU clock speeds to identify scaling or throttling issues.

Key advantages of the solution:
- Automates metric collection for NVIDIA using the CloudWatch agent configuration, eliminating the need for manual instrumentation.
- Provides a consolidated, preconfigured CloudWatch dashboard for NVIDIA metrics. The dashboard automatically handles metrics from new EC2 instances configured for NVIDIA with this solution, even if those metrics do not yet exist when the dashboard is created.

The following image is an example of the dashboard for this solution.

Costs

This solution creates and uses resources in your account. You are charged at standard usage rates, which include the following:
- All metrics collected by the CloudWatch agent are billed as custom metrics. The number of metrics used by this solution depends on the number of EC2 hosts. Each EC2 host configured for the solution publishes a total of 17 metrics per GPU.
- One custom dashboard.
- The API operations requested by the CloudWatch agent to publish the metrics. With the default configuration for this solution, the CloudWatch agent calls the PutMetricData operation once per minute for each EC2 host. This means the PutMetricData API is called 30*24*60 = 43,200 times in a 30-day month for each EC2 host.

For more information about CloudWatch pricing, see Amazon CloudWatch pricing. The pricing calculator can help you estimate the approximate monthly costs of using this solution.

To use the pricing calculator to estimate the solution's monthly costs:
1. Open the Amazon CloudWatch pricing calculator.
2. Under Choose a Region, select the Region where you want to deploy the solution.
3. In the Metrics section, for Number of metrics, enter 17 * average number of GPUs per EC2 host * number of EC2 instances configured for this solution.
4. In the APIs section, for Number of API requests, enter 43200 * number of EC2 instances configured for this solution. By default, the CloudWatch agent performs one PutMetricData operation per minute for each EC2 host.
5. In the Dashboards and alarms section, for Number of dashboards, enter 1.
You can view the estimated monthly costs at the bottom of the pricing calculator.

Configuring the CloudWatch agent for this solution

The CloudWatch agent is software that runs continuously and autonomously on your servers and in containerized environments. It collects metrics, logs, and traces from your infrastructure and applications and sends them to CloudWatch and X-Ray. For more information about the CloudWatch agent, see Collecting metrics, logs, and traces with the CloudWatch agent.
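The cost inputs from the pricing section above reduce to simple arithmetic; the following sketch computes the two calculator entries (the fleet size and GPUs-per-host values are made-up examples):

```python
# Estimate CloudWatch pricing-calculator inputs for the NVIDIA GPU solution.
METRICS_PER_GPU = 17                      # metrics published per GPU by this solution
CALLS_PER_HOST_PER_MONTH = 30 * 24 * 60   # one PutMetricData call per minute, 30-day month

def pricing_inputs(num_instances, avg_gpus_per_host):
    """Return (custom metric count, monthly PutMetricData calls)."""
    metrics = METRICS_PER_GPU * avg_gpus_per_host * num_instances
    api_calls = CALLS_PER_HOST_PER_MONTH * num_instances
    return metrics, api_calls

# Example: 10 instances with 4 GPUs each (hypothetical fleet).
metrics, calls = pricing_inputs(10, 4)
print(metrics, calls)  # 680 metrics, 432000 API calls per 30-day month
```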
For this solution, the agent configuration collects a set of metrics to help you start monitoring and observing the NVIDIA GPU. The CloudWatch agent can be configured to collect more NVIDIA GPU metrics than those displayed by default on the dashboard. For a list of all NVIDIA GPU metrics you can collect, see Collect NVIDIA GPU metrics.

Agent configuration for this solution

The metrics collected by the agent are defined in the agent configuration. The solution provides agent configurations that collect the recommended metrics with the appropriate dimensions for the solution dashboard. Use the following CloudWatch agent configuration on EC2 instances equipped with NVIDIA GPUs. The configuration is stored as a parameter in the SSM Parameter Store, as detailed later in Step 2: Store the recommended CloudWatch agent configuration file in the Systems Manager Parameter Store.

{
  "metrics": {
    "namespace": "CWAgent",
    "append_dimensions": {
      "InstanceId": "${aws:InstanceId}"
    },
    "metrics_collected": {
      "nvidia_gpu": {
        "measurement": [
          "utilization_gpu",
          "temperature_gpu",
          "power_draw",
          "utilization_memory",
          "fan_speed",
          "memory_total",
          "memory_used",
          "memory_free",
          "pcie_link_gen_current",
          "pcie_link_width_current",
          "encoder_stats_session_count",
          "encoder_stats_average_fps",
          "encoder_stats_average_latency",
          "clocks_current_graphics",
          "clocks_current_sm",
          "clocks_current_memory",
          "clocks_current_video"
        ],
        "metrics_collection_interval": 60
      }
    }
  },
  "force_flush_interval": 60
}

Deploying the agent for your solution

There are several approaches to installing the CloudWatch agent, depending on the use case. We recommend using Systems Manager for this solution. It provides a console experience and simplifies managing a fleet of managed servers within a single AWS account.
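As an alternative to the console steps described later, the recommended agent configuration can also be generated and uploaded programmatically. A minimal sketch follows; the local file name is arbitrary, and the measurement list is abbreviated to two entries for brevity (use the full list from the configuration above in practice):

```python
import json

# Abbreviated version of this solution's recommended agent configuration.
agent_config = {
    "metrics": {
        "namespace": "CWAgent",
        "append_dimensions": {"InstanceId": "${aws:InstanceId}"},
        "metrics_collected": {
            "nvidia_gpu": {
                "measurement": ["utilization_gpu", "temperature_gpu"],
                "metrics_collection_interval": 60,
            }
        },
    },
    "force_flush_interval": 60,
}

# Serialize and write the config so it can be uploaded to the Parameter Store.
config_json = json.dumps(agent_config, indent=2)
with open("cw-agent-nvidia-config.json", "w") as f:
    f.write(config_json)

# Equivalent AWS CLI upload (parameter name matches the example in the doc):
cli_cmd = (
    "aws ssm put-parameter "
    "--name AmazonCloudWatch-NVIDIA-GPU-Configuration "
    "--type String "
    "--value file://cw-agent-nvidia-config.json"
)
print(cli_cmd)
```

Creating the parameter from a file avoids pasting JSON into the console and makes the configuration easy to version-control.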
As instruções apresentadas nesta seção usam o Systems Manager e são destinadas para situações em que o agente do CloudWatch não está em execução com as configurações existentes. É possível verificar se o agente do CloudWatch está em execução ao seguir as etapas apresentadas em Verificar se o atendente do CloudWatch está em execução . Se você já estiver executando o agente do CloudWatch nos hosts do EC2 nos quais a workload está implantada e gerenciando as configurações do agente, pode pular as instruções apresentadas nesta seção e usar o mecanismo de implantação existente para atualizar a configuração. Certifique-se de combinar a configuração do agente da GPU da NVIDIA com a configuração do agente existente e, em seguida, implante a configuração combinada. Se você estiver usando o Systems Manager para armazenar e gerenciar a configuração do agente do CloudWatch, poderá combinar a configuração com o valor do parâmetro existente. Para obter mais informações, consulte Managing CloudWatch agent configuration files . nota Ao usar o Systems Manager para implantar as configurações do agente do CloudWatch apresentadas a seguir, qualquer configuração existente do agente do CloudWatch nas suas instâncias do EC2 será substituída ou sobrescrita. É possível modificar essa configuração para atender às necessidades do ambiente ou do caso de uso específico. As métricas definidas na configuração representam o requisito mínimo necessário para o painel fornecido pela solução. O processo de implantação inclui as seguintes etapas: Etapa 1: garantir que as instâncias do EC2 de destino têm as permissões do IAM necessárias. Etapa 2: armazenar o arquivo de configuração recomendado do agente no Systems Manager Parameter Store. Etapa 3: instalar o agente do CloudWatch em uma ou mais instâncias do EC2 usando uma pilha do CloudFormation. Etapa 4: verificar se a configuração do agente foi realizada corretamente. 
Step 1: Make sure that the target EC2 instances have the required IAM permissions

You must grant Systems Manager permission to install and configure the CloudWatch agent, and you must grant the CloudWatch agent permission to publish telemetry from the EC2 instance to CloudWatch. Make sure that the IAM role attached to the instance has the CloudWatchAgentServerPolicy and AmazonSSMManagedInstanceCore IAM policies attached. After you create the role, attach it to your EC2 instances. To attach a role to an EC2 instance, follow the steps in Attach an IAM role to an instance.

Step 2: Store the recommended CloudWatch agent configuration file in the Systems Manager Parameter Store

The Parameter Store simplifies installing the CloudWatch agent on an EC2 instance by storing and managing configuration parameters securely, removing the need for hard-coded values. This makes deployment more secure and flexible by enabling centralized management and streamlined updates of configurations across many instances.

Use the following steps to store the recommended CloudWatch agent configuration file as a parameter in the Parameter Store.

To create the CloudWatch agent configuration file as a parameter

1. Open the AWS Systems Manager console at https://console.aws.amazon.com/systems-manager/.
2. Verify that the Region selected in the console matches the Region where the NVIDIA GPU workload is running.
3. In the navigation pane, choose Application Management, and then choose Parameter Store.
4. Create a new parameter for the configuration:
   a. Choose Create parameter.
   b. In the Name box, enter a name to reference the CloudWatch agent configuration file in later steps, for example, AmazonCloudWatch-NVIDIA-GPU-Configuration.
   c. (Optional) In the Description box, type a description for the parameter.
   d. For Parameter tier, choose Standard.
   e. For Type, choose String.
   f. For Data type, choose text.
   g. In the Value box, paste the JSON block listed in Agent configuration for this solution.
   h. Choose Create parameter.

Step 3: Install the CloudWatch agent and apply the configuration by using a CloudFormation template

You can use AWS CloudFormation to install the agent and configure it to use the CloudWatch agent configuration that you created in the previous steps.

To install and configure the CloudWatch agent for this solution

1. Open the CloudFormation quick create stack wizard by using this link: https://console.aws.amazon.com/cloudformation/home?#/stacks/quickcreate?templateURL=https://aws-observability-solutions-prod-us-east-1.s3.us-east-1.amazonaws.com/CloudWatchAgent/CFN/v1.0.0/cw-agent-installation-template-1.0.0.json.
2. Verify that the Region selected in the console matches the Region where the NVIDIA GPU workload is running.
3. For Stack name, enter a name to identify this stack, such as CWAgentInstallationStack.
4. In the Parameters section, specify the following:
   - For CloudWatchAgentConfigSSM, enter the name of the Systems Manager parameter for the agent configuration that you created earlier, such as AmazonCloudWatch-NVIDIA-GPU-Configuration.
   - To select the target instances, you have two options. For InstanceIds, specify a comma-delimited list of IDs of the instances where you want to install the CloudWatch agent with this configuration; you can list one or more instances. Alternatively, for large-scale deployments, you can specify a TagKey and the corresponding TagValue to target all EC2 instances with that tag key and value.
     If you specify a TagKey, you must specify a corresponding TagValue. (For an Auto Scaling group, specify aws:autoscaling:groupName for the TagKey and the name of the Auto Scaling group for the TagValue to deploy to all instances in the Auto Scaling group.)
5. Review the settings, and then choose Create stack. If you want to edit the template file first to customize it, choose the Upload a template file option in the create stack wizard to upload the edited template. For more information, see Creating a stack on the CloudFormation console.

Note: After this step completes, this Systems Manager parameter is associated with the CloudWatch agents running on the target instances. This means that:
- If the Systems Manager parameter is deleted, the agent is stopped.
- If the Systems Manager parameter is edited, the configuration changes are applied to the agent automatically at the scheduled frequency, which is every 30 days by default. If you want to apply changes to this Systems Manager parameter immediately, run this step again.
For more information about associations, see Working with associations in Systems Manager.

Step 4: Verify that the agent configuration was set up correctly

You can verify that the CloudWatch agent is installed by following the steps in Verify that the CloudWatch agent is running. If the CloudWatch agent is not installed and running, check that everything is set up correctly:
- Make sure that you attached a role with the required permissions to the EC2 instance, as described in Step 1: Make sure that the target EC2 instances have the required IAM permissions.
- Make sure that the JSON for the Systems Manager parameter is configured correctly.
- Follow the steps in Troubleshooting installation of the CloudWatch agent with CloudFormation.

If everything is set up correctly, the NVIDIA GPU metrics are published to CloudWatch and are available for viewing. You can check the CloudWatch console to confirm that the metrics are being published.

To verify that the NVIDIA GPU metrics are being published to CloudWatch

1. Open the CloudWatch console at https://console.aws.amazon.com/cloudwatch/.
2. Choose Metrics, and then choose All metrics.
3. Make sure that you have selected the Region where the solution is deployed, choose Custom namespaces, and then choose CWAgent.
4. Search for the metrics mentioned in Agent configuration for this solution, such as nvidia_smi_utilization_gpu. If you see results for these metrics, they are being published to CloudWatch.

Create the NVIDIA GPU solution dashboard

The dashboard provided by this solution presents NVIDIA GPU metrics aggregated across all instances. It shows a breakdown of the top contributors (the top 10 per metric widget) for each metric, which helps you quickly identify outliers or instances that contribute significantly to the observed metrics.

To create the dashboard, you can use any of the following options:
- Use the CloudWatch console to create the dashboard.
- Use the AWS CloudFormation console to deploy the dashboard.
- Download the AWS CloudFormation infrastructure-as-code and integrate it into your continuous integration (CI) automation.
Using the CloudWatch console to create the dashboard lets you preview it before creating it and incurring costs.

Note: The dashboard created with CloudFormation in this solution displays metrics from the Region where the solution is deployed.
Make sure that the CloudFormation stack is created in the same Region where the NVIDIA GPU metrics are published. If you specified a custom namespace other than CWAgent in the CloudWatch agent configuration, you must change the CloudFormation template for the dashboard, replacing CWAgent with the custom namespace that you are using.

To create the dashboard by using the CloudWatch console

1. Open the CloudWatch console and go to Create dashboard by using this link: https://console.aws.amazon.com/cloudwatch/home?#dashboards?dashboardTemplate=NvidiaGpuOnEc2&referrer=os-catalog.
2. Verify that the Region selected in the console matches the Region where the NVIDIA GPU workload is running.
3. Enter the dashboard name, and then choose Create dashboard. To easily distinguish this dashboard from similar dashboards in other Regions, we recommend including the Region name in the dashboard name, for example, NVIDIA-GPU-Dashboard-us-east-1.
4. Preview the dashboard, and then choose Save to create it.

To create the dashboard by using CloudFormation

1. Open the CloudFormation quick create stack wizard by using this link: https://console.aws.amazon.com/cloudformation/home?#/stacks/quickcreate?templateURL=https://aws-observability-solutions-prod-us-east-1.s3.us-east-1.amazonaws.com/NVIDIA_GPU_EC2/CloudWatch/CFN/v1.0.0/dashboard-template-1.0.0.json.
2. Verify that the Region selected in the console matches the Region where the NVIDIA GPU workload is running.
3. For Stack name, enter a name to identify this stack, such as NVIDIA-GPU-DashboardStack.
4. In the Parameters section, specify the dashboard name in the DashboardName parameter. To easily distinguish this dashboard from similar dashboards in other Regions, we recommend including the Region name in the dashboard name, for example, NVIDIA-GPU-Dashboard-us-east-1.
5. Acknowledge the access capabilities for transforms in the Capabilities and transforms section.
   Note that CloudFormation does not add any IAM resources.
6. Review the settings, and then choose Create stack.
7. When the stack status is CREATE_COMPLETE, choose the Resources tab of the created stack, and then choose the link under Physical ID to access the dashboard. Alternatively, you can access the dashboard directly in the CloudWatch console by choosing Dashboards in the left navigation pane and finding the dashboard name in the Custom dashboards section.

If you want to edit the template file to customize it for a specific need, you can use the Upload a template file option in the create stack wizard to upload the edited template. For more information, see Creating a stack on the CloudFormation console. You can use this link to download the template: https://aws-observability-solutions-prod-us-east-1.s3.us-east-1.amazonaws.com/NVIDIA_GPU_EC2/CloudWatch/CFN/v1.0.0/dashboard-template-1.0.0.json.

Get started with the NVIDIA GPU dashboard

Here are a few tasks you can try to explore your new NVIDIA GPU dashboard. They let you validate that the dashboard is working correctly and give you hands-on experience using it to monitor your NVIDIA GPUs. As you work through them, you will become familiar with navigating the dashboard and interpreting the visualized metrics.

Analyze GPU utilization
In the Utilization section, find the GPU utilization and Memory utilization widgets. They show, respectively, the percentage of time the GPU is actively being used for computation and the percentage of global memory being used for reads or writes. High utilization can indicate performance bottlenecks or the need to provision additional GPU resources.
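Beyond the dashboard widgets, you can spot-check the same utilization metric from the AWS CLI. The sketch below only prints the command (the instance ID and time window are placeholders, and the dimension set assumed here is the InstanceId appended by the agent configuration; your metrics may carry additional dimensions), so it runs without AWS credentials. Run the printed command yourself to fetch datapoints from your account.

```shell
# Hypothetical CLI spot check of the GPU utilization metric for one instance.
# The instance ID and time window are placeholders; the command is printed,
# not executed, so the sketch works without AWS credentials.
INSTANCE_ID="i-0123456789abcdef0"
QUERY="aws cloudwatch get-metric-statistics --namespace CWAgent \
--metric-name nvidia_smi_utilization_gpu \
--dimensions Name=InstanceId,Value=$INSTANCE_ID \
--statistics Average --period 300 \
--start-time 2026-01-13T08:00:00Z --end-time 2026-01-13T09:00:00Z"
echo "$QUERY"
```

An empty Datapoints list from the real call usually means a Region, namespace, or dimension mismatch rather than a missing metric.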
Analyze GPU memory usage
In the Memory section, find the Total memory, Used memory, and Free memory widgets. These widgets provide insight into the total memory capacity of your GPUs and how much of it is currently consumed or available. Memory pressure can cause performance problems or out-of-memory errors, so it is important to monitor these metrics and make sure that your workload has enough memory available.

Monitor temperature and power consumption
In the Temperature/Power section, find the GPU temperature and Power draw widgets. These metrics are essential for making sure that your GPUs operate within safe temperature and power limits.

Identify encoder performance
In the Encoder section, find the Encoder session count, Average FPS, and Average latency widgets. These metrics matter if you run video encoding workloads on your GPUs. Monitoring them helps you keep the encoders operating optimally and identify potential bottlenecks or performance problems.

Check PCIe link status
In the PCIe section, find the PCIe link generation and PCIe link width widgets. These metrics provide information about the PCIe link connecting the GPU to the host system. Make sure that the link operates at the expected generation and width to avoid performance limitations caused by PCIe bottlenecks.

Analyze GPU clocks
In the Clock section, find the Graphics clock, SM clock, Memory clock, and Video clock widgets. These metrics show the current operating frequencies of the various GPU components.
Monitoring these clocks can help you identify potential problems with GPU clock scaling or throttling, which can impact performance. | 2026-01-13T09:29:25
https://docs.aws.amazon.com/pt_br/AmazonCloudWatch/latest/monitoring/Solution-NVIDIA-GPU-On-EC2.html#Solution-NVIDIA-GPU-Agent-Step1 | CloudWatch solution: NVIDIA GPU workload on Amazon EC2

This solution helps you set up out-of-the-box metric collection with CloudWatch agents for NVIDIA GPU workloads running on EC2 instances, and helps you set up a pre-configured CloudWatch dashboard. For general information about all CloudWatch observability solutions, see CloudWatch observability solutions.

Topics: Requirements; Benefits; CloudWatch agent configuration for this solution; Deploying the agent for your solution; Creating the NVIDIA GPU solution dashboard

Requirements

This solution applies under the following conditions:
- Compute: Amazon EC2
- Supports up to 500 GPUs across all EC2 instances in a given AWS Region
- Latest version of the CloudWatch agent
- SSM Agent installed on the EC2 instance
- The EC2 instance must have an NVIDIA driver installed. NVIDIA drivers are preinstalled on some Amazon Machine Images (AMIs); otherwise, you can install the driver manually. For more information, see Install NVIDIA drivers on Linux instances.

Note: AWS Systems Manager (SSM Agent) is preinstalled on some AMIs provided by AWS and by trusted third parties.
If the agent is not installed, you can install it manually by using the procedure for your operating system type:
- Manually installing and uninstalling SSM Agent on EC2 instances for Linux
- Manually installing and uninstalling SSM Agent on EC2 instances for macOS
- Manually installing and uninstalling SSM Agent on EC2 instances for Windows Server

Benefits

The solution provides NVIDIA monitoring, delivering valuable insights for the following use cases:
- Analyze GPU and memory usage to identify performance bottlenecks or the need for additional resources.
- Monitor temperature and power consumption to make sure that GPUs operate within safe limits.
- Evaluate encoder performance for video workloads on the GPU.
- Check PCIe connectivity to make sure that links meet the expected generation and width.
- Monitor GPU clock speeds to identify scaling or throttling problems.

Key advantages of the solution:
- It automates metric collection for NVIDIA by using the CloudWatch agent configuration, eliminating the need for manual instrumentation.
- It provides a consolidated, pre-configured CloudWatch dashboard for NVIDIA metrics. The dashboard automatically handles metrics from new EC2 instances configured for NVIDIA through the solution, even if those metrics do not exist when the dashboard is created.

The following image is an example of the dashboard for this solution.

Costs

This solution creates and uses resources in your account. You are charged for standard usage, including the following:
- All metrics collected by the CloudWatch agent are charged as custom metrics. The number of metrics used by this solution depends on the number of EC2 hosts.
- Each EC2 host configured for the solution publishes a total of 17 metrics per GPU.
- One custom dashboard.
- The API operations requested by the CloudWatch agent to publish the metrics. With the default configuration for this solution, the CloudWatch agent calls the PutMetricData operation once per minute for each EC2 host, so the PutMetricData API is called 60 * 24 * 30 = 43,200 times in a 30-day month for each EC2 host.

For more information about CloudWatch pricing, see Amazon CloudWatch Pricing. The pricing calculator can help you estimate approximate monthly costs for using this solution.

To use the pricing calculator to estimate monthly solution costs

1. Open the Amazon CloudWatch pricing calculator.
2. For Choose a Region, select the Region where you want to deploy the solution.
3. In the Metrics section, for Number of Metrics, enter 17 * average number of GPUs per EC2 host * number of EC2 instances configured for this solution.
4. In the APIs section, for Number of API requests, enter 43200 * number of EC2 instances configured for this solution. By default, the CloudWatch agent performs one PutMetricData operation per minute for each EC2 host.
5. In the Dashboards and Alarms section, for Number of Dashboards, enter 1.
You can see the estimated monthly costs at the bottom of the pricing calculator.

CloudWatch agent configuration for this solution

The CloudWatch agent is software that runs continuously and autonomously on your servers and in containerized environments. It collects metrics, logs, and traces from your infrastructure and applications and sends them to CloudWatch and to X-Ray. For more information about the CloudWatch agent, see Collect metrics, logs, and traces with the CloudWatch agent.
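The calculator inputs described in the Costs section reduce to simple arithmetic. The sketch below works through an example; the instance and GPU counts are illustrative placeholders, while the 17-metrics-per-GPU figure and the one-call-per-minute PutMetricData rate come from the section above.

```shell
# Illustrative cost-input arithmetic for the pricing calculator.
# INSTANCES and GPUS_PER_HOST are placeholder fleet sizes.
INSTANCES=4
GPUS_PER_HOST=2

# "Number of Metrics" calculator entry: 17 metrics per GPU.
METRICS=$((17 * GPUS_PER_HOST * INSTANCES))
# "Number of API requests": one PutMetricData call per minute per host, 30-day month.
API_CALLS=$((60 * 24 * 30 * INSTANCES))
echo "metrics=$METRICS api_calls=$API_CALLS"
```

For this example fleet the script prints metrics=136 api_calls=172800, the two numbers you would enter in the Metrics and APIs sections of the calculator.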
In this solution, the agent configuration collects a set of metrics to help you get started with monitoring and observability for NVIDIA GPUs. The CloudWatch agent can be configured to collect more NVIDIA GPU metrics than the dashboard displays by default. For a list of all the NVIDIA GPU metrics that you can collect, see Collect NVIDIA GPU metrics.

Agent configuration for this solution

The metrics collected by the agent are defined in the agent configuration. The solution provides agent configurations that collect the recommended metrics, with appropriate dimensions, for the solution dashboard. Use the following CloudWatch agent configuration on EC2 instances equipped with NVIDIA GPUs. The configuration is stored as a parameter in the SSM Parameter Store, as detailed later in Step 2: Store the recommended CloudWatch agent configuration file in the Systems Manager Parameter Store.

{
  "metrics": {
    "namespace": "CWAgent",
    "append_dimensions": {
      "InstanceId": "${aws:InstanceId}"
    },
    "metrics_collected": {
      "nvidia_gpu": {
        "measurement": [
          "utilization_gpu",
          "temperature_gpu",
          "power_draw",
          "utilization_memory",
          "fan_speed",
          "memory_total",
          "memory_used",
          "memory_free",
          "pcie_link_gen_current",
          "pcie_link_width_current",
          "encoder_stats_session_count",
          "encoder_stats_average_fps",
          "encoder_stats_average_latency",
          "clocks_current_graphics",
          "clocks_current_sm",
          "clocks_current_memory",
          "clocks_current_video"
        ],
        "metrics_collection_interval": 60
      }
    }
  },
  "force_flush_interval": 60
}

Deploying the agent for your solution

There are several approaches to installing the CloudWatch agent, depending on the use case. We recommend using Systems Manager for this solution: it provides a console experience and simplifies managing a fleet of managed servers within a single AWS account.
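Because the configuration is pasted into a Parameter Store value as raw JSON, a local syntax check catches missing commas or braces before the agent ever sees the parameter. The sketch below is an optional pre-flight check, not part of the documented procedure: the file path is arbitrary, the measurement list is abbreviated here, and python3 is used only as a convenient strict-JSON validator; in practice you would validate the full configuration from this section.

```shell
# Sanity-check the agent configuration JSON locally before storing it.
# Abbreviated measurement list; substitute the full configuration in practice.
cat > /tmp/cwagent-nvidia.json <<'EOF'
{
  "metrics": {
    "namespace": "CWAgent",
    "append_dimensions": { "InstanceId": "${aws:InstanceId}" },
    "metrics_collected": {
      "nvidia_gpu": {
        "measurement": ["utilization_gpu", "temperature_gpu", "power_draw"],
        "metrics_collection_interval": 60
      }
    }
  },
  "force_flush_interval": 60
}
EOF
# json.tool exits non-zero on any syntax error, so "config OK" prints only
# when the file parses as strict JSON.
python3 -m json.tool /tmp/cwagent-nvidia.json > /dev/null && echo "config OK"
```

The quoted heredoc delimiter ('EOF') keeps the shell from expanding ${aws:InstanceId}, which must reach Parameter Store literally.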
As instruções apresentadas nesta seção usam o Systems Manager e são destinadas para situações em que o agente do CloudWatch não está em execução com as configurações existentes. É possível verificar se o agente do CloudWatch está em execução ao seguir as etapas apresentadas em Verificar se o atendente do CloudWatch está em execução . Se você já estiver executando o agente do CloudWatch nos hosts do EC2 nos quais a workload está implantada e gerenciando as configurações do agente, pode pular as instruções apresentadas nesta seção e usar o mecanismo de implantação existente para atualizar a configuração. Certifique-se de combinar a configuração do agente da GPU da NVIDIA com a configuração do agente existente e, em seguida, implante a configuração combinada. Se você estiver usando o Systems Manager para armazenar e gerenciar a configuração do agente do CloudWatch, poderá combinar a configuração com o valor do parâmetro existente. Para obter mais informações, consulte Managing CloudWatch agent configuration files . nota Ao usar o Systems Manager para implantar as configurações do agente do CloudWatch apresentadas a seguir, qualquer configuração existente do agente do CloudWatch nas suas instâncias do EC2 será substituída ou sobrescrita. É possível modificar essa configuração para atender às necessidades do ambiente ou do caso de uso específico. As métricas definidas na configuração representam o requisito mínimo necessário para o painel fornecido pela solução. O processo de implantação inclui as seguintes etapas: Etapa 1: garantir que as instâncias do EC2 de destino têm as permissões do IAM necessárias. Etapa 2: armazenar o arquivo de configuração recomendado do agente no Systems Manager Parameter Store. Etapa 3: instalar o agente do CloudWatch em uma ou mais instâncias do EC2 usando uma pilha do CloudFormation. Etapa 4: verificar se a configuração do agente foi realizada corretamente. 
Etapa 1: garantir que as instâncias do EC2 de destino têm as permissões do IAM necessárias Você deve conceder permissão para o Systems Manager instalar e configurar o agente do CloudWatch. Além disso, é necessário conceder permissão para que o agente do CloudWatch publique a telemetria da instância do EC2 para o CloudWatch. Certifique-se de que o perfil do IAM anexado à instância tenha as políticas do IAM CloudWatchAgentServerPolicy e AmazonSSMManagedInstanceCore associadas. Após criar o perfil, associe-o às suas instâncias do EC2. Para anexar um perfil a uma instância do EC2, siga as etapas apresentadas em Attach an IAM role to an instance . Etapa 2: armazenar o arquivo de configuração recomendado do agente do CloudWatch no Systems Manager Parameter Store O Parameter Store simplifica a instalação do agente do CloudWatch em uma instância do EC2 ao armazenar e gerenciar os parâmetros de configuração de forma segura, eliminando a necessidade de valores com codificação rígida. Isso garante um processo de implantação mais seguro e flexível ao possibilitar o gerenciamento centralizado e as atualizações simplificadas para as configurações em diversas instâncias. Use as etapas apresentadas a seguir para armazenar o arquivo de configuração recomendado do agente do CloudWatch como um parâmetro no Parameter Store. Como criar o arquivo de configuração do agente do CloudWatch como um parâmetro Abra o console AWS Systems Manager em https://console.aws.amazon.com/systems-manager/ . Verifique se a região selecionada no console corresponde à região em que a workload da GPU da NVIDIA está em execução. No painel de navegação, escolha Gerenciamento de aplicações e, em seguida, Parameter Store . Siga as etapas apresentadas a seguir para criar um novo parâmetro para a configuração. Escolha Criar Parâmetro . Na caixa Nome , insira um nome que será usado para referenciar o arquivo de configuração do agente do CloudWatch nas etapas posteriores. Por exemplo, . 
AmazonCloudWatch-NVIDIA-GPU-Configuration (Opcional) Na caixa Descrição , digite uma descrição para o parâmetro. Em Camadas de parâmetros , escolha Padrão . Para Tipo , escolha String . Em Tipo de dados , selecione texto . Na caixa Valor , cole o bloco em JSON correspondente que foi listado em Configuração do agente para esta solução . Escolha Criar Parâmetro . Etapa 3: instalar o agente do CloudWatch e aplicar a configuração usando um modelo do CloudFormation É possível usar o AWS CloudFormation para instalar o agente e configurá-lo para usar a configuração do agente do CloudWatch criada nas etapas anteriores. Como instalar e configurar o agente do CloudWatch para esta solução Abra o assistente para criar pilha de forma rápida do CloudFormation usando este link: https://console.aws.amazon.com/cloudformation/home?#/stacks/quickcreate?templateURL=https://aws-observability-solutions-prod-us-east-1.s3.us-east-1.amazonaws.com/CloudWatchAgent/CFN/v1.0.0/cw-agent-installation-template-1.0.0.json . Verifique se a região selecionada no console corresponde à região em que a workload da GPU da NVIDIA está em execução. Em Nome da pilha , insira um nome para identificar esta pilha, como CWAgentInstallationStack . Na seção Parâmetros , especifique o seguinte: Para CloudWatchAgentConfigSSM , insira o nome do parâmetro do Systems Manager para a configuração do agente que você criou anteriormente, como AmazonCloudWatch-NVIDIA-GPU-Configuration . Para selecionar as instâncias de destino, você tem duas opções. Para InstanceIds , especifique uma lista delimitada por vírgulas de IDs de instâncias nas quais você deseja instalar o agente do CloudWatch com esta configuração. É possível listar uma única instância ou várias instâncias. Se você estiver realizando implantações em grande escala, é possível especificar a TagKey e o TagValue correspondente para direcionar todas as instâncias do EC2 associadas a essa etiqueta e a esse valor. 
Se você especificar uma TagKey , é necessário especificar um TagValue correspondente. (Para um grupo do Auto Scaling, especifique aws:autoscaling:groupName para a TagKey e defina o nome do grupo do Auto Scaling para a TagValue para realizar a implantação em todas as instâncias do grupo do Auto Scaling.) Analise as configurações e, em seguida, escolha Criar pilha . Se você desejar editar o arquivo de modelo previamente para personalizá-lo, selecione a opção Fazer upload de um arquivo de modelo no Assistente de criação de pilha para fazer o upload do modelo editado. Para obter mais informações, consulte Criar uma pilha no console do CloudFormation . nota Após a conclusão desta etapa, este parâmetro do Systems Manager será associado aos agentes do CloudWatch em execução nas instâncias de destino. Isto significa que: Se o parâmetro do Systems Manager for excluído, o agente será interrompido. Se o parâmetro do Systems Manager for editado, as alterações de configuração serão aplicadas automaticamente ao agente na frequência programada, que, por padrão, é de 30 dias. Se você desejar aplicar imediatamente as alterações a este parâmetro do Systems Manager, você deverá executar esta etapa novamente. Para obter mais informações sobre as associações, consulte Working with associations in Systems Manager . Etapa 4: verificar se a configuração do agente foi realizada corretamente É possível verificar se o agente do CloudWatch está instalado ao seguir as etapas apresentadas em Verificar se o atendente do CloudWatch está em execução . Se o agente do CloudWatch não estiver instalado e em execução, certifique-se de que todas as configurações foram realizadas corretamente. Certifique-se de ter anexado um perfil com as permissões adequadas para a instância do EC2, conforme descrito na Etapa 1: garantir que as instâncias do EC2 de destino têm as permissões do IAM necessárias . Certifique-se de ter configurado corretamente o JSON para o parâmetro do Systems Manager. 
Siga as etapas em Solução de problemas de instalação do atendente do CloudWatch com o CloudFormation . Se todas as configurações estiverem corretas, as métricas da GPU da NVIDIA serão publicadas no CloudWatch e estarão disponíveis para visualização. É possível verificar no console do CloudWatch para assegurar que as métricas estão sendo publicadas corretamente. Como verificar se as métricas da GPU da NVIDIA estão sendo publicadas no CloudWatch Abra o console do CloudWatch, em https://console.aws.amazon.com/cloudwatch/ . Escolha Métricas e, depois, Todas as métricas . Certifique-se de ter selecionado a região na qual a solução foi implantada, escolha Namespaces personalizados e, em seguida, selecione CWAgent . Pesquise pelas métricas mencionadas em Configuração do agente para esta solução , como nvidia_smi_utilization_gpu . Caso encontre resultados para essas métricas, isso significa que elas estão sendo publicadas no CloudWatch. Criação do painel da solução com a GPU da NVIDIA O painel fornecido por esta solução apresenta métricas das GPUs da NVIDIA ao agregar e apresentar as métricas em todas as instâncias. O painel mostra um detalhamento dos principais colaboradores (que corresponde aos dez principais por widget de métrica) para cada métrica. Isso ajuda a identificar rapidamente discrepâncias ou instâncias que contribuem significativamente para as métricas observadas. Para criar o painel, é possível usar as seguintes opções: Usar o console do CloudWatch para criar o painel. Usar o console do AWS CloudFormation para implantar o painel. Fazer o download do código de infraestrutura como código do AWS CloudFormation e integrá-lo como parte da automação de integração contínua (CI). Ao usar o console do CloudWatch para criar um painel, é possível visualizá-lo previamente antes de criá-lo e incorrer em custos. nota O painel criado com o CloudFormation nesta solução exibe métricas da região em que a solução está implantada. 
Certifique-se de que a pilha do CloudFormation seja criada na mesma região em que as métricas da GPU da NVIDIA são publicadas. Se você especificou um namespace personalizado diferente de CWAgent na configuração do agente do CloudWatch, será necessário alterar o modelo do CloudFormation para o painel, substituindo CWAgent pelo namespace personalizado que você está usando. Como criar o painel usando o console do CloudWatch Abra o console do CloudWatch e acesse Criar painel usando este link: https://console.aws.amazon.com/cloudwatch/home?#dashboards?dashboardTemplate=NvidiaGpuOnEc2&referrer=os-catalog . Verifique se a região selecionada no console corresponde à região em que a workload da GPU da NVIDIA está em execução. Insira o nome do painel e, em seguida, escolha Criar painel . Para diferenciar este painel de painéis semelhantes em outras regiões com facilidade, recomendamos incluir o nome da região no nome do painel, por exemplo, NVIDIA-GPU-Dashboard-us-east-1 . Visualize previamente o painel e escolha Salvar para criá-lo. Como criar o painel usando o CloudFormation Abra o assistente para criar pilha de forma rápida do CloudFormation usando este link: https://console.aws.amazon.com/cloudformation/home?#/stacks/quickcreate?templateURL=https://aws-observability-solutions-prod-us-east-1.s3.us-east-1.amazonaws.com/NVIDIA_GPU_EC2/CloudWatch/CFN/v1.0.0/dashboard-template-1.0.0.json . Verifique se a região selecionada no console corresponde à região em que a workload da GPU da NVIDIA está em execução. Em Nome da pilha , insira um nome para identificar esta pilha, como NVIDIA-GPU-DashboardStack . Na seção Parâmetros , especifique o nome do painel no parâmetro DashboardName . Para diferenciar este painel de painéis semelhantes em outras regiões com facilidade, recomendamos incluir o nome da região no nome do painel, por exemplo, NVIDIA-GPU-Dashboard-us-east-1 . Confirme as funcionalidades de acesso relacionadas às transformações na seção Capacidades e transformações . 
Note that CloudFormation does not add IAM resources.
6. Review the settings, and then choose Create stack.
7. When the stack status shows CREATE_COMPLETE, select the Resources tab on the created stack, and then choose the link shown under Physical ID to access the dashboard. Alternatively, you can go to the dashboard directly in the CloudWatch console by selecting Dashboards in the console's left navigation pane and finding the dashboard name in the Custom dashboards section.
If you want to edit the template file to customize it for a specific need, you can use the Upload a template file option in the Create stack wizard to upload the edited template. For more information, see Creating a stack on the CloudFormation console. You can use this link to download the template: https://aws-observability-solutions-prod-us-east-1.s3.us-east-1.amazonaws.com/NVIDIA_GPU_EC2/CloudWatch/CFN/v1.0.0/dashboard-template-1.0.0.json .

Get started with the NVIDIA GPU dashboard
The following are a few tasks you can perform to explore the new NVIDIA GPU dashboard. These tasks let you validate that the dashboard works correctly and give you hands-on experience using it to monitor your NVIDIA GPUs. As you work through them, you will become familiar with navigating the dashboard and interpreting the visualized metrics.

Analyze GPU utilization
In the Utilization section, find the GPU Utilization and Memory Utilization widgets. They show, respectively, the percentage of time the GPU is actively being used for computation and the percentage of global memory being read or written. High utilization can indicate performance bottlenecks or the need for additional GPU resources.
Analyze GPU memory usage
In the Memory section, find the Memory Total, Memory Used, and Memory Free widgets. These widgets provide insight into the total memory capacity of your GPUs and how much of that memory is currently consumed or available. Memory pressure can lead to performance problems or out-of-memory errors, so it is important to monitor these metrics and ensure your workload has enough memory available.

Monitor temperature and power consumption
In the Temperature/Power section, find the GPU Temperature and Power Draw widgets. These metrics are essential for making sure your GPUs operate within safe temperature and power limits.

Identify encoder performance
In the Encoder section, find the Encoder Session Count, Average FPS, and Average Latency widgets. These metrics are relevant if you run video-encoding workloads on your GPUs. Monitoring them is key to ensuring the encoders operate optimally and to identifying potential bottlenecks or performance problems.

Check PCIe link status
In the PCIe section, find the PCIe Link Generation and PCIe Link Width widgets. These metrics provide information about the PCIe link that connects the GPU to the host system. Make sure the link operates at the expected generation and width to avoid performance limitations caused by PCIe bottlenecks.

Analyze GPU clocks
In the Clock section, find the Graphics Clock, SM Clock, Memory Clock, and Video Clock widgets. These metrics present the current operating frequencies of the various GPU components.
Monitoring these clocks can help you identify issues with GPU clock scaling or throttling that can impact performance. | 2026-01-13T09:29:25 |
https://docs.aws.amazon.com/it_it/AmazonCloudWatch/latest/monitoring/CloudWatch-Application-Signals-Enable-EKS.html | Enable your applications on Amazon EKS clusters - Amazon CloudWatch

Enable your applications on Amazon EKS clusters
CloudWatch Application Signals is supported for Java, Python, Node.js, and .NET applications. To enable Application Signals for your applications on an existing Amazon EKS cluster, you can use the AWS Management Console, the AWS CDK, or the CloudWatch Observability add-on's advanced configuration with Auto-monitor.

Topics
- Enable Application Signals on an Amazon EKS cluster using the console
- Enable Application Signals on an Amazon EKS cluster using the CloudWatch Observability add-on advanced configuration
- Enable Application Signals on Amazon EKS using the AWS CDK
- Enable Application Signals on Amazon EKS using the Model Context Protocol (MCP)

Enable Application Signals on an Amazon EKS cluster using the console
To enable CloudWatch Application Signals for your applications on an existing Amazon EKS cluster, use the instructions in this section.
Important: If you are already using OpenTelemetry with an application that you plan to enable for Application Signals, see Supported systems before enabling Application Signals.

To enable Application Signals for applications on an existing Amazon EKS cluster
Note: If you have not enabled Application Signals yet, follow the instructions in Enabling Application Signals in your account, and then return to this procedure.
1. Open the CloudWatch console at https://console.aws.amazon.com/cloudwatch/ .
2. Choose Application Signals.
3. For Specify platform, choose EKS.
4. For Select an EKS cluster, choose the cluster where you want to enable Application Signals.
If this cluster does not already have the Amazon CloudWatch Observability EKS add-on enabled, you are prompted to enable it. In that case, do the following:
a. Choose Add CloudWatch Observability EKS add-on. The Amazon EKS console appears.
b. Select the check box for Amazon CloudWatch Observability and choose Next. The CloudWatch Observability EKS add-on enables both Application Signals and CloudWatch Container Insights with enhanced observability for Amazon EKS. For more information about Container Insights, see Container Insights.
c. Select the latest version of the add-on to install.
d. Select an IAM role to use for the add-on. If you choose Inherit from node, attach the correct permissions to the IAM role used by your worker nodes. Replace my-worker-node-role with the IAM role used by your Kubernetes worker nodes.
aws iam attach-role-policy \
  --role-name my-worker-node-role \
  --policy-arn arn:aws:iam::aws:policy/CloudWatchAgentServerPolicy

aws iam attach-role-policy \
  --role-name my-worker-node-role \
  --policy-arn arn:aws:iam::aws:policy/AWSXRayWriteOnlyAccess

Note that attach-role-policy accepts a single --policy-arn per invocation, so each policy is attached with its own command. To create a service role for use with the add-on, see Install the CloudWatch agent with the Amazon CloudWatch Observability EKS add-on or the Helm chart.
e. Choose Next, confirm the information on the screen, and choose Create.
f. On the next screen, choose Enable CloudWatch Application Signals to return to the CloudWatch console and complete the process.
5. There are two options for enabling your applications for Application Signals. For consistency, we recommend choosing a single option for each cluster.
- The Console option is simpler. With this method, pods restart immediately.
- The Annotate manifest file method gives you more control over when pods restart, and it can make it easier to manage monitoring in a more decentralized way if you do not want to centralize it.
Note: If you are enabling Application Signals for a Node.js application with ESM, skip ahead to Setting up a Node.js application with the ESM module format.

Console
The Console option uses the Amazon CloudWatch Observability EKS add-on advanced configuration to set up Application Signals for your services. For more information about the add-on, see (Optional) Additional configuration. If you do not see a list of workloads and namespaces, make sure you have the right permissions to view them for this cluster. For more information, see Required permissions. You can monitor all service workloads by selecting the Auto-monitor check box, or selectively choose specific workloads and namespaces to monitor.
To monitor all service workloads with Auto-monitor:
a. Select the Auto-monitor check box to automatically select all service workloads in a cluster.
b. Choose Auto-restart to restart all workload pods and enable Application Signals immediately, with the AWS Distro for OpenTelemetry (ADOT) auto-instrumentation SDKs injected into your pods.
c. Select Done. When Auto-restart is selected, the CloudWatch Observability EKS add-on enables Application Signals immediately. Otherwise, Application Signals is enabled during the next deployment of each workload.

You can monitor individual workloads or entire namespaces.
To monitor a single workload:
a. Select the check box next to the workload to monitor.
b. Use the Select language(s) dropdown to select the workload's language. Select the languages for which you want to enable Application Signals, and then choose the check mark icon (✓) to save your selection. For Python applications, make sure the application meets the required prerequisites before continuing. For more information, see Python application doesn't start after Application Signals is enabled.
c. Select Done. The Amazon CloudWatch Observability EKS add-on immediately injects the AWS Distro for OpenTelemetry (ADOT) auto-instrumentation SDKs into your pods and triggers a pod restart so that application metrics and traces can be collected.

To monitor an entire namespace:
a. Select the check box next to the namespace to monitor.
b. Use the Select language(s) dropdown to select the namespace's language. Select the languages for which you want to enable Application Signals, and then choose the check mark icon (✓) to save your selection.
This applies to all workloads in the namespace, whether they are already deployed or will be deployed in the future. For Python applications, make sure the application meets the required prerequisites before continuing. For more information, see Python application doesn't start after Application Signals is enabled.
c. Select Done. The Amazon CloudWatch Observability EKS add-on immediately injects the AWS Distro for OpenTelemetry (ADOT) auto-instrumentation SDKs into your pods and triggers a pod restart so that application metrics and traces can be collected.

To enable Application Signals in another Amazon EKS cluster, choose Enable Application Signals from the Services screen.

Annotate manifest file
In the CloudWatch console, the Monitor Services section explains that you must add an annotation to a manifest YAML file in the cluster. Adding this annotation automatically instruments the application to send metrics, traces, and logs to Application Signals. There are two options for the annotation:
- Workload annotation automatically instruments a single workload in the cluster.
- Namespace annotation automatically instruments all workloads deployed in the selected namespace.
Choose one of these options and follow the appropriate steps.

To annotate a single workload:
1. Choose Workload annotation.
2. Paste one of the following lines into the PodTemplate section of the workload manifest file.
For Java workloads:
annotations: instrumentation.opentelemetry.io/inject-java: "true"
For Python workloads:
annotations: instrumentation.opentelemetry.io/inject-python: "true"
For Python applications, additional configuration is required.
For more information, see Python application doesn't start after Application Signals is enabled.
For .NET workloads:
annotations: instrumentation.opentelemetry.io/inject-dotnet: "true"
Note: To enable Application Signals for a .NET workload on Alpine Linux (linux-musl-x64) based images, also add the following annotation:
instrumentation.opentelemetry.io/otel-dotnet-auto-runtime: "linux-musl-x64"
For Node.js workloads:
annotations: instrumentation.opentelemetry.io/inject-nodejs: "true"
3. In your terminal, enter kubectl apply -f your_deployment_yaml to apply the change.

To annotate all workloads in a namespace:
1. Choose Namespace annotation.
2. Paste one of the following lines into the metadata section of the namespace manifest file. If the namespace includes Java, Python, and .NET workloads, paste all of the following lines into the namespace manifest file.
If the namespace has Java workloads:
annotations: instrumentation.opentelemetry.io/inject-java: "true"
If the namespace has Python workloads:
annotations: instrumentation.opentelemetry.io/inject-python: "true"
For Python applications, additional configuration is required. For more information, see Python application doesn't start after Application Signals is enabled.
If the namespace has .NET workloads:
annotations: instrumentation.opentelemetry.io/inject-dotnet: "true"
If the namespace has Node.js workloads:
annotations: instrumentation.opentelemetry.io/inject-nodejs: "true"
3. In your terminal, enter kubectl apply -f your_namespace_yaml to apply the change.
4. In your terminal, enter a command to restart all pods in the namespace. An example command to restart deployment workloads is kubectl rollout restart deployment -n namespace_name
5. Choose View services when done.
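The per-language inject annotations listed in the steps above all follow one pattern. The following sketch is illustrative only (the helper function is not an AWS tool); it shows how the annotation could be added to a PodTemplate's metadata programmatically before the manifest is applied:

```python
# Map each supported language to the auto-instrumentation annotation shown above.
INJECT_ANNOTATIONS = {
    "java": {"instrumentation.opentelemetry.io/inject-java": "true"},
    "python": {"instrumentation.opentelemetry.io/inject-python": "true"},
    "dotnet": {"instrumentation.opentelemetry.io/inject-dotnet": "true"},
    "nodejs": {"instrumentation.opentelemetry.io/inject-nodejs": "true"},
}

def annotate_pod_template(pod_template_metadata: dict, language: str) -> dict:
    """Return a copy of the PodTemplate metadata with the inject annotation added."""
    if language not in INJECT_ANNOTATIONS:
        raise ValueError(f"unsupported language: {language}")
    metadata = dict(pod_template_metadata)
    annotations = dict(metadata.get("annotations", {}))
    annotations.update(INJECT_ANNOTATIONS[language])
    metadata["annotations"] = annotations
    return metadata
```

After patching the manifest, you would still apply it with kubectl apply -f and restart the workload as described in the steps above.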
You are taken to the Application Signals Services view, where you can see the data that Application Signals collects. It might take a few minutes for data to appear.
To enable Application Signals in another Amazon EKS cluster, choose Enable Application Signals from the Services screen. For more information about the Services view, see Monitor the operational health of your applications with Application Signals.
Note: If you are using a WSGI server for your Python application, see No Application Signals data for a Python application that uses a WSGI server for information about enabling Application Signals. We have also identified other considerations to keep in mind when enabling Python applications for Application Signals. For more information, see Python application doesn't start after Application Signals is enabled.

Setting up a Node.js application with the ESM module format
We provide limited support for Node.js applications with the ESM module format. For details, see Known limitations of Node.js with ESM. For the ESM module format, enabling Application Signals through the console or by annotating the manifest file does not work. Skip the annotation step in the previous procedure and do the following instead.

To enable Application Signals for a Node.js application with ESM
1. Install the relevant dependencies in your Node.js application for auto-instrumentation:
npm install @aws/aws-distro-opentelemetry-node-autoinstrumentation
npm install @opentelemetry/instrumentation@0.54.0
2. Add the following environment variables to the Dockerfile for your application, and build the image.
...
ENV OTEL_AWS_APPLICATION_SIGNALS_ENABLED=true
ENV OTEL_TRACES_SAMPLER_ARG='endpoint=http://cloudwatch-agent.amazon-cloudwatch:2000'
ENV OTEL_TRACES_SAMPLER='xray'
ENV OTEL_EXPORTER_OTLP_PROTOCOL='http/protobuf'
ENV OTEL_EXPORTER_OTLP_TRACES_ENDPOINT='http://cloudwatch-agent.amazon-cloudwatch:4316/v1/traces'
ENV OTEL_AWS_APPLICATION_SIGNALS_EXPORTER_ENDPOINT='http://cloudwatch-agent.amazon-cloudwatch:4316/v1/metrics'
ENV OTEL_METRICS_EXPORTER='none'
ENV OTEL_LOGS_EXPORTER='none'
ENV NODE_OPTIONS='--import @aws/aws-distro-opentelemetry-node-autoinstrumentation/register --experimental-loader=@opentelemetry/instrumentation/hook.mjs'
ENV OTEL_SERVICE_NAME='YOUR_SERVICE_NAME' #replace with a proper service name
ENV OTEL_PROPAGATORS='tracecontext,baggage,b3,xray'
...
# command to start the application
# for example
# CMD ["node", "index.mjs"]
3. Add the OTEL_RESOURCE_ATTRIBUTES_POD_NAME, OTEL_RESOURCE_ATTRIBUTES_NODE_NAME, OTEL_RESOURCE_ATTRIBUTES_DEPLOYMENT_NAME, POD_NAMESPACE, and OTEL_RESOURCE_ATTRIBUTES environment variables to the deployment yaml file for your application.
For example:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nodejs-app
  labels:
    app: nodejs-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nodejs-app
  template:
    metadata:
      labels:
        app: nodejs-app
      # annotations:  # make sure this annotation doesn't exist
      #   instrumentation.opentelemetry.io/inject-nodejs: 'true'
    spec:
      containers:
        - name: nodejs-app
          image: your-nodejs-application-image # replace with a proper image URI
          imagePullPolicy: Always
          ports:
            - containerPort: 8000
          env:
            - name: OTEL_RESOURCE_ATTRIBUTES_POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: OTEL_RESOURCE_ATTRIBUTES_NODE_NAME
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
            - name: OTEL_RESOURCE_ATTRIBUTES_DEPLOYMENT_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.labels['app'] # assuming the 'app' label is set to the deployment name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            - name: OTEL_RESOURCE_ATTRIBUTES
              value: "k8s.deployment.name=$(OTEL_RESOURCE_ATTRIBUTES_DEPLOYMENT_NAME),k8s.namespace.name=$(POD_NAMESPACE),k8s.node.name=$(OTEL_RESOURCE_ATTRIBUTES_NODE_NAME),k8s.pod.name=$(OTEL_RESOURCE_ATTRIBUTES_POD_NAME)"

4. Deploy the Node.js application to the cluster.
After you enable your applications on Amazon EKS clusters, you can monitor their health. For more information, see Monitor the operational health of your applications with Application Signals.

Enable Application Signals on an Amazon EKS cluster using the CloudWatch Observability add-on advanced configuration
By default, OpenTelemetry (OTEL)-based application performance monitoring (APM) is enabled through Application Signals when you install the CloudWatch Observability EKS add-on (v5.0.0 or later) or the Helm chart.
You can further customize specific settings by using the advanced configuration for the Amazon EKS add-on or by overriding values with the Helm chart.
Note: If you use an OpenTelemetry (OTEL)-based APM solution, enabling Application Signals affects your existing observability configuration. Review your current deployment before proceeding. To keep your existing APM configuration after upgrading to version 5.0.0 or later, see Disable Application Signals.
The CloudWatch Observability add-on also provides additional fine-grained control to include or exclude specific services, if needed, in the new advanced configuration. For more information, see Enabling APM through Application Signals for your Amazon EKS cluster.

Enable Application Signals on Amazon EKS using the AWS CDK
1. If you have not yet enabled Application Signals in this account, you must grant Application Signals the permissions it needs to discover your services. For information, see Enabling Application Signals in your account.
2. Enable Application Signals for your applications.

import { aws_applicationsignals as applicationsignals } from 'aws-cdk-lib';

const cfnDiscovery = new applicationsignals.CfnDiscovery(this, 'ApplicationSignalsServiceRole', {});

The Discovery CloudFormation resource grants Application Signals the following permissions:
xray:GetServiceGraph
logs:StartQuery
logs:GetQueryResults
cloudwatch:GetMetricData
cloudwatch:ListMetrics
tag:GetResources
For more information about this role, see Service-linked role permissions for CloudWatch Application Signals.
3. Install the amazon-cloudwatch-observability add-on.
a. Create an IAM role with the CloudWatchAgentServerPolicy and the OIDC provider associated with the cluster.
const cloudwatchRole = new Role(this, 'CloudWatchAgentAddOnRole', {
  assumedBy: new OpenIdConnectPrincipal(cluster.openIdConnectProvider),
  managedPolicies: [ManagedPolicy.fromAwsManagedPolicyName('CloudWatchAgentServerPolicy')],
});

b. Install the add-on with the IAM role created above.

new CfnAddon(this, 'CloudWatchAddon', {
  addonName: 'amazon-cloudwatch-observability',
  clusterName: cluster.clusterName,
  serviceAccountRoleArn: cloudwatchRole.roleArn
});

4. Add one of the following lines to the PodTemplate section of the workload manifest file.

Language | Annotation
Java     | instrumentation.opentelemetry.io/inject-java: "true"
Python   | instrumentation.opentelemetry.io/inject-python: "true"
.NET     | instrumentation.opentelemetry.io/inject-dotnet: "true"
Node.js  | instrumentation.opentelemetry.io/inject-nodejs: "true"

const deployment = {
  apiVersion: "apps/v1",
  kind: "Deployment",
  metadata: { name: "sample-app" },
  spec: {
    replicas: 3,
    selector: { matchLabels: { "app": "sample-app" } },
    template: {
      metadata: {
        labels: { "app": "sample-app" },
        annotations: { "instrumentation.opentelemetry.io/inject-$LANG": "true" }
      },
      spec: { ... },
    },
  },
};

cluster.addManifest('sample-app', deployment);

Enable Application Signals on Amazon EKS using the Model Context Protocol (MCP)
You can use the CloudWatch Application Signals Model Context Protocol (MCP) server to enable Application Signals on your Amazon EKS clusters through conversational AI interactions. This provides a natural-language interface for setting up Application Signals monitoring. The MCP server automates the enablement process by understanding your requirements and generating the appropriate configuration. Instead of manually following console steps or writing CDK code, you can simply describe what you want to enable.
Prerequisites
Before you use the MCP server to enable Application Signals, make sure you have:
- A development environment that supports MCP (such as Kiro, Claude Desktop, VS Code with MCP extensions, or other MCP-compatible tools)
- The CloudWatch Application Signals MCP server configured in your IDE. For detailed setup instructions, see the CloudWatch Application Signals MCP server documentation.

Using the MCP server
After you configure the CloudWatch Application Signals MCP server in your IDE, you can request enablement guidance using natural-language prompts. Although the coding assistant can infer context from your project structure, providing specific details in your prompts helps ensure more accurate and relevant guidance. Include information such as the application language, the Amazon EKS cluster name, and absolute paths to your infrastructure and application code.

Best-practice prompts (specific and complete):
"Enable Application Signals for my Python service running on EKS. My app code is in /home/user/flask-api and IaC is in /home/user/flask-api/terraform"
"I want to add observability to my Node.js application on EKS cluster 'production-cluster'. The application code is at /Users/dev/checkout-service and the Kubernetes manifests are at /Users/dev/checkout-service/k8s"
"Help me instrument my Java Spring Boot application on EKS with Application Signals. Application directory: /opt/apps/payment-api CDK infrastructure: /opt/apps/payment-api/cdk"

Less effective prompts:
"Enable monitoring for my app" → Missing: platform, language, paths
"Enable Application Signals.
My code is in ./src and IaC is in ./infrastructure" → Problem: relative paths instead of absolute paths
"Enable Application Signals for my EKS service at /home/user/myapp" → Missing: programming language

Quick template:
"Enable Application Signals for my [LANGUAGE] service on EKS. App code: [ABSOLUTE_PATH_TO_APP] IaC code: [ABSOLUTE_PATH_TO_IAC]"

Benefits of using the MCP server
Using the CloudWatch Application Signals MCP server offers several benefits:
- Natural-language interface: describe what you want to enable without memorizing commands or configuration syntax
- Context-aware guidance: the MCP server understands your specific environment and provides tailored recommendations
- Fewer errors: automated configuration generation minimizes manual typing mistakes
- Faster setup: move from intent to implementation more quickly
- Learning tool: review the generated configurations and see how Application Signals works
For more information about configuring and using the CloudWatch Application Signals MCP server, see the MCP server documentation.
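The quick template above can also be filled in programmatically. The following sketch is purely illustrative (the helper and its names are not part of the MCP server); it formats the template and enforces the absolute-path guidance from this section:

```python
import os.path

# Mirrors the quick template shown above; the function itself is an illustration.
PROMPT_TEMPLATE = (
    "Enable Application Signals for my {language} service on EKS. "
    "App code: {app_path} IaC code: {iac_path}"
)

def build_prompt(language: str, app_path: str, iac_path: str) -> str:
    """Build an MCP prompt, rejecting relative paths per the guidance above."""
    for p in (app_path, iac_path):
        if not os.path.isabs(p):
            raise ValueError(f"use an absolute path, got: {p}")
    return PROMPT_TEMPLATE.format(language=language, app_path=app_path, iac_path=iac_path)
```

Rejecting relative paths up front mirrors the "less effective prompts" examples: the assistant cannot reliably resolve ./src without knowing your working directory.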
| 2026-01-13T09:29:25 |
https://docs.aws.amazon.com/es_es/AmazonCloudWatch/latest/monitoring/Solution-NVIDIA-GPU-On-EC2.html | CloudWatch solution: NVIDIA GPU workload on Amazon EC2 - Amazon CloudWatch

CloudWatch solution: NVIDIA GPU workload on Amazon EC2
This solution helps you set up out-of-the-box metric collection with CloudWatch agents for NVIDIA GPU workloads running on EC2 instances, and it helps you set up a preconfigured CloudWatch dashboard. For general information about all CloudWatch observability solutions, see CloudWatch observability solutions.

Topics
- Requirements
- Benefits
- CloudWatch agent configuration for this solution
- Deploy the agent for your solution
- Create the NVIDIA GPU solution dashboard

Requirements
This solution applies to the following conditions:
- Compute resources: Amazon EC2
- Supports up to 500 GPUs across all EC2 instances in a given AWS Region
- Latest version of the CloudWatch agent
- SSM Agent installed on the EC2 instance
- The EC2 instance must have an NVIDIA driver installed. NVIDIA drivers are preinstalled on some Amazon Machine Images (AMIs). Otherwise, you can install the driver manually. For more information, see Install NVIDIA drivers on Linux instances.
Note: AWS Systems Manager (SSM Agent) is preinstalled on some AMIs provided by AWS and trusted third parties.
If the agent is not installed, you can install it manually using the procedure for your operating system type:
- Manually installing and uninstalling SSM Agent on EC2 instances for Linux
- Manually installing and uninstalling SSM Agent on EC2 instances for macOS
- Manually installing and uninstalling SSM Agent on EC2 instances for Windows Server

Benefits
The solution provides NVIDIA monitoring, offering valuable insight for the following use cases:
- Analyze GPU and memory utilization to detect performance bottlenecks or the need for additional resources.
- Monitor temperature and power draw to ensure GPUs operate within safe limits.
- Evaluate encoder performance for GPU video workloads.
- Check PCIe connectivity for the expected generation and width.
- Monitor GPU clock speeds to detect scaling and throttling issues.

The main benefits of the solution are:
- It automates metric collection for NVIDIA through the CloudWatch agent configuration, eliminating manual instrumentation.
- It provides a consolidated, preconfigured CloudWatch dashboard for NVIDIA metrics. The dashboard automatically handles metrics from new NVIDIA EC2 instances configured with the solution, even if those metrics did not exist when you first created the dashboard.
The following image shows an example of the dashboard for this solution.

Costs
This solution creates and uses resources in your account. You are charged for standard usage, including the following:
- Metrics collected by the CloudWatch agent are charged as custom metrics. The number of metrics this solution uses depends on the number of EC2 hosts. Each EC2 host configured for the solution publishes a total of 17 metrics per GPU.
- One custom dashboard.
- API operations requested by the CloudWatch agent to publish the metrics. With the default configuration for this solution, the CloudWatch agent calls PutMetricData once per minute for each EC2 host. This means the PutMetricData API is called 30*24*60 = 43,200 times in a 30-day month for each EC2 host.
For more information about CloudWatch pricing, see Amazon CloudWatch Pricing. The pricing calculator can help you estimate approximate monthly costs for using this solution.

To use the pricing calculator to estimate monthly solution costs
1. Open the Amazon CloudWatch pricing calculator.
2. For Choose a Region, select the Region where you want to deploy the solution.
3. In the Metrics section, for Number of metrics, enter 17 * average number of GPUs per EC2 host * number of EC2 instances configured for this solution.
4. In the APIs section, for Number of API requests, enter 43200 * number of EC2 instances configured for this solution. By default, the CloudWatch agent performs one PutMetricData operation per minute for each EC2 host.
5. In the Dashboards and alarms section, for Number of dashboards, enter 1.
6. You can see your estimated monthly costs at the bottom of the pricing calculator.

CloudWatch agent configuration for this solution
The CloudWatch agent is software that runs continuously and autonomously on your servers and in containerized environments. It collects metrics, logs, and traces from your infrastructure and applications and sends them to CloudWatch and X-Ray. For more information about the CloudWatch agent, see Collect metrics, logs, and traces with the CloudWatch agent. The agent configuration for this solution collects a set of metrics to help you start monitoring and observing your NVIDIA GPU.
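The calculator inputs from the Costs section above are simple arithmetic; the following illustrative sketch (not an AWS tool) computes the values to enter in the pricing calculator:

```python
# Constants taken from the Costs section above.
METRICS_PER_GPU = 17                      # metrics published per GPU by this solution
PUTS_PER_HOST_PER_MONTH = 30 * 24 * 60    # one PutMetricData call per minute, 30-day month

def calculator_inputs(avg_gpus_per_host: float, num_hosts: int) -> dict:
    """Return the values to enter in the CloudWatch pricing calculator."""
    return {
        "number_of_metrics": int(METRICS_PER_GPU * avg_gpus_per_host * num_hosts),
        "number_of_api_requests": PUTS_PER_HOST_PER_MONTH * num_hosts,
        "number_of_dashboards": 1,
    }
```

For example, ten hosts with four GPUs each would use 680 custom metrics and 432,000 PutMetricData requests per month.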
The CloudWatch agent can be configured to collect more NVIDIA GPU metrics than the dashboard displays by default. For a list of all the NVIDIA GPU metrics that you can collect, see Collect NVIDIA GPU metrics.

CloudWatch agent configuration for this solution
The metrics that the agent collects are defined in the agent configuration. The solution provides agent configurations to collect the recommended metrics with the appropriate dimensions for the solution's dashboard. Use the following CloudWatch agent configuration on EC2 instances with NVIDIA GPUs. The configuration is stored as a parameter in SSM Parameter Store, as described later in Step 2: Store the recommended CloudWatch agent configuration file in Systems Manager Parameter Store.

{
  "metrics": {
    "namespace": "CWAgent",
    "append_dimensions": {
      "InstanceId": "${aws:InstanceId}"
    },
    "metrics_collected": {
      "nvidia_gpu": {
        "measurement": [
          "utilization_gpu",
          "temperature_gpu",
          "power_draw",
          "utilization_memory",
          "fan_speed",
          "memory_total",
          "memory_used",
          "memory_free",
          "pcie_link_gen_current",
          "pcie_link_width_current",
          "encoder_stats_session_count",
          "encoder_stats_average_fps",
          "encoder_stats_average_latency",
          "clocks_current_graphics",
          "clocks_current_sm",
          "clocks_current_memory",
          "clocks_current_video"
        ],
        "metrics_collection_interval": 60
      }
    }
  },
  "force_flush_interval": 60
}

Deploying the agent for your solution
There are several methods to install the CloudWatch agent, depending on the use case. We recommend using Systems Manager for this solution. It provides a console experience and simplifies managing a fleet of managed servers within a single AWS account. The instructions in this section use Systems Manager and are intended for when the CloudWatch agent is not already running with existing configurations.
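Before storing the agent configuration in Parameter Store, it can be worth sanity-checking that it parses as valid JSON, since a malformed file will be rejected by the agent. A minimal sketch; the string below repeats only an abbreviated portion of the solution's configuration for brevity:

```python
import json

# Abbreviated copy of the solution's agent configuration (only three of
# the 17 nvidia_gpu measurements are repeated here).
AGENT_CONFIG = """
{
  "metrics": {
    "namespace": "CWAgent",
    "append_dimensions": {"InstanceId": "${aws:InstanceId}"},
    "metrics_collected": {
      "nvidia_gpu": {
        "measurement": ["utilization_gpu", "temperature_gpu", "power_draw"],
        "metrics_collection_interval": 60
      }
    }
  },
  "force_flush_interval": 60
}
"""

config = json.loads(AGENT_CONFIG)  # raises ValueError if the JSON is malformed
gpu = config["metrics"]["metrics_collected"]["nvidia_gpu"]
print(gpu["metrics_collection_interval"])  # 60
print(gpu["measurement"])
```

The same check applies to the full configuration before you paste it into the parameter value in Step 2.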
You can check whether the CloudWatch agent is running by following the steps in Verify that the CloudWatch agent is running. If you already run the CloudWatch agent on the EC2 hosts where the workload is deployed and manage the agent configurations, you can skip the instructions in this section and follow your existing deployment mechanism to update the configuration. Be sure to merge the NVIDIA GPU agent configuration with your existing agent configuration, and then deploy the merged configuration. If you use Systems Manager to store and manage the CloudWatch agent configuration, you can merge the configuration into the existing parameter value. For more information, see Managing CloudWatch agent configuration files.

Note: Using Systems Manager to deploy the following CloudWatch agent configurations will replace or overwrite any existing CloudWatch agent configuration on the EC2 instances. You can modify this configuration to suit your unique environment or use case. The metrics defined in the configuration are the minimum required for the dashboard provided by the solution.

The deployment process consists of the following steps:
- Step 1: Make sure the target EC2 instances have the required IAM permissions.
- Step 2: Store the recommended agent configuration file in Systems Manager Parameter Store.
- Step 3: Install the CloudWatch agent on one or more EC2 instances using a CloudFormation stack.
- Step 4: Verify that the agent configuration is correct.

Step 1: Make sure the target EC2 instances have the required IAM permissions
You must grant Systems Manager permission to install and configure the CloudWatch agent.
You must also grant the CloudWatch agent permission to publish telemetry from your EC2 instance to CloudWatch. Make sure the IAM role attached to the instance has the CloudWatchAgentServerPolicy and AmazonSSMManagedInstanceCore IAM policies attached. After you create the role, attach it to your EC2 instances. To attach a role to an EC2 instance, follow the steps in Attach an IAM role to an instance.

Step 2: Store the recommended CloudWatch agent configuration file in Systems Manager Parameter Store
Parameter Store simplifies installing the CloudWatch agent on an EC2 instance by storing and managing configuration parameters securely, eliminating the need for hard-coded values. This ensures a more secure and flexible deployment process, enables centralized management, and makes it easier to update configurations across multiple instances. Follow the steps below to store the recommended CloudWatch agent configuration file as a parameter in Parameter Store.

To create the CloudWatch agent configuration file as a parameter
Open the AWS Systems Manager console at https://console.aws.amazon.com/systems-manager/ .
Verify that the Region selected in the console is the Region where the NVIDIA GPU workload runs.
From the navigation pane, choose Application Management, Parameter Store.
Follow these steps to create a new parameter for the configuration:
Choose Create parameter.
In the Name box, enter a name that you will use to refer to the CloudWatch agent configuration file in later steps, for example, AmazonCloudWatch-NVIDIA-GPU-Configuration.
(Optional) In the Description box, enter a description for the parameter.
For Parameter tier, choose Standard.
For Type, choose String.
For Data type, choose text.
In the Value box, paste the corresponding JSON block shown in CloudWatch agent configuration for this solution.
Choose Create parameter.

Step 3: Install the CloudWatch agent and apply the configuration using a CloudFormation template
You can use AWS CloudFormation to install the agent and configure it to use the CloudWatch agent configuration that you created in the previous steps.

To install and configure the CloudWatch agent for this solution
Open the CloudFormation Quick create stack wizard using this link: https://console.aws.amazon.com/cloudformation/home?#/stacks/quickcreate?templateURL=https://aws-observability-solutions-prod-us-east-1.s3.us-east-1.amazonaws.com/CloudWatchAgent/CFN/v1.0.0/cw-agent-installation-template-1.0.0.json .
Verify that the Region selected in the console is the Region where the NVIDIA GPU workload runs.
For Stack name, enter a name to identify this stack, such as CWAgentInstallationStack.
In the Parameters section, specify the following parameters:
- For CloudWatchAgentConfigSSM, enter the name of the Systems Manager parameter for the agent configuration that you created earlier, for example AmazonCloudWatch-NVIDIA-GPU-Configuration.
- To select the target instances, you have two options. For InstanceIds, specify a comma-delimited list of instance IDs on which you want to install the CloudWatch agent with this configuration. You can list a single instance or multiple instances. Alternatively, if you are deploying at scale, you can specify a TagKey and the corresponding TagValue to target all EC2 instances with this tag and value. If you specify a TagKey, you must specify the corresponding TagValue.
(For an Auto Scaling group, specify aws:autoscaling:groupName for TagKey and specify the name of the Auto Scaling group for TagValue to deploy to all instances in that Auto Scaling group.)
Review the settings, then choose Create stack.

If you want to edit the template file first to customize it, choose the Upload a template file option in the Create stack wizard to upload the edited template. For more information, see Creating a stack on the CloudFormation console.

Note: After this step is complete, this Systems Manager parameter will be associated with the CloudWatch agents running on the target instances. This means that:
- If the Systems Manager parameter is deleted, the agent will stop.
- If the Systems Manager parameter is edited, the configuration changes are applied to the agent automatically at the scheduled frequency, which is 30 days by default.
- If you want to apply changes to this Systems Manager parameter immediately, you must run this step again.
For more information about associations, see Working with associations in Systems Manager.

Step 4: Verify that the agent configuration is correct
You can check whether the CloudWatch agent is installed by following the steps in Verify that the CloudWatch agent is running. If the CloudWatch agent is not installed or running, make sure you have configured everything correctly:
- Make sure you attached a role with the correct permissions to the EC2 instance, as described in Step 1: Make sure the target EC2 instances have the required IAM permissions.
- Make sure you configured the JSON for the Systems Manager parameter correctly.
- Follow the steps in Troubleshooting installation of the CloudWatch agent with CloudFormation.
If everything is configured correctly, you should see the NVIDIA GPU metrics being published to CloudWatch. You can check the CloudWatch console to verify that they are being published.

To verify that NVIDIA GPU metrics are published to CloudWatch
1. Open the CloudWatch console at https://console.aws.amazon.com/cloudwatch/ .
2. Choose Metrics, All metrics.
3. Make sure you have selected the Region where you deployed the solution, and choose Custom namespaces, CWAgent.
4. Look for the metrics mentioned in CloudWatch agent configuration for this solution, for example nvidia_smi_utilization_gpu. If you see results for these metrics, the metrics are being published to CloudWatch.

Create the NVIDIA GPU solution dashboard
The dashboard provided by this solution presents NVIDIA GPU metrics by aggregating and presenting the metrics across all instances. The dashboard shows a breakdown of the top contributors (top 10 per metric widget) for each metric. This helps you quickly identify outliers or instances that contribute significantly to the observed metrics.

To create the dashboard, you can use the following options:
- Use the CloudWatch console to create the dashboard.
- Use the AWS CloudFormation console to deploy the dashboard.
- Download the AWS CloudFormation infrastructure as code and integrate it into your continuous integration (CI) automation.
Using the CloudWatch console to create the dashboard lets you preview the dashboard before creating it and incurring charges.

Note: The dashboard created with CloudFormation in this solution displays metrics from the Region where the solution is deployed. Make sure you create the CloudFormation stack in the Region where your NVIDIA GPU metrics are published.
If you specified a custom namespace other than CWAgent in the CloudWatch agent configuration, you will need to change the dashboard's CloudFormation template to replace CWAgent with the custom namespace you are using.

To create the dashboard using the CloudWatch console
Open the CloudWatch console's Create dashboard page using this link: https://console.aws.amazon.com/cloudwatch/home?#dashboards?dashboardTemplate=NvidiaGpuOnEc2&referrer=os-catalog .
Verify that the Region selected in the console is the Region where the NVIDIA GPU workload runs.
Enter the dashboard name, then choose Create dashboard. To easily differentiate this dashboard from similar dashboards in other Regions, we recommend including the Region name in the dashboard name, for example NVIDIA-GPU-Dashboard-us-east-1.
Preview the dashboard and choose Save to create it.

To create the dashboard using CloudFormation
Open the CloudFormation Quick create stack wizard using this link: https://console.aws.amazon.com/cloudformation/home?#/stacks/quickcreate?templateURL=https://aws-observability-solutions-prod-us-east-1.s3.us-east-1.amazonaws.com/NVIDIA_GPU_EC2/CloudWatch/CFN/v1.0.0/dashboard-template-1.0.0.json .
Verify that the Region selected in the console is the Region where the NVIDIA GPU workload runs.
For Stack name, enter a name to identify this stack, such as NVIDIA-GPU-DashboardStack.
In the Parameters section, specify the dashboard name in the DashboardName parameter. To easily differentiate this dashboard from similar dashboards in other Regions, we recommend including the Region name in the dashboard name, for example NVIDIA-GPU-Dashboard-us-east-1.
Acknowledge access capabilities for transforms under Capabilities and transforms. Note that CloudFormation does not add any IAM resources.
Review the settings, then choose Create stack.
When the stack status is CREATE_COMPLETE, choose the Resources tab under the created stack, then choose the link under Physical ID to go to the dashboard. You can also access the dashboard in the CloudWatch console by choosing Dashboards in the left navigation pane and finding the dashboard name under Custom dashboards.

If you want to edit the template file to customize it for any purpose, you can use the Upload a template file option in the Create stack wizard to upload the edited template. For more information, see Creating a stack on the CloudFormation console. You can use this link to download the template: https://aws-observability-solutions-prod-us-east-1.s3.us-east-1.amazonaws.com/NVIDIA_GPU_EC2/CloudWatch/CFN/v1.0.0/dashboard-template-1.0.0.json .

Get started with the NVIDIA GPU dashboard
Here are some tasks you can try with the new NVIDIA GPU dashboard. These tasks let you verify that the dashboard is working correctly and give you hands-on experience using it to monitor your NVIDIA GPUs. As you try them, you will become familiar with navigating the dashboard and interpreting the visualized metrics.

Review GPU utilization
In the Utilization section, look at the GPU utilization and Memory utilization widgets. These show the percentage of time the GPU is actively being used for computation and the percentage of global memory being read or written, respectively. High utilization could indicate performance bottlenecks or the need for additional GPU resources.

Analyze GPU memory usage
In the Memory section, look at the Total memory, Used memory, and Free memory widgets. These provide insight into the total memory capacity of your GPUs and how much memory is currently consumed or available.
Memory pressure can lead to performance problems or out-of-memory errors, so it is important to monitor these metrics and ensure that sufficient memory is available for your workloads.

Monitor temperature and power consumption
In the Temperature/Power section, look at the GPU temperature and Power draw widgets. These metrics are essential for ensuring that the GPUs operate within safe thermal and power limits.

Identify encoder performance
In the Encoder section, look at the Encoder session count, Average FPS, and Average latency widgets. These metrics are relevant if you run video encoding workloads on your GPUs. Monitor them to make sure your encoders perform optimally, and to identify any potential bottlenecks or performance issues.

Check PCIe link status
In the PCIe section, look at the PCIe link generation and PCIe link width widgets. These metrics provide information about the PCIe link connecting the GPU to the host system. Make sure the link operates at the expected generation and width to avoid performance limitations due to PCIe bottlenecks.

Review GPU clocks
In the Clock section, look at the Graphics clock, SM clock, Memory clock, and Video clock widgets. These metrics show the current operating frequencies of various GPU components. Monitoring these clocks can help identify issues related to GPU clock scaling or frequency throttling, which could affect performance.
| 2026-01-13T09:29:25 |
https://docs.aws.amazon.com/it_it/AmazonCloudWatch/latest/monitoring/CloudWatch-Agent-configure-related-telemetry.html | Configure the CloudWatch agent service and environment names for related entities - Amazon CloudWatch. Amazon CloudWatch Documentation, User Guide.

Configure the CloudWatch agent service and environment names for related entities
The CloudWatch agent can send metrics and logs with entity data to support the Explore related pane in the CloudWatch console. The service name or the environment name can be configured from the CloudWatch agent's JSON configuration.

Note: The agent configuration can be overridden. For details on how the agent decides which data to send for related entities, see Use the CloudWatch agent with related telemetry.

For metrics, this can be configured at the agent, metric, or plugin level. For logs, it can be configured at the agent, log, or file level. The most specific configuration is always used. For example, if configuration exists at both the agent level and the metrics level, metrics will use the metrics configuration, and everything else (logs) will use the agent configuration. The following example shows different ways to configure the service name and the environment name.
{
  "agent": {
    "service.name": "agent-level-service",
    "deployment.environment": "agent-level-environment"
  },
  "metrics": {
    "service.name": "metric-level-service",
    "deployment.environment": "metric-level-environment",
    "metrics_collected": {
      "statsd": {
        "service.name": "statsd-level-service",
        "deployment.environment": "statsd-level-environment"
      },
      "collectd": {
        "service.name": "collectdd-level-service",
        "deployment.environment": "collectd-level-environment"
      }
    }
  },
  "logs": {
    "service.name": "log-level-service",
    "deployment.environment": "log-level-environment",
    "logs_collected": {
      "files": {
        "collect_list": [
          {
            "file_path": "/opt/aws/amazon-cloudwatch-agent/logs/amazon-cloudwatch-agent.log",
            "log_group_name": "amazon-cloudwatch-agent.log",
            "log_stream_name": "amazon-cloudwatch-agent.log",
            "service.name": "file-level-service",
            "deployment.environment": "file-level-environment"
          }
        ]
      }
    }
  }
}
| 2026-01-13T09:29:25 |
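The "most specific configuration wins" rule described above can be illustrated with a small lookup helper. This is an illustrative sketch of the precedence logic, not the agent's actual implementation; the level dicts mirror the JSON example's values.

```python
# Illustrative sketch of how the most specific service.name /
# deployment.environment setting takes precedence. Not the agent's code.

def resolve(key, *levels):
    """Return the value of `key` from the most specific level that sets it.

    `levels` are dicts ordered from most specific (plugin or file level)
    to least specific (agent level)."""
    for level in levels:
        if key in level:
            return level[key]
    return None

agent = {"service.name": "agent-level-service"}
metrics = {"service.name": "metric-level-service"}
statsd = {"service.name": "statsd-level-service"}

# For statsd metrics, the plugin-level setting wins over metric and agent level:
print(resolve("service.name", statsd, metrics, agent))  # statsd-level-service
# Logs with no log-level setting fall back to the agent level:
print(resolve("service.name", {}, agent))               # agent-level-service
```

This mirrors the example in the prose: metrics pick up the metrics-level (or plugin-level) configuration, and everything without a more specific setting inherits the agent-level values.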
https://git-scm.com/book/fa/v2/%d9%85%d9%82%d8%af%d9%85%d8%a7%d8%aa-%da%af%db%8c%d8%aa-git-basics-chapter-%d8%ab%d8%a8%d8%aa-%d8%aa%d8%ba%db%8c%db%8c%d8%b1%d8%a7%d8%aa-%d8%af%d8%b1-%d9%85%d8%ae%d8%b2%d9%86-Recording-Changes-to-the-Repository | Git - Recording Changes to the Repository (Pro Git, 2nd Edition)
2.2 Git Basics - Recording Changes to the Repository

Recording Changes to the Repository
At this point, you should have a bona fide Git repository on your local machine, and a checkout or working copy of all of its files in front of you. Typically, you'll want to start making changes and committing snapshots of those changes into your repository each time the project reaches a state you want to record.

Remember that each file in your working directory can be in one of two states: tracked or untracked. Tracked files are files that were in the last snapshot, as well as any newly staged files; they can be unmodified, modified, or staged. In short, tracked files are files that Git knows about. Untracked files are everything else: any files in your working directory that were not in your last snapshot and are not in your staging area. When you first clone a repository, all of your files will be tracked and unmodified because Git just checked them out and you haven't edited anything. As you edit files, Git sees them as modified, because you've changed them since your last commit. As you work, you selectively stage these modified files and then commit all those staged changes, and the cycle repeats.

Figure 8.
The lifecycle of the status of your files

Checking the Status of Your Files
The main tool you use to determine which files are in which state is the git status command. If you run this command directly after a clone, you should see something like this:

$ git status
On branch master
Your branch is up-to-date with 'origin/master'.
nothing to commit, working tree clean

This means you have a clean working directory; in other words, none of your tracked files are modified. Git also doesn't see any untracked files, or they would be listed here. Finally, the command tells you which branch you're on and informs you that it has not diverged from the same branch on the server. For now, that branch is always master, which is the default; you won't worry about it here. Git Branching goes over branches and references in detail.

Note
GitHub changed the default branch name from master to main in mid-2020, and other Git hosts followed suit. So you may find that the default branch name in some newly created repositories is main and not master. In addition, the default branch name can be changed (as you saw in Your default branch name), so you may see a different name for the default branch. However, Git itself still uses master as the default, so we will use it throughout the book.

Let's say you add a new file to your project, a simple README file. If the file didn't exist before and you run git status, you see your untracked file like so:

$ echo 'My Project' > README
$ git status
On branch master
Your branch is up-to-date with 'origin/master'.
Untracked files:
  (use "git add <file>..."
  to include in what will be committed)
        README

nothing added to commit but untracked files present (use "git add" to track)

You can see that your new README file is untracked, because it's under the "Untracked files" heading in your status output. Untracked basically means that Git sees a file you didn't have in the previous snapshot (commit), and which hasn't yet been staged; Git won't start including it in your commit snapshots until you explicitly tell it to do so. It does this so you don't accidentally begin including generated binary files or other files that you did not mean to include. You do want to start including README, so let's start tracking the file.

Tracking New Files
In order to begin tracking a new file, you use the command git add. To begin tracking the README file, you can run this:

$ git add README

If you run your status command again, you can see that your README file is now tracked and staged to be committed:

$ git status
On branch master
Your branch is up-to-date with 'origin/master'.
Changes to be committed:
  (use "git restore --staged <file>..." to unstage)
        new file:   README

You can tell that it's staged because it's under the "Changes to be committed" heading. If you commit at this point, the version of the file at the time you ran git add is what will be in the subsequent historical snapshot. You may recall that when you ran git init earlier, you then ran git add <files>; that was to begin tracking files in your directory. The git add command takes a path name for either a file or a directory; if it's a directory, the command adds all the files in that directory recursively.

Staging Modified Files
Let's change a file that was already tracked.
If you change a previously tracked file called CONTRIBUTING.md and then run your git status command again, you get something that looks like this:

$ git status
On branch master
Your branch is up-to-date with 'origin/master'.
Changes to be committed:
  (use "git reset HEAD <file>..." to unstage)
	new file:   README
Changes not staged for commit:
  (use "git add <file>..." to update what will be committed)
  (use "git checkout -- <file>..." to discard changes in working directory)
	modified:   CONTRIBUTING.md

The CONTRIBUTING.md file appears under a section named "Changes not staged for commit", which means that a file that is tracked has been modified in the working directory but not yet staged. To stage it, you run the git add command. git add is a multipurpose command: you use it to begin tracking new files, to stage files, and to do other things like marking merge-conflicted files as resolved. It may be helpful to think of it more as "add precisely this content to the next commit" rather than "add this file to the project". Let's run git add now to stage the CONTRIBUTING.md file, and then run git status again:

$ git add CONTRIBUTING.md
$ git status
On branch master
Your branch is up-to-date with 'origin/master'.
Changes to be committed:
  (use "git reset HEAD <file>..." to unstage)
	new file:   README
	modified:   CONTRIBUTING.md

Both files are staged and will go into your next commit. At this point, suppose you remember one little change that you want to make in CONTRIBUTING.md before you commit it. You open it again and make that change, and you're ready to commit. However, let's run git status one more time:

$ vim CONTRIBUTING.md
$ git status
On branch master
Your branch is up-to-date with 'origin/master'.
Changes to be committed:
  (use "git reset HEAD <file>..." to unstage)
	new file:   README
	modified:   CONTRIBUTING.md
Changes not staged for commit:
  (use "git add <file>..."
to update what will be committed)
  (use "git checkout -- <file>..." to discard changes in working directory)
	modified:   CONTRIBUTING.md

What happened? Now CONTRIBUTING.md is listed as both staged and unstaged. How is that possible? It turns out that Git stages a file exactly as it is when you run the git add command. If you commit now, the version of CONTRIBUTING.md as it was when you last ran git add is how it will go into the commit, not the version of the file as it looks in your working directory when you run git commit. If you modify a file after you run git add, you have to run git add again to stage the latest version of the file:

$ git add CONTRIBUTING.md
$ git status
On branch master
Your branch is up-to-date with 'origin/master'.
Changes to be committed:
  (use "git reset HEAD <file>..." to unstage)
	new file:   README
	modified:   CONTRIBUTING.md

Short Status

While the git status output is pretty comprehensive, it's also quite wordy. Git also has a short status flag so you can see your changes in a more compact way. If you run git status -s or git status --short, you get a far more simplified output from the command:

$ git status -s
 M README
MM Rakefile
A  lib/git.rb
M  lib/simplegit.rb
?? LICENSE.txt

New files that aren't tracked have ?? next to them, new files that have been added to the staging area have an A, modified files have an M, and so on. There are two columns to the output: the left-hand column indicates the status of the staging area and the right-hand column indicates the status of the working tree. So for example in that output, the README file is modified in the working directory but not yet staged, while the lib/simplegit.rb file is modified and staged. The Rakefile was modified, staged, and then modified again, so there are changes to it that are both staged and unstaged.
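These two-column codes can be reproduced end to end in a throwaway repository. Everything below (the temporary directory, file names, and identity settings) is hypothetical scratch setup for the demonstration, not part of the book's running example:

```shell
# Reproduce the two-column short-status codes in a throwaway repository.
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q
git config user.email "you@example.com"
git config user.name "You"

echo one > tracked.txt
git add tracked.txt
git commit -qm 'initial'

echo two >> tracked.txt      # modified in the working tree only ->  " M"
echo new > staged.txt
git add staged.txt           # new file added to the staging area -> "A "
echo stray > untracked.txt   # never added                        -> "??"

git status -s
```

Reading the output against the two columns makes the states concrete: staging-area state on the left, working-tree state on the right.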
Ignoring Files

Often, you'll have a class of files that you don't want Git to automatically add or even show you as being untracked. These are generally automatically generated files such as log files or files produced by your build system. In such cases, you can create a file named .gitignore listing patterns to match them. Here is an example .gitignore file:

$ cat .gitignore
*.[oa]
*~

The first line tells Git to ignore any files ending in ".o" or ".a" (object and archive files that may be the product of building your code). The second line tells Git to ignore all files whose names end with a tilde (~), which is used by many text editors such as Emacs to mark temporary files. You may also include a log, tmp, or pid directory; automatically generated documentation; and so on. Setting up a .gitignore file for your new repository before you get going is generally a good idea so you don't accidentally commit files that you really don't want in your Git repository.

The rules for the patterns you can put in the .gitignore file are as follows:

- Blank lines or lines starting with # are ignored.
- Standard glob patterns work, and will be applied recursively throughout the entire working tree.
- You can start patterns with a forward slash (/) to avoid recursivity.
- You can end patterns with a forward slash (/) to specify a directory.
- You can negate a pattern by starting it with an exclamation point (!).

Glob patterns are like simplified regular expressions that shells use. An asterisk (*) matches zero or more characters; [abc] matches any character inside the brackets (in this case a, b, or c); a question mark (?) matches a single character; and brackets enclosing characters separated by a hyphen ([0-9]) match any character between them (in this case 0 through 9).
You can also use two asterisks to match nested directories; a/**/z would match a/z, a/b/z, a/b/c/z, and so on.

Here is another example .gitignore file:

# ignore all .a files
*.a

# but do track lib.a, even though you're ignoring .a files above
!lib.a

# only ignore the TODO file in the current directory, not subdir/TODO
/TODO

# ignore all files in any directory named build
build/

# ignore doc/notes.txt, but not doc/server/arch.txt
doc/*.txt

# ignore all .pdf files in the doc/ directory and any of its subdirectories
doc/**/*.pdf

Tip: GitHub maintains a fairly comprehensive list of good .gitignore file examples for dozens of projects and languages at https://github.com/github/gitignore if you want a starting point for your project.

Note: In the simple case, a repository might have a single .gitignore file in its root directory, which applies recursively to the entire repository. However, it is also possible to have additional .gitignore files in subdirectories. The rules in these nested .gitignore files apply only to the files under the directory where they are located. (The Linux kernel source repository has 206 .gitignore files.) It is beyond the scope of this book to get into the details of multiple .gitignore files; see man gitignore for the details.

Viewing Your Staged and Unstaged Changes

If the git status command is too vague for you (you want to know exactly what you changed, not just which files were changed), you can use the git diff command. We'll cover git diff in more detail later, but you'll probably use it most often to answer these two questions: What have you changed but not yet staged? And what have you staged that you are about to commit? Although git status answers those questions by listing the file names, git diff shows you the exact lines added and removed.
Let's say you edit and stage the README file again and then edit the CONTRIBUTING.md file without staging it. If you run your git status command, you once again see something like this:

$ git status
On branch master
Your branch is up-to-date with 'origin/master'.
Changes to be committed:
  (use "git reset HEAD <file>..." to unstage)
	modified:   README
Changes not staged for commit:
  (use "git add <file>..." to update what will be committed)
  (use "git checkout -- <file>..." to discard changes in working directory)
	modified:   CONTRIBUTING.md

To see what you've changed but not yet staged, type git diff with no other arguments:

$ git diff
diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md
index 8ebb991..643e24f 100644
--- a/CONTRIBUTING.md
+++ b/CONTRIBUTING.md
@@ -65,7 +65,8 @@ branch directly, things can get messy.
 Please include a nice description of your changes when you submit your PR;
 if we have to read the whole diff to figure out why you're contributing
 in the first place, you're less likely to get feedback and have your change
-merged in.
+merged in. Also, split your changes into comprehensive chunks if your patch is
+longer than a dozen lines.

 If you are starting to work on a particular area, feel free to submit a PR
 that highlights your work in progress (and note in the PR title that it's

That command compares what is in your working directory with what is in your staging area. The result tells you the changes you've made that you haven't yet staged. If you want to see what you've staged that will go into your next commit, you can use git diff --staged.
This command compares your staged changes to your last commit:

$ git diff --staged
diff --git a/README b/README
new file mode 100644
index 0000000..03902a1
--- /dev/null
+++ b/README
@@ -0,0 +1 @@
+My Project

It's important to note that git diff by itself doesn't show all changes made since your last commit, only changes that are still unstaged. If you've staged all of your changes, git diff will give you no output.

For another example, if you stage the CONTRIBUTING.md file and then edit it, you can use git diff to see the changes in the file that are staged and the changes that are unstaged. If our environment looks like this:

$ git add CONTRIBUTING.md
$ echo '# test line' >> CONTRIBUTING.md
$ git status
On branch master
Your branch is up-to-date with 'origin/master'.
Changes to be committed:
  (use "git reset HEAD <file>..." to unstage)
	modified:   CONTRIBUTING.md
Changes not staged for commit:
  (use "git add <file>..." to update what will be committed)
  (use "git checkout -- <file>..." to discard changes in working directory)
	modified:   CONTRIBUTING.md

Now you can use git diff to see what is still unstaged:

$ git diff
diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md
index 643e24f..87f08c8 100644
--- a/CONTRIBUTING.md
+++ b/CONTRIBUTING.md
@@ -119,3 +119,4 @@ at the
 ## Starter Projects

 See our [projects list](https://github.com/libgit2/libgit2/blob/development/PROJECTS.md).
+# test line

and git diff --cached to see what you've staged so far (--staged and --cached are synonyms):

$ git diff --cached
diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md
index 8ebb991..643e24f 100644
--- a/CONTRIBUTING.md
+++ b/CONTRIBUTING.md
@@ -65,7 +65,8 @@ branch directly, things can get messy.
 Please include a nice description of your changes when you submit your PR;
 if we have to read the whole diff to figure out why you're contributing
 in the first place, you're less likely to get feedback and have your change
-merged in.
+merged in. Also, split your changes into comprehensive chunks if your patch is
+longer than a dozen lines.

 If you are starting to work on a particular area, feel free to submit a PR
 that highlights your work in progress (and note in the PR title that it's

Note: Git Diff in an External Tool. We will continue to use the git diff command in various ways throughout the rest of the book. There is another way to look at these diffs if you prefer a graphical or external diff viewing program instead. If you run git difftool instead of git diff, you can view any of these diffs in software like emerge, vimdiff, and many more (including commercial products). Run git difftool --tool-help to see what is available on your system.

Committing Your Changes

Now that your staging area is set up the way you want it, you can commit your changes. Remember that anything that is still unstaged (any files you have created or modified that you haven't run git add on since you edited them) won't go into this commit. They will stay as modified files on your disk. In this case, let's say that the last time you ran git status, you saw that everything was staged, so you're ready to commit your changes. The simplest way to commit is to type git commit:

$ git commit

Doing so launches your editor of choice.

Note: This is set by your shell's EDITOR environment variable, usually vim or emacs, although you can configure it with whatever you want using the git config --global core.editor command as you saw in Getting Started.

The editor displays the following text (this example is a Vim screen):

# Please enter the commit message for your changes. Lines starting
# with '#' will be ignored, and an empty message aborts the commit.
# On branch master
# Your branch is up-to-date with 'origin/master'.
#
# Changes to be committed:
#	new file:   README
#	modified:   CONTRIBUTING.md
#
~
~
~
".git/COMMIT_EDITMSG" 9L, 283C

You can see that the default commit message contains the latest output of the git status command commented out, with one empty line on top. You can remove these comments and type your commit message, or you can leave them there to help you remember what you're committing.

Note: For an even more explicit reminder of what you've modified, you can pass the -v option to git commit. Doing so also puts the diff of your change in the editor so you can see exactly what changes you're committing.

When you exit the editor, Git creates your commit with that commit message (with the comments and diff stripped out). Alternatively, you can type your commit message inline with the commit command by specifying it after a -m flag, like this:

$ git commit -m "Story 182: fix benchmarks for speed"
[master 463dc4f] Story 182: fix benchmarks for speed
 2 files changed, 2 insertions(+)
 create mode 100644 README

Now you've created your first commit! You can see that the commit has given you some output about itself: which branch you committed to (master), what SHA-1 checksum the commit has (463dc4f), how many files were changed, and statistics about lines added and removed in the commit.

Remember that the commit records the snapshot you set up in your staging area. Anything you didn't stage is still sitting there modified; you can do another commit to add it to your history. Every time you perform a commit, you're recording a snapshot of your project that you can revert to or compare to later.

Skipping the Staging Area

Although it can be amazingly useful for crafting commits exactly how you want them, the staging area is sometimes a bit more complex than you need in your workflow. If you want to skip the staging area, Git provides a simple shortcut.
Adding the -a option to the git commit command makes Git automatically stage every file that is already tracked before doing the commit, letting you skip the git add part:

$ git status
On branch master
Your branch is up-to-date with 'origin/master'.
Changes not staged for commit:
  (use "git add <file>..." to update what will be committed)
  (use "git checkout -- <file>..." to discard changes in working directory)
	modified:   CONTRIBUTING.md
no changes added to commit (use "git add" and/or "git commit -a")
$ git commit -a -m 'Add new benchmarks'
[master 83e38c7] Add new benchmarks
 1 file changed, 5 insertions(+), 0 deletions(-)

Notice how you don't have to run git add on the CONTRIBUTING.md file in this case before you commit. That's because the -a flag includes all changed files. This is convenient, but be careful; sometimes this flag will cause you to include unwanted changes.

Removing Files

To remove a file from Git, you have to remove it from your tracked files (more accurately, remove it from your staging area) and then commit. The git rm command does that, and also removes the file from your working directory so you don't see it as an untracked file the next time around.

If you simply remove the file from your working directory, it shows up under the "Changes not staged for commit" (that is, unstaged) area of your git status output:

$ rm PROJECTS.md
$ git status
On branch master
Your branch is up-to-date with 'origin/master'.
Changes not staged for commit:
  (use "git add/rm <file>..." to update what will be committed)
  (use "git checkout -- <file>..." to discard changes in working directory)
	deleted:    PROJECTS.md
no changes added to commit (use "git add" and/or "git commit -a")

Then, if you run git rm, it stages the file's removal:

$ git rm PROJECTS.md
rm 'PROJECTS.md'
$ git status
On branch master
Your branch is up-to-date with 'origin/master'.
Changes to be committed:
  (use "git reset HEAD <file>..."
to unstage)
	deleted:    PROJECTS.md

The next time you commit, the file will be gone and no longer tracked. If you modified the file or had already added it to the staging area, you must force the removal with the -f option. This is a safety feature to prevent accidental removal of data that hasn't yet been recorded in a snapshot and that can't be recovered from Git.

Another useful thing you may want to do is to keep the file in your working tree but remove it from your staging area. In other words, you may want to keep the file on your hard drive but not have Git track it anymore. This is particularly useful if you forgot to add something to your .gitignore file and accidentally staged it, like a large log file or a bunch of compiled .a files. To do this, use the --cached option:

$ git rm --cached README

You can pass files, directories, and file-glob patterns to the git rm command. That means you can do things such as:

$ git rm log/\*.log

Note the backslash (\) in front of the *. This is necessary because Git does its own filename expansion in addition to your shell's filename expansion. This command removes all files that have the .log extension in the log/ directory. Or, you can do something like this:

$ git rm \*~

This command removes all files whose names end with a ~.

Moving Files

Unlike many other VCSs, Git doesn't explicitly track file movement. If you rename a file in Git, no metadata is stored in Git that tells it you renamed the file. However, Git is pretty smart about figuring that out after the fact; we'll deal with detecting file movement a bit later. Thus it's a bit confusing that Git has a mv command. If you want to rename a file in Git, you can run something like this:

$ git mv file_from file_to

and it works fine.
In fact, if you run something like this and look at the status, you'll see that Git considers it a renamed file:

$ git mv README.md README
$ git status
On branch master
Your branch is up-to-date with 'origin/master'.
Changes to be committed:
  (use "git reset HEAD <file>..." to unstage)
	renamed:    README.md -> README

However, this is equivalent to running something like this:

$ mv README.md README
$ git rm README.md
$ git add README

Git figures out that it's a rename implicitly, so it doesn't matter if you rename a file that way or with the mv command. The only real difference is that git mv is one command instead of three; it's a convenience function. More importantly, you can use any tool you like to rename a file, and address the add/rm later, before you commit. | 2026-01-13T09:29:25
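The rename detection described above is easy to confirm in a throwaway repository (the temporary directory and identity settings below are hypothetical scratch setup). After git mv, the short status reports a single staged rename rather than a separate delete plus add:

```shell
# Confirm that Git reports a git mv as one staged rename.
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q
git config user.email "you@example.com"
git config user.name "You"

echo 'My Project' > README.md
git add README.md
git commit -qm 'initial'

git mv README.md README
git status -s    # shows the staged rename: R  README.md -> README
```

The same single "R" entry appears if you do the mv / git rm / git add sequence by hand, which is exactly the equivalence the text describes.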
https://www.linkedin.com/products/categories/ip-address-management-software?trk=products_details_guest_product_category | Best IP Address Management (IPAM) Software | Products | LinkedIn

Used by: Network Engineer (2), Information Technology Administrator (2), Network Specialist (2), Information Technology Network Administrator (1), Information Technology Operations Analyst (1)

Find top products in the IP Address Management (IPAM) Software category. IPAM software is used to plan and manage the use of IP addresses on a network:

- Use unique IP addresses for applications, devices, and related resources
- Prevent conflicts and errors with automated audits and network discovery
- Use IPv4/IPv6 support and integrate with DNS and DHCP services
- View subnet capacity and optimize IP planning space

9 results

Next-Gen IPAM (IP Address Management Software by IPXO)
Next-Gen IPAM is designed to simplify and automate public IP resource management. It supports both IPv4 and IPv6, and is built with a focus on automation, transparency, and security. You get a centralized view of IP reputation, WHOIS data, RPKI validation, BGP routing, and geolocation, all in one automated platform.

AX DHCP (IP Address Management Software by Axiros)
AX DHCP server is a clusterable, carrier-grade DHCP/IPAM (IP Address Management) solution that can be seamlessly integrated into existing provisioning platforms. AX DHCP handles FttH, ONT provisioning, VoIP, and IPTV services. Telecommunications carriers and internet service providers (ISPs) need powerful and robust infrastructure that supports future workloads.
DDI (DNS-DHCP-IPAM) is a critical networking technology for every service provider that ensures customer service availability, security, and performance.

Tidal LightMesh (IP Address Management Software by Tidal)
Go beyond IP Address Management (IPAM) with LightMesh from Tidal. Simplify and automate the administration and management of internet protocol networks. LightMesh makes IP visibility and operation scalable, secure, and self-controlled with a central, feature-rich interface, reducing complexity, all for free. Currently in public beta.

ManageEngine OpUtils (IP Address Management Software by ManageEngine ITOM)
OpUtils is an IP address and switch port management software geared toward helping engineers efficiently monitor, diagnose, and troubleshoot IT resources. OpUtils complements existing management tools by providing troubleshooting and real-time monitoring capabilities. It helps network engineers manage their switches and IP address space with ease. With a comprehensive set of over 20 tools, this switch port management tool helps with network monitoring tasks like detecting a rogue device intrusion, keeping an eye on bandwidth usage, monitoring the availability of critical devices, backing up Cisco configuration files, and more.

Numerus (IP Address Management Software by TechNarts-Nart Bilişim)
Numerus, a mega-scale, enterprise-level IP address management tool, helps simplify and automate several tasks related to IP space management. It can manage IP ranges, pools, and VLANs, monitor the hierarchy, manage utilizations and capacities, perform automated IP address assignments, and report assignments to registries with regular synchronization. It provides extensive reporting capabilities and data for third-party systems via various integrations. For ISPs, it also provides global IP registry integrations such as RIPE.
Dedicated datacenter proxies (IP Address Management Software by Decodo)
You can now own IP addresses that are solely yours! SOCKS5 and HTTP(S) proxies that no one else can lay their hands on while you're using them.

AX DHCP (IP Address Management Software by Axiros LATAM; description provided in Portuguese and Spanish, translated here)
AX DHCP is a DHCP/IPAM solution that can be transparently integrated into provisioning platforms such as FTTH, ONT, VoIP, and IPTV. Telecommunications carriers and internet service providers need powerful and robust infrastructure that supports future workloads. DDI (DNS-DHCP-IPAM) is a critical networking technology for service providers that ensures the availability, security, and performance of customer services.

Cygna runIP Appliance Platform (IP Address Management Software by Cygna Labs Deutschland)
Maximizing the benefits of your DDI solution: VitalQIP (Nokia), DiamondIP (BT DiamondIP), Micetro (Men & Mice). By providing an efficient solution for roll-out, configuration, patching, and upgrades of DNS and DHCP servers, the runIP Management Platform optimizes the efficiency and value of your DDI investment.
To create runIP, N3K combined the experience gained from setting up thousands of DNS & DHCP servers and hundreds of DDI environments into one holistic solution. runIP is suitable both for companies that want to further reduce the operating costs of their existing DDI installation and for those that want to make their initial installation or further roll-out even more efficient and successful. The runIP solution is completed by integrated, comprehensive real-time monitoring of the DNS and DHCP services and operating system, plus extensive long-term statistics. This ensures that you always have an overview of the DNS and DHCP services as well as the operating system.
https://docs.aws.amazon.com/ja_jp/AmazonCloudWatch/latest/logs/LogsAnomalyDetection.html | Log anomaly detection - Amazon CloudWatch Logs

Log anomaly detection

There are two ways to detect anomalies in your log data: create a log anomaly detector for continuous monitoring, or use the anomaly detection command in a CloudWatch Logs Insights query for on-demand analysis.

A log anomaly detector scans the log events ingested into a log group and automatically detects anomalies in the log data. Anomaly detection uses machine learning and pattern recognition to establish baselines of typical log content. For on-demand analysis, you can use the anomaly detection command in a CloudWatch Logs Insights query to identify unusual patterns in time-series data. For more information about query-based anomaly detection, see "Using anomaly detection in CloudWatch Logs Insights".

When you create an anomaly detector for a log group, it trains on the log group's past two weeks of log events. Training can take up to 15 minutes. After training is complete, the detector begins analyzing incoming logs to identify anomalies. When anomalies appear in the CloudWatch Logs console, you can investigate them.

CloudWatch Logs pattern recognition extracts log patterns by identifying the static and dynamic content in your logs. Patterns are useful for analyzing large log sets because a large number of log events can often be compressed into a few patterns.

For example, consider the following three sample log events:

2023-01-01 19:00:01 [INFO] Calling DynamoDB to store for ResourceID: 12342342k124-12345
2023-01-01 19:00:02 [INFO] Calling DynamoDB to store for ResourceID: 324892398123-1234R
2023-01-01 19:00:03 [INFO] Calling DynamoDB to store for ResourceID: 3ff231242342-12345

In the preceding sample, all three log events follow one pattern:

<Date-1> <Time-2> [INFO] Calling DynamoDB to store for resource id <ResourceID-3>

The fields within a pattern are called tokens. Fields that vary within a pattern, such as a request ID or timestamp, are called dynamic tokens, and each different value of a dynamic token is called a token value. If CloudWatch Logs can infer the type of data that a dynamic token represents, the token is displayed as <string-number>: string describes the type of data the token represents, and number indicates where this token appears in the pattern relative to the other dynamic tokens. CloudWatch Logs assigns the string part of the name based on its analysis of the content of the log events that contain the token. If CloudWatch Logs can't infer the type of data a dynamic token represents, the token is displayed as <Token-number>, where number again indicates the token's position relative to the other dynamic tokens. Common examples of dynamic tokens include error codes, IP addresses, timestamps, and request IDs.
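As a toy illustration of the tokenization described above (not CloudWatch's actual algorithm), a few regular expressions can collapse the three sample events into a single pattern; the regexes and placeholder names here are assumptions made for this sketch:

```python
import re

events = [
    "2023-01-01 19:00:01 [INFO] Calling DynamoDB to store for ResourceID: 12342342k124-12345",
    "2023-01-01 19:00:02 [INFO] Calling DynamoDB to store for ResourceID: 324892398123-1234R",
    "2023-01-01 19:00:03 [INFO] Calling DynamoDB to store for ResourceID: 3ff231242342-12345",
]

def template(event: str) -> str:
    """Replace the dynamic fields (date, time, resource ID) with token placeholders."""
    event = re.sub(r"\d{4}-\d{2}-\d{2}", "<Date-1>", event)
    event = re.sub(r"\d{2}:\d{2}:\d{2}", "<Time-2>", event)
    event = re.sub(r"ResourceID: \S+", "ResourceID: <ResourceID-3>", event)
    return event

# All three events collapse to a single pattern string.
patterns = {template(e) for e in events}
print(patterns)
```

Static text ("Calling DynamoDB to store for") survives unchanged, while the varying fields become the dynamic tokens, which is exactly why many events compress into one pattern.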
Log anomaly detection uses these patterns to find anomalies. After the anomaly detector's model training period ends, incoming logs are evaluated against the known trends, and the detector flags significant fluctuations as anomalies.

This chapter explains how to enable anomaly detection, how to view anomalies, and how to create alarms for log anomaly detectors and the metrics that anomaly detectors emit. It also explains how to encrypt an anomaly detector and its findings with AWS Key Management Service. There is no charge for creating a log anomaly detector.

Anomaly and pattern severity and priority

Each anomaly found by a log anomaly detector is assigned a priority, and each pattern found is assigned a severity.

Priority is computed automatically, based on both the severity level of the pattern and the amount of deviation from expected values. For example, if a certain token value suddenly increases by 500%, that anomaly might be designated HIGH priority even if its severity is NONE.

Severity is based only on keywords found in the pattern, such as FATAL, ERROR, and WARN. If none of these keywords are found, the pattern's severity is marked as NONE.

Anomaly visibility time

When you create an anomaly detector, you specify its maximum anomaly visibility time. This is the number of days that an anomaly is displayed in the console and returned by the ListAnomalies API operation. If an anomaly keeps occurring after this time expires, it is automatically accepted as regular behavior, and the anomaly detection model stops flagging it as an anomaly. If you don't adjust the visibility time when you create the detector, a default of 21 days is used.

Anomaly suppression

After an anomaly is found, you can choose to suppress it temporarily or permanently. While an anomaly is suppressed, the anomaly detector stops flagging that occurrence as an anomaly for the time that you specify. When you suppress an anomaly, you can choose whether to suppress only that specific anomaly or all anomalies related to the pattern in which it was found. You can still view suppressed anomalies in the console, and you can also stop the suppression.

Frequently asked questions

Does AWS use my data to train machine learning algorithms for AWS use or for other customers?

No. The anomaly detection model created by training is based on the log events in the log group and is used only within that log group and that AWS account.

What types of log events are suitable for anomaly detection?

Log anomaly detection is well suited for application logs and other types of logs where most entries fit a typical pattern. Log groups whose events contain log levels or severity keywords such as INFO, ERROR, and DEBUG are especially well suited. Log anomaly detection is not well suited to log events with very long JSON structures, such as CloudTrail logs: pattern analysis examines only the first 1,500 characters of a log line, and any characters beyond that limit are skipped. Audit logs and access logs, such as VPC flow logs, also tend not to work well with anomaly detection. Because anomaly detection is intended to find application issues, it may not be suitable for network or access anomalies.

To determine whether an anomaly detector is suitable for a certain log group, use CloudWatch Logs pattern analysis to find the number of patterns in the group's log events. If the number of patterns is roughly 300 or fewer, anomaly detection is likely to work well. For more information about pattern analysis, see "Pattern analysis".

What gets flagged as an anomaly?
Log events can be flagged as anomalies in the following situations:

- A log event with a pattern not previously seen in the log group.
- A significant variation to a known pattern.
- A new value for a dynamic token that normally has a discrete set of values.
- A large change in the number of occurrences of a value for a dynamic token.

All of the preceding items can be flagged as anomalies, but not all of them necessarily mean the application is performing poorly. For example, a higher-than-usual number of 200 success values might be flagged as an anomaly. In cases like this, consider suppressing such anomalies as not indicating a problem.

What about sensitive data that is masked?

The parts of log events that are masked as sensitive data are never scanned for anomalies. For more information about masking sensitive data, see "Help protect sensitive log data with masking".
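For programmatic setup, the CreateLogAnomalyDetector API can be called from an SDK. The sketch below only builds the request parameters and leaves the actual call commented out; the boto3 method name and parameter spellings (logGroupArnList, anomalyVisibilityTime, evaluationFrequency) are assumptions to verify against the current SDK reference, and the ARN and detector name are placeholders:

```python
# Hypothetical sketch: create a log anomaly detector with boto3.
# Parameter names below are assumptions based on the CreateLogAnomalyDetector
# API; check the boto3 CloudWatch Logs reference for your SDK version.
LOG_GROUP_ARN = "arn:aws:logs:us-east-1:123456789012:log-group:my-app-logs"

params = {
    "logGroupArnList": [LOG_GROUP_ARN],        # log group(s) the detector trains on
    "detectorName": "my-app-anomaly-detector",
    "anomalyVisibilityTime": 21,               # days anomalies stay visible (the default)
    "evaluationFrequency": "FIFTEEN_MIN",      # assumed enum value for scan frequency
}

# import boto3
# logs = boto3.client("logs")
# response = logs.create_log_anomaly_detector(**params)
# print(response["anomalyDetectorArn"])
print(params)
```

The visibility and frequency values mirror the behavior described above; after creation, anomalies can be listed with the ListAnomalies operation mentioned earlier.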
http://creativecommons.org | Homepage - Creative Commons

Explore CC:
- Global Network: join a global community working to strengthen the Commons
- Certificate: become an expert in creating and engaging with openly licensed materials
- Global Summit: attend our annual event, promoting the power of open licensing
- Chooser: get help choosing the appropriate license for your work
- Search Portal: find engines to search openly licensed material for creative and educational reuse
- Open Source: help us build products that maximize creativity and innovation

Better Sharing, Brighter Future

"Twenty Years of Creative Commons (in Sixty Seconds)" by Ryan Junell and Glenn Otis Brown for Creative Commons is licensed via CC BY 4.0 and includes adaptations of multiple open and public domain works. View full licensing and attribution information about all works included in the video on Flickr.

Creative Commons is an international nonprofit organization that empowers people to grow and sustain the thriving commons of shared knowledge and culture we need to address the world's most pressing challenges and create a brighter future for all.

The nonprofit behind the licenses and tools the world uses to share

For over 20 years, Creative Commons has supported a global movement built on a belief in the power of open access to knowledge and creativity. From Wikipedia to the Smithsonian, organizations and individuals rely on our work to share billions of historic images, scientific articles, cultural artifacts, educational resources, music, and more!

"Farmer and his brother making music" by Russell Lee, here cropped, is marked with CC PDM 1.0.
“ Flickr photowalk at the Creative Commons Global Summit 2019, Lisbon ” by Sebastiaan ter Burg , here cropped, is licensed via CC BY 2.0 . “ Novel Coronavirus SARS-CoV-2 ” by NIAID , here cropped, is licensed via CC BY 2.0 . “ Children kabuki theater in Nagahama (warrior Kumagai, 12 y.o.) ” by lensonjapan , here cropped, is licensed via CC BY 2.0 . Wikipedia 55+ million articles Every one of Wikipedia's 55 million plus articles are shared openly and freely using a CC license. The Met 492,000+ images All images of public-domain works in the Met's collection are openly available under Creative Commons Zero (CC0). Khan Academy 100,000+ lessons Many of the lessons found on Khan Academy are openly licensed under CC-BY-NC-SA. Latest News Building the Future in 2026 by Anna Tumadóttir About CC “ Input ” by Adam Pieniazek , modified by Creative Commons, is licensed via CC BY 2.0 . In 2026, Creative Commons will continue to ensure that technological change strengthens, not erodes, the commons and improves the acts of sharing and access that are part of our everyday lives. We do this by applying first principles, practical strategies, and lessons learned from decades of advancing the commons. Sharing of research, educational materials, heritage, and creative works are acts of generosity—these are the gifts people give to the commons. Access to these same shared resources enables collaboration, innovation, and understanding. Together, this is how we improve access to knowledge and build a more equitable future. 
What We Built Together in 2025 by Anna Tumadóttir About CC CC Signals: What We’ve Been Working On by Sarah Hinchliff Pearson Licenses & Tools Where CC Stands on Pay-to-Crawl by Creative Commons Policy , Sustaining the Commons Global Call to Action: Open Heritage Statement Now Open for Signature by Brigitte Vézina , Dee Harris Open Culture , Open Heritage more posts Images Attri­bution view "Kaleidoscope 2" by Sheila Sund is licensed under CC BY 2.0, remixed by Creative Commons licensed under CC BY 4.0 Distorted Forest Path by Lone Thomasky & Bits&Bäume, licensed with CC BY 4.0. "Distorted Sand Mine" by Lone Thomasky & Bits&Bäume, licensed under CC BY 4.0. "Watering Place at Marley" by Alfred Sisley, 1875, CC0, Art Institute of Chicago, remixed with "TAROCH balloon" by Creative Commons/Dee Harris, 2025, CC0. Creative Commons Contact Newsletter Privacy Policies Terms Contact Us Creative Commons PO Box 1866, Mountain View, CA 94042 info@creativecommons.org Instagram --> Bluesky Mastodon LinkedIn Subscribe to our Newsletter Support Our Work Our work relies on you! Help us keep the Internet free and open. Donate Now Except where otherwise noted , content on this site is licensed under a Creative Commons Attribution 4.0 International license . Icons by Font Awesome . | 2026-01-13T09:29:25 |
https://www.infoworld.com/cloud-computing/ | Cloud Computing | InfoWorld. News, how-tos, features, reviews, and videos. Related topics: Cloud Architecture, Cloud Management, Cloud Storage, Cloud-Native, Hybrid Cloud, IaaS, Managed Cloud Services, Multicloud, PaaS, Private Cloud, SaaS. Latest articles: "Which development platforms and tools should you learn now?" by Isaac Sacolick (Jan 13, 2026); "Why hybrid cloud is the future of enterprise platforms" by David Linthicum (Jan 13, 2026); "Oracle unveils Java development plans for 2026" by Paul Krill (Jan 12, 2026); "AI is causing developers to abandon Stack Overflow" by Mikael Markander (Jan 12, 2026); "Stack thinking: Why a single AI platform won't cut it" by Tom Popomaronis (Jan 12, 2026); "Postman snaps up Fern to reduce developer friction around API documentation and SDKs" by Anirban Ghoshal (Jan 12, 2026); "Why 'boring' VS Code keeps winning" by Matt Asay (Jan 12, 2026); "How to succeed with AI-powered, low-code and no-code development tools" by Bob Violino (Jan 12, 2026); "Visual Studio Code adds support for agent skills" by Paul Krill (Jan 9, 2026); "Snowflake: Latest news and insights" by Dan Muse (Jan 9, 2026); "Snowflake to acquire Observe to boost observability in AIops" by Anirban Ghoshal (Jan 9, 2026); "Python starts 2026 with a bang" by Serdar Yegulalp (Jan 9, 2026); "Microsoft open-sources XAML Studio" by Paul Krill (Jan 8, 2026); "Databricks says its Instructed Retriever offers better AI answers than RAG in the enterprise" by Anirban Ghoshal (Jan 8, 2026); "The hidden devops crisis that AI workloads are about to expose" by Joseph Morais (Jan 8, 2026); "AI-built Rue language pairs Rust memory safety with ease of use" by Paul Krill (Jan 7, 2026); "Microsoft acquires Osmos to ease data engineering bottlenecks in Fabric" by Anirban Ghoshal (Jan 7, 2026); "What the loom tells us about AI and coding" by Nick Hodges (Jan 7, 2026); "Generative UI: The AI agent is the front end" by Matthew Tyson (Jan 7, 2026); "AI won't replace human devs for at least 5 years" by Taryn Plumb (Jan 7, 2026); "Automated data poisoning proposed as a solution for AI theft threat" by Howard Solomon (Jan 7, 2026); "What drives your cloud security strategy?" by David Linthicum (Jan 6, 2026); "Ruby 4.0.0 introduces ZJIT compiler, Ruby Box isolation" by Paul Krill (Jan 6, 2026); "Open WebUI bug turns the 'free model' into an enterprise backdoor" by Shweta Sharma (Jan 6, 2026); "Generative AI and the future of databases" by Martin Heller (Jan 6, 2026). Video on demand: "How to generate C-like programs with Python" (Dec 16, 2025); "Zed Editor Review: The Rust-Powered IDE That Might Replace VS Code" (Dec 3, 2025); "Python vs. Kotlin" (Nov 13, 2025); "Hands-on with the new sampling profiler in Python 3.15" (Nov 6, 2025). © 2026 FoundryCo, Inc. All Rights Reserved. | 2026-01-13T09:29:25 |
https://vi-vn.facebook.com/login/?next=https%3A%2F%2Fl.facebook.com%2Fl.php%3Fu%3Dhttps%253A%252F%252Fwww.instagram.com%252F%26amp%253Bh%3DAT0WMRvnl7WlxQooJ04UhL3b9qUpdtPlmpa1O0gB6bIJM-T60aONZLzYzvGZlbyf6-hpzHtm4IvtCReDdDPRMse0eNOpWmpYf0LavXLTW8iAB7H9JF6jgkn7dL3LyhLtioeHbWE5w6T00ZkN | Facebook. Email or phone. Password. Forgot account? Create new account. You're temporarily blocked: it looks like you were misusing this feature by using it too fast. You've been temporarily blocked from using it. Meta © 2026 | 2026-01-13T09:29:25 |
https://www.linkedin.com/products/manageengine-it-operations-management-manageengine-oputils/?trk=products_details_guest_similar_products_section_similar_products_section_product_link_result-card_image-click | ManageEngine OpUtils | LinkedIn. IP Address Management (IPAM) Software by ManageEngine ITOM. About: OpUtils is an IP address and switch port management software that is geared towards helping engineers efficiently monitor, diagnose, and troubleshoot IT resources. OpUtils complements existing management tools by providing troubleshooting and real-time monitoring capabilities. It helps network engineers manage their switches and IP address space with ease. With a comprehensive set of over 20 tools, this switch port management tool helps with network monitoring tasks like detecting a rogue device intrusion, keeping an eye on bandwidth usage, monitoring the availability of critical devices, backing up Cisco configuration files, and more. This product is intended for: Network Administrator, Information Technology Administrator, Network Operations Specialist, Information Technology Operations Manager, Information Technology Network Administrator, Information Technology System Network Administrator, Information Technology Operations Analyst, Senior Network Administrator, Senior Network Analyst, Network Specialist. Product media: IP Address Dashboard (the central hub for IP address management; monitor IP address allocation across your IT infrastructure); IP Address Management (allocate and manage IP resources for optimal utilization and network availability); Network Toolset (over 30 network tools to monitor, inspect, and troubleshoot issues); Rogue Device Detection (identify and eliminate unauthorized devices on your network); Switch Port Mapping (gain visibility and control over network ports). Featured customers: Motorola Solutions, Electronic Data Systems, IBM, UPS. Similar products: Next-Gen IPAM, AX DHCP, Tidal LightMesh, Numerus. ManageEngine ITOM products: Applications Manager, Firewall Analyzer, NetFlow Analyzer, Network Configuration Manager, OpManager, OpManager MSP, OpManager Plus. LinkedIn © 2026 | 2026-01-13T09:29:25 |
https://docs.brightdata.com/api-reference/web-scraper-api/synchronous-requests#body-custom-output-fields | Synchronous Requests - Bright Data Docs. Web Scraper API > Synchronous Requests. POST /datasets/v3/scrape: scrape data and return it directly in the response. This endpoint allows users to fetch data efficiently and ensures seamless integration with their applications or workflows. cURL example: curl --request POST --url https://api.brightdata.com/datasets/v3/scrape --header 'Authorization: Bearer <token>' --header 'Content-Type: application/json' --data '{ "input": [ { "url": "www.linkedin.com/in/bulentakar" } ], "custom_output_fields": "url|about.updated_on" }'. How It Works: this synchronous API endpoint lets users send a scraping request and receive the results in real time directly in the response, at the point of request (such as a terminal or application), without the need for external storage or manual downloads. This approach streamlines data collection by eliminating additional steps for retrieving results. You can specify the desired output format with the format parameter; if no format is provided, the response defaults to JSON. Timeout Limit: this synchronous request is subject to a 1-minute timeout. If data retrieval exceeds this limit, the API returns an HTTP 202 response, indicating that the request is still being processed. In that case you receive a snapshot ID to monitor and retrieve the results asynchronously via the Monitor Snapshot and Download Snapshot endpoints. Example 202 response: { "snapshot_id": "s_xxx", "message": "Your request is still in progress and cannot be retrieved in this call. Use the provided Snapshot ID to track progress via the Monitor Snapshot endpoint and download it once ready via the Download Snapshot endpoint." }. Authorizations: Authorization (string, header, required): use your Bright Data API Key as a Bearer token in the Authorization header. Obtain your API Key from the Bright Data account settings at https://brightdata.com/cp/setting/users and include it as Authorization: Bearer YOUR_API_KEY. See https://docs.brightdata.com/api-reference/authentication. Query Parameters: dataset_id (string, required): dataset ID for which data collection is triggered. custom_output_fields (string): list of output columns, separated by | (e.g., url|about.updated_on); filters the response to include only the specified fields. include_errors (boolean): include an errors report with the results. format (enum<string>, default json): response format; available options: ndjson, json, csv. Body (application/json): input (object[], required): list of input items to scrape. custom_output_fields (string): as above. Response 200 (text/plain): "OK". | 2026-01-13T09:29:25 |
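The request shape and the 200-versus-202 contract described above can be sketched in Python. This is an illustrative helper, not an official client: the `TOKEN` key and `gd_example` dataset ID are placeholders, and only the endpoint URL, headers, query parameters, and the 202 snapshot_id behavior come from the documentation above.

```python
import json
from urllib.parse import urlencode

API_URL = "https://api.brightdata.com/datasets/v3/scrape"

def build_scrape_request(api_key, dataset_id, inputs,
                         custom_output_fields=None, fmt=None):
    """Assemble URL, headers, and body for a synchronous scrape call.
    Nothing is sent here; pass the parts to any HTTP client."""
    params = {"dataset_id": dataset_id}
    if fmt:
        params["format"] = fmt  # ndjson | json | csv
    headers = {"Authorization": f"Bearer {api_key}",
               "Content-Type": "application/json"}
    body = {"input": inputs}
    if custom_output_fields:
        # Pipe-separated output columns, e.g. "url|about.updated_on".
        body["custom_output_fields"] = custom_output_fields
    return f"{API_URL}?{urlencode(params)}", headers, json.dumps(body)

def interpret_response(status_code, payload):
    """200: results are in the payload. 202: the 1-minute timeout was hit;
    poll with the returned snapshot_id via the Monitor/Download endpoints."""
    if status_code == 200:
        return "done", payload
    if status_code == 202:
        return "pending", payload["snapshot_id"]
    raise RuntimeError(f"unexpected status {status_code}")

url, headers, body = build_scrape_request(
    "TOKEN", "gd_example",
    [{"url": "www.linkedin.com/in/bulentakar"}],
    custom_output_fields="url|about.updated_on")
```

With the `requests` library, the actual call would then be `requests.post(url, headers=headers, data=body)`, followed by `interpret_response(r.status_code, r.json())`.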
https://pt-br.facebook.com/login/?next=https%3A%2F%2Fl.facebook.com%2Fl.php%3Fu%3Dhttps%253A%252F%252Fwww.instagram.com%252F%26amp%253Bh%3DAT0WMRvnl7WlxQooJ04UhL3b9qUpdtPlmpa1O0gB6bIJM-T60aONZLzYzvGZlbyf6-hpzHtm4IvtCReDdDPRMse0eNOpWmpYf0LavXLTW8iAB7H9JF6jgkn7dL3LyhLtioeHbWE5w6T00ZkN | Facebook. Email or phone. Password. Forgot your account? Create new account. You're temporarily blocked: it looks like you were misusing this feature. We've temporarily blocked your ability to use it. Meta © 2026 | 2026-01-13T09:29:25 |
https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/LogsAnomalyDetection-Insights.html | Using anomaly detection in CloudWatch Logs Insights - Amazon CloudWatch Logs. Documentation > Amazon CloudWatch > User Guide. In addition to creating log anomaly detectors for continuous monitoring, you can also use the anomaly command in CloudWatch Logs Insights queries to identify unusual patterns in your log data on demand. This command extends the existing pattern functionality and uses machine learning to detect five types of anomalies, including pattern frequency changes, new patterns, and token variations. The anomaly command is particularly useful for: ad hoc analysis of historical log data to identify unusual patterns; investigating specific time periods for anomalous behavior; monitoring applications like Lambda functions for execution issues. For more information about using the anomaly command in your queries, see anomaly. This query-based anomaly detection complements the continuous anomaly detectors described in the following sections, giving you both real-time monitoring and on-demand analysis capabilities. | 2026-01-13T09:29:26 |
https://docs.aws.amazon.com/ja_jp/AmazonCloudWatch/latest/monitoring/CloudWatch-Agent-configure-related-telemetry.html | Configuring CloudWatch agent service and environment names for related entities - Amazon CloudWatch. Documentation > Amazon CloudWatch > User Guide. The CloudWatch agent can send metrics and logs with entity data to support the Explore related pane in the CloudWatch console. The service name or environment name can be set in the CloudWatch agent JSON configuration. Note: the agent configuration may be overridden. For more information about how the agent decides which data to send for related entities, see Using the CloudWatch agent with related telemetry. For metrics, the names can be configured at the agent, metrics, or plugin level. For logs, they can be configured at the agent, logs, or file level. The most specific configuration is always used. For example, if configuration exists at both the agent level and the metrics level, metrics use the metrics-level configuration and everything else (logs) uses the agent-level configuration. The following example shows different ways to configure the service name and environment name: { "agent": { "service.name": "agent-level-service", "deployment.environment": "agent-level-environment" }, "metrics": { "service.name": "metric-level-service", "deployment.environment": "metric-level-environment", "metrics_collected": { "statsd": { "service.name": "statsd-level-service", "deployment.environment": "statsd-level-environment" }, "collectd": { "service.name": "collectd-level-service", "deployment.environment": "collectd-level-environment" } } }, "logs": { "service.name": "log-level-service", "deployment.environment": "log-level-environment", "logs_collected": { "files": { "collect_list": [ { "file_path": "/opt/aws/amazon-cloudwatch-agent/logs/amazon-cloudwatch-agent.log", "log_group_name": "amazon-cloudwatch-agent.log", "log_stream_name": "amazon-cloudwatch-agent.log", "service.name": "file-level-service", "deployment.environment": "file-level-environment" } ] } } } } | 2026-01-13T09:29:26 |
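The "most specific configuration wins" rule above can be mirrored in a few lines. This is a minimal sketch of the documented precedence (plugin/file level over the metrics/logs section over the agent level); `resolve_setting` is a hypothetical helper, not part of the CloudWatch agent.

```python
def resolve_setting(key, agent_cfg, section_cfg=None, item_cfg=None):
    """Return the most specific value for `key` ("service.name" or
    "deployment.environment"): the plugin/file level wins over the
    metrics/logs section, which wins over the agent level.
    Hypothetical helper mirroring the documented precedence."""
    for cfg in (item_cfg, section_cfg, agent_cfg):
        if cfg and key in cfg:
            return cfg[key]
    return None

# Fragments of the example configuration above:
agent = {"service.name": "agent-level-service"}
metrics = {"service.name": "metric-level-service"}
statsd = {"service.name": "statsd-level-service"}
```

With the example configuration, statsd metrics would report statsd-level-service, metrics from a plugin with no setting of its own would fall back to metric-level-service, and anything without a closer setting would use the agent-level value.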
https://docs.brightdata.com/api-reference/web-scraper-api/social-media-apis/instagram#param-posts-to-not-include-1 | Instagram API Scrapers - Bright Data Docs Skip to main content Bright Data Docs home page English Search... ⌘ K Support Sign up Sign up Search... Navigation Social Media APIs Instagram API Scrapers Welcome Proxy Infrastructure Web Access APIs Data Feeds AI API Reference General Integrations Overview Authentication Terminology Postman collection Python SDK JavaScript SDK Products Unlocker API SERP API Marketplace Dataset API Web Scraper API POST Asynchronous Requests POST Synchronous Requests POST Crawl API Delivery APIs Management APIs Social Media APIs Overview Facebook Instagram LinkedIn TikTok Reddit Twitter Pinterest Quora Vimeo YouTube Scraper Studio API Scraping Shield Proxy Networks Proxy Manager Unlocker & SERP API Deep Lookup API (Beta) Administrative API Account Management API On this page Overview Profiles API Collect by URL Posts API Collect by URL Discover by URL Comments API Collect by URL Reels API Collect by URL Discover by URL Social Media APIs Instagram API Scrapers Copy page Copy page Overview The Instagram API Suite offers multiple types of APIs, each designed for specific data collection needs from Instagram. Below is an overview of how these APIs connect and interact, based on the available features: Profiles API This API allows users to collect profile details based on a single input: profile URL. Discovery functionality : N/A Interesting Columns : followers , post_count , post_hashtags , profile_name . Posts API This API allows users to collect multiple posts based on a single input URL (such as an Instagram reels URL, search URL or profile URL). Discovery functionality : - Direct URL of the Instagram reel - Direct URL of the search - Direct URL of the profile Interesting Columns : url , followers , hashtags , engagement_score_view . Comments API This API allows users to collect multiple comments from a post using its URL. 
Discovery functionality : N/A Interesting Columns : comment_user , comment , likes_number , replies_number . The suite of APIs is designed to offer flexibility for targeted data collection, where users can input specific URLs to gather detailed post and comment data, either in bulk or with precise filtering options. Profiles API Collect by URL This API allows users to collect detailed data about an Instagram profile by providing the profile URL. It provides a comprehensive overview of an Instagram profile, including business and engagement information, posts, and user details. Input Parameters URL string required The Instagram profile URL. Output Structure Includes comprehensive data points: Page/Profile Details : account , id , followers , posts_count , is_business_account , is_professional_account , is_verified , avg_engagement , profile_name , profile_url , profile_image_link , and more. For all data points, click here . Posts API Collect by URL This API enables users to collect detailed data from Instagram posts by providing a post URL. Input Parameters URL string required The Instagram post URL. Output Structure Includes comprehensive data points: Post Details : post_id , description , hashtags , date_posted , num_comments , likes , content_type , video_view_count , video_play_count , and more. For all data points, click here . Page/Profile Details : user_posted , followers , posts_count , profile_image_link , is_verified , profile_url . We provide a limited set of data points about the profile. Attachments and Media : photos , videos , thumbnail , display_url (link only, not the file itself), audio. Discover by URL This API allows users to discover recent Instagram posts from a public profile by providing the profile URL and specifying additional parameters. Input Parameters URL string required The Instagram profile URL. num_of_posts number The number of recent posts to collect. If omitted, there is no limit. 
posts_to_not_include array Array of post IDs to exclude from the results. start_date string Start date for filtering posts in MM-DD-YYYY format (should be earlier than end_date). end_date string End date for filtering posts in MM-DD-YYYY format (should be later than start_date). post_type string Specify the type of posts to collect (e.g., post, reel). Output Structure Includes comprehensive data points: Post Details: post_id , description , hashtags , date_posted , num_comments , likes , video_view_count , video_play_count , and more. For all data points, click here . Page/Profile Details: user_posted , followers , posts_count , profile_image_link , is_verified , profile_url , is_paid_partnership , partnership_details , user_posted_id Attachments and Media: photos , videos , thumbnail , audio , display_url , content_type , product_type , coauthor_producers , tagged_users . This API is designed to allow for filtering, exclusion of specific posts, and collecting posts by type (regular post or reel) within a defined time frame. It provides detailed post and profile information, making it ideal for data collection and analytics. Comments API Collect by URL This API allows users to collect the latest comments from a specific Instagram post by providing the post URL. This API retrieves the most recent 10 comments along with associated metadata. Input Parameters URL string required The Instagram post URL. Output Structure Includes comprehensive data points: Comment Details : comment_id , comment_user , comment_user_url , comment_date , comment , likes_number , replies_number , replies , hashtag_comment , tagged_users_in_comment , and more. For all data points, click here . User Details : user_name , user_id , user_url We provide a limited set of data points about the profile. Post Metadata : post_url , post_user , post_id . Reels API Collect by URL This API allows users to collect detailed data about Instagram reels from public profiles by providing the reel URL.
Input Parameters URL string required The Instagram reel URL. Output Structure Includes comprehensive data points: Reel Details : post_id , description , hashtags , date_posted , tagged_users , num_comments , likes , views , video_play_count , length , and more. For all data points, click here . Page/Profile Details : user_posted , followers , posts_count , profile_image_link , is_verified , profile_url . We provide a limited set of data points about the profile. Attachments and Media : video_url , thumbnail , audio_url . Discover by URL This API allows users to discover Instagram Reels videos from a profile URL or direct search URL. Input Parameters URL string required The Instagram profile or direct search URL. num_of_posts number The number of recent reels to collect. If omitted, there is no limit. posts_to_not_include array Array of post IDs to exclude from the results. start_date string Start date for filtering reels in MM-DD-YYYY format. end_date string End date for filtering reels in MM-DD-YYYY format (should be later than start_date ). Output Structure Includes comprehensive data points: Reel Details : post_id , description , hashtags , date_posted , num_comments , likes , views , video_play_count , top_comments , length , video_url , audio_url , content_id , and more. For all data points, click here . Profile Details : user_posted , followers , posts_count , following . Attachments and Media : video_url , thumbnail , audio_url (link only, not the file itself). This API provides detailed information about Instagram Reels, with filtering options by date range, exclusion of specific posts, and a limit on the number of reels collected. | 2026-01-13T09:29:26
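The Discover by URL parameters lend themselves to a small client-side validation helper. The sketch below is hypothetical (the actual trigger call to the scraper API is omitted); the field names — url, num_of_posts, posts_to_not_include, start_date, end_date, post_type — are taken from the parameter lists above, and the MM-DD-YYYY ordering constraint is checked before anything would be sent:

```python
from datetime import datetime

def build_discover_input(url, num_of_posts=None, start_date=None,
                         end_date=None, post_type=None,
                         posts_to_not_include=None):
    """Build a Discover-by-URL input payload, validating that dates are
    MM-DD-YYYY and that start_date is earlier than end_date."""
    fmt = "%m-%d-%Y"
    parsed = {}
    for name, value in (("start_date", start_date), ("end_date", end_date)):
        if value is not None:
            parsed[name] = datetime.strptime(value, fmt)  # raises on bad format
    if "start_date" in parsed and "end_date" in parsed:
        if parsed["start_date"] >= parsed["end_date"]:
            raise ValueError("start_date must be earlier than end_date")
    payload = {"url": url}
    if num_of_posts is not None:
        payload["num_of_posts"] = num_of_posts
    if posts_to_not_include:
        payload["posts_to_not_include"] = list(posts_to_not_include)
    if start_date:
        payload["start_date"] = start_date
    if end_date:
        payload["end_date"] = end_date
    if post_type:
        payload["post_type"] = post_type
    return payload

# Illustrative profile URL and values, not real data.
example = build_discover_input(
    "https://www.instagram.com/some_profile/",
    num_of_posts=50, start_date="01-01-2025", end_date="03-31-2025",
    post_type="reel")
print(example)
```

Omitted parameters are simply left out of the payload, matching the docs' note that `num_of_posts` is unlimited when not supplied.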
https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/CloudWatch-Agent-configure-related-telemetry.html | Configure CloudWatch agent service and environment names for related entities - Amazon CloudWatch Documentation Amazon CloudWatch User Guide Configure CloudWatch agent service and environment names for related entities The CloudWatch agent can send metrics and logs with entity data to support the Explore related pane in the CloudWatch console. The service name or environment name can be configured in the CloudWatch agent JSON configuration . Note The agent configuration may be overridden. For details about how the agent decides what data to send for related entities, see Using the CloudWatch agent with related telemetry . For metrics, it can be configured at the agent, metrics, or plugin level. For logs, it can be configured at the agent, logs, or file level. The most specific configuration is always used. For example, if the configuration exists at both the agent level and the metrics level, then metrics will use the metric configuration, and anything else (logs) will use the agent configuration. The following example shows different ways to configure the service name and environment name.
{ "agent": { "service.name": "agent-level-service", "deployment.environment": "agent-level-environment" }, "metrics": { "service.name": "metric-level-service", "deployment.environment": "metric-level-environment", "metrics_collected": { "statsd": { "service.name": "statsd-level-service", "deployment.environment": "statsd-level-environment" }, "collectd": { "service.name": "collectd-level-service", "deployment.environment": "collectd-level-environment" } } }, "logs": { "service.name": "log-level-service", "deployment.environment": "log-level-environment", "logs_collected": { "files": { "collect_list": [ { "file_path": "/opt/aws/amazon-cloudwatch-agent/logs/amazon-cloudwatch-agent.log", "log_group_name": "amazon-cloudwatch-agent.log", "log_stream_name": "amazon-cloudwatch-agent.log", "service.name": "file-level-service", "deployment.environment": "file-level-environment" } ] } } } } | 2026-01-13T09:29:26
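The "most specific configuration wins" rule described on this page can be made concrete with a short sketch. This is not the agent's actual code — just a minimal model of how a `service.name` lookup could fall back from the plugin level through the metrics level to the agent level:

```python
def resolve(config, *levels, key="service.name"):
    """Walk from the agent level down through progressively more specific
    levels; the deepest level that defines the key wins."""
    value = config.get("agent", {}).get(key)  # least specific default
    node = config
    for level in levels:
        node = node.get(level, {})
        if key in node:
            value = node[key]  # more specific level overrides
    return value

# Condensed version of the JSON configuration shown above.
config = {
    "agent": {"service.name": "agent-level-service"},
    "metrics": {
        "service.name": "metric-level-service",
        "metrics_collected": {
            "statsd": {"service.name": "statsd-level-service"},
        },
    },
    "logs": {},  # no service.name here -> falls back to the agent level
}

assert resolve(config, "metrics", "metrics_collected", "statsd") == "statsd-level-service"
assert resolve(config, "metrics") == "metric-level-service"
assert resolve(config, "logs") == "agent-level-service"
```

The `logs` case shows the fallback the page describes: with no log-level setting, everything else inherits the agent-level value.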
https://docs.aws.amazon.com/de_de/AmazonCloudWatch/latest/monitoring/CloudWatch-Agent-configure-related-telemetry.html | Configure CloudWatch agent service and environment names for related entities - Amazon CloudWatch Documentation Amazon CloudWatch User Guide This translation was machine-generated. In the event of a conflict or inconsistency between this translated version and the English version (including as a result of translation delays), the English version prevails. Configure CloudWatch agent service and environment names for related entities The CloudWatch agent can send metrics and logs with entity data to support the Explore related pane in the CloudWatch console. The service name or environment name can be configured in the CloudWatch agent JSON configuration. Note The agent configuration may be overridden. For details about how the agent decides what data to send for related entities, see Using the CloudWatch agent with related telemetry. Metrics can be configured at the agent, metric, or plugin level. Logs can be configured at the agent, log, or file level. The most specific configuration is always used. For example, if the configuration exists at both the agent level and the metric level, metrics use the metric configuration and everything else (logs) uses the agent configuration. The following example shows different ways to configure the service and environment name.
{ "agent": { "service.name": "agent-level-service", "deployment.environment": "agent-level-environment" }, "metrics": { "service.name": "metric-level-service", "deployment.environment": "metric-level-environment", "metrics_collected": { "statsd": { "service.name": "statsd-level-service", "deployment.environment": "statsd-level-environment" }, "collectd": { "service.name": "collectd-level-service", "deployment.environment": "collectd-level-environment" } } }, "logs": { "service.name": "log-level-service", "deployment.environment": "log-level-environment", "logs_collected": { "files": { "collect_list": [ { "file_path": "/opt/aws/amazon-cloudwatch-agent/logs/amazon-cloudwatch-agent.log", "log_group_name": "amazon-cloudwatch-agent.log", "log_stream_name": "amazon-cloudwatch-agent.log", "service.name": "file-level-service", "deployment.environment": "file-level-environment" } ] } } } } | 2026-01-13T09:29:26
https://docs.aws.amazon.com/it_it/AmazonCloudWatch/latest/monitoring/CloudWatch-Agent-configure-related-telemetry.html | Configure CloudWatch agent service and environment names for related entities - Amazon CloudWatch Documentation Amazon CloudWatch User Guide Translations are generated by machine translation. In the event of a conflict between the content of a translation and the original English version, the English version will prevail. Configure CloudWatch agent service and environment names for related entities The CloudWatch agent can send metrics and logs with entity data to support the Explore related pane in the CloudWatch console. The service name or environment name can be configured in the CloudWatch agent JSON configuration. Note The agent configuration may be overridden. For details about how the agent decides what data to send for related entities, see Using the CloudWatch agent with related telemetry. Metrics can be configured at the agent, metric, or plugin level. Logs can be configured at the agent, log, or file level. The most specific configuration is always used. For example, if the configuration exists at both the agent level and the metrics level, metrics will use the metric configuration and everything else (logs) will use the agent configuration. The following example shows different ways to configure the service name and environment name.
{ "agent": { "service.name": "agent-level-service", "deployment.environment": "agent-level-environment" }, "metrics": { "service.name": "metric-level-service", "deployment.environment": "metric-level-environment", "metrics_collected": { "statsd": { "service.name": "statsd-level-service", "deployment.environment": "statsd-level-environment" }, "collectd": { "service.name": "collectd-level-service", "deployment.environment": "collectd-level-environment" } } }, "logs": { "service.name": "log-level-service", "deployment.environment": "log-level-environment", "logs_collected": { "files": { "collect_list": [ { "file_path": "/opt/aws/amazon-cloudwatch-agent/logs/amazon-cloudwatch-agent.log", "log_group_name": "amazon-cloudwatch-agent.log", "log_stream_name": "amazon-cloudwatch-agent.log", "service.name": "file-level-service", "deployment.environment": "file-level-environment" } ] } } } } | 2026-01-13T09:29:26
https://ja-jp.facebook.com/login/?next=https%3A%2F%2Fl.facebook.com%2Fl.php%3Fu%3Dhttps%253A%252F%252Fwww.instagram.com%252F%26amp%253Bh%3DAT0WMRvnl7WlxQooJ04UhL3b9qUpdtPlmpa1O0gB6bIJM-T60aONZLzYzvGZlbyf6-hpzHtm4IvtCReDdDPRMse0eNOpWmpYf0LavXLTW8iAB7H9JF6jgkn7dL3LyhLtioeHbWE5w6T00ZkN | Facebook Email address or phone number Password Forgot account? Create new account Feature temporarily blocked You have been temporarily blocked from using this feature because you were using it too quickly. Back Meta © 2026 | 2026-01-13T09:29:26
https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/LogsAnomalyDetection-Insights.html | Using anomaly detection in CloudWatch Logs Insights - Amazon CloudWatch Logs Documentation Amazon CloudWatch User Guide Using anomaly detection in CloudWatch Logs Insights In addition to creating log anomaly detectors for continuous monitoring, you can also use the anomaly command in CloudWatch Logs Insights queries to identify unusual patterns in your log data on demand. This command extends the existing pattern functionality and uses machine learning to detect five types of anomalies, including pattern frequency changes, new patterns, and token variations. The anomaly command is particularly useful for: Ad-hoc analysis of historical log data to identify unusual patterns Investigating specific time periods for anomalous behavior Monitoring applications like Lambda functions for execution issues For more information about using the anomaly command in your queries, see anomaly . This query-based anomaly detection complements the continuous anomaly detectors described in the following sections, giving you both real-time monitoring and on-demand analysis capabilities. | 2026-01-13T09:29:26
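As a rough illustration of the on-demand usage this page describes, the sketch below assembles parameters for the CloudWatch Logs StartQuery API for a time-boxed query ending in the anomaly command. The query string and log group name are illustrative assumptions, not verified syntax — consult the linked anomaly command reference for the authoritative form:

```python
from datetime import datetime, timedelta, timezone

def build_query_params(log_group, hours_back=24):
    """Build a parameter dict for an on-demand Logs Insights query over
    the last `hours_back` hours of a log group."""
    now = datetime.now(timezone.utc)
    start = now - timedelta(hours=hours_back)
    return {
        "logGroupName": log_group,
        # StartQuery takes the time range as epoch seconds.
        "startTime": int(start.timestamp()),
        "endTime": int(now.timestamp()),
        # Assumed query: filter to errors, then surface anomalies.
        "queryString": "filter @message like /ERROR/ | anomaly",
    }

# Hypothetical Lambda log group, as in the page's Lambda example.
params = build_query_params("/aws/lambda/my-function")
print(params["queryString"])
```

These parameters would then be passed to a StartQuery call (e.g., via an AWS SDK), with results fetched separately once the query completes.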
https://git-scm.com/book/uk/v2/Git-%d0%bd%d0%b0-%d1%81%d0%b5%d1%80%d0%b2%d0%b5%d1%80%d1%96-%d0%9f%d1%96%d0%b4%d1%81%d1%83%d0%bc%d0%be%d0%ba | Git - Summary 2nd Edition 4.10 Git on the Server - Summary Summary You have several options for getting a remote Git repository up and running so that you can collaborate with others or share your work. Running your own server gives you full control and lets you configure your own firewall, but such a server usually requires a fair amount of your time to set up and maintain. If you place your data on a hosting provider's server, it is easy to set up and maintain; however, you will have to keep your code on someone else's server, and some organizations do not allow that. It should be fairly straightforward to determine which solution, or combination of solutions, is right for you or your organization. | 2026-01-13T09:29:26
https://docs.aws.amazon.com/pt_br/AmazonCloudWatch/latest/monitoring/Monitoring-Solutions.html | CloudWatch observability solutions - Amazon CloudWatch Documentation Amazon CloudWatch User Guide CloudWatch observability solutions CloudWatch observability solutions provide a catalog of readily available configurations to help you quickly set up monitoring for several AWS services and well-known workloads, such as Java Virtual Machine (JVM), Apache Kafka, Apache Tomcat, and NGINX. These solutions provide targeted guidance on essential monitoring tasks, including installing and configuring the CloudWatch agent, deploying predefined custom dashboards, and setting up metric alarms. The solutions are designed to help development and operations teams use AWS monitoring and observability tools more effectively. They include guidance on when to use specific observability features, such as detailed monitoring metrics for infrastructure, Container Insights for container monitoring, and Application Signals for application monitoring. By providing working examples and practical configurations, these solutions aim to simplify the initial setup process, letting you establish functional monitoring faster and configure it for your specific requirements. To get started with observability solutions, go to the observability solutions page in the CloudWatch console. For open-source solutions that work with Amazon Managed Grafana, see the Amazon Managed Grafana solutions. The solutions that require the CloudWatch agent are detailed below: Topics CloudWatch solution: JVM workload on Amazon EC2 CloudWatch solution: NGINX workload on Amazon EC2 CloudWatch solution: NVIDIA GPU workload on Amazon EC2 CloudWatch solution: Kafka workload on Amazon EC2 CloudWatch solution: Tomcat workload on Amazon EC2 CloudWatch solution: Amazon EC2 health How do the solution dashboards work? CloudWatch solution dashboards use search-based variables (drop-down menus) that allow dynamic exploration and visualization of different aspects of your workloads. By combining the flexibility of search-based variables with preconfigured metric widgets, the dashboard provides deep insights into your workloads, enabling proactive monitoring, troubleshooting, and optimization. This dynamic approach ensures that you can quickly adapt the dashboard to your specific monitoring needs without extensive customization or configuration. Do the solutions support cross-Region observability? CloudWatch solution dashboards display metrics from the Region in which they were created. However, solution dashboards do not display metrics from multiple Regions. If your use case requires viewing data from multiple Regions on a single dashboard, you must customize the dashboard JSON to add the Regions you want to view. To do this, use the region attribute of the metric format to query metrics from different Regions. For more information about modifying the dashboard JSON, see Metric Widget: Format for Each Metric in the Array. Are solution dashboards compatible with the cross-account, cross-Region CloudWatch console? When using CloudWatch cross-account observability, solution dashboards in the central monitoring account display metrics from source accounts in the same Region. To differentiate metrics for similar workloads across accounts, provide unique grouping-dimension values in the agent configurations. For example, assign distinct ClusterName values to Kafka brokers in different accounts for the Kafka workload, enabling precise cluster selection and metric visualization on the dashboard. Are solution dashboards compatible with CloudWatch cross-account observability? If you have enabled the cross-account feature using the cross-account, cross-Region CloudWatch console, you cannot use a solution dashboard created in the monitoring account to view metrics from source accounts. Instead, you must create dashboards in the respective source accounts. However, you can create the dashboard in the source account and view it from the central monitoring account by changing the account ID setting in the console. What are the limitations of a solution dashboard? Solution dashboards use search expressions to filter and analyze workload metrics. This enables dynamic visualizations based on the drop-down selections. These search expressions can return more than 500 time series, but each dashboard widget has a display limit of 500 time series. If a metric search on the solution dashboard results in more than 500 time series across all Amazon EC2 instances, the graph showing the top contributors may display inaccurate results. For more information about search expressions, see CloudWatch search expression syntax. CloudWatch displays information about the metrics on dashboards when you choose the i icon on a dashboard widget. However, this does not currently work for dashboard widgets that use search expressions. Because solution dashboards rely on search expressions, you cannot view the metric description on the dashboard. Can I customize the agent configuration or the dashboard provided by a solution? You can customize both the agent configuration and the dashboard. However, keep in mind that if you customize the agent configuration, you must update the corresponding dashboard; otherwise, it will display empty metric widgets. Also be aware that if CloudWatch releases a new version of a solution, you might need to redo your customizations when applying the latest version of the solution. How are the solutions versioned? Each solution provides the most up-to-date instructions and resources. We always recommend using the latest available version. Although the solutions themselves are not versioned, the associated artifacts (such as CloudFormation templates for dashboards and agent installations) are. You can identify the version of a previously deployed artifact by checking the description field of the CloudFormation template or the file name of the template you downloaded. To determine whether you are using the latest version, compare the deployed version with the version currently referenced in the solution documentation. | 2026-01-13T09:29:26
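The cross-Region dashboard customization described above — adding a region attribute in the metric-widget format — can be sketched as follows. The instance IDs are placeholders, and this builds only one widget of a dashboard body:

```python
import json

# One metric widget graphing the same metric from two Regions by
# attaching a per-metric {"region": ...} options object.
widget = {
    "type": "metric",
    "properties": {
        "title": "CPU across Regions",
        "view": "timeSeries",
        "metrics": [
            ["AWS/EC2", "CPUUtilization", "InstanceId", "i-aaaa1111",
             {"region": "us-east-1"}],
            ["AWS/EC2", "CPUUtilization", "InstanceId", "i-bbbb2222",
             {"region": "eu-west-1"}],
        ],
    },
}

# The dashboard body is the JSON document you would edit in the console
# or pass to PutDashboard.
print(json.dumps({"widgets": [widget]}, indent=2))
```

See the linked "Metric Widget: Format for Each Metric in the Array" reference for the full set of per-metric options.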
https://docs.aws.amazon.com/id_id/AmazonCloudWatch/latest/monitoring/ExploreRelated.html | Jelajahi telemetri terkait - Amazon CloudWatch Jelajahi telemetri terkait - Amazon CloudWatch Dokumentasi Amazon CloudWatch Panduan Pengguna Apa itu telemetri terkait? Cara mengakses Menavigasi telemetri terkait Menggunakan peta topologi Menemukan sumber daya tertentu Prasyarat Terjemahan disediakan oleh mesin penerjemah. Jika konten terjemahan yang diberikan bertentangan dengan versi bahasa Inggris aslinya, utamakan versi bahasa Inggris. Jelajahi telemetri terkait Sistem komputer dapat menghasilkan sejumlah besar telemetri, termasuk metrik dan log, dan sistem yang kompleks terlebih lagi. Saat melihat satu set telemetri tertentu, mungkin sulit untuk menemukan telemetri lain yang terkait dengan set awal Anda. Ini dapat mengambil pelatihan lanjutan untuk mendapatkan keterampilan yang dibutuhkan untuk menemukan masalah dan memecahkan masalah mereka. Karena sistemnya kompleks, memahami apa yang sedang terjadi dapat melibatkan melihat metrik dan log dari berbagai layanan dan sumber daya, yang membutuhkan peralihan konteks dan navigasi antar sistem. Fitur terkait Amazon CloudWatch Explore menawarkan akses ke hubungan AWS sumber daya, metrik terkait, dan log di seluruh konsol layanan, meningkatkan observabilitas dan efisiensi bagi operator dari semua tingkat keahlian. Saat melihat alarm atau anomali di CloudWatch dasbor, atau metrik AWS, pengguna dapat dengan cepat menemukan dan melihat metrik dan log untuk sumber daya terkait di sistem Anda. CloudWatch menyediakan visibilitas ke dalam metrik dan log yang terkait dengan sumber daya tertentu, dan panel terkait Jelajahi memperluasnya dengan memungkinkan Anda menghubungkan sumber daya infrastruktur dengan beban kerja Anda dengan semua telemetri terkait. Ini memberi Anda akses cepat ke informasi yang Anda butuhkan untuk memecahkan masalah terkait infrastruktur. 
You view the relationships between resources, and related telemetry, in the Explore related panel. The Explore related panel is accessed from CloudWatch or from other AWS consoles that display resources or telemetry. Note: Explore related is currently limited within accounts that are set up as monitoring accounts in CloudWatch cross-account observability. You must access Explore related from the source account where the resources were originally created and are managed. In the source account, you can navigate between connected resources and view related logs and metrics. The following topics cover the details of exploring related telemetry. Topics: What is related telemetry? How to access the Explore related panel. Navigating related telemetry. Using the topology map. Finding a specific resource. Permissions and prerequisites required to view and explore related telemetry. How does CloudWatch find related telemetry? AWS services that support related telemetry. How to add related information to custom telemetry sent to CloudWatch. What is related telemetry? Related telemetry is metric and log data from resources that are related to the current resource or service. Traditionally, you might view the metrics and logs associated with a single load balancer, or all telemetry associated with Amazon EC2. The Explore related feature in Amazon CloudWatch adds an interactive topology map. The map is a resource-centric view where you can find metrics and logs for a specific resource, and also see how that resource connects to other resources. For example, if you view telemetry for a load balancer in the Explore related panel, in addition to the metrics and logs associated with that load balancer, the map shows you the target groups for that load balancer. Selecting one of the target groups then shows the Amazon EC2 instances associated with that target group. 
At each step in this process, telemetry, including metrics and logs, for the selected resource is displayed, making it easy to quickly find the telemetry you are looking for, or to explore the telemetry in search of the cause of a particular problem. How to access the Explore related panel Within the CloudWatch console, there are several ways to access telemetry related to your current view. For example, if you are looking at a graph on a dashboard and want to see telemetry related to that graph or to an aspect of the graph, you can choose to explore related data directly from that graph. From many places in the console, you can choose the Explore related menu item, or choose the compass icon, to open the Explore related panel. You can access the exploration experience from entry points across the CloudWatch console (and other AWS consoles), including: Metrics navigation — When you choose Metrics and then All metrics from the left menu of the CloudWatch console, tiles for supported services or metric sources display a compass icon in the lower-right corner that opens related telemetry. Metric legends — When viewing any metric graph (in CloudWatch or another AWS console), hovering over the graph legend shows information about the data, as well as an Explore related button that opens related telemetry. Metric data points — When viewing any metric graph, hovering over a data point in the graph shows information about the metric, as well as a compass icon to bring up related telemetry. Metric search — When searching for metrics in CloudWatch, if you select the name of a metric that is found, you can choose Explore related from the menu that appears, which brings up related telemetry. Console toolbar — On many AWS console pages, the console toolbar (usually at the top right of the console) includes a CloudWatch service icon that brings up CloudWatch tools, including the Explore related panel. 
Depending on where you access the panel from, the default context of the panel displays appropriate filters when possible. Navigating related telemetry When you choose one of the entry points to the Explore related panel, it appears on the right side of the CloudWatch console. This panel gives you access to view and find telemetry related to entities in your system. An entity is a resource, such as an Amazon EC2 instance, or a service, such as an application you have built. You can work within this panel without interrupting your current workflow, because it opens to the side of your starting page. The following image shows the Explore related panel focused on a single Amazon EC2 instance and its related entities. The top of the Explore related panel is a visual topology map (map) of the current entity and other related entities. The currently selected entity sets the focus for the panel. There are two ways to select an entity: Topology map — The map is a visual view of the entity currently in focus. It also displays related entities, letting you navigate around the set of resources and services that relate to one another. Find other resources — You can use the Find other resources button to filter and search for an entity to use as the focus. The bottom of the panel shows automatic metric and log exploration for the current focus entity. By default, the focus is set to the entity matching the location from which you accessed the Explore related panel. For example, if you accessed it by choosing the compass icon associated with a metric from a specific Amazon EC2 instance, then the focus is set to that Amazon EC2 instance. When you select an AWS resource to focus on in the Explore related panel, you can navigate to the resource-specific console for the selected resource. 
For example, if you have selected an Amazon EC2 instance, you can choose the View in EC2 console link to open the Amazon EC2 console with that instance selected. When you set the focus, metrics and logs are automatically filtered to display telemetry related to your focus. Metrics — Each metric is displayed as a graph for the time period you have selected. Just like other CloudWatch dashboard graphs, you can hover over or select a graph to get more information about the metric graph, and to see options including viewing it in CloudWatch metrics. Choosing to view it in CloudWatch opens the metrics view with the same viewing context as the Explore related panel, including the resource and time range. Log patterns — CloudWatch analyzes the log groups related to the focus resource and shows common patterns in those logs. For more information about log patterns, see Pattern analysis in the Amazon CloudWatch Logs User Guide. You can choose Compare time ranges to select another time range and compare the logs in the two ranges. You can choose View in Logs Insights to analyze the logs in CloudWatch Logs Insights with the same options as your current view, including the resource, log groups, and time range. For more information, see Analyzing log data with CloudWatch Logs Insights in the Amazon CloudWatch Logs User Guide. Log groups — The log groups that contain the logs are displayed. You can select a log group and then take one of the following actions: Choose Start tailing in Live Tail to view a streaming list of new log events as they are ingested for the selected log group. The Live Tail session starts in the CloudWatch console. For more information about Live Tail, see Troubleshoot with CloudWatch Logs Live Tail in the Amazon CloudWatch Logs User Guide. 
Choose Query in Logs Insights to open Logs Insights with a query scoped to only that log group, applying your current context, including the resource and time range. Using the topology map The topology map (map) is a visual view of the current focus entity and its related resources or services. You can use this interactive visualization to see the relationships between different resources and services, and to explore the connections between components in your system. For example, if you are viewing a load balancer resource, the map displays the connected target group resources. Selecting a target group displays the related instances. The connectivity visualization helps operators understand and explore the relationships between the various resources and services in your system. You can drag the map, and zoom in and out, to see more related entities or to focus on fewer of them. When you select a related entity, such as a target group, the panel's focus shifts to display telemetry for that entity. The map updates to center on the selected target group, displaying its connections to other entities, such as the load balancer and any Amazon EC2 instances defined in that target group. As you navigate different entities on the map, the metrics and logs at the bottom of the panel update dynamically, giving you relevant telemetry for the newly selected resource. Finding a specific resource If a resource does not appear in the topology map, you can use the Find other resources feature to locate it. You can filter resources by tag or type, and then select the one you are looking for. After you find the resource to focus on, you are returned to the topology map with that resource selected, so you can explore its related entities and telemetry. 
Note There are many reasons why you might not see your resource in the topology map. For example: It is not related to the current focus entity. You don't have permission to access the entity or its related telemetry. The resource or service might not support related telemetry or entities. By using Find other resources, you can locate and visualize resources that might not be directly connected or visible in the current map. This ensures that you can access and analyze all of the relevant components of your infrastructure. To select a resource with Find other resources: Open the Explore related panel from one of the entry points in the CloudWatch console. Choose Find other resources. Choose the time frame for which you want to see logs or metrics. Choose Resource type, and then choose the resource type you want to focus on from the drop-down list, for example EC2 instance. Optionally, filter the set of resources by providing tags to filter on. You can do this by choosing the Filter by tags resource filter, or by choosing the label that says, for example, 5 tags found (the number depends on the tags in your system). This gives you a list of tags to choose from. After you select a tag, the list of resources is automatically filtered to only those associated with that tag. Optionally, select one or more specific resources from those found that match your filters. Choose Show on map to return to the topology map with your resource selected. Your Metrics and Logs lists are now filtered to only the logs and metrics associated with that resource type. You can choose the Metrics or Logs tab to see the type of telemetry you want to view. 
Permissions and prerequisites required to view and explore related telemetry To explore related telemetry, you must be ingesting entity information with the telemetry from your workloads, and you must have the correct permissions to view that data. Many services send entity information automatically. For workloads that use the CloudWatch agent, you must have at least version 1.300049.1 of the agent, and you must configure it correctly. For information about configuring the agent, see How to add related information to custom telemetry sent to CloudWatch. For workloads running on Amazon EKS, you must have at least version v2.3.1-eksbuild.1 of the Amazon CloudWatch Observability EKS add-on. For more information about this add-on, see Quick start with the Amazon CloudWatch Observability EKS add-on. To explore related telemetry, you must be signed in with certain permissions. Exploring related telemetry is a read-only activity, and requires at least read-only access to CloudWatch. The permissions required to view the associations between telemetry and entities are: logs:ListLogGroupsForEntity, logs:ListEntitiesForLogGroup, cloudwatch:ListEntitiesForMetric, and application-signals:ListObservedEntities. Any of the following AWS managed policies grants the CloudWatch permissions needed to access related telemetry in the CloudWatch console: CloudWatchFullAccessV2 — Provides full access to CloudWatch. CloudWatchReadOnlyAccess — Provides read-only access to CloudWatch. ReadOnlyAccess — Provides read-only access to AWS services and resources. In addition, you must have at least read-only access (Describe* and Get*) to any resources on the topology map so that CloudWatch can discover and display their relationships. For more details about using policies to control access, see Managing access using policies. 
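If you prefer not to use one of the managed policies above, the four entity-association permissions can be granted with a custom policy. The following is an illustrative minimal sketch, not an AWS-published policy; the "Resource": "*" scope and the Sid are assumptions, and you would still need baseline read-only CloudWatch access alongside it.

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "ViewTelemetryEntityAssociations",
      "Effect": "Allow",
      "Action": [
        "logs:ListLogGroupsForEntity",
        "logs:ListEntitiesForLogGroup",
        "cloudwatch:ListEntitiesForMetric",
        "application-signals:ListObservedEntities"
      ],
      "Resource": "*"
    }
  ]
}
```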
| 2026-01-13T09:29:26 |
https://docs.aws.amazon.com/it_it/AmazonCloudWatch/latest/monitoring/prerequisites.html | Prerequisites - Amazon CloudWatch Documentation Amazon CloudWatch User Guide IAM roles and users for the CloudWatch agent Network requirements Prerequisites Make sure the following prerequisites are met before you install the CloudWatch agent for the first time. IAM roles and users for the CloudWatch agent Access to AWS resources requires permissions. Create an IAM role, an IAM user, or both to grant the permissions that the CloudWatch agent needs to write metrics to CloudWatch. Create an IAM role for Amazon EC2 instances If you will run the CloudWatch agent on Amazon EC2 instances, create an IAM role with the necessary permissions. Sign in to the AWS Management Console and open the IAM console at https://console.aws.amazon.com/iam/. In the navigation pane, choose Roles, and then choose Create role. Make sure that AWS service is selected under Trusted entity type. For Use case, choose EC2 under Common use cases. Choose Next. In the list of policies, select the check box next to CloudWatchAgentServerPolicy. If necessary, use the search box to find the policy. Choose Next. For Role name, enter a name for the role, such as CloudWatchAgentServerRole. Optionally, provide a description. Then choose Create role. (Optional) If the agent will send logs to CloudWatch Logs and you want it to be able to set retention policies for those log groups, you must add the logs:PutRetentionPolicy permission to the role. 
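When you create the role through the console steps above, IAM attaches the EC2 trust relationship for you. If you instead create the role with the AWS CLI or infrastructure as code, you supply an equivalent trust policy yourself; the standard EC2 assume-role trust policy looks like the following (shown for reference, assuming the commercial AWS partition's service principal).

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "ec2.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}
```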
Creating an IAM user for on-premises servers If you will run the CloudWatch agent on on-premises servers, create an IAM user with the necessary permissions. Note This scenario requires IAM users with programmatic access and long-term credentials, which presents a security risk. To help mitigate this risk, we recommend that you provide these users with only the permissions they need to perform the task, and that you remove them when they are no longer needed. Sign in to the AWS Management Console and open the IAM console at https://console.aws.amazon.com/iam/. In the navigation pane, choose Users, and then choose Add users. Enter the name for the new user. Select Access key - Programmatic access, and then choose Next: Permissions. Choose Attach existing policies directly. In the list of policies, select the check box next to CloudWatchAgentServerPolicy. If necessary, use the search box to find the policy. Choose Next: Tags. Optionally, create tags for the new IAM user, and then choose Next: Review. Confirm that the correct policy is listed, and then choose Create user. Next to the name of the new user, choose Show. Copy the access key and secret key to a file so that you can use them when installing the agent. Choose Close. Attach an IAM role to an Amazon EC2 instance To enable the CloudWatch agent to send data from an Amazon EC2 instance, you must attach the IAM role that you created to the instance. For more information about attaching an IAM role to an instance, see Attaching an IAM Role to an Instance in the Amazon Elastic Compute Cloud User Guide. 
Allowing the CloudWatch agent to set the log retention policy You can configure the CloudWatch agent to set the retention policy for the log groups that it sends log events to. If you do this, you must grant the logs:PutRetentionPolicy permission to the IAM role or user that the agent uses. The agent uses an IAM role when running on Amazon EC2 instances, and uses an IAM user for on-premises servers. To grant the CloudWatch agent's IAM role permission to set log retention policies Sign in to the AWS Management Console and open the IAM console at https://console.aws.amazon.com/iam/. In the left navigation pane, choose Roles. In the search box, type the beginning of the name of the CloudWatch agent's IAM role. You chose this name when you created the role. It might be named CloudWatchAgentServerRole. When you see the role, choose its name. On the Permissions tab, choose Add permissions, Create inline policy. Choose the JSON tab and copy the following policy into the box, replacing the default JSON:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "logs:PutRetentionPolicy",
      "Resource": "*"
    }
  ]
}

Choose Review policy. In the Name field, enter CloudWatchAgentPutLogsRetention or a similar name, and then choose Create policy. To grant the CloudWatch agent's IAM user permission to set log retention policies Sign in to the AWS Management Console and open the IAM console at https://console.aws.amazon.com/iam/. In the left navigation pane, choose Users. In the search box, type the beginning of the name of the CloudWatch agent's IAM user. You chose this name when you created the user. 
When you see the user, choose its name. On the Permissions tab, choose Add inline policy. Choose the JSON tab and copy the following policy into the box, replacing the default JSON:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "logs:PutRetentionPolicy",
      "Resource": "*"
    }
  ]
}

Choose Review policy. In the Name field, enter CloudWatchAgentPutLogsRetention or a similar name, and then choose Create policy. Network requirements Note When your server is in a public subnet, make sure that it can reach an internet gateway. When your server is in a private subnet, access is provided through NAT gateways or a VPC endpoint. For more information about NAT gateways, see https://docs.aws.amazon.com/vpc/latest/userguide/vpc-nat-gateway.html. Your Amazon EC2 instances must have outbound internet access to send data to CloudWatch or CloudWatch Logs. For more information about how to configure internet access, see Internet Gateways in the Amazon VPC User Guide. Using VPC endpoints If you use a VPC and want to run the CloudWatch agent without public internet access, you can configure VPC endpoints for CloudWatch and CloudWatch Logs. The endpoints and ports to configure are as follows: If you use the agent to collect metrics, you must add the CloudWatch endpoints for the appropriate Regions to your allow list. These endpoints are listed in Amazon CloudWatch endpoints and quotas. If you use the agent to collect logs, you must add the CloudWatch Logs endpoints for the appropriate Regions to your allow list. These endpoints are listed in Amazon CloudWatch Logs endpoints and quotas. 
If you are using Systems Manager to install the agent, or Parameter Store to store the configuration file, you must add the Systems Manager endpoints for the appropriate Regions to your allow list. These endpoints are listed in AWS Systems Manager endpoints and quotas. | 2026-01-13T09:29:26 |
https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/Solution-NVIDIA-GPU-On-EC2.html | CloudWatch solution: NVIDIA GPU workload on Amazon EC2 - Amazon CloudWatch Documentation Amazon CloudWatch User Guide Requirements Benefits CloudWatch agent configuration for this solution Deploy the agent for your solution Create the NVIDIA GPU solution dashboard CloudWatch solution: NVIDIA GPU workload on Amazon EC2 This solution helps you configure out-of-the-box metric collection using CloudWatch agents for NVIDIA GPU workloads running on EC2 instances. It also helps you set up a pre-configured CloudWatch dashboard. For general information about all CloudWatch observability solutions, see CloudWatch observability solutions. Topics Requirements Benefits CloudWatch agent configuration for this solution Deploy the agent for your solution Create the NVIDIA GPU solution dashboard Requirements This solution applies under the following conditions: Compute: Amazon EC2 Supports up to 500 GPUs across all EC2 instances in a given AWS Region Latest version of the CloudWatch agent SSM Agent installed on the EC2 instance The EC2 instance must have an NVIDIA driver installed. NVIDIA drivers are pre-installed on some Amazon Machine Images (AMIs). Otherwise, you can manually install the driver. For more information, see Install NVIDIA drivers on Linux instances. Note AWS Systems Manager (SSM Agent) is pre-installed on some Amazon Machine Images (AMIs) provided by AWS and trusted third parties. If the agent isn't installed, you can install it manually using the procedure for your operating system type. 
Manually installing and uninstalling SSM Agent on EC2 instances for Linux Manually installing and uninstalling SSM Agent on EC2 instances for macOS Manually installing and uninstalling SSM Agent on EC2 instances for Windows Server Benefits The solution delivers NVIDIA monitoring, providing valuable insights for the following use cases: Analyze GPU and memory usage for performance bottlenecks or the need for additional resources. Monitor temperature and power draw to ensure GPUs operate within safe limits. Evaluate encoder performance for GPU video workloads. Verify PCIe connectivity for expected generation and width. Monitor GPU clock speeds to detect scaling and throttling issues. The key advantages of the solution are: Automates metric collection for NVIDIA GPUs using the CloudWatch agent configuration, eliminating manual instrumentation. Provides a pre-configured, consolidated CloudWatch dashboard for NVIDIA metrics. The dashboard automatically handles metrics from new NVIDIA EC2 instances configured using the solution, even if those metrics don't exist when you first create the dashboard. The following image is an example of the dashboard for this solution. Costs This solution creates and uses resources in your account. You are charged for standard usage, including the following: All metrics collected by the CloudWatch agent are charged as custom metrics. The number of metrics used by this solution depends on the number of EC2 hosts. Each EC2 host configured for the solution publishes a total of 17 metrics per GPU. One custom dashboard. API operations requested by the CloudWatch agent to publish the metrics. With the default configuration for this solution, the CloudWatch agent calls the PutMetricData API once every minute for each EC2 host. This means the PutMetricData API is called 30*24*60 = 43,200 times in a 30-day month for each EC2 host. For more information about CloudWatch pricing, see Amazon CloudWatch Pricing. 
The pricing calculator can help you estimate approximate monthly costs for using this solution. To use the pricing calculator to estimate your monthly solution costs Open the Amazon CloudWatch pricing calculator . For Choose a Region , select the Region where you would like to deploy the solution. In the Metrics section, for Number of metrics , enter 17 * average number of GPUs per EC2 host * number of EC2 instances configured for this solution . In the APIs section, for Number of API requests , enter 43200 * number of EC2 instances configured for this solution . By default, the CloudWatch agent performs one PutMetricData operation each minute for each EC2 host. In the Dashboards and Alarms section, for Number of Dashboards , enter 1 . You can see your monthly estimated costs at the bottom of the pricing calculator. CloudWatch agent configuration for this solution The CloudWatch agent is software that runs continuously and autonomously on your servers and in containerized environments. It collects metrics, logs, and traces from your infrastructure and applications and sends them to CloudWatch and X-Ray. For more information about the CloudWatch agent, see Collect metrics, logs, and traces using the CloudWatch agent . The agent configuration in this solution collects a set of metrics to help you get started monitoring and observing your NVIDIA GPU. The CloudWatch agent can be configured to collect more NVIDIA GPU metrics than the dashboard displays by default. For a list of all NVIDIA GPU metrics that you can collect, see Collect NVIDIA GPU metrics . Agent configuration for this solution The metrics collected by the agent are defined in the agent configuration. The solution provides agent configurations to collect the recommended metrics with suitable dimensions for the solution's dashboard. Use the following CloudWatch agent configuration on EC2 instances with NVIDIA GPUs. 
Configuration will be stored as a parameter in the Systems Manager Parameter Store, as detailed later in Step 2: Store the recommended CloudWatch agent configuration file in Systems Manager Parameter Store.

{
  "metrics": {
    "namespace": "CWAgent",
    "append_dimensions": {
      "InstanceId": "${aws:InstanceId}"
    },
    "metrics_collected": {
      "nvidia_gpu": {
        "measurement": [
          "utilization_gpu",
          "temperature_gpu",
          "power_draw",
          "utilization_memory",
          "fan_speed",
          "memory_total",
          "memory_used",
          "memory_free",
          "pcie_link_gen_current",
          "pcie_link_width_current",
          "encoder_stats_session_count",
          "encoder_stats_average_fps",
          "encoder_stats_average_latency",
          "clocks_current_graphics",
          "clocks_current_sm",
          "clocks_current_memory",
          "clocks_current_video"
        ],
        "metrics_collection_interval": 60
      }
    }
  },
  "force_flush_interval": 60
}

Deploy the agent for your solution There are several approaches for installing the CloudWatch agent, depending on the use case. We recommend using Systems Manager for this solution. It provides a console experience and makes it simpler to manage a fleet of managed servers within a single AWS account. The instructions in this section use Systems Manager and are intended for cases where the CloudWatch agent is not already running with existing configurations. You can check whether the CloudWatch agent is running by following the steps in Verify that the CloudWatch agent is running. If you are already running the CloudWatch agent on the EC2 hosts where the workload is deployed and you are managing agent configurations, you can skip the instructions in this section and follow your existing deployment mechanism to update the configuration. Be sure to merge the NVIDIA GPU agent configuration with your existing agent configuration, and then deploy the merged configuration. If you are using Systems Manager to store and manage the configuration for the CloudWatch agent, you can merge the configuration into the existing parameter value. 
For more information, see Managing CloudWatch agent configuration files.

Note: Using Systems Manager to deploy the following CloudWatch agent configuration will replace or overwrite any existing CloudWatch agent configuration on your EC2 instances. You can modify this configuration to suit your unique environment or use case. The metrics defined in the configuration are the minimum required for the dashboard provided by the solution.

The deployment process includes the following steps:

Step 1: Ensure that the target EC2 instances have the required IAM permissions.
Step 2: Store the recommended agent configuration file in the Systems Manager Parameter Store.
Step 3: Install the CloudWatch agent on one or more EC2 instances using a CloudFormation stack.
Step 4: Verify that the agent setup is configured properly.

Step 1: Ensure the target EC2 instances have the required IAM permissions

You must grant permission for Systems Manager to install and configure the CloudWatch agent. You must also grant permission for the CloudWatch agent to publish telemetry from your EC2 instances to CloudWatch. Make sure that the IAM role attached to the instance has the CloudWatchAgentServerPolicy and AmazonSSMManagedInstanceCore IAM policies attached. After the role is created, attach it to your EC2 instances. To attach a role to an EC2 instance, follow the steps in Attach an IAM role to an instance.

Step 2: Store the recommended CloudWatch agent configuration file in Systems Manager Parameter Store

Parameter Store simplifies installation of the CloudWatch agent on an EC2 instance by securely storing and managing configuration parameters, eliminating the need for hard-coded values. This enables a more secure and flexible deployment process, with centralized management and easier updates to configurations across multiple instances. Use the following steps to store the recommended CloudWatch agent configuration file as a parameter in Parameter Store.
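The guide notes that if the agent is already running, you should merge the NVIDIA GPU configuration into your existing one rather than overwrite it. A minimal recursive merge might look like the following. This is an illustrative sketch (the sample "existing" config is hypothetical), and you should review the merged result before deploying it; here, list-valued keys such as measurement are concatenated with duplicates removed rather than replaced.

```python
import json

def merge_configs(existing: dict, incoming: dict) -> dict:
    """Recursively merge two CloudWatch agent configs.

    Nested dicts are merged key by key; lists are concatenated with
    duplicates removed (order preserved); other values from `incoming` win.
    """
    merged = dict(existing)
    for key, value in incoming.items():
        if key in merged and isinstance(merged[key], dict) and isinstance(value, dict):
            merged[key] = merge_configs(merged[key], value)
        elif key in merged and isinstance(merged[key], list) and isinstance(value, list):
            merged[key] = merged[key] + [v for v in value if v not in merged[key]]
        else:
            merged[key] = value
    return merged

# Hypothetical existing config that already collects CPU metrics.
existing = {"metrics": {"namespace": "CWAgent",
                        "metrics_collected": {"cpu": {"measurement": ["usage_idle"]}}}}
nvidia = {"metrics": {"namespace": "CWAgent",
                      "metrics_collected": {"nvidia_gpu": {"measurement": ["utilization_gpu"]}}}}

merged = merge_configs(existing, nvidia)
print(json.dumps(merged, indent=2))
# Both "cpu" and "nvidia_gpu" now appear under metrics_collected.
```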
To create the CloudWatch agent configuration file as a parameter

1. Open the AWS Systems Manager console at https://console.aws.amazon.com/systems-manager/.
2. Verify that the selected Region on the console is the Region where the NVIDIA GPU workload is running.
3. From the navigation pane, choose Application Management, Parameter Store.
4. Follow these steps to create a new parameter for the configuration:
   a. Choose Create parameter.
   b. In the Name box, enter a name that you'll use to reference the CloudWatch agent configuration file in later steps. For example, AmazonCloudWatch-NVIDIA-GPU-Configuration.
   c. (Optional) In the Description box, type a description for the parameter.
   d. For Parameter tier, choose Standard.
   e. For Type, choose String.
   f. For Data type, choose text.
   g. In the Value box, paste the corresponding JSON block that was listed in Agent configuration for this solution.
   h. Choose Create parameter.

Step 3: Install the CloudWatch agent and apply the configuration using a CloudFormation template

You can use AWS CloudFormation to install the agent and configure it to use the CloudWatch agent configuration that you created in the previous steps.

To install and configure the CloudWatch agent for this solution

1. Open the CloudFormation Quick create stack wizard using this link: https://console.aws.amazon.com/cloudformation/home?#/stacks/quickcreate?templateURL=https://aws-observability-solutions-prod-us-east-1.s3.us-east-1.amazonaws.com/CloudWatchAgent/CFN/v1.0.0/cw-agent-installation-template-1.0.0.json.
2. Verify that the selected Region on the console is the Region where the NVIDIA GPU workload is running.
3. For Stack name, enter a name to identify this stack, such as CWAgentInstallationStack.
4. In the Parameters section, specify the following:
   - For CloudWatchAgentConfigSSM, enter the name of the Systems Manager parameter for the agent configuration that you created earlier, such as AmazonCloudWatch-NVIDIA-GPU-Configuration.
To select the target instances, you have two options:

- For InstanceIds, specify a comma-delimited list of instance IDs where you want to install the CloudWatch agent with this configuration. You can list a single instance or several instances.
- If you are deploying at scale, you can specify the TagKey and the corresponding TagValue to target all EC2 instances with this tag and value. If you specify a TagKey, you must specify a corresponding TagValue. (For an Auto Scaling group, specify aws:autoscaling:groupName for the TagKey and specify the Auto Scaling group name for the TagValue to deploy to all instances within the Auto Scaling group.)

Review the settings, then choose Create stack. If you want to edit the template file first to customize it, choose the Upload a template file option in the Create stack wizard to upload the edited template. For more information, see Creating a stack on the CloudFormation console.

Note: After this step is completed, this Systems Manager parameter is associated with the CloudWatch agents running on the targeted instances. This means that:

- If the Systems Manager parameter is deleted, the agent will stop.
- If the Systems Manager parameter is edited, the configuration changes will automatically apply to the agent at the scheduled frequency, which is 30 days by default.
- If you want to immediately apply changes to this Systems Manager parameter, you must run this step again.

For more information about associations, see Working with associations in Systems Manager.

Step 4: Verify that the agent setup is configured properly

You can verify whether the CloudWatch agent is installed by following the steps in Verify that the CloudWatch agent is running. If the CloudWatch agent is not installed and running, make sure you have set everything up correctly:

- Be sure you have attached a role with the correct permissions to the EC2 instance, as described in Step 1: Ensure the target EC2 instances have the required IAM permissions.
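The two targeting options translate into mutually exclusive stack parameters. The helper below is hypothetical, not part of the template; it only mirrors the parameter names described above (CloudWatchAgentConfigSSM, InstanceIds, TagKey, TagValue) to show how the choice between explicit instance IDs and tag-based targeting might be encoded.

```python
def build_stack_parameters(config_param, instance_ids=None, tag_key=None, tag_value=None):
    """Build a CloudFormation Parameters list for the quick-create stack.

    Target either an explicit list of instance IDs, or a TagKey/TagValue
    pair (e.g. aws:autoscaling:groupName plus the group name for an ASG).
    """
    if bool(instance_ids) == bool(tag_key):
        raise ValueError("specify exactly one of instance_ids or tag_key/tag_value")
    if tag_key and not tag_value:
        raise ValueError("a TagKey requires a corresponding TagValue")
    params = [{"ParameterKey": "CloudWatchAgentConfigSSM", "ParameterValue": config_param}]
    if instance_ids:
        # Comma-delimited list of one or more instance IDs
        params.append({"ParameterKey": "InstanceIds", "ParameterValue": ",".join(instance_ids)})
    else:
        params += [{"ParameterKey": "TagKey", "ParameterValue": tag_key},
                   {"ParameterKey": "TagValue", "ParameterValue": tag_value}]
    return params

# Target all instances in a hypothetical Auto Scaling group:
print(build_stack_parameters("AmazonCloudWatch-NVIDIA-GPU-Configuration",
                             tag_key="aws:autoscaling:groupName", tag_value="gpu-fleet"))
```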
- Be sure you have correctly configured the JSON for the Systems Manager parameter. Follow the steps in Troubleshooting installation of the CloudWatch agent with CloudFormation.

If everything is set up correctly, you should see the NVIDIA GPU metrics being published to CloudWatch. You can check the CloudWatch console to verify that they are being published.

To verify that NVIDIA GPU metrics are being published to CloudWatch

1. Open the CloudWatch console at https://console.aws.amazon.com/cloudwatch/.
2. Choose Metrics, All metrics.
3. Make sure you've selected the Region where you deployed the solution, and choose Custom namespaces, CWAgent.
4. Search for the metrics mentioned in Agent configuration for this solution, such as nvidia_smi_utilization_gpu. If you see results for these metrics, then the metrics are being published to CloudWatch.

Create the NVIDIA GPU solution dashboard

The dashboard provided by this solution presents NVIDIA GPU metrics by aggregating and presenting metrics across all instances. The dashboard shows a breakdown of the top contributors (top 10 per metric widget) for each metric. This helps you quickly identify outliers or instances that contribute significantly to the observed metrics. To create the dashboard, you can use the following options:

- Use the CloudWatch console to create the dashboard.
- Use the AWS CloudFormation console to deploy the dashboard.
- Download the AWS CloudFormation infrastructure as code and integrate it as part of your continuous integration (CI) automation.

By using the CloudWatch console to create the dashboard, you can preview it before actually creating it and being charged.

Note: The dashboard created with CloudFormation in this solution displays metrics from the Region where the solution is deployed. Be sure to create the CloudFormation stack in the Region where your NVIDIA GPU metrics are published.
If you've specified a custom namespace other than CWAgent in the CloudWatch agent configuration, you'll have to change the CloudFormation template for the dashboard to replace CWAgent with the customized namespace you are using.

To create the dashboard via the CloudWatch console

1. Open the CloudWatch console Create Dashboard page using this link: https://console.aws.amazon.com/cloudwatch/home?#dashboards?dashboardTemplate=NvidiaGpuOnEc2&referrer=os-catalog.
2. Verify that the selected Region on the console is the Region where the NVIDIA GPU workload is running.
3. Enter the name of the dashboard, then choose Create Dashboard. To easily differentiate this dashboard from similar dashboards in other Regions, we recommend including the Region name in the dashboard name, such as NVIDIA-GPU-Dashboard-us-east-1.
4. Preview the dashboard and choose Save to create it.

To create the dashboard via CloudFormation

1. Open the CloudFormation Quick create stack wizard using this link: https://console.aws.amazon.com/cloudformation/home?#/stacks/quickcreate?templateURL=https://aws-observability-solutions-prod-us-east-1.s3.us-east-1.amazonaws.com/NVIDIA_GPU_EC2/CloudWatch/CFN/v1.0.0/dashboard-template-1.0.0.json.
2. Verify that the selected Region on the console is the Region where the NVIDIA GPU workload is running.
3. For Stack name, enter a name to identify this stack, such as NVIDIA-GPU-DashboardStack.
4. In the Parameters section, specify the name of the dashboard under the DashboardName parameter. To easily differentiate this dashboard from similar dashboards in other Regions, we recommend including the Region name in the dashboard name, such as NVIDIA-GPU-Dashboard-us-east-1.
5. Acknowledge access capabilities for transforms under Capabilities and transforms. Note that CloudFormation doesn't add any IAM resources.
6. Review the settings, then choose Create stack.
After the stack status is CREATE_COMPLETE, choose the Resources tab under the created stack and then choose the link under Physical ID to go to the dashboard. You can also access the dashboard in the CloudWatch console by choosing Dashboards in the left navigation pane of the console and finding the dashboard name under Custom Dashboards. If you want to edit the template file to customize it for any purpose, you can use the Upload a template file option in the Create stack wizard to upload the edited template. For more information, see Creating a stack on the CloudFormation console. You can use this link to download the template: https://aws-observability-solutions-prod-us-east-1.s3.us-east-1.amazonaws.com/NVIDIA_GPU_EC2/CloudWatch/CFN/v1.0.0/dashboard-template-1.0.0.json.

Get started with the NVIDIA GPU dashboard

Here are a few tasks that you can try out with the new NVIDIA GPU dashboard. These tasks let you validate that the dashboard is working correctly and give you some hands-on experience using it to monitor your NVIDIA GPUs. As you try them out, you'll get familiar with navigating the dashboard and interpreting the visualized metrics.

Review GPU utilization

From the Utilization section, find the GPU Utilization and Memory Utilization widgets. These show the percentage of time the GPU is being actively used for computations and the percentage of global memory being read or written, respectively. High utilization could indicate potential performance bottlenecks or the need for additional GPU resources.

Analyze GPU memory usage

In the Memory section, find the Total Memory, Used Memory, and Free Memory widgets. These provide insight into the overall memory capacity of the GPUs and how much memory is currently consumed or available. Memory pressure can lead to performance issues or out-of-memory errors, so it's important to monitor these metrics and ensure sufficient memory is available for your workloads.
Monitor temperature and power draw

In the Temperature / Power section, find the GPU Temperature and Power Draw widgets. These metrics are essential for ensuring that your GPUs are operating within safe thermal and power limits.

Identify encoder performance

In the Encoder section, find the Encoder Session Count, Average FPS, and Average Latency widgets. These metrics are relevant if you're running video encoding workloads on your GPUs. Monitor them to ensure that your encoders are performing optimally and to identify potential bottlenecks or performance issues.

Check PCIe link status

In the PCIe section, find the PCIe Link Generation and PCIe Link Width widgets. These metrics provide information about the PCIe link connecting the GPU to the host system. Ensure that the link is operating at the expected generation and width to avoid performance limitations due to PCIe bottlenecks.

Review GPU clocks

In the Clock section, find the Graphics Clock, SM Clock, Memory Clock, and Video Clock widgets. These metrics show the current operating frequencies of various GPU components. Monitoring these clocks can help identify issues with GPU clock scaling or frequency throttling, which could impact performance. | 2026-01-13T09:29:26 |
https://adnanthekhan.com/2024/07/30/blackhat-2024-and-def-con-32-preview/ | BlackHat 2024 and DEF CON 32 Preview | Adnan Khan - Security Research

BlackHat 2024 and DEF CON 32 Preview
July 30, 2024 · 4 min read · adnanthekhan · bugbounty, github, githubactions, security

Overview

In just over a week from now, I'll be speaking at Black Hat 2024 and DEF CON 32 along with my co-presenter John Stawinski. Our talks will focus on attacks against self-hosted runners on public repositories, illustrated by real-world case studies involving companies you've definitely heard of. Our research campaign leading to these talks exceeded every expectation that I had when we started it. One of the bug bounties was for a whopping $100,000!

The Journey

This research has been quite a journey for me, personally and professionally. If you had asked me two years ago whether it was possible for an average guy that no one knew about to lead a two-man nights-and-weekends research campaign that touched some of the largest companies in the world, I'd say "I don't know a world where that can happen, that's the stuff of fantasy." The beauty of offensive security is that it can reward those who always ask "What if?" The chain of events that led to Black Hat 2024 and DEF CON 32 started with me revisiting previous research that had fizzled, and asking "What if I fixed a typo?" That led to One Supply Chain Attack to Rule Them All – Poisoning GitHub's Runner Images. After that, I asked John if he wanted to collaborate outside of work to rake in some bounties - we signed a bounty-sharing contract and the rest was history. Of course, there is far more to it than that, but that's why we've lined up two information-packed talks where we will share it all!
The Talks

John and I will be presenting two distinct talks at Black Hat 2024 and DEF CON. Each will cover case studies involving real companies. We have already blogged about some of them, but one case study will be a surprise - the details are known only by a handful of people involved in the Coordinated Vulnerability Disclosure and approval process. Those who attend our Black Hat talk in person are in for a treat.

As the lead researcher for this effort, I want to emphasize and make the following clear: the research we are about to present was conducted entirely in our capacity as independent security researchers. The case studies we are about to present, along with the views expressed in the presentations, are our own.

Self-Hosted GitHub CI/CD Runners: Continuous Integration, Continuous Destruction

Our first presentation will serve as an overview of the risks of self-hosted runners and how impactful misconfigured self-hosted runners can be. We'll drive this point home with case studies, and I'll be sharing a new case study that has not been publicly discussed outside of my disclosure process with the featured company. You'll leave the talk with actionable changes that organizations using self-hosted runners can make, and plenty of evidence to convince stakeholders why it is critical to deploy self-hosted runners securely.

When: Wednesday, August 7th, 1:30 PM-2:10 PM
Where: South Seas AB, Level 3

Black Hat 2024 will also be the official version 1.0 launch date of Gato-X. The tool is a fork of Gato under the Apache 2.0 license and improves upon the original tool in every way. Gato-X automates the attacks we will showcase, and contains improvements to scanning speed, coverage, and user experience. As an added bonus, it also includes a scanner for GitHub Actions Injection and Pwn Requests - something the original tool does not check for, and a vulnerability class that can sometimes be chained into self-hosted runner takeover for extreme impact.
Grand Theft Actions

For Grand Theft Actions, we're going to dive deep. We'll present an in-depth walkthrough of one of our most impactful submissions, including some as-it-happened, never-before-seen video recorded during the original attack. After that, we'll walk through an arsenal of post-exploitation techniques that you can use to obtain maximum impact after taking over a self-hosted runner.

When: Saturday, August 10th, 12:00 PM
Where: Las Vegas Convention Center - L1 - HW1-11-04

Hope to See You There!

If we've connected online, or you've read our research or received a report from me or John, or you just have questions, feel free to say hi in person!

Tags: #bugbounty #devops #github #github-actions

© 2026 Adnan Khan. All rights reserved. | 2026-01-13T09:29:26 |
https://docs.aws.amazon.com/it_it/AmazonCloudWatch/latest/monitoring/CloudWatch-Application-Signals-Enable-LambdaMain.html | Enabling Application Signals on Lambda - Amazon CloudWatch

Enabling Application Signals on Lambda

You can enable Application Signals for your Lambda functions. Application Signals automatically instruments Lambda functions using enhanced AWS Distro for OpenTelemetry (ADOT) libraries, delivered through a Lambda layer. This AWS Lambda Layer for OpenTelemetry packages and distributes the libraries required for automatic instrumentation with Application Signals. Besides supporting Application Signals, this Lambda layer is also a component of the OpenTelemetry Lambda support and provides tracing capabilities.

You can further improve Lambda observability by using Transaction Search, which lets you capture trace spans for Lambda function invocations without sampling. This capability collects spans for your functions regardless of the sampled flag in trace context propagation.
This ensures there is no additional impact on downstream dependent services. By enabling Transaction Search on Lambda, you get complete visibility into your functions' performance and can troubleshoot issues that occur only rarely. To get started, see Transaction Search.

Topics
- Getting started
- Use the CloudWatch Application Signals console
- Use the Lambda console
- Enable Application Signals on Lambda using the AWS CDK
- Enable Application Signals on Lambda using the Model Context Protocol (MCP)
- (Optional) Monitor application health
- Manually enable Application Signals
- Manually disable Application Signals
- Configure Application Signals
- AWS Lambda Layer for OpenTelemetry ARNs
- Deploy Lambda functions using an Amazon ECR container

Getting started

There are three methods to enable Application Signals for Lambda functions. After you enable Application Signals for a Lambda function, it takes a few minutes before that function's telemetry appears in the Application Signals console.

- Use the CloudWatch Application Signals console
- Use the Lambda console
- Manually add the AWS Lambda Layer for OpenTelemetry to the Lambda function's runtime

Each of these methods adds the AWS Lambda Layer for OpenTelemetry to your function.

Use the CloudWatch Application Signals console

Follow these steps to use the Application Signals console to enable Application Signals for a Lambda function:

1. Open the CloudWatch console at https://console.aws.amazon.com/cloudwatch/.
2. In the navigation pane, choose Application Signals, Services.
3. In the Services list area, choose Enable Application Signals.
4. Choose the Lambda heading.
5. Select each function that you want to enable for Application Signals, then choose Done.
Use the Lambda console

Follow these steps to use the Lambda console to enable Application Signals for a Lambda function:

1. Open the AWS Lambda console at https://console.aws.amazon.com/lambda/.
2. In the navigation pane, choose Functions, then choose the name of the function that you want to enable.
3. Choose the Configuration tab, then Monitoring and operations tools.
4. Choose Edit.
5. In the CloudWatch Application Signals and X-Ray section, select both Automatically collect application traces and standard application metrics with Application Signals and Automatically collect Lambda service traces for end-to-end visibility with X-Ray.
6. Choose Save.

Enable Application Signals on Lambda using the AWS CDK

If you haven't yet enabled Application Signals in this account, you must grant Application Signals the permissions it needs to discover your services. For more information, see Enabling Application Signals in an account.

Enable Application Signals for your applications:

import { aws_applicationsignals as applicationsignals } from 'aws-cdk-lib';

const cfnDiscovery = new applicationsignals.CfnDiscovery(this, 'ApplicationSignalsServiceRole', {});

The Discovery CloudFormation resource grants Application Signals the following permissions:

- xray:GetServiceGraph
- logs:StartQuery
- logs:GetQueryResults
- cloudwatch:GetMetricData
- cloudwatch:ListMetrics
- tag:GetResources

For more information about this role, see Service-linked role permissions for CloudWatch Application Signals.

Add the CloudWatchLambdaApplicationSignalsExecutionRolePolicy IAM policy to the Lambda function.
const fn = new Function(this, 'DemoFunction', {
  code: Code.fromAsset('$YOUR_LAMBDA.zip'),
  runtime: Runtime.PYTHON_3_12,
  handler: '$YOUR_HANDLER'
});
fn.role?.addManagedPolicy(ManagedPolicy.fromAwsManagedPolicyName('CloudWatchLambdaApplicationSignalsExecutionRolePolicy'));

Replace $AWS_LAMBDA_LAYER_FOR_OTEL_ARN with the current AWS Lambda Layer for OpenTelemetry ARN for your Region.

fn.addLayers(LayerVersion.fromLayerVersionArn(
  this,
  'AwsLambdaLayerForOtel',
  '$AWS_LAMBDA_LAYER_FOR_OTEL_ARN'
));
fn.addEnvironment("AWS_LAMBDA_EXEC_WRAPPER", "/opt/otel-instrument");

Enable Application Signals on Lambda using the Model Context Protocol (MCP)

You can use the CloudWatch Application Signals Model Context Protocol (MCP) server to enable Application Signals on your Lambda functions through conversational AI interactions. This provides a natural-language interface for configuring Application Signals monitoring. The MCP server automates the enablement process by understanding your requirements and generating the appropriate configuration. Instead of manually following console steps or writing CDK code, you can simply describe what you want to enable.

Prerequisites

Before using the MCP server to enable Application Signals, make sure you have:

- A development environment that supports MCP (such as Kiro, Claude Desktop, VS Code with MCP extensions, or other MCP-compatible tools)
- The CloudWatch Application Signals MCP server configured in your IDE. For detailed setup instructions, see the CloudWatch Application Signals MCP server documentation.

Using the MCP server

After configuring the CloudWatch Application Signals MCP server in your IDE, you can request enablement guidance using natural-language prompts.
Although the coding assistant can infer context from your project structure, providing specific details in your prompts helps ensure more accurate and relevant guidance. Include information such as the Lambda function's programming language, the function name, and the absolute paths to the Lambda function code and the infrastructure code.

Best-practice prompts (specific and complete):

"Enable Application Signals for my Python Lambda function. My function code is in /home/user/order-processor/lambda and IaC is in /home/user/order-processor/terraform"

"I want to add observability to my Node.js Lambda function 'checkout-handler'. The function code is at /Users/dev/checkout-function and the CDK infrastructure is at /Users/dev/checkout-function/cdk"

"Help me instrument my Java Lambda function with Application Signals. Function directory: /opt/apps/payment-lambda CDK infrastructure: /opt/apps/payment-lambda/cdk"

Less effective prompts:

"Enable monitoring for my Lambda" → Missing: language, paths

"Enable Application Signals. My code is in ./src and IaC is in ./infrastructure" → Problem: relative paths instead of absolute paths

"Enable Application Signals for my Lambda at /home/user/myfunction" → Missing: programming language

Quick template:

"Enable Application Signals for my [LANGUAGE] Lambda function.
Function code: [ABSOLUTE_PATH_TO_FUNCTION] IaC code: [ABSOLUTE_PATH_TO_IAC]"

Benefits of using the MCP server

Using the CloudWatch Application Signals MCP server offers several benefits:

- Natural-language interface: describe what you want to enable without memorizing commands or configuration syntax
- Context-aware guidance: the MCP server understands your specific environment and provides tailored recommendations
- Fewer errors: automated configuration generation minimizes manual typing mistakes
- Faster setup: move from intent to implementation more quickly
- Learning tool: review the generated configurations to learn how Application Signals works

For more information about configuring and using the CloudWatch Application Signals MCP server, see the MCP server documentation.

(Optional) Monitor application health

After enabling Application Signals on Lambda, you can monitor the health of your applications. For more information, see Monitor the operational health of your applications with Application Signals.

Manually enable Application Signals

Follow these steps to manually enable Application Signals for a Lambda function:

1. Add the AWS Lambda Layer for OpenTelemetry to your Lambda runtime. To find the layer ARN for your Region, see ADOT Lambda Layer ARNs.
2. Add the environment variable AWS_LAMBDA_EXEC_WRAPPER=/opt/otel-instrument.
3. Add the LAMBDA_APPLICATION_SIGNALS_REMOTE_ENVIRONMENT environment variable to configure custom Lambda environments. By default, Lambda environments are set to lambda:default.
4. Attach the AWS managed IAM policy CloudWatchLambdaApplicationSignalsExecutionRolePolicy to the Lambda execution role.
5. (Optional) We recommend enabling Lambda active tracing for a better tracing experience.
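The manual steps amount to attaching one layer, one managed policy, and a couple of environment variables. The sketch below just assembles those settings into a dict. It is illustrative only: the layer ARN is a placeholder to be replaced with the real ADOT layer ARN for your Region and runtime, `application_signals_settings` is a hypothetical helper (not an AWS API), and the IAM policy attachment is a separate role change that a function-configuration update cannot perform.

```python
# Hedged sketch: the settings the manual steps above add to a function.
# LAYER_ARN is a placeholder -- look up the real ADOT layer ARN for your
# Region and runtime in the ADOT Lambda Layer ARNs list.
LAYER_ARN = "arn:aws:lambda:REGION:ACCOUNT:layer:PLACEHOLDER:VERSION"

def application_signals_settings(custom_environment=None):
    """Collect the layer and env-var settings from the manual steps."""
    env = {"AWS_LAMBDA_EXEC_WRAPPER": "/opt/otel-instrument"}
    if custom_environment:  # defaults to lambda:default when unset
        env["LAMBDA_APPLICATION_SIGNALS_REMOTE_ENVIRONMENT"] = custom_environment
    return {
        "Layers": [LAYER_ARN],
        "Environment": {"Variables": env},
        # Also attach CloudWatchLambdaApplicationSignalsExecutionRolePolicy
        # to the function's execution role (an IAM change, not shown here).
    }

print(application_signals_settings("lambda:prod"))
```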
For more information, see Visualize Lambda function invocations using AWS X-Ray.

Manually disable Application Signals

To manually disable Application Signals for a Lambda function, remove the AWS Lambda Layer for OpenTelemetry from the Lambda runtime and remove the AWS_LAMBDA_EXEC_WRAPPER=/opt/otel-instrument environment variable.

Configure Application Signals

You can use this section to configure Application Signals on Lambda.

Group multiple Lambda functions into a single service

The OTEL_SERVICE_NAME environment variable sets the service name, which appears as the application's service name in the Application Signals dashboards. You can assign the same service name to multiple Lambda functions, and they will be merged into a single service in Application Signals. If you don't provide a value for this key, the Lambda function name is used by default.

Sampling

By default, the trace sampling strategy is parent-based. You can change the sampling strategy by setting the OTEL_TRACES_SAMPLER environment variables. For example, to set the trace sampling rate to 30%:

OTEL_TRACES_SAMPLER=traceidratio
OTEL_TRACES_SAMPLER_ARG=0.3

For more information, see Specifying OpenTelemetry environment variables.

Enable all library instrumentation

To reduce Lambda cold starts, only AWS SDK and HTTP instrumentation is enabled by default for Python, Node, and Java. You can set environment variables to enable instrumentation for other libraries used in your Lambda function.
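To see what a trace-ID-ratio sampler means in practice, the sketch below imitates the idea: a deterministic keep/drop decision derived from the trace ID, so roughly the configured fraction of traces is kept and every service seeing the same trace ID decides the same way. This is an illustrative approximation, not the ADOT sampler's actual algorithm.

```python
import random

# Hedged sketch of trace-ID-ratio sampling: keep a trace when the numeric
# value of the low bits of its hex ID falls below ratio * 2**32.
def sample(trace_id: str, ratio: float) -> bool:
    bucket = int(trace_id[-8:], 16)  # low 32 bits of the 128-bit trace ID
    return bucket < ratio * 0x100000000

# Simulate many random 128-bit trace IDs at a 30% ratio.
random.seed(0)
ids = [f"{random.getrandbits(128):032x}" for _ in range(10_000)]
kept = sum(sample(t, 0.3) for t in ids)
print(f"kept {kept}/10000 traces (~30% expected)")
```

Because the decision is a pure function of the trace ID, it is consistent across every component that propagates the same trace context.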
Python: OTEL_PYTHON_DISABLED_INSTRUMENTATIONS=none
Node: OTEL_NODE_DISABLED_INSTRUMENTATIONS=none
Java: OTEL_INSTRUMENTATION_COMMON_DEFAULT_ENABLED=true

AWS Lambda Layer for OpenTelemetry ARNs

For the complete list of AWS Lambda Layer for OpenTelemetry ARNs by Region and runtime, see ADOT Lambda Layer ARNs in the AWS Distro for OpenTelemetry documentation. The layer is available for the Python, Node.js, .NET, and Java runtimes.

Deploy Lambda functions using an Amazon ECR container

Lambda functions deployed as container images don't support Lambda layers in the traditional way. When you use container images, you can't attach a layer as you would with other Lambda deployment methods. Instead, you must manually embed the layer contents into the container image during the build process.

Java

This section shows how to integrate the AWS Lambda Layer for OpenTelemetry into your containerized Java Lambda function: download the layer.zip artifact and embed it into the Java Lambda function container to enable Application Signals monitoring.

Prerequisites

- The AWS CLI configured with your credentials
- Docker installed
- These instructions assume an x86_64 platform

Set up the project structure

Create a directory for the Lambda function:

mkdir java-appsignals-container-lambda && \
cd java-appsignals-container-lambda

Create a Maven project structure:

mkdir -p src/main/java/com/example/java/lambda
mkdir -p src/main/resources

Create a Dockerfile

Download and embed the OpenTelemetry Layer with Application Signals support directly into your Lambda container image. To do so, create the following Dockerfile.
FROM public.ecr.aws/lambda/java:21

# Install utilities
RUN dnf install -y unzip wget maven

# Download the OpenTelemetry Layer with AppSignals Support
RUN wget https://github.com/aws-observability/aws-otel-java-instrumentation/releases/latest/download/layer.zip -O /tmp/layer.zip

# Extract and include Lambda layer contents
RUN mkdir -p /opt && \
    unzip /tmp/layer.zip -d /opt/ && \
    chmod -R 755 /opt/ && \
    rm /tmp/layer.zip

# Copy and build function code
COPY pom.xml ${LAMBDA_TASK_ROOT}
COPY src ${LAMBDA_TASK_ROOT}/src
RUN mvn clean package -DskipTests

# Copy the JAR file to the Lambda runtime directory (from inside the container)
RUN mkdir -p ${LAMBDA_TASK_ROOT}/lib/
RUN cp ${LAMBDA_TASK_ROOT}/target/function.jar ${LAMBDA_TASK_ROOT}/lib/

# Set the handler
CMD ["com.example.java.lambda.App::handleRequest"]

Note: The layer.zip file contains the OpenTelemetry instrumentation needed for AWS Application Signals support to monitor the Lambda function. The layer extraction steps ensure:

The layer.zip contents are correctly extracted into the /opt/ directory
The otel-instrument script receives the appropriate execute permissions
The temporary layer.zip file is removed to reduce the image size

Lambda function code: create a Java file for your Lambda handler at src/main/java/com/example/java/lambda/App.java.

The project should look like this:
.
├── Dockerfile
├── pom.xml
└── src
    └── main
        ├── java
        │   └── com
        │       └── example
        │           └── java
        │               └── lambda
        │                   └── App.java
        └── resources

Building and deploying the container image

Set environment variables:

AWS_ACCOUNT_ID=$(aws sts get-caller-identity --query Account --output text)
AWS_REGION=$(aws configure get region)
# For fish shell users:
# set AWS_ACCOUNT_ID (aws sts get-caller-identity --query Account --output text)
# set AWS_REGION (aws configure get region)

Authenticate with ECR. First with public ECR (for the base image):

aws ecr-public get-login-password --region us-east-1 | docker login --username AWS --password-stdin public.ecr.aws

Then with your private ECR:

aws ecr get-login-password --region $AWS_REGION | docker login --username AWS --password-stdin $AWS_ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com

Build, tag, and push the image:

# Build the Docker image
docker build -t lambda-appsignals-demo .
# Tag the image
docker tag lambda-appsignals-demo:latest $AWS_ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com/lambda-appsignals-demo:latest
# Push the image
docker push $AWS_ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com/lambda-appsignals-demo:latest

Creating and configuring the Lambda function

Create a new function using the Lambda console. Select Container image as the deployment option. Choose Browse images to select your Amazon ECR image.

Testing and verification: test your Lambda with a simple event. If the layer integration succeeded, the Lambda appears in the Application Signals service map, and you will see traces and metrics for your Lambda function in the CloudWatch console.
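The layer-extraction steps above are a common failure point, so it can be worth verifying them inside the built image. Below is a minimal, stdlib-only sketch that checks an extracted layer directory for the wrapper script; the otel-instrument script name and the /opt layout are assumptions taken from the Dockerfile and the wrapper path used elsewhere on this page.

```python
import os
import stat

def check_layer_extraction(opt_dir="/opt"):
    """Return a list of problems found with an extracted OpenTelemetry layer.

    Assumes the layout the Dockerfile above creates: layer.zip unzipped into
    /opt/ with an executable otel-instrument wrapper script (an assumption
    based on the wrapper path used elsewhere on this page).
    """
    problems = []
    wrapper = os.path.join(opt_dir, "otel-instrument")
    if not os.path.isdir(opt_dir):
        problems.append(f"{opt_dir} does not exist")
    elif not os.path.isfile(wrapper):
        problems.append(f"wrapper script {wrapper} is missing")
    elif not os.stat(wrapper).st_mode & stat.S_IXUSR:
        problems.append(f"{wrapper} is not executable (was chmod -R 755 /opt/ run?)")
    return problems
```

You could run this inside the image after the build (for example, via docker run with a Python one-liner); an empty list means the extraction steps worked.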
Troubleshooting

If Application Signals is not working, check the following:

Check the function logs for errors related to the OpenTelemetry instrumentation
Verify that the AWS_LAMBDA_EXEC_WRAPPER environment variable is set correctly
Make sure the layer extraction in the Dockerfile completed successfully
Verify that the IAM permissions are attached correctly
If necessary, increase the Timeout and Memory settings in the function's general configuration

.NET

You can learn how to integrate the OpenTelemetry Layer with Application Signals support into your containerized .NET Lambda function: download the layer.zip artifact and embed it in the .NET Lambda function to enable Application Signals monitoring.

Prerequisites

AWS CLI configured with your credentials
Docker installed
.NET 8 SDK
These instructions assume an x86_64 platform

Setting up the project structure

Create a directory for the Lambda function container image:

mkdir dotnet-appsignals-container-lambda && \
cd dotnet-appsignals-container-lambda

Creating a Dockerfile

Download and embed the OpenTelemetry Layer with Application Signals support directly into your Lambda container image. To do this, create the following Dockerfile.
FROM public.ecr.aws/lambda/dotnet:8

# Install utilities
RUN dnf install -y unzip wget dotnet-sdk-8.0 which

# Add dotnet command to docker container's PATH
ENV PATH="/usr/lib64/dotnet:${PATH}"

# Download the OpenTelemetry Layer with AppSignals Support
RUN wget https://github.com/aws-observability/aws-otel-dotnet-instrumentation/releases/latest/download/layer.zip -O /tmp/layer.zip

# Extract and include Lambda layer contents
RUN mkdir -p /opt && \
    unzip /tmp/layer.zip -d /opt/ && \
    chmod -R 755 /opt/ && \
    rm /tmp/layer.zip

WORKDIR ${LAMBDA_TASK_ROOT}

# Copy the project files
COPY dotnet-lambda-function/src/dotnet-lambda-function/*.csproj ${LAMBDA_TASK_ROOT}/
COPY dotnet-lambda-function/src/dotnet-lambda-function/Function.cs ${LAMBDA_TASK_ROOT}/
COPY dotnet-lambda-function/src/dotnet-lambda-function/aws-lambda-tools-defaults.json ${LAMBDA_TASK_ROOT}/

# Install dependencies and build the application
RUN dotnet restore

# Use specific runtime identifier and disable ReadyToRun optimization
RUN dotnet publish -c Release -o out --self-contained false /p:PublishReadyToRun=false

# Copy the published files to the Lambda runtime directory
RUN cp -r out/* ${LAMBDA_TASK_ROOT}/

CMD ["dotnet-lambda-function::dotnet_lambda_function.Function::FunctionHandler"]

Note: The layer.zip file contains the OpenTelemetry instrumentation needed for AWS Application Signals support to monitor the Lambda function.
The layer extraction steps ensure:

The layer.zip contents are correctly extracted into the /opt/ directory
The otel-instrument script receives the appropriate execute permissions
The temporary layer.zip file is removed to reduce the image size

Lambda function code: initialize your Lambda project using the AWS .NET Lambda template:

# Install the Lambda templates if you haven't already
dotnet new -i Amazon.Lambda.Templates
# Create a new Lambda project
dotnet new lambda.EmptyFunction -n dotnet-lambda-function

The project should look like this:

.
├── Dockerfile
└── dotnet-lambda-function
    ├── src
    │   └── dotnet-lambda-function
    │       ├── Function.cs
    │       ├── Readme.md
    │       ├── aws-lambda-tools-defaults.json
    │       └── dotnet-lambda-function.csproj
    └── test
        └── dotnet-lambda-function.Tests
            ├── FunctionTest.cs
            └── dotnet-lambda-function.Tests.csproj

Update the Function.cs code as follows:

Update the dotnet-lambda-function.csproj code as follows:

<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <TargetFramework>net8.0</TargetFramework>
    <ImplicitUsings>enable</ImplicitUsings>
    <Nullable>enable</Nullable>
    <GenerateRuntimeConfigurationFiles>true</GenerateRuntimeConfigurationFiles>
    <AWSProjectType>Lambda</AWSProjectType>
    <CopyLocalLockFileAssemblies>true</CopyLocalLockFileAssemblies>
    <PublishReadyToRun>true</PublishReadyToRun>
  </PropertyGroup>
  <ItemGroup>
    <PackageReference Include="Amazon.Lambda.Core" Version="2.5.0" />
    <PackageReference Include="Amazon.Lambda.Serialization.SystemTextJson" Version="2.4.4" />
    <PackageReference Include="AWSSDK.S3"
Version="3.7.305.23" />
  </ItemGroup>
</Project>

Building and deploying the container image

Set environment variables:

AWS_ACCOUNT_ID=$(aws sts get-caller-identity --query Account --output text)
AWS_REGION=$(aws configure get region)
# For fish shell users:
# set AWS_ACCOUNT_ID (aws sts get-caller-identity --query Account --output text)
# set AWS_REGION (aws configure get region)

Authenticate with public Amazon ECR:

aws ecr-public get-login-password --region us-east-1 | docker login --username AWS --password-stdin public.ecr.aws

Authenticate with your private Amazon ECR:

aws ecr get-login-password --region $AWS_REGION | docker login --username AWS --password-stdin $AWS_ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com

Create an Amazon ECR repository (if needed):

aws ecr create-repository \
    --repository-name lambda-appsignals-demo \
    --region $AWS_REGION

Build, tag, and push the image:

# Build the Docker image
docker build -t lambda-appsignals-demo .
# Tag the image
docker tag lambda-appsignals-demo:latest $AWS_ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com/lambda-appsignals-demo:latest
# Push the image
docker push $AWS_ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com/lambda-appsignals-demo:latest

Creating and configuring the Lambda function

Create a new function using the Lambda console. Select Container image as the deployment option. Choose Browse images to select your Amazon ECR image.

Testing and verification: test your Lambda with a simple event. If the layer integration succeeded, the Lambda appears in the Application Signals service map, and you will see traces and metrics for your Lambda function in the CloudWatch console.
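Whether the instrumentation actually starts also depends on the function's environment variables. The following is a small sketch that inspects an environment map (for example, the Environment.Variables section returned by aws lambda get-function-configuration); the expected wrapper value is an assumption carried over from the manual-disable instructions earlier on this page.

```python
# Expected wrapper path; an assumption based on the
# AWS_LAMBDA_EXEC_WRAPPER value shown earlier on this page.
EXPECTED_WRAPPER = "/opt/otel-instrument"

def check_appsignals_env(env):
    """Return human-readable findings about Application Signals env vars."""
    findings = []
    wrapper = env.get("AWS_LAMBDA_EXEC_WRAPPER")
    if wrapper != EXPECTED_WRAPPER:
        findings.append(
            f"AWS_LAMBDA_EXEC_WRAPPER is {wrapper!r}, expected {EXPECTED_WRAPPER!r}"
        )
    if "OTEL_SERVICE_NAME" not in env:
        findings.append(
            "OTEL_SERVICE_NAME not set: the function name will be used "
            "as the service name in Application Signals"
        )
    return findings
```

An empty result means both variables look as expected; otherwise each finding describes what to check.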
Troubleshooting

If Application Signals is not working, check the following:

Check the function logs for errors related to the OpenTelemetry instrumentation
Verify that the AWS_LAMBDA_EXEC_WRAPPER environment variable is set correctly
Make sure the layer extraction in the Dockerfile completed successfully
Verify that the IAM permissions are attached correctly
If necessary, increase the Timeout and Memory settings in the function's general configuration

Node.js

You can learn how to integrate the OpenTelemetry Layer with Application Signals support into your containerized Node.js Lambda function: download the layer.zip artifact and embed it in the Node.js Lambda function to enable Application Signals monitoring.

Prerequisites

AWS CLI configured with your credentials
Docker installed
These instructions assume an x86_64 platform

Setting up the project structure

Create a directory for the Lambda function container image:

mkdir nodejs-appsignals-container-lambda && \
cd nodejs-appsignals-container-lambda

Creating a Dockerfile

Download and embed the OpenTelemetry Layer with Application Signals support directly into your Lambda container image. To do this, create the following Dockerfile.
# Dockerfile
FROM public.ecr.aws/lambda/nodejs:22

# Install utilities
RUN dnf install -y unzip wget

# Download the OpenTelemetry Layer with AppSignals Support
RUN wget https://github.com/aws-observability/aws-otel-js-instrumentation/releases/latest/download/layer.zip -O /tmp/layer.zip

# Extract and include Lambda layer contents
RUN mkdir -p /opt && \
    unzip /tmp/layer.zip -d /opt/ && \
    chmod -R 755 /opt/ && \
    rm /tmp/layer.zip

# Install npm dependencies
RUN npm init -y
RUN npm install

# Copy function code
COPY *.js ${LAMBDA_TASK_ROOT}/

# Set the CMD to your handler
CMD [ "index.handler" ]

Note: The layer.zip file contains the OpenTelemetry instrumentation needed for AWS Application Signals support to monitor the Lambda function. The layer extraction steps ensure:

The layer.zip contents are correctly extracted into the /opt/ directory
The otel-instrument script receives the appropriate execute permissions
The temporary layer.zip file is removed to reduce the image size

Lambda function code

Create an index.js file with the following contents:

const { S3Client, ListBucketsCommand } = require('@aws-sdk/client-s3');

// Initialize S3 client
const s3Client = new S3Client({ region: process.env.AWS_REGION });

exports.handler = async function(event, context) {
  console.log('Received event:', JSON.stringify(event, null, 2));
  console.log('Handler initializing:', exports.handler.name);

  const response = {
    statusCode: 200,
    body: {}
  };

  try {
    // List S3 buckets
    const command = new ListBucketsCommand({});
    const data = await s3Client.send(command);

    // Extract bucket names
    const bucketNames = data.Buckets.map(bucket => bucket.Name);
    response.body = {
      message: 'Successfully retrieved buckets',
      buckets: bucketNames
    };
  } catch (error) {
    console.error('Error listing buckets:', error);
    response.statusCode = 500;
    response.body = {
      message: `Error listing buckets: ${error.message}`
    };
  }

  return response;
};

The project structure should look like this:

.
├── Dockerfile
└── index.js

Building and deploying the container image

Set environment variables:

AWS_ACCOUNT_ID=$(aws sts get-caller-identity --query Account --output text)
AWS_REGION=$(aws configure get region)
# For fish shell users:
# set AWS_ACCOUNT_ID (aws sts get-caller-identity --query Account --output text)
# set AWS_REGION (aws configure get region)

Authenticate with public Amazon ECR:

aws ecr-public get-login-password --region us-east-1 | docker login --username AWS --password-stdin public.ecr.aws

Authenticate with your private Amazon ECR:

aws ecr get-login-password --region $AWS_REGION | docker login --username AWS --password-stdin $AWS_ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com

Create an Amazon ECR repository (if needed):

aws ecr create-repository \
    --repository-name lambda-appsignals-demo \
    --region $AWS_REGION

Build, tag, and push the image:

# Build the Docker image
docker build -t lambda-appsignals-demo .
# Tag the image
docker tag lambda-appsignals-demo:latest $AWS_ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com/lambda-appsignals-demo:latest
# Push the image
docker push $AWS_ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com/lambda-appsignals-demo:latest

Creating and configuring the Lambda function

Create a new function using the Lambda console. Select Container image as the deployment option. Choose Browse images to select your Amazon ECR image.

Testing and verification: test your Lambda with a simple event. If the layer integration succeeded, the Lambda appears in the Application Signals service map, and you will see traces and metrics for your Lambda function in the CloudWatch console.
Troubleshooting

If Application Signals is not working, check the following:

Check the function logs for errors related to the OpenTelemetry instrumentation
Verify that the AWS_LAMBDA_EXEC_WRAPPER environment variable is set correctly
Make sure the layer extraction in the Dockerfile completed successfully
Verify that the IAM permissions are attached correctly
If necessary, increase the Timeout and Memory settings in the function's general configuration

Python

You can learn how to integrate the OpenTelemetry Layer with Application Signals support into your containerized Python Lambda function: download the layer.zip artifact and embed it in your Python Lambda function to enable Application Signals monitoring.

Prerequisites

AWS CLI configured with your credentials
Docker installed
These instructions assume an x86_64 platform

Setting up the project structure

Create a directory for the Lambda function container image:

mkdir python-appsignals-container-lambda && \
cd python-appsignals-container-lambda

Creating a Dockerfile

Download and embed the OpenTelemetry Layer with Application Signals support directly into your Lambda container image. To do this, create the following Dockerfile.
# Dockerfile
FROM public.ecr.aws/lambda/python:3.13

# Copy function code
COPY app.py ${LAMBDA_TASK_ROOT}

# Install unzip and wget utilities
RUN dnf install -y unzip wget

# Download the OpenTelemetry Layer with AppSignals Support
RUN wget https://github.com/aws-observability/aws-otel-python-instrumentation/releases/latest/download/layer.zip -O /tmp/layer.zip

# Extract and include Lambda layer contents
RUN mkdir -p /opt && \
    unzip /tmp/layer.zip -d /opt/ && \
    chmod -R 755 /opt/ && \
    rm /tmp/layer.zip

# Set the CMD to your handler
CMD [ "app.lambda_handler" ]

Note: The layer.zip file contains the OpenTelemetry instrumentation needed for AWS Application Signals support to monitor the Lambda function. The layer extraction steps ensure:

The layer.zip contents are correctly extracted into the /opt/ directory
The otel-instrument script receives the appropriate execute permissions
The temporary layer.zip file is removed to reduce the image size

Lambda function code

Create the Lambda function in an app.py file:

import json
import boto3

def lambda_handler(event, context):
    """
    Sample Lambda function that can be used in a container image.

    Parameters:
    -----------
    event: dict
        Input event data
    context: LambdaContext
        Lambda runtime information

    Returns:
    --------
    dict
        Response object
    """
    print("Received event:", json.dumps(event, indent=2))

    # Create S3 client
    s3 = boto3.client('s3')

    try:
        # List buckets
        response = s3.list_buckets()

        # Extract bucket names
        buckets = [bucket['Name'] for bucket in response['Buckets']]

        return {
            'statusCode': 200,
            'body': json.dumps({
                'message': 'Successfully retrieved buckets',
                'buckets': buckets
            })
        }
    except Exception as e:
        print(f"Error listing buckets: {str(e)}")
        return {
            'statusCode': 500,
            'body': json.dumps({
                'message': f'Error listing buckets: {str(e)}'
            })
        }

The project structure should look like this:
.
├── Dockerfile
├── app.py
└── instructions.md

Building and deploying the container image

Set environment variables:

AWS_ACCOUNT_ID=$(aws sts get-caller-identity --query Account --output text)
AWS_REGION=$(aws configure get region)
# For fish shell users:
# set AWS_ACCOUNT_ID (aws sts get-caller-identity --query Account --output text)
# set AWS_REGION (aws configure get region)

Authenticate with public Amazon ECR:

aws ecr-public get-login-password --region us-east-1 | docker login --username AWS --password-stdin public.ecr.aws

Authenticate with your private Amazon ECR:

aws ecr get-login-password --region $AWS_REGION | docker login --username AWS --password-stdin $AWS_ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com

Create an Amazon ECR repository (if needed):

aws ecr create-repository \
    --repository-name lambda-appsignals-demo \
    --region $AWS_REGION

Build, tag, and push the image:

# Build the Docker image
docker build -t lambda-appsignals-demo .
# Tag the image
docker tag lambda-appsignals-demo:latest $AWS_ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com/lambda-appsignals-demo:latest
# Push the image
docker push $AWS_ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com/lambda-appsignals-demo:latest

Creating and configuring the Lambda function

Create a new function using the Lambda console. Select Container image as the deployment option. Choose Browse images to select your Amazon ECR image.

Testing and verification: test your Lambda with a simple event. If the layer integration succeeded, the Lambda appears in the Application Signals service map, and you will see traces and metrics for your Lambda function in the CloudWatch console.
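The sample handlers on this page return an API-style envelope: an integer statusCode plus a JSON-encoded body. When scripting the "test with a simple event" step, a small stdlib-only validator of that shape can make the check repeatable; this is an illustrative sketch, not part of any AWS tooling.

```python
import json

def validate_lambda_response(resp):
    """Validate the {statusCode, body} envelope returned by the sample
    handler and return the decoded body; raise ValueError if malformed."""
    if not isinstance(resp.get("statusCode"), int):
        raise ValueError("missing integer statusCode")
    body = json.loads(resp["body"])  # body is a JSON string in the sample app
    if "message" not in body:
        raise ValueError("body has no 'message' field")
    return body

# Example against a canned success response shaped like the one app.py builds:
ok = validate_lambda_response({
    "statusCode": 200,
    "body": json.dumps({"message": "Successfully retrieved buckets",
                        "buckets": ["demo-bucket"]}),
})
print(ok["buckets"])  # prints ['demo-bucket']
```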
Troubleshooting

If Application Signals is not working, check the following:

Check the function logs for errors related to the OpenTelemetry instrumentation
Verify that the AWS_LAMBDA_EXEC_WRAPPER environment variable is set correctly
Make sure the layer extraction in the Dockerfile completed successfully
Verify that the IAM permissions are attached correctly
If necessary, increase the Timeout and Memory settings in the function's general configuration
| 2026-01-13T09:29:26 |
https://docs.aws.amazon.com/id_id/AmazonCloudWatch/latest/monitoring/CloudWatch-Agent-PrometheusEC2.html | Set up and configure Prometheus metrics collection on Amazon EC2 instances - Amazon CloudWatch

Amazon CloudWatch Documentation, User Guide

Set up and configure Prometheus metrics collection on Amazon EC2 instances

The following sections explain how to install the CloudWatch agent with Prometheus monitoring on EC2 instances, and how to configure the agent to scrape additional targets. They also provide a tutorial for setting up a sample workload to use for testing Prometheus monitoring. Linux and Windows instances are supported. For information about the operating systems supported by the CloudWatch agent, see Collecting metrics, logs, and traces with the CloudWatch agent.

VPC security group requirements

If you are using a VPC, the following requirements apply. The inbound rules of the security group for the Prometheus workloads must open the Prometheus ports to the CloudWatch agent, so that the agent can scrape the Prometheus metrics over private IP. The outbound rules of the security group for the CloudWatch agent must allow the agent to connect to the Prometheus workloads' ports over private IP.

Topics

Step 1: Install the CloudWatch agent
Step 2: Scrape Prometheus sources and import metrics
Example: Set up a sample Java/JMX workload for Prometheus metric testing

Step 1: Install the CloudWatch agent

The first step is to install the CloudWatch agent on the EC2 instance. For instructions, see Installing the CloudWatch agent.
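The security-group requirements above reduce to one condition: the host running the CloudWatch agent must be able to open a TCP connection to each workload's Prometheus port (9404 in the examples that follow) over private IP. The following is a quick stdlib-only reachability check; the host and port values are placeholders.

```python
import socket

def can_reach(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Run from the agent host against the workload's private IP, for example:
#   can_reach("10.0.1.25", 9404)
```

If this returns False, revisit the inbound rule on the workload's security group and the outbound rule on the agent's security group.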
Step 2: Scrape Prometheus sources and import metrics

The CloudWatch agent with Prometheus monitoring needs two configurations to scrape Prometheus metrics. One is the standard Prometheus configuration, as documented in <scrape_config> in the Prometheus documentation. The other is the CloudWatch agent configuration.

Prometheus scrape configuration

The CloudWatch agent supports the standard Prometheus scrape configuration, as documented in <scrape_config> in the Prometheus documentation (https://prometheus.io/docs/prometheus/latest/configuration/configuration/#scrape_config). You can edit this section to update the configuration that is already in this file, and to add additional Prometheus scrape targets. A sample configuration file contains the following global configuration lines:

PS C:\ProgramData\Amazon\AmazonCloudWatchAgent> cat prometheus.yaml
global:
  scrape_interval: 1m
  scrape_timeout: 10s
scrape_configs:
  - job_name: MY_JOB
    sample_limit: 10000
    file_sd_configs:
      - files: ["C:\\ProgramData\\Amazon\\AmazonCloudWatchAgent\\prometheus_sd_1.yaml", "C:\\ProgramData\\Amazon\\AmazonCloudWatchAgent\\prometheus_sd_2.yaml"]

The global section specifies parameters that are valid in all configuration contexts. They also serve as defaults for the other configuration sections. It contains the following parameters:

scrape_interval — Defines how frequently to scrape targets.
scrape_timeout — Defines how long to wait before a scrape request times out.

The scrape_configs section specifies a set of targets and parameters that describe how to scrape them. It contains the following parameters:

job_name — The job name assigned to scraped metrics by default.
sample_limit — A per-scrape limit on the number of scraped samples that will be accepted.
file_sd_configs — A list of file service discovery configurations. It reads a set of files containing a list of zero or more static configurations.
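The scrape_interval and scrape_timeout values above are Prometheus duration strings such as 1m and 10s, and Prometheus requires the timeout to fit inside the interval. The following sketch decodes simple single-unit durations for sanity-checking a config; it deliberately ignores compound durations like 1h30m, which Prometheus also accepts.

```python
import re

# Standard Prometheus duration units, expressed in seconds.
_UNITS = {"ms": 0.001, "s": 1, "m": 60, "h": 3600,
          "d": 86400, "w": 604800, "y": 31536000}

def duration_seconds(d):
    """Convert a single-unit Prometheus duration like '1m' or '10s' to seconds."""
    m = re.fullmatch(r"(\d+)(ms|s|m|h|d|w|y)", d)
    if not m:
        raise ValueError(f"bad duration: {d!r}")
    return int(m.group(1)) * _UNITS[m.group(2)]

def check_scrape_timing(interval, timeout):
    """True if the timeout fits inside the interval, as Prometheus requires."""
    return duration_seconds(timeout) <= duration_seconds(interval)

print(duration_seconds("1m"))            # prints 60
print(check_scrape_timing("1m", "10s"))  # prints True
```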
The file_sd_configs section contains a files parameter that defines the patterns for the files from which target groups are extracted.

The CloudWatch agent supports the following service discovery configuration types.

static_config

Allows specifying a list of targets and a common label set for them. It is the canonical way to specify static targets in a scrape configuration. The following is a sample static configuration for scraping Prometheus metrics from the local host. Metrics can also be scraped from other servers if the Prometheus port is open to the server where the agent runs.

PS C:\ProgramData\Amazon\AmazonCloudWatchAgent> cat prometheus_sd_1.yaml
- targets:
    - 127.0.0.1:9404
  labels:
    key1: value1
    key2: value2

This example contains the following parameters:

targets — The targets scraped by the static configuration.
labels — The labels assigned to all metrics scraped from the targets.

ec2_sd_config

Allows retrieving scrape targets from Amazon EC2 instances. The following is a sample ec2_sd_config for scraping Prometheus metrics from a list of EC2 instances. The Prometheus ports of these instances must be open to the server where the CloudWatch agent runs. The IAM role for the EC2 instance where the CloudWatch agent runs must include the ec2:DescribeInstances permission. For example, you can attach the AmazonEC2ReadOnlyAccess managed policy to the instance that runs the CloudWatch agent.

PS C:\ProgramData\Amazon\AmazonCloudWatchAgent> cat prometheus.yaml
global:
  scrape_interval: 1m
  scrape_timeout: 10s
scrape_configs:
  - job_name: MY_JOB
    sample_limit: 10000
    ec2_sd_configs:
      - region: us-east-1
        port: 9404
        filters:
          - name: instance-id
            values:
              - i-98765432109876543
              - i-12345678901234567

This example contains the following parameters:

region — The AWS Region where the target EC2 instances are. If you leave this blank, the Region from the instance metadata is used.
port — The port to scrape metrics from.
filters — Optional filters to use to filter the instance list. This example filters by EC2 instance IDs.
For other criteria you can filter on, see DescribeInstances.

CloudWatch agent configuration for Prometheus

The CloudWatch agent configuration file includes a prometheus section under both logs and metrics_collected. It includes the following parameters.

cluster_name — Specifies the cluster name to be added as a label on the log events. This field is optional.
log_group_name — Specifies the log group name for the scraped Prometheus metrics.
prometheus_config_path — Specifies the path of the Prometheus scrape configuration file.
emf_processor — Specifies the embedded metric format processor configuration. For more information about embedded metric format, see Embedding metrics within logs.

The emf_processor section can contain the following parameters:

metric_declaration_dedup — If this is set to true, the de-duplication function for embedded metric format metrics is enabled.
metric_namespace — Specifies the metric namespace for the emitted CloudWatch metrics.
metric_unit — Specifies the metric name-to-metric unit map. For information about supported metric units, see MetricDatum.
metric_declaration — Sections that specify the array of embedded metric format logs to be generated. There is a metric_declaration section for each Prometheus source that the CloudWatch agent imports by default. Each of these sections includes the following fields:

source_labels specifies the value of the labels that are checked by the label_matcher line.
label_matcher is a regular expression that checks the value of the labels listed in source_labels. Metrics that match are enabled for inclusion in the embedded metric format sent to CloudWatch.
metric_selectors are regular expressions that specify the metrics to be collected and sent to CloudWatch.
dimensions is the list of labels to be used as CloudWatch dimensions for each selected metric.
The following is a sample CloudWatch agent configuration for Prometheus.

{
  "logs": {
    "metrics_collected": {
      "prometheus": {
        "cluster_name": "prometheus-cluster",
        "log_group_name": "Prometheus",
        "prometheus_config_path": "C:\\ProgramData\\Amazon\\AmazonCloudWatchAgent\\prometheus.yaml",
        "emf_processor": {
          "metric_declaration_dedup": true,
          "metric_namespace": "CWAgent-Prometheus",
          "metric_unit": {
            "jvm_threads_current": "Count",
            "jvm_gc_collection_seconds_sum": "Milliseconds"
          },
          "metric_declaration": [
            {
              "source_labels": [ "job", "key2" ],
              "label_matcher": "MY_JOB;^value2",
              "dimensions": [
                [ "key1", "key2" ],
                [ "key2" ]
              ],
              "metric_selectors": [
                "^jvm_threads_current$",
                "^jvm_gc_collection_seconds_sum$"
              ]
            }
          ]
        }
      }
    }
  }
}

The preceding example configures an embedded metric format section to be sent as a log event if the following conditions are met:

The value of the job label is MY_JOB
The value of the key2 label is value2
The Prometheus metrics jvm_threads_current and jvm_gc_collection_seconds_sum contain both the job and key2 labels.

The log event that is sent includes the following highlighted section.

{
  "CloudWatchMetrics": [
    {
      "Metrics": [
        { "Unit": "Count", "Name": "jvm_threads_current" },
        { "Unit": "Milliseconds", "Name": "jvm_gc_collection_seconds_sum" }
      ],
      "Dimensions": [
        [ "key1", "key2" ],
        [ "key2" ]
      ],
      "Namespace": "CWAgent-Prometheus"
    }
  ],
  "ClusterName": "prometheus-cluster",
  "InstanceId": "i-0e45bd06f196096c8",
  "Timestamp": "1607966368109",
  "Version": "0",
  "host": "EC2AMAZ-PDDOIUM",
  "instance": "127.0.0.1:9404",
  "jvm_threads_current": 2,
  "jvm_gc_collection_seconds_sum": 0.006000000000000002,
  "prom_metric_type": "gauge",
  ...
}

Example: Set up a sample Java/JMX workload for Prometheus metric testing

JMX Exporter is an official Prometheus exporter that can scrape and expose JMX mBeans as Prometheus metrics. For more information, see prometheus/jmx_exporter.
The CloudWatch agent can collect predefined Prometheus metrics from the Java Virtual Machine (JVM), Java, and Tomcat (Catalina), from a JMX exporter on EC2 instances.

Step 1: Install the CloudWatch agent

The first step is to install the CloudWatch agent on the EC2 instance. For instructions, see Installing the CloudWatch agent.

Step 2: Start the Java/JMX workload

The next step is to start the Java/JMX workload. First, download the latest JMX exporter jar file from the following location: prometheus/jmx_exporter.

Use the jar for your sample application

The sample commands in the following sections use SampleJavaApplication-1.0-SNAPSHOT.jar as the jar file. Replace these parts of the commands with the jar for your own application.

Prepare the JMX exporter configuration

The config.yaml file is the JMX exporter configuration file. For more information, see Configuration in the JMX exporter documentation. The following is a sample configuration for Java and Tomcat.

---
lowercaseOutputName: true
lowercaseOutputLabelNames: true
rules:
  - pattern: 'java.lang<type=OperatingSystem><>(FreePhysicalMemorySize|TotalPhysicalMemorySize|FreeSwapSpaceSize|TotalSwapSpaceSize|SystemCpuLoad|ProcessCpuLoad|OpenFileDescriptorCount|AvailableProcessors)'
    name: java_lang_OperatingSystem_$1
    type: GAUGE
  - pattern: 'java.lang<type=Threading><>(TotalStartedThreadCount|ThreadCount)'
    name: java_lang_threading_$1
    type: GAUGE
  - pattern: 'Catalina<type=GlobalRequestProcessor, name=\"(\w+-\w+)-(\d+)\"><>(\w+)'
    name: catalina_globalrequestprocessor_$3_total
    labels:
      port: "$2"
      protocol: "$1"
    help: Catalina global $3
    type: COUNTER
  - pattern: 'Catalina<j2eeType=Servlet, WebModule=//([-a-zA-Z0-9+&@#/%?=~_|!:.,;]*[-a-zA-Z0-9+&@#/%=~_|]), name=([-a-zA-Z0-9+/$%~_-|!.]*), J2EEApplication=none, J2EEServer=none><>(requestCount|maxTime|processingTime|errorCount)'
    name: catalina_servlet_$3_total
    labels:
      module: "$1"
      servlet: "$2"
    help: Catalina servlet $3 total
    type: COUNTER
  - pattern: 'Catalina<type=ThreadPool,
name="(\w+-\w+)-(\d+)"><>(currentThreadCount|currentThreadsBusy|keepAliveCount|pollerThreadCount|connectionCount)'
    name: catalina_threadpool_$3
    labels:
      port: "$2"
      protocol: "$1"
    help: Catalina threadpool $3
    type: GAUGE

  - pattern: 'Catalina<type=Manager, host=([-a-zA-Z0-9+&@#/%?=~_|!:.,;]*[-a-zA-Z0-9+&@#/%=~_|]), context=([-a-zA-Z0-9+/$%~_-|!.]*)><>(processingTime|sessionCounter|rejectedSessions|expiredSessions)'
    name: catalina_session_$3_total
    labels:
      context: "$2"
      host: "$1"
    help: Catalina session $3 total
    type: COUNTER

  - pattern: ".*"

Start the Java application with the Prometheus exporter

Start the sample application. It will emit Prometheus metrics on port 9404. Be sure to replace the entry point com.gubupt.sample.app.App with the correct information for your sample Java application.

On Linux, enter the following command.

$ nohup java -javaagent:./jmx_prometheus_javaagent-0.14.0.jar=9404:./config.yaml -cp ./SampleJavaApplication-1.0-SNAPSHOT.jar com.gubupt.sample.app.App &

On Windows, enter the following command.

PS C:\> java -javaagent:.\jmx_prometheus_javaagent-0.14.0.jar=9404:.\config.yaml -cp .\SampleJavaApplication-1.0-SNAPSHOT.jar com.gubupt.sample.app.App

Verify the emission of Prometheus metrics

Verify that Prometheus metrics are being emitted.

On Linux, enter the following command.

$ curl localhost:9404

On Windows, enter the following command.

PS C:\> curl http://localhost:9404

Example output on Windows:

StatusCode        : 200
StatusDescription : OK
Content           : # HELP jvm_classes_loaded The number of classes that are currently loaded in the JVM
                    # TYPE jvm_classes_loaded gauge
                    jvm_classes_loaded 2526.0
                    # HELP jvm_classes_loaded_total The total number of class...
RawContent        : HTTP/1.1 200 OK
                    Content-Length: 71908
                    Content-Type: text/plain; version=0.0.4; charset=utf-8
                    Date: Fri, 18 Dec 2020 16:38:10 GMT
                    # HELP jvm_classes_loaded The number of classes that are currentl...
Forms             : {}
Headers           : {[Content-Length, 71908], [Content-Type, text/plain; version=0.0.4; charset=utf-8], [Date, Fri, 18 Dec 2020 16:38:10 GMT]}
Images            : {}
InputFields       : {}
Links             : {}
ParsedHtml        : System.__ComObject
RawContentLength  : 71908

Step 3: Configure the CloudWatch agent to scrape Prometheus metrics

Next, set the Prometheus scrape configuration in the CloudWatch agent configuration file.

To set the Prometheus scrape configuration for the Java/JMX example

Set the configuration for file_sd_config and static_config.

On Linux, enter the following command.

$ cat /opt/aws/amazon-cloudwatch-agent/var/prometheus.yaml
global:
  scrape_interval: 1m
  scrape_timeout: 10s
scrape_configs:
  - job_name: jmx
    sample_limit: 10000
    file_sd_configs:
      - files: [ "/opt/aws/amazon-cloudwatch-agent/var/prometheus_file_sd.yaml" ]

On Windows, enter the following command.

PS C:\ProgramData\Amazon\AmazonCloudWatchAgent> cat prometheus.yaml
global:
  scrape_interval: 1m
  scrape_timeout: 10s
scrape_configs:
  - job_name: jmx
    sample_limit: 10000
    file_sd_configs:
      - files: [ "C:\\ProgramData\\Amazon\\AmazonCloudWatchAgent\\prometheus_file_sd.yaml" ]

Set the scrape target configuration.

On Linux, enter the following command.

$ cat /opt/aws/amazon-cloudwatch-agent/var/prometheus_file_sd.yaml
- targets:
    - 127.0.0.1:9404
  labels:
    application: sample_java_app
    os: linux

On Windows, enter the following command.

PS C:\ProgramData\Amazon\AmazonCloudWatchAgent> cat prometheus_file_sd.yaml
- targets:
    - 127.0.0.1:9404
  labels:
    application: sample_java_app
    os: windows

Set the Prometheus scrape configuration with ec2_sd_config. Replace your-ec2-instance-id with the correct EC2 instance ID.

On Linux, enter the following command.

$ cat /opt/aws/amazon-cloudwatch-agent/var/prometheus.yaml
global:
  scrape_interval: 1m
  scrape_timeout: 10s
scrape_configs:
  - job_name: jmx
    sample_limit: 10000
    ec2_sd_configs:
      - region: us-east-1
        port: 9404
        filters:
          - name: instance-id
            values:
              - your-ec2-instance-id

On Windows, enter the following command.
PS C:\ProgramData\Amazon\AmazonCloudWatchAgent> cat prometheus_file_sd.yaml
- targets:
    - 127.0.0.1:9404
  labels:
    application: sample_java_app
    os: windows

Prepare the CloudWatch agent configuration. First, go to the correct directory. On Linux, it is /opt/aws/amazon-cloudwatch-agent/var/cwagent-config.json. On Windows, it is C:\ProgramData\Amazon\AmazonCloudWatchAgent\cwagent-config.json.

The following is a sample configuration with the Java/JMX Prometheus metrics defined. Be sure to replace path-to-Prometheus-Scrape-Configuration-file with the correct path.

{
  "agent": {
    "region": "us-east-1"
  },
  "logs": {
    "metrics_collected": {
      "prometheus": {
        "cluster_name": "my-cluster",
        "log_group_name": "prometheus-test",
        "prometheus_config_path": "path-to-Prometheus-Scrape-Configuration-file",
        "emf_processor": {
          "metric_declaration_dedup": true,
          "metric_namespace": "PrometheusTest",
          "metric_unit": {
            "jvm_threads_current": "Count",
            "jvm_classes_loaded": "Count",
            "java_lang_operatingsystem_freephysicalmemorysize": "Bytes",
            "catalina_manager_activesessions": "Count",
            "jvm_gc_collection_seconds_sum": "Seconds",
            "catalina_globalrequestprocessor_bytesreceived": "Bytes",
            "jvm_memory_bytes_used": "Bytes",
            "jvm_memory_pool_bytes_used": "Bytes"
          },
          "metric_declaration": [
            {
              "source_labels": ["job"],
              "label_matcher": "^jmx$",
              "dimensions": [["instance"]],
              "metric_selectors": [
                "^jvm_threads_current$",
                "^jvm_classes_loaded$",
                "^java_lang_operatingsystem_freephysicalmemorysize$",
                "^catalina_manager_activesessions$",
                "^jvm_gc_collection_seconds_sum$",
                "^catalina_globalrequestprocessor_bytesreceived$"
              ]
            },
            {
              "source_labels": ["job"],
              "label_matcher": "^jmx$",
              "dimensions": [["area"]],
              "metric_selectors": [
                "^jvm_memory_bytes_used$"
              ]
            },
            {
              "source_labels": ["job"],
              "label_matcher": "^jmx$",
              "dimensions": [["pool"]],
              "metric_selectors": [
                "^jvm_memory_pool_bytes_used$"
              ]
            }
          ]
        }
      }
    },
    "force_flush_interval": 5
  }
}

Restart the CloudWatch agent by entering one of the following commands.
On Linux, enter the following command.

sudo /opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-ctl -a fetch-config -m ec2 -s -c file:/opt/aws/amazon-cloudwatch-agent/var/cwagent-config.json

On Windows, enter the following command.

& "C:\Program Files\Amazon\AmazonCloudWatchAgent\amazon-cloudwatch-agent-ctl.ps1" -a fetch-config -m ec2 -s -c file:C:\ProgramData\Amazon\AmazonCloudWatchAgent\cwagent-config.json

View your Prometheus metrics and logs

You can now view the Java/JMX metrics being collected.

To view the metrics for your sample Java/JMX workload

Open the CloudWatch console at https://console.aws.amazon.com/cloudwatch/.

In the Region where your cluster is running, choose Metrics in the left navigation pane. Find the PrometheusTest namespace to see the metrics.

To view your CloudWatch Logs events, choose Log groups in the navigation pane. The events are in the log group prometheus-test.
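As a quick sanity check on an emf_processor block like the one in step 3, it can be useful to confirm that every metric given a unit in metric_unit is actually selected by at least one metric_selectors regex, since an unselected metric is never emitted. The helper below is hypothetical (not part of the agent), shown here on a deliberately incomplete miniature config.

```python
import re

def uncovered_units(emf_processor):
    """Return metric names that have a declared unit but match no selector."""
    selectors = [re.compile(sel)
                 for decl in emf_processor["metric_declaration"]
                 for sel in decl["metric_selectors"]]
    return sorted(name for name in emf_processor["metric_unit"]
                  if not any(rx.search(name) for rx in selectors))

# Miniature config for illustration: jvm_memory_bytes_used has a unit
# but no matching selector, so its unit declaration would be dead weight.
emf = {
    "metric_unit": {
        "jvm_threads_current": "Count",
        "jvm_memory_bytes_used": "Bytes",
    },
    "metric_declaration": [
        {"metric_selectors": ["^jvm_threads_current$"]},
    ],
}
print(uncovered_units(emf))  # ['jvm_memory_bytes_used']
```

Running this against the full step 3 configuration above would return an empty list, since each of its eight metric_unit entries is covered by a selector.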
https://docs.aws.amazon.com/it_it/AmazonCloudWatch/latest/monitoring/CloudWatch-Application-Signals-Enable-EC2Main.html

Enable your applications on Amazon EC2

Enable CloudWatch Application Signals on Amazon EC2 by using the custom setup steps described in this section. For applications running on Amazon EC2, you can install and configure the CloudWatch agent and the AWS Distro for OpenTelemetry yourself. On these architectures enabled with a custom Application Signals setup, Application Signals does not automatically discover the names of your services or of the clusters or hosts that they run on. You must specify these names during the custom setup, and the names that you specify are the ones displayed in the Application Signals dashboards.

The instructions in this section apply to Java, Python, and .NET applications. The steps have been tested on Amazon EC2 instances, but they are also expected to work on other architectures that support AWS Distro for OpenTelemetry.

Requirements

To get support for Application Signals, you must use the latest version of both the CloudWatch agent and the AWS Distro for OpenTelemetry agent. The AWS CLI must be installed on the instance.
We recommend AWS CLI version 2, but version 1 should also work. For more information about installing the AWS CLI, see Installing or updating the latest version of the AWS CLI.

Important
If you are already using OpenTelemetry with an application that you plan to enable for Application Signals, see Supported systems before enabling Application Signals.

Step 1: Enable Application Signals in your account

You must first enable Application Signals in your account. If you have not already done so, see Enabling Application Signals in an account.

Step 2: Download and start the CloudWatch agent

To install the CloudWatch agent as part of enabling Application Signals on an Amazon EC2 instance or an on-premises host

Download the latest version of the CloudWatch agent to the instance. If the CloudWatch agent is already installed on the instance, you might need to update it. Only agent versions released on November 30, 2023 or later support CloudWatch Application Signals.

Before you start the CloudWatch agent, configure it to enable Application Signals. The following example is a CloudWatch agent configuration that enables Application Signals for both metrics and traces on an EC2 host. We recommend placing this file at /opt/aws/amazon-cloudwatch-agent/etc/amazon-cloudwatch-agent.json on Linux systems.

{
  "traces": {
    "traces_collected": {
      "application_signals": { }
    }
  },
  "logs": {
    "metrics_collected": {
      "application_signals": { }
    }
  }
}

Attach the CloudWatchAgentServerPolicy IAM policy to the IAM role of your Amazon EC2 instance. For permissions for on-premises hosts, see Permissions for on-premises servers.

Sign in to the AWS Management Console and open the IAM console at https://console.aws.amazon.com/iam/.

Choose Roles and find the role used by your Amazon EC2 instance. Then choose the name of the role.
On the Permissions tab, choose Add permissions, then Attach policies.

Find CloudWatchAgentServerPolicy. Use the search box if necessary. Then select the check box for the policy and choose Add permissions.

Start the CloudWatch agent by entering the following commands. Replace agent-config-file-path with the path of the CloudWatch agent configuration file, such as ./amazon-cloudwatch-agent.json. You must include the file: prefix as shown.

export CONFIG_FILE_PATH=./amazon-cloudwatch-agent.json
sudo /opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-ctl \
  -a fetch-config \
  -m ec2 -s -c file:agent-config-file-path

Permissions for on-premises servers

For an on-premises host, you must give your device permission to access AWS.

To set permissions for an on-premises host

Create the IAM user to use to provide permissions to your on-premises host:

Open the IAM console at https://console.aws.amazon.com/iam/.

Choose Users, then choose Create user.

In User details, for User name, enter a name for the new IAM user. This is the AWS sign-in name that will be used to authenticate the host. Then choose Next.

On the Set permissions page, under Permissions options, choose Attach policies directly.

From the Permissions policies list, select the CloudWatchAgentServerPolicy policy to add to your user. Then choose Next.

On the Review and create page, make sure that you are satisfied with the user name and that the CloudWatchAgentServerPolicy policy is included in the permissions summary. Then choose Create user.

Create and retrieve your AWS access key and secret key:

In the navigation pane of the IAM console, choose Users and select the user name of the user that you created in the previous step.

On the user's page, choose the Security credentials tab.
Then, in the Access keys section, choose Create access key.

For Create access key (step 1), choose Command Line Interface (CLI).

For Create access key (step 2), optionally enter a tag and choose Next.

For Create access key (step 3), choose Download .csv file to save a .csv file with your IAM user's access key and secret access key. You need this information for the following steps. Then choose Done.

Configure your AWS credentials on your on-premises host by entering the following command. Replace ACCESS_KEY_ID and SECRET_ACCESS_ID with your newly generated access key and secret access key from the .csv file that you downloaded in the previous step.

$ aws configure
AWS Access Key ID [None]: ACCESS_KEY_ID
AWS Secret Access Key [None]: SECRET_ACCESS_ID
Default region name [None]: MY_REGION
Default output format [None]: json

Step 3: Instrument your application and start it

The next step is to instrument your application for CloudWatch Application Signals.

Java

To instrument your Java applications as part of enabling Application Signals on an Amazon EC2 instance or an on-premises host

Download the latest version of the AWS Distro for OpenTelemetry Java auto-instrumentation agent. You can download the latest version by using this link. You can view information about all released versions in aws-otel-java-instrumentation Releases.

To get the most benefit from Application Signals, use environment variables to provide additional information before you start the application. This information will appear in the Application Signals dashboards.

For the OTEL_RESOURCE_ATTRIBUTES variable, specify the following information as key-value pairs:

(Optional) service.name sets the service name.
This will appear as the service name for the application in the Application Signals dashboards. If you don't provide a value for this key, the default of UnknownService is used.

(Optional) deployment.environment sets the environment in which the application runs. This will appear as the hosted environment of the application in the Application Signals dashboards. If you don't specify this, one of the following defaults is used:

If this is an instance that is part of an Auto Scaling group, it is set to ec2:name-of-Auto-Scaling-group

If this is an Amazon EC2 instance that is not part of an Auto Scaling group, it is set to ec2:default

If this is an on-premises host, it is set to generic:default

This environment variable is used only by Application Signals and is converted to X-Ray trace annotations and CloudWatch metric dimensions.

For the OTEL_EXPORTER_OTLP_TRACES_ENDPOINT variable, specify the base endpoint URL where traces are to be exported. The CloudWatch agent exposes 4316 as the OTLP port. On Amazon EC2, because the applications communicate with the local CloudWatch agent, you should set this value to OTEL_EXPORTER_OTLP_TRACES_ENDPOINT=http://localhost:4316/v1/traces

For the OTEL_AWS_APPLICATION_SIGNALS_EXPORTER_ENDPOINT variable, specify the base endpoint URL where metrics are to be exported. The CloudWatch agent exposes 4316 as the OTLP port. On Amazon EC2, because the applications communicate with the local CloudWatch agent, you should set this value to OTEL_AWS_APPLICATION_SIGNALS_EXPORTER_ENDPOINT=http://localhost:4316/v1/metrics

For the JAVA_TOOL_OPTIONS variable, specify the path where the AWS Distro for OpenTelemetry Java auto-instrumentation agent is stored.
export JAVA_TOOL_OPTIONS=" -javaagent:$AWS_ADOT_JAVA_INSTRUMENTATION_PATH"

For example:

export AWS_ADOT_JAVA_INSTRUMENTATION_PATH=./aws-opentelemetry-agent.jar

For the OTEL_METRICS_EXPORTER variable, we recommend setting the value to none. This disables the other metric exporters so that only the Application Signals exporter is used.

Set OTEL_AWS_APPLICATION_SIGNALS_ENABLED to true. This generates the Application Signals metrics from the traces.

Start the application with the environment variables listed in the previous step. The following is an example startup script.

Note
The following configuration supports only versions 1.32.2 and later of the AWS Distro for OpenTelemetry auto-instrumentation agent for Java.

JAVA_TOOL_OPTIONS=" -javaagent:$AWS_ADOT_JAVA_INSTRUMENTATION_PATH" \
OTEL_METRICS_EXPORTER=none \
OTEL_LOGS_EXPORTER=none \
OTEL_AWS_APPLICATION_SIGNALS_ENABLED=true \
OTEL_AWS_APPLICATION_SIGNALS_EXPORTER_ENDPOINT=http://localhost:4316/v1/metrics \
OTEL_EXPORTER_OTLP_PROTOCOL=http/protobuf \
OTEL_EXPORTER_OTLP_TRACES_ENDPOINT=http://localhost:4316/v1/traces \
OTEL_RESOURCE_ATTRIBUTES="service.name=$YOUR_SVC_NAME" \
java -jar $MY_JAVA_APP.jar

(Optional) To enable log correlation, in OTEL_RESOURCE_ATTRIBUTES, set an additional aws.log.group.names environment variable for your application's log groups. Doing so lets the application's traces and metrics be correlated with the relevant log entries from those log groups. For this variable, replace $YOUR_APPLICATION_LOG_GROUP with the names of your application's log groups. If you have multiple log groups, you can use an ampersand (&) to separate them, as in this example: aws.log.group.names=log-group-1&log-group-2. To enable metric-to-log correlation, setting this environment variable is all that is needed.
For more information, see Enable metric-to-log correlation. To enable trace-to-log correlation, you must also change the logging configuration in your application. For more information, see Enable trace-to-log correlation.

The following is an example startup script that enables log correlation.

JAVA_TOOL_OPTIONS=" -javaagent:$AWS_ADOT_JAVA_INSTRUMENTATION_PATH" \
OTEL_METRICS_EXPORTER=none \
OTEL_LOGS_EXPORTER=none \
OTEL_AWS_APPLICATION_SIGNALS_ENABLED=true \
OTEL_AWS_APPLICATION_SIGNALS_EXPORTER_ENDPOINT=http://localhost:4316/v1/metrics \
OTEL_EXPORTER_OTLP_PROTOCOL=http/protobuf \
OTEL_EXPORTER_OTLP_TRACES_ENDPOINT=http://localhost:4316/v1/traces \
OTEL_RESOURCE_ATTRIBUTES="aws.log.group.names=$YOUR_APPLICATION_LOG_GROUP,service.name=$YOUR_SVC_NAME" \
java -jar $MY_JAVA_APP.jar

Python

Note
If you are using a WSGI server for your Python application, in addition to the following steps in this section, see No Application Signals data for a Python application that uses a WSGI server for information about enabling Application Signals.

To instrument your Python applications as part of enabling Application Signals on an Amazon EC2 instance

Download the latest version of the AWS Distro for OpenTelemetry Python auto-instrumentation agent. Install it by running the following command.

pip install aws-opentelemetry-distro

You can view information about all released versions at AWS Distro for OpenTelemetry Python instrumentation.

To get the most benefit from Application Signals, use environment variables to provide additional information before you start the application. This information will appear in the Application Signals dashboards.
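The variables walked through below follow the same shape for every runtime in this section. As a consolidated reference, the sketch below assembles them in one place; app_signals_env is a hypothetical helper (not an AWS API), and the endpoints assume the local CloudWatch agent's default OTLP port 4316 as described in this section.

```python
# Hypothetical helper: build the Application Signals environment variables
# described in this section, targeting a local CloudWatch agent on port 4316.
def app_signals_env(service_name, log_groups=()):
    attributes = []
    if log_groups:
        # Multiple log groups are separated with an ampersand (&).
        attributes.append("aws.log.group.names=" + "&".join(log_groups))
    attributes.append(f"service.name={service_name}")
    return {
        "OTEL_METRICS_EXPORTER": "none",
        "OTEL_LOGS_EXPORTER": "none",
        "OTEL_AWS_APPLICATION_SIGNALS_ENABLED": "true",
        "OTEL_EXPORTER_OTLP_PROTOCOL": "http/protobuf",
        "OTEL_EXPORTER_OTLP_TRACES_ENDPOINT": "http://localhost:4316/v1/traces",
        "OTEL_AWS_APPLICATION_SIGNALS_EXPORTER_ENDPOINT": "http://localhost:4316/v1/metrics",
        "OTEL_RESOURCE_ATTRIBUTES": ",".join(attributes),
    }

env = app_signals_env("my-python-svc", ["log-group-1", "log-group-2"])
print(env["OTEL_RESOURCE_ATTRIBUTES"])
# aws.log.group.names=log-group-1&log-group-2,service.name=my-python-svc
```

A launcher could merge this mapping into os.environ before starting the instrumented process; the runtime-specific variables (such as OTEL_PYTHON_DISTRO for Python) still come from the per-runtime steps below.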
For the OTEL_RESOURCE_ATTRIBUTES variable, specify the following information as key-value pairs:

service.name sets the service name. This will appear as the service name for the application in the Application Signals dashboards. If you don't provide a value for this key, the default of UnknownService is used.

deployment.environment sets the environment in which the application runs. This will appear as the hosted environment of the application in the Application Signals dashboards. If you don't specify this, one of the following defaults is used:

If this is an instance that is part of an Auto Scaling group, it is set to ec2:name-of-Auto-Scaling-group.

If this is an Amazon EC2 instance that is not part of an Auto Scaling group, it is set to ec2:default

If this is an on-premises host, it is set to generic:default

This attribute key is used only by Application Signals and is converted to X-Ray trace annotations and CloudWatch metric dimensions.

For the OTEL_EXPORTER_OTLP_PROTOCOL variable, specify http/protobuf to export telemetry data over HTTP to the CloudWatch agent endpoints listed in the following steps.

For the OTEL_EXPORTER_OTLP_TRACES_ENDPOINT variable, specify the base endpoint URL where traces are to be exported. The CloudWatch agent exposes 4316 as the OTLP port over HTTP. On Amazon EC2, because the applications communicate with the local CloudWatch agent, you should set this value to OTEL_EXPORTER_OTLP_TRACES_ENDPOINT=http://localhost:4316/v1/traces

For the OTEL_AWS_APPLICATION_SIGNALS_EXPORTER_ENDPOINT variable, specify the base endpoint URL where metrics are to be exported. The CloudWatch agent exposes 4316 as the OTLP port over HTTP.
On Amazon EC2, because the applications communicate with the local CloudWatch agent, you should set this value to OTEL_AWS_APPLICATION_SIGNALS_EXPORTER_ENDPOINT=http://localhost:4316/v1/metrics

For the OTEL_METRICS_EXPORTER variable, we recommend setting the value to none. This disables the other metric exporters so that only the Application Signals exporter is used.

Set the OTEL_AWS_APPLICATION_SIGNALS_ENABLED variable to true so that the application starts sending X-Ray traces and CloudWatch metrics to Application Signals.

Start the application with the environment variables shown in the previous step. The following is an example startup script. Replace $SVC_NAME with the name of your application. This will appear as the application name in the Application Signals dashboards. Replace $MY_PYTHON_APP with the location and name of your application.

OTEL_METRICS_EXPORTER=none \
OTEL_LOGS_EXPORTER=none \
OTEL_AWS_APPLICATION_SIGNALS_ENABLED=true \
OTEL_PYTHON_DISTRO=aws_distro \
OTEL_PYTHON_CONFIGURATOR=aws_configurator \
OTEL_EXPORTER_OTLP_PROTOCOL=http/protobuf \
OTEL_TRACES_SAMPLER=xray \
OTEL_TRACES_SAMPLER_ARG="endpoint=http://localhost:2000" \
OTEL_AWS_APPLICATION_SIGNALS_EXPORTER_ENDPOINT=http://localhost:4316/v1/metrics \
OTEL_EXPORTER_OTLP_TRACES_ENDPOINT=http://localhost:4316/v1/traces \
OTEL_RESOURCE_ATTRIBUTES="service.name=$SVC_NAME" \
opentelemetry-instrument python $MY_PYTHON_APP.py

Before enabling Application Signals for Python applications, note the following considerations.

In some containerized applications, a missing PYTHONPATH environment variable can sometimes prevent the application from starting. To resolve this, make sure that you set the PYTHONPATH environment variable to the location of your application's working directory.
This is due to a known issue with OpenTelemetry auto-instrumentation. For more information about this issue, see Python autoinstrumentation setting of PYTHONPATH is not compliant.

For Django applications, there are additional required configurations, which are described in the Python OpenTelemetry documentation.

Use the --noreload flag to prevent automatic reloading.

Set the DJANGO_SETTINGS_MODULE environment variable to the location of your Django application's settings.py file. This ensures that OpenTelemetry can correctly access and integrate with your Django settings.

(Optional) To enable log correlation, in OTEL_RESOURCE_ATTRIBUTES, set an additional aws.log.group.names environment variable for your application's log groups. Doing so lets the application's traces and metrics be correlated with the relevant log entries from those log groups. For this variable, replace $YOUR_APPLICATION_LOG_GROUP with the names of your application's log groups. If you have multiple log groups, you can use an ampersand (&) to separate them, as in this example: aws.log.group.names=log-group-1&log-group-2. To enable metric-to-log correlation, setting this environment variable is all that is needed. For more information, see Enable metric-to-log correlation. To enable trace-to-log correlation, you must also change the logging configuration in your application. For more information, see Enable trace-to-log correlation.

The following is an example startup script that enables log correlation.
OTEL_METRICS_EXPORTER=none \
OTEL_LOGS_EXPORTER=none \
OTEL_AWS_APPLICATION_SIGNALS_ENABLED=true \
OTEL_PYTHON_DISTRO=aws_distro \
OTEL_PYTHON_CONFIGURATOR=aws_configurator \
OTEL_EXPORTER_OTLP_PROTOCOL=http/protobuf \
OTEL_TRACES_SAMPLER=xray \
OTEL_TRACES_SAMPLER_ARG="endpoint=http://localhost:2000" \
OTEL_AWS_APPLICATION_SIGNALS_EXPORTER_ENDPOINT=http://localhost:4316/v1/metrics \
OTEL_EXPORTER_OTLP_TRACES_ENDPOINT=http://localhost:4316/v1/traces \
OTEL_RESOURCE_ATTRIBUTES="aws.log.group.names=$YOUR_APPLICATION_LOG_GROUP,service.name=$YOUR_SVC_NAME" \
opentelemetry-instrument python $MY_PYTHON_APP.py

.NET

To instrument your .NET applications as part of enabling Application Signals on an Amazon EC2 instance or an on-premises host

Download the latest version of the AWS Distro for OpenTelemetry .NET auto-instrumentation package. You can download the latest version from the aws-otel-dotnet-instrumentation Releases page.

To enable Application Signals, set the following environment variables to provide additional information before you start the application. These variables are needed to configure the startup hook for the .NET instrumentation before you start the .NET application. Replace dotnet-service-name in the OTEL_RESOURCE_ATTRIBUTES environment variable with the service name of your choice.

The following is an example for Linux.
export INSTALL_DIR=OpenTelemetryDistribution
export CORECLR_ENABLE_PROFILING=1
export CORECLR_PROFILER={918728DD-259F-4A6A-AC2B-B85E1B658318}
export CORECLR_PROFILER_PATH=${INSTALL_DIR}/linux-x64/OpenTelemetry.AutoInstrumentation.Native.so
export DOTNET_ADDITIONAL_DEPS=${INSTALL_DIR}/AdditionalDeps
export DOTNET_SHARED_STORE=${INSTALL_DIR}/store
export DOTNET_STARTUP_HOOKS=${INSTALL_DIR}/net/OpenTelemetry.AutoInstrumentation.StartupHook.dll
export OTEL_DOTNET_AUTO_HOME=${INSTALL_DIR}
export OTEL_DOTNET_AUTO_PLUGINS="AWS.Distro.OpenTelemetry.AutoInstrumentation.Plugin, AWS.Distro.OpenTelemetry.AutoInstrumentation"
export OTEL_RESOURCE_ATTRIBUTES=service.name=dotnet-service-name
export OTEL_EXPORTER_OTLP_PROTOCOL=http/protobuf
export OTEL_EXPORTER_OTLP_ENDPOINT=http://127.0.0.1:4316
export OTEL_AWS_APPLICATION_SIGNALS_EXPORTER_ENDPOINT=http://127.0.0.1:4316/v1/metrics
export OTEL_METRICS_EXPORTER=none
export OTEL_AWS_APPLICATION_SIGNALS_ENABLED=true
export OTEL_TRACES_SAMPLER=xray
export OTEL_TRACES_SAMPLER_ARG=http://127.0.0.1:2000

The following is an example for Windows Server.
$env:INSTALL_DIR = "OpenTelemetryDistribution"
$env:CORECLR_ENABLE_PROFILING = 1
$env:CORECLR_PROFILER = "{918728DD-259F-4A6A-AC2B-B85E1B658318}"
$env:CORECLR_PROFILER_PATH = Join-Path $env:INSTALL_DIR "win-x64/OpenTelemetry.AutoInstrumentation.Native.dll"
$env:DOTNET_ADDITIONAL_DEPS = Join-Path $env:INSTALL_DIR "AdditionalDeps"
$env:DOTNET_SHARED_STORE = Join-Path $env:INSTALL_DIR "store"
$env:DOTNET_STARTUP_HOOKS = Join-Path $env:INSTALL_DIR "net/OpenTelemetry.AutoInstrumentation.StartupHook.dll"
$env:OTEL_DOTNET_AUTO_HOME = $env:INSTALL_DIR
$env:OTEL_DOTNET_AUTO_PLUGINS = "AWS.Distro.OpenTelemetry.AutoInstrumentation.Plugin, AWS.Distro.OpenTelemetry.AutoInstrumentation"
$env:OTEL_RESOURCE_ATTRIBUTES = "service.name=dotnet-service-name"
$env:OTEL_EXPORTER_OTLP_PROTOCOL = "http/protobuf"
$env:OTEL_EXPORTER_OTLP_ENDPOINT = "http://127.0.0.1:4316"
$env:OTEL_AWS_APPLICATION_SIGNALS_EXPORTER_ENDPOINT = "http://127.0.0.1:4316/v1/metrics"
$env:OTEL_METRICS_EXPORTER = "none"
$env:OTEL_AWS_APPLICATION_SIGNALS_ENABLED = "true"
$env:OTEL_TRACES_SAMPLER = "xray"
$env:OTEL_TRACES_SAMPLER_ARG = "http://127.0.0.1:2000"

Start the application with the environment variables listed in the previous step.

(Optional) Alternatively, you can use the provided installation scripts to help install and configure the AWS Distro for OpenTelemetry .NET auto-instrumentation package.

For Linux, download and install the Bash installation script from the GitHub releases page:

# Download and Install
curl -L -O https://github.com/aws-observability/aws-otel-dotnet-instrumentation/releases/latest/download/aws-otel-dotnet-install.sh
chmod +x ./aws-otel-dotnet-install.sh
./aws-otel-dotnet-install.sh

# Instrument
. $HOME/.otel-dotnet-auto/instrument.sh
export OTEL_RESOURCE_ATTRIBUTES=service.name=dotnet-service-name

For Windows Server, download and install the PowerShell installation script from the GitHub releases page:

# Download and Install
$module_url = "https://github.com/aws-observability/aws-otel-dotnet-instrumentation/releases/latest/download/AWS.Otel.DotNet.Auto.psm1"
$download_path = Join-Path $env:temp "AWS.Otel.DotNet.Auto.psm1"
Invoke-WebRequest -Uri $module_url -OutFile $download_path
Import-Module $download_path
Install-OpenTelemetryCore

# Instrument
Import-Module $download_path
Register-OpenTelemetryForCurrentSession -OTelServiceName "dotnet-service-name"
Register-OpenTelemetryForIIS

You can find the NuGet package for the AWS Distro for OpenTelemetry .NET auto-instrumentation package in the official NuGet repository. Be sure to review the instructions in the README file.

Node.js

Note
If you are enabling Application Signals for a Node.js application with ESM, see Setting up a Node.js application with the ESM module format before starting these steps.

To instrument your Node.js applications as part of enabling Application Signals on an Amazon EC2 instance

Download the latest version of the AWS Distro for OpenTelemetry JavaScript auto-instrumentation agent for Node.js. Install it by running the following command.

npm install @aws/aws-distro-opentelemetry-node-autoinstrumentation

You can view information about all released versions at AWS Distro for OpenTelemetry JavaScript instrumentation.

To get the most benefit from Application Signals, use environment variables to provide additional information before you start the application. This information will appear in the Application Signals dashboards.

For the OTEL_RESOURCE_ATTRIBUTES variable, specify the following information as key-value pairs:

service.name sets the service name.
This appears as the service name for the application in the Application Signals dashboards. If you don't provide a value for this key, the default value UnknownService is used.

deployment.environment sets the environment in which the application runs. This appears as the hosted environment of the application in the Application Signals dashboards. If you don't specify a value, one of the following defaults is used:

If the instance is part of an Auto Scaling group, it is set to ec2:name-of-Auto-Scaling-group.

If it is an Amazon EC2 instance that is not part of an Auto Scaling group, it is set to ec2:default.

If it is an on-premises host, it is set to generic:default.

This attribute key is used only by Application Signals and is converted to X-Ray trace annotations and CloudWatch metric dimensions.

For the OTEL_EXPORTER_OTLP_PROTOCOL variable, specify http/protobuf to export telemetry data over HTTP to the CloudWatch agent endpoints listed in the following steps.

For the OTEL_EXPORTER_OTLP_TRACES_ENDPOINT variable, specify the base endpoint URL where traces are exported. The CloudWatch agent exposes 4316 as its OTLP-over-HTTP port. On Amazon EC2, because applications communicate with the local CloudWatch agent, set this value to OTEL_EXPORTER_OTLP_TRACES_ENDPOINT=http://localhost:4316/v1/traces

For the OTEL_AWS_APPLICATION_SIGNALS_EXPORTER_ENDPOINT variable, specify the base endpoint URL where metrics are exported. The CloudWatch agent exposes 4316 as its OTLP-over-HTTP port.
On Amazon EC2, because applications communicate with the local CloudWatch agent, set this value to OTEL_AWS_APPLICATION_SIGNALS_EXPORTER_ENDPOINT=http://localhost:4316/v1/metrics

For the OTEL_METRICS_EXPORTER variable, we recommend setting the value to none. This disables other metric exporters so that only the Application Signals exporter is used.

Set the OTEL_AWS_APPLICATION_SIGNALS_ENABLED variable to true so that the application starts sending X-Ray traces and CloudWatch metrics to Application Signals.

Start your application with the environment variables discussed in the previous step. The following is an example startup script. Replace $SVC_NAME with the name of your application. This appears as the application name in the Application Signals dashboards.

OTEL_METRICS_EXPORTER=none \
OTEL_LOGS_EXPORTER=none \
OTEL_AWS_APPLICATION_SIGNALS_ENABLED=true \
OTEL_EXPORTER_OTLP_PROTOCOL=http/protobuf \
OTEL_TRACES_SAMPLER=xray \
OTEL_TRACES_SAMPLER_ARG="endpoint=http://localhost:2000" \
OTEL_AWS_APPLICATION_SIGNALS_EXPORTER_ENDPOINT=http://localhost:4316/v1/metrics \
OTEL_EXPORTER_OTLP_TRACES_ENDPOINT=http://localhost:4316/v1/traces \
OTEL_RESOURCE_ATTRIBUTES="service.name=$SVC_NAME" \
node --require '@aws/aws-distro-opentelemetry-node-autoinstrumentation/register' your-application.js

(Optional) To enable log correlation, in OTEL_RESOURCE_ATTRIBUTES, set an additional environment attribute, aws.log.group.names, to your application's log groups. Doing so allows the application's traces and metrics to be correlated with the relevant log entries in those log groups. For this variable, replace $YOUR_APPLICATION_LOG_GROUP with your application's log group names.
If you have multiple log groups, you can use an ampersand (&) to separate them, as in this example: aws.log.group.names=log-group-1&log-group-2.

To enable metric-to-log correlation, setting this environment variable is sufficient. For more information, see Enable metric to log correlation.

To enable trace-to-log correlation, you must also change the logging configuration in your application. For more information, see Enable trace to log correlation.

The following is an example startup script that enables log correlation.

export OTEL_METRICS_EXPORTER=none
export OTEL_LOGS_EXPORTER=none
export OTEL_AWS_APPLICATION_SIGNALS_ENABLED=true
export OTEL_EXPORTER_OTLP_PROTOCOL=http/protobuf
export OTEL_TRACES_SAMPLER=xray
export OTEL_TRACES_SAMPLER_ARG="endpoint=http://localhost:2000"
export OTEL_AWS_APPLICATION_SIGNALS_EXPORTER_ENDPOINT=http://localhost:4316/v1/metrics
export OTEL_EXPORTER_OTLP_TRACES_ENDPOINT=http://localhost:4316/v1/traces
export OTEL_RESOURCE_ATTRIBUTES="aws.log.group.names=$YOUR_APPLICATION_LOG_GROUP,service.name=$SVC_NAME"
node --require '@aws/aws-distro-opentelemetry-node-autoinstrumentation/register' your-application.js

Setting up a Node.js application with the ESM module format

We provide limited support for Node.js applications that use the ESM module format. For details, see Known limitations of Node.js with ESM.

To enable Application Signals for a Node.js application with ESM, you must modify the steps in the preceding procedure.
First, install @opentelemetry/instrumentation for your Node.js application:

npm install @opentelemetry/instrumentation@0.54.0

Then, in steps 3 and 4 of the preceding procedure, change the node options from:

--require '@aws/aws-distro-opentelemetry-node-autoinstrumentation/register'

to the following values:

--import @aws/aws-distro-opentelemetry-node-autoinstrumentation/register --experimental-loader=@opentelemetry/instrumentation/hook.mjs

Enable Application Signals on Amazon EC2 using the Model Context Protocol (MCP)

You can use the CloudWatch Application Signals Model Context Protocol (MCP) server to enable Application Signals on your Amazon EC2 instances through conversational AI interactions. This provides a natural-language interface for setting up Application Signals monitoring. The MCP server automates the enablement process by understanding your requirements and generating the appropriate configuration. Instead of following setup steps manually, you can simply describe what you want to enable.

Prerequisites

Before using the MCP server to enable Application Signals, make sure you have:

A development environment that supports MCP (such as Kiro, Claude Desktop, VS Code with MCP extensions, or other MCP-compatible tools)

The CloudWatch Application Signals MCP server configured in your IDE. For detailed setup instructions, see the CloudWatch Application Signals MCP server documentation.

Using the MCP server

After you configure the CloudWatch Application Signals MCP server in your IDE, you can request enablement guidance using natural-language prompts. Although the coding assistant can infer context from your project structure, providing specific details in your prompts helps ensure more accurate and relevant guidance.
Include information such as the application language, instance details, and the absolute paths of your infrastructure and application code.

Best-practice prompts (specific and complete):

"Enable Application Signals for my Python service running on EC2. My app code is in /home/ec2-user/flask-api and IaC is in /home/ec2-user/flask-api/terraform"

"I want to add observability to my Java application on EC2. The application code is at /opt/apps/checkout-service and the infrastructure code is at /opt/apps/checkout-service/cloudformation"

"Help me instrument my Node.js application on EC2 with Application Signals. Application directory: /home/ubuntu/payment-api Terraform code: /home/ubuntu/payment-api/terraform"

Less effective prompts:

"Enable monitoring for my app" → Missing: platform, language, paths

"Enable Application Signals. My code is in ./src and IaC is in ./infrastructure" → Problem: relative paths instead of absolute paths

"Enable Application Signals for my EC2 service at /home/user/myapp" → Missing: programming language

Quick template: "Enable Application Signals for my [LANGUAGE] service on EC2.
App code: [ABSOLUTE_PATH_TO_APP] IaC code: [ABSOLUTE_PATH_TO_IAC]"

Benefits of using the MCP server

Using the CloudWatch Application Signals MCP server offers several benefits:

Natural-language interface: describe what you want to enable without memorizing commands or configuration syntax

Context-aware guidance: the MCP server understands your specific environment and provides tailored recommendations

Fewer errors: automated configuration generation minimizes manual typing mistakes

Faster setup: move from intent to implementation more quickly

Learning tool: review the generated configurations to learn how Application Signals works

For more information about setting up and using the CloudWatch Application Signals MCP server, see the MCP server documentation.

(Optional) Monitor application health

After you enable Application Signals on Amazon EC2, you can monitor the health of your applications. For more information, see Monitor the operational health of your applications with Application Signals. | 2026-01-13T09:29:26 |
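The OTEL_RESOURCE_ATTRIBUTES value described in the page above combines comma-separated key=value pairs, with multiple log group names joined by ampersands. A minimal sketch of assembling that string, assuming a hypothetical helper name (this is not part of the ADOT distribution):

```python
# Build the OTEL_RESOURCE_ATTRIBUTES value described above:
# comma-separated key=value pairs, multiple log groups joined by '&'.
# Helper name is hypothetical, for illustration only.
def build_resource_attributes(service_name, log_groups=None, environment=None):
    pairs = [f"service.name={service_name}"]
    if environment:
        pairs.append(f"deployment.environment={environment}")
    if log_groups:
        pairs.append("aws.log.group.names=" + "&".join(log_groups))
    return ",".join(pairs)

print(build_resource_attributes("payment-api", ["log-group-1", "log-group-2"]))
# -> service.name=payment-api,aws.log.group.names=log-group-1&log-group-2
```

The resulting string can be exported as OTEL_RESOURCE_ATTRIBUTES before starting the application, exactly as in the startup scripts shown earlier.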
https://www.linkedin.com/products/technarts-numerus/ | Numerus | LinkedIn

Numerus — IP Address Management (IPAM) Software by TechNarts-Nart Bilişim

About

Numerus, a mega-scale enterprise-level IP address management tool, helps simplify and automate several tasks related to IP space management. It can manage IP ranges, pools, and VLANs, monitor the hierarchy, manage utilizations and capacities, perform automated IP address assignments, and report assignments to registries with regular synchronization. It provides extensive reporting capabilities and data for 3rd-party systems with various integrations. For ISPs, it also provides global IP registry integrations such as RIPE.
Featured customers of Numerus: Turkcell Telecommunications

TechNarts-Nart Bilişim — 584,615 followers

Similar products (IP Address Management (IPAM) Software): Next-Gen IPAM, AX DHCP, Tidal LightMesh, ManageEngine OpUtils, dedicated datacenter proxies

Other TechNarts-Nart Bilişim products: Inventum (Network Monitoring Software), MoniCAT (Network Monitoring Software), Redkit (Software Configuration Management (SCM) Tools), Star Suite (Network Monitoring Software), TART (Network Traffic Analysis (NTA) Tools) | 2026-01-13T09:29:26 |
https://th-th.facebook.com/login/?next=https%3A%2F%2Fl.facebook.com%2Fl.php%3Fu%3Dhttps%253A%252F%252Fwww.instagram.com%252F%26amp%253Bh%3DAT0WMRvnl7WlxQooJ04UhL3b9qUpdtPlmpa1O0gB6bIJM-T60aONZLzYzvGZlbyf6-hpzHtm4IvtCReDdDPRMse0eNOpWmpYf0LavXLTW8iAB7H9JF6jgkn7dL3LyhLtioeHbWE5w6T00ZkN | Facebook — Thai-language login page (sign-in form and temporary-block notice); no substantive content. | 2026-01-13T09:29:26 |
https://docs.aws.amazon.com/id_id/AmazonCloudWatch/latest/monitoring/what-is-network-monitor.html | Using Network Synthetic Monitor - Amazon CloudWatch

Topics: Key features · Terminology and components · Requirements and limitations

Translation provided by machine translation. If the translated content conflicts with the original English version, the English version takes precedence.

Using Network Synthetic Monitor

Network Synthetic Monitor provides visibility into the performance of the network that connects your AWS-hosted applications to on-premises destinations, and lets you identify the source of network performance degradation within minutes. Network Synthetic Monitor is fully managed by AWS and does not require a separate agent on the monitored resources.

Use Network Synthetic Monitor to visualize packet loss and latency of your hybrid network connections, and to set alerts and thresholds. Then, based on this information, you can take action to improve your end users' experience. Network Synthetic Monitor is intended for network operators and application developers who want real-time insight into network performance.

Key features of Network Synthetic Monitor

Use Network Synthetic Monitor to compare your changing hybrid network environment against continuous, real-time packet-loss and latency metrics. When you connect using AWS Direct Connect, Network Synthetic Monitor can help you quickly diagnose network degradation within the AWS network with the network health indicator (NHI), which Network Synthetic Monitor writes to your Amazon CloudWatch account. The NHI metric is a binary value, based on a probabilistic score of whether degradation exists in the AWS network.
Network Synthetic Monitor provides a fully managed agent approach to monitoring, so you don't need to install agents yourself, either in your VPCs or on premises. To get started, you only need to specify a VPC subnet and an on-premises IP address. You can create a private connection between your VPC and the Network Synthetic Monitor resources by using AWS PrivateLink. For more information, see Using CloudWatch, CloudWatch Synthetics, and CloudWatch Network Monitoring with interface VPC endpoints.

Network Synthetic Monitor publishes metrics to CloudWatch Metrics. You can create dashboards to view the metrics, and you can also create thresholds and actionable alarms on the metrics that are specific to your application. For more information, see How Network Synthetic Monitor works.

Terminology and components of Network Synthetic Monitor

Probe — A probe is the traffic sent from an AWS-hosted resource to an on-premises destination IP address. The Network Synthetic Monitor metrics measured by a probe are written to your CloudWatch account for each probe configured in a monitor.

Monitor — A monitor displays network performance and other health information for the traffic for which you have created Network Synthetic Monitor probes. You add probes as part of creating a monitor, and you can then view network performance metric information using the monitor. When you create a monitor for your application, you add AWS-hosted resources as network sources. Network Synthetic Monitor then lists all possible probes between the AWS-hosted resources and your destination IP addresses. You choose the destinations whose traffic you want to monitor.

AWS network source — An AWS network source is the AWS origin of a monitor's probes, which is a subnet in one of your VPCs.

Destination — A destination is a target in your on-premises network for an AWS network source.
A destination is a combination of your on-premises IP address, network protocol, port, and network packet size. Both IPv4 and IPv6 addresses are supported.

Requirements and limitations of Network Synthetic Monitor

The following summarizes the requirements and limitations for Network Synthetic Monitor. For specific quotas (or limits), see Network Synthetic Monitor quotas.

A monitor's subnets must be owned by the same account as the monitor.

Network Synthetic Monitor does not provide automatic network failover if an AWS network issue occurs.

There is a charge for each probe that you create. For pricing details, see Pricing for Network Synthetic Monitor. | 2026-01-13T09:29:26 |
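Conceptually, the NHI described above reduces probe measurements to a binary indicator. The following sketch only illustrates that idea with a simple packet-loss average; the threshold, function name, and logic are hypothetical and are not the AWS implementation:

```python
# Illustrative only: summarize probe packet-loss samples into a binary
# health indicator, mirroring the "binary value based on a probabilistic
# score" idea described above. Threshold and names are hypothetical.
def health_indicator(loss_samples, threshold=0.05):
    """Return 1 if average packet loss stays at or below threshold, else 0."""
    if not loss_samples:
        return 1  # no evidence of degradation
    avg_loss = sum(loss_samples) / len(loss_samples)
    return 1 if avg_loss <= threshold else 0

print(health_indicator([0.0, 0.01, 0.02]))  # healthy probe window
print(health_indicator([0.2, 0.5, 0.4]))    # degraded probe window
```

In practice you would not compute this yourself: Network Synthetic Monitor writes the NHI metric to your CloudWatch account, where you can alarm on it directly.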
https://docs.aws.amazon.com/id_id/AmazonCloudWatch/latest/monitoring/Solution-NGINX-On-EC2.html | CloudWatch solution: NGINX workload on Amazon EC2 - Amazon CloudWatch

CloudWatch solution: NGINX workload on Amazon EC2

This solution helps you configure out-of-the-box metric collection using the CloudWatch agent for NGINX applications running on EC2 instances. For general information about all CloudWatch observability solutions, see CloudWatch observability solutions.

Topics: Requirements · Benefits · Costs · CloudWatch agent configuration for this solution · Deploying the agent for your solution · Create the NGINX solution dashboard

Requirements

This solution applies under the following conditions:

Supported versions: NGINX version 1.24

Compute: Amazon EC2

Supports up to 500 EC2 instances across all NGINX workloads in a single AWS Region

The latest version of the CloudWatch agent

Prometheus exporter: nginxinc/nginx-prometheus-exporter (Apache 2.0 license)

SSM Agent installed on the EC2 instances

Note: AWS Systems Manager Agent (SSM Agent) is preinstalled on some Amazon Machine Images (AMIs) provided by AWS and trusted third parties. If the agent is not installed, you can install it manually using the procedure for your operating system type.
Manually installing and uninstalling SSM Agent on EC2 instances for Linux

Manually installing and uninstalling SSM Agent on EC2 instances for macOS

Manually installing and uninstalling SSM Agent on EC2 instances for Windows Server

Benefits

This solution provides NGINX monitoring, delivering valuable insight for the following use cases:

Review connection metrics to identify potential bottlenecks, connection issues, or unexpected usage.

Analyze HTTP request volume to understand the overall traffic load on NGINX.

The key advantages of this solution are:

Automates metric collection for NGINX using a CloudWatch agent configuration, eliminating manual instrumentation.

Provides a preconfigured, consolidated CloudWatch dashboard for NGINX metrics. The dashboard automatically handles metrics from new NGINX EC2 instances configured with the solution, even if those metrics don't yet exist when you first create the dashboard. The following image is an example dashboard for this solution.

Costs

This solution creates and uses resources in your account. You are charged for standard usage, including the following:

All metrics collected by the CloudWatch agent for this solution are published to CloudWatch Logs using the Embedded Metric Format (EMF). These CloudWatch Logs are charged based on their volume and retention period. Therefore, you are not billed for any PutMetricData API calls for this solution.

Metrics extracted and ingested from your logs are charged as custom metrics. The number of metrics used by this solution depends on the number of EC2 hosts. Each NGINX EC2 host configured for the solution publishes a total of eight metrics.

One custom dashboard.

For more information about CloudWatch pricing, see Amazon CloudWatch Pricing.
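Because each configured NGINX host publishes exactly eight custom metrics, the custom-metric portion of the estimate is a simple multiplication; a minimal sketch:

```python
# Estimate the custom-metric count for this solution:
# each NGINX EC2 host configured for the solution publishes 8 metrics.
METRICS_PER_HOST = 8

def estimated_custom_metrics(instance_count):
    return METRICS_PER_HOST * instance_count

print(estimated_custom_metrics(5))  # 5 instances -> 40 custom metrics
```

This is the same `8 * number of EC2 instances` value that the pricing-calculator steps below ask you to enter in the Metrics section.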
The pricing calculator can help you estimate your approximate monthly cost for using this solution.

To use the pricing calculator to estimate your monthly solution costs

Open the Amazon CloudWatch pricing calculator.

For Choose a Region, select the AWS Region where you want to deploy the solution.

In the Metrics section, for Number of metrics, enter 8 * number of EC2 instances configured for this solution.

In the Logs section, for Standard Logs: Data Ingested, enter the estimated daily log volume generated by the CloudWatch agent across all EC2 hosts. For example, five EC2 instances generate less than 1000 bytes per day. Once set up, you can check your byte usage with the IncomingBytes metric, which is vended by CloudWatch Logs. Be sure to select the appropriate log group.

In the Logs section, for Log Storage/Archival (Standard and Vended Logs), choose Yes to Store Logs: Assuming 1 month retention. Change this value if you decide to customize the retention period.

In the Dashboards and Alarms section, for Number of Dashboards, enter 1.

You can see your estimated monthly costs at the bottom of the pricing calculator.

CloudWatch agent configuration for this solution

The CloudWatch agent is software that runs continuously and autonomously on your servers and in containerized environments. It collects metrics, logs, and traces from your infrastructure and applications and sends them to CloudWatch and X-Ray. For more information about the CloudWatch agent, see Collect metrics, logs, and traces with the CloudWatch agent.

The agent configuration in this solution collects a set of metrics to help you start monitoring and observing your NGINX workloads. The CloudWatch agent can be configured to collect more NGINX metrics than the dashboard displays by default. For a list of all the NGINX metrics you can collect, see Metrics for NGINX OSS.
Before configuring the CloudWatch agent, you must first configure NGINX to expose its metrics. Second, you must install and configure a third-party Prometheus metrics exporter.

Expose NGINX metrics

Note: The following commands are for Linux. Check the NGINX for Windows page for the equivalent commands on Windows Server.

You must enable the stub_status module first by adding a new location block in your NGINX configuration file. Add the following lines in the server block of your nginx.conf to enable the NGINX stub_status module:

location /nginx_status {
    stub_status on;
    allow 127.0.0.1;  # Allow only localhost to access
    deny all;         # Deny all other IPs
}

Before reloading NGINX, validate your NGINX configuration:

sudo nginx -t

This validation command helps prevent unexpected errors that could break your website. The following example shows a successful response:

nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful

After you have successfully validated the updated configuration, reload NGINX (no output is expected):

sudo systemctl reload nginx

This command instructs the NGINX process to reload its configuration. A reload is more graceful than a full restart: it starts new worker processes with the new configuration while gracefully shutting down the old worker processes.

Test the NGINX status endpoint:

curl http://127.0.0.1/nginx_status

The following example shows a successful response:

Active connections: 1
server accepts handled requests
 6 6 6
Reading: 0 Writing: 1 Waiting: 0

The following example shows a failure response (review the preceding steps before continuing):

<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.1//EN" "http://www.w3.org/TR/xhtml11/DTD/xhtml11.dtd">
<html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en">
<head> <title>The page is not found</title> ...
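The stub_status response shown above is plain text with a fixed shape, so it is easy to parse if you want to script a quick health check of your own. A minimal sketch, not part of the AWS solution (the field names follow the sample output above):

```python
# Parse the NGINX stub_status response shown above into a dict.
# Illustrative helper only; not part of the CloudWatch solution.
import re

def parse_stub_status(text):
    stats = {}
    m = re.search(r"Active connections:\s*(\d+)", text)
    if m:
        stats["active"] = int(m.group(1))
    m = re.search(r"\s(\d+)\s+(\d+)\s+(\d+)\s", text)
    if m:  # the "accepts handled requests" counter line
        stats["accepts"], stats["handled"], stats["requests"] = (
            int(m.group(1)), int(m.group(2)), int(m.group(3)))
    m = re.search(r"Reading:\s*(\d+)\s+Writing:\s*(\d+)\s+Waiting:\s*(\d+)", text)
    if m:
        stats["reading"], stats["writing"], stats["waiting"] = (
            int(m.group(1)), int(m.group(2)), int(m.group(3)))
    return stats

sample = """Active connections: 1
server accepts handled requests
 6 6 6
Reading: 0 Writing: 1 Waiting: 0"""
print(parse_stub_status(sample))
```

In this solution, however, the parsing is done for you by the Prometheus exporter described in the next step.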
Configure the Prometheus metrics exporter

Download the latest NGINX Prometheus exporter release from the official GitHub repository. You must download the binary relevant to your platform. The following example shows the commands for AMD64:

cd /tmp
wget https://github.com/nginxinc/nginx-prometheus-exporter/releases/download/v1.3.0/nginx-prometheus-exporter_1.3.0_linux_amd64.tar.gz
tar -xzvf nginx-prometheus-exporter_1.3.0_linux_amd64.tar.gz
sudo cp nginx-prometheus-exporter /usr/local/bin/
rm /tmp/nginx-prometheus-exporter*

Run the Prometheus exporter and point it at the NGINX stub status page:

nohup /usr/local/bin/nginx-prometheus-exporter -nginx.scrape-uri http://127.0.0.1/nginx_status &>/dev/null &

The following example shows the response (background job ID and PID):

[1] 74699

Test the NGINX Prometheus endpoint

Validate that the NGINX Prometheus exporter has started exposing the relevant metrics:

curl http://localhost:port-number/metrics

The following example shows a successful response:

# HELP go_gc_duration_seconds A summary of the pause duration of garbage collection cycles.
# TYPE go_gc_duration_seconds summary
go_gc_duration_seconds{quantile="0"} 0
go_gc_duration_seconds{quantile="0.25"} 0
...
# HELP nginx_connections_accepted Accepted client connections
# TYPE nginx_connections_accepted counter
nginx_connections_accepted 14
# HELP nginx_connections_active Active client connections
# TYPE nginx_connections_active gauge
nginx_connections_active 1
...
# TYPE promhttp_metric_handler_requests_total counter
promhttp_metric_handler_requests_total{code="200"} 1
promhttp_metric_handler_requests_total{code="500"} 0
promhttp_metric_handler_requests_total{code="503"} 0

Agent configuration for this solution

The metrics collected by the agent are defined in the agent configuration. This solution provides an agent configuration to collect the recommended metrics, with appropriate dimensions, for the solution dashboard.
The steps to deploy the solution are described later, in Deploying the agent for your solution. The following information is intended to help you understand how to customize the agent configuration for your environment.

You must customize some parts of the agent and Prometheus configurations for your environment, such as the port number used by the Prometheus exporter. The port used by the Prometheus exporter can be verified using the following command:

sudo netstat -antp | grep nginx-prom

The following example shows the response (note the port value 9113):

tcp6  0  0 :::9113  :::*  LISTEN  76398/nginx-prometh

Agent configuration for NGINX hosts

The CloudWatch agent with Prometheus monitoring needs two configurations to scrape Prometheus metrics. Each configuration will be stored as a separate parameter in the SSM Parameter Store, as described later in Step 2: Store the recommended CloudWatch agent configuration files in Systems Manager Parameter Store.

The first configuration is for the Prometheus exporter, as documented in the Prometheus scrape_config documentation. The second configuration is for the CloudWatch agent.

Prometheus configuration

Replace port-number with your server's port.

global:
  scrape_interval: 30s
  scrape_timeout: 10s
scrape_configs:
  - job_name: 'nginx'
    metrics_path: /metrics
    static_configs:
      - targets: ['localhost:port-number']
    ec2_sd_configs:
      - port: port-number
    relabel_configs:
      - source_labels: ['__meta_ec2_instance_id']
        target_label: InstanceId
    metric_relabel_configs:
      - source_labels: ['__name__']
        regex: 'nginx_up|nginx_http_requests_total|nginx_connections_.*'
        action: keep

CloudWatch agent configuration

With the following CloudWatch agent configuration, these metrics are published through CloudWatch Logs using the embedded metric format (EMF). The logs are configured to use the log group nginx. You can customize log_group_name to a different name that represents your CloudWatch logs.
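The metric_relabel_configs keep rule above drops every scraped series whose name does not match the listed regex (Prometheus anchors relabel regexes, so the match must cover the full name). A minimal sketch of that filtering behavior, illustrative only, using Python's re module rather than Prometheus itself:

```python
# Illustrate the 'keep' relabel action above: only metric names that
# fully match the regex survive. Prometheus anchors relabel regexes,
# so re.fullmatch is the closest equivalent here.
import re

KEEP = re.compile(r"nginx_up|nginx_http_requests_total|nginx_connections_.*")

def kept(metric_names):
    return [name for name in metric_names if KEEP.fullmatch(name)]

scraped = [
    "nginx_up",
    "nginx_connections_active",
    "nginx_connections_accepted",
    "nginx_http_requests_total",
    "go_gc_duration_seconds",                  # dropped by the keep rule
    "promhttp_metric_handler_requests_total",  # dropped by the keep rule
]
print(kept(scraped))
```

This is why the Go-runtime and promhttp series in the exporter's /metrics output never reach CloudWatch: only the nginx_* series named in the regex are forwarded.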
If you use Windows Server, set prometheus_config_path in the following configuration to C:\\ProgramData\\Amazon\\AmazonCloudWatchAgent\\prometheus.yaml.

{
  "agent": {
    "metrics_collection_interval": 60
  },
  "logs": {
    "metrics_collected": {
      "prometheus": {
        "log_group_name": "nginx",
        "prometheus_config_path": "/opt/aws/amazon-cloudwatch-agent/etc/prometheus.yaml",
        "emf_processor": {
          "metric_declaration_dedup": true,
          "metric_namespace": "CWAgent",
          "metric_declaration": [
            {
              "source_labels": ["InstanceId"],
              "metric_selectors": ["nginx_up", "nginx_http_requests_total", "nginx_connections*"],
              "dimensions": [["InstanceId"]]
            }
          ]
        }
      }
    }
  }
}

Deploying the agent for your solution

There are several approaches to installing the CloudWatch agent, depending on the use case. We recommend using Systems Manager for this solution. It provides a console experience and makes it simpler to manage a fleet of managed servers within a single AWS account. The instructions in this section use Systems Manager and are intended for when you are not already running the CloudWatch agent with an existing configuration. You can check whether the CloudWatch agent is running by following the steps in Verify that the CloudWatch agent is running.

If you are already running the CloudWatch agent on the EC2 hosts where the workload is deployed and are managing the agent configuration, you can skip the instructions in this section and follow your existing deployment mechanism to update the configuration. Be sure to merge the new CloudWatch agent and Prometheus configurations with your existing configurations, and then deploy the merged configuration. If you use Systems Manager to store and manage your CloudWatch agent configuration, you can merge the configuration into the existing parameter value. For more information, see Managing CloudWatch agent configuration files.
Note
Using Systems Manager to deploy the following CloudWatch agent configuration replaces or overwrites any existing CloudWatch agent configuration on your EC2 instances. You can modify this configuration to suit your unique environment or use case. The metrics defined in the configuration are the minimum required for the dashboards that the solution provides.

The deployment process includes the following steps:

Step 1: Make sure that the target EC2 instances have the required IAM permissions.
Step 2: Store the recommended agent configuration files in Systems Manager Parameter Store.
Step 3: Install the CloudWatch agent on one or more EC2 instances by using a CloudFormation stack.
Step 4: Verify that the agent setup is configured correctly.

Step 1: Make sure that the target EC2 instances have the required IAM permissions

You must grant Systems Manager permission to install and configure the CloudWatch agent, and you must grant the CloudWatch agent permission to publish telemetry from your EC2 instances to CloudWatch. You must also give the CloudWatch agent EC2 read access. EC2 read access is required so that the EC2 InstanceId can be added as a metric dimension. This additional requirement is driven by the prometheus.yaml described above, because it uses __meta_ec2_instance_id through EC2 service discovery.

Make sure that the IAM role attached to the instances has the CloudWatchAgentServerPolicy, AmazonSSMManagedInstanceCore, and AmazonEC2ReadOnlyAccess IAM policies attached. After the role is created, attach the role to your EC2 instances. To attach a role to an EC2 instance, follow the steps in Attach an IAM role to an instance.
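The three policies above are AWS managed policies, so their ARNs follow the standard `arn:aws:iam::aws:policy/<name>` pattern. A small sketch of building the ARNs you would pass to, for example, `aws iam attach-role-policy` (the role name itself is up to you):

```python
# AWS managed policies required by Step 1.
REQUIRED_POLICIES = [
    "CloudWatchAgentServerPolicy",   # publish telemetry to CloudWatch
    "AmazonSSMManagedInstanceCore",  # let Systems Manager manage the instance
    "AmazonEC2ReadOnlyAccess",       # resolve __meta_ec2_instance_id via EC2 service discovery
]


def policy_arn(name: str) -> str:
    """ARN of an AWS managed policy (managed policies live in the 'aws' account)."""
    return f"arn:aws:iam::aws:policy/{name}"


# One --policy-arn value per attach-role-policy call.
arns = [policy_arn(p) for p in REQUIRED_POLICIES]
```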
Step 2: Store the recommended CloudWatch agent configuration files in Systems Manager Parameter Store

Parameter Store simplifies installing the CloudWatch agent on an EC2 instance by securely storing and managing configuration parameters, eliminating the need for hard-coded values. This ensures a more secure and flexible deployment process and enables centralized management and easier configuration updates across multiple instances. Use the following steps to store the recommended CloudWatch agent configuration files as parameters in Parameter Store.

To create the CloudWatch agent configuration file as a parameter

Open the AWS Systems Manager console at https://console.aws.amazon.com/systems-manager/.
Verify that the Region selected in the console is the Region where NGINX is running.
In the navigation pane, choose Application Management, Parameter Store.
Follow these steps to create a new parameter for the configuration:
Choose Create parameter.
In the Name box, enter the name that you will use to reference the CloudWatch agent configuration file in a later step, for example, AmazonCloudWatch-NGINX-CloudWatchAgent-Configuration.
(Optional) In the Description box, type a description for the parameter.
For Parameter tier, choose Standard.
For Type, choose String.
For Data type, choose text.
In the Value box, paste the appropriate JSON block listed in Agent configuration for NGINX hosts. Be sure to adjust it as needed, for example, the relevant log_group_name.
Choose Create parameter.

To create the Prometheus configuration file as a parameter

Open the AWS Systems Manager console at https://console.aws.amazon.com/systems-manager/.
In the navigation pane, choose Application Management, Parameter Store.
Follow these steps to create a new parameter for the configuration:
Choose Create parameter.
In the Name box, enter the name that you will use to reference the configuration file in a later step, for example, AmazonCloudWatch-NGINX-Prometheus-Configuration.
(Optional) In the Description box, type a description for the parameter.
For Parameter tier, choose Standard.
For Type, choose String.
For Data type, choose text.
In the Value box, paste the appropriate YAML block listed in Agent configuration for NGINX hosts. Be sure to adjust it as needed, for example, the relevant port number in targets.
Choose Create parameter.

Step 3: Install the CloudWatch agent and deploy the configuration by using a CloudFormation template

You can use AWS CloudFormation to install the agent and configure it to use the CloudWatch agent configurations that you created in the previous step.

To install and configure the CloudWatch agent for this solution

Open the CloudFormation Quick create stack wizard by using this link: https://console.aws.amazon.com/cloudformation/home?#/stacks/quickcreate?templateURL=https://aws-observability-solutions-prod-us-east-1.s3.us-east-1.amazonaws.com/CloudWatchAgent/CFN/v1.0.0/cw-agent-installation-template-with-prometheus-config-1.0.0.json.
Verify that the Region selected in the console is the Region where your NGINX workload is running.
For Stack name, enter a name to identify this stack, such as CWAgentInstallationStack.
In the Parameters section, specify the following:
For CloudWatchAgentConfigSSM, enter the name of the AWS Systems Manager parameter for the agent configuration that you created earlier, such as AmazonCloudWatch-NGINX-CloudWatchAgent-Configuration.
For PrometheusConfigSSM, enter the name of the AWS Systems Manager parameter for the Prometheus configuration that you created earlier, such as AmazonCloudWatch-NGINX-Prometheus-Configuration.
To select the target instances, you have two options.
For InstanceIds, specify a comma-delimited list of the instance IDs of the instances on which you want to install the CloudWatch agent with this configuration. You can list a single instance or multiple instances.
If you are deploying at scale, you can specify a TagKey and the corresponding TagValue to target all EC2 instances with this tag and value. If you specify a TagKey, you must specify a corresponding TagValue. (For an Auto Scaling group, specify aws:autoscaling:groupName for the TagKey and specify the name of the Auto Scaling group for the TagValue to deploy to all instances in the Auto Scaling group.)
Review the settings, and then choose Create stack.

If you want to edit the template file first to customize it, choose the Upload a template file option in the Create stack wizard to upload the edited template. For more information, see Creating a stack on the CloudFormation console. You can use the following link to download the template: https://aws-observability-solutions-prod-us-east-1.s3.us-east-1.amazonaws.com/CloudWatchAgent/CFN/v1.0.0/cw-agent-installation-template-with-prometheus-config-1.0.0.json.

Note
After this step is completed, these Systems Manager parameters are associated with the CloudWatch agents running on the targeted instances. This means that:
If a Systems Manager parameter is deleted, the agent stops.
If a Systems Manager parameter is edited, the configuration change is automatically applied to the agent at a scheduled frequency, which is 30 days by default.
If you want changes to these Systems Manager parameters to take effect immediately, you must run this step again. For more information about associations, see Working with associations in Systems Manager.

Step 4: Verify that the agent setup is configured correctly

You can verify whether the CloudWatch agent is installed by following the steps in Verify that the CloudWatch agent is running.
If the CloudWatch agent is not installed and running, make sure that you have set up everything correctly:
Make sure that you attached the role with the correct permissions to the EC2 instances, as described in Step 1: Make sure that the target EC2 instances have the required IAM permissions.
Make sure that you configured the JSON for the Systems Manager parameters correctly. Follow the steps in Troubleshooting installation of the CloudWatch agent with CloudFormation.

If everything is set up correctly, you should see the NGINX metrics being published to CloudWatch. You can check the CloudWatch console to verify that they are being published.

To verify that NGINX metrics are being published to CloudWatch

Open the CloudWatch console at https://console.aws.amazon.com/cloudwatch/.
Choose Metrics, All metrics.
Make sure that you have selected the Region where you deployed the solution, and choose Custom namespaces, CWAgent.
Search for metrics such as nginx_http_requests_total. If you see results for these metrics, the metrics are being published to CloudWatch.

Create the NGINX solution dashboard

The dashboard provided by this solution presents the NGINX workload metrics by aggregating and presenting the metrics across all instances. The dashboard shows breakdowns of the top contributors (top 10 per metric widget) for each metric. This helps you quickly identify outliers or instances that contribute significantly to the observed metrics. To create the dashboard, you can use any of the following options:

Use the CloudWatch console to create the dashboard.
Use the AWS CloudFormation console to deploy the dashboard.
Download the AWS CloudFormation infrastructure as code and integrate it as part of your continuous integration (CI) automation.

Using the CloudWatch console to create the dashboard lets you preview the dashboard before actually creating it and incurring charges.
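The top-contributor breakdown described above is, conceptually, a sort-and-truncate over the per-instance values of a metric. A minimal sketch of the idea (illustrative only, not how CloudWatch implements it internally):

```python
def top_contributors(values_by_instance: dict, n: int = 10) -> list:
    """Return the n (InstanceId, value) pairs with the highest values,
    mirroring the dashboard's top-10-per-widget breakdown."""
    return sorted(values_by_instance.items(), key=lambda kv: kv[1], reverse=True)[:n]
```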
Note
The dashboard created with CloudFormation in this solution displays metrics from the Region in which the solution is deployed. Be sure to create the CloudFormation stack in the Region where your NGINX metrics are published. If you specified a custom namespace other than CWAgent in the CloudWatch agent configuration, you must change the CloudFormation template for the dashboard to replace CWAgent with the custom namespace that you used.

To create the dashboard through the CloudWatch console

Open the CloudWatch console Create dashboard page by using this link: https://console.aws.amazon.com/cloudwatch/home?#dashboards?dashboardTemplate=NginxOnEc2&referrer=os-catalog.
Verify that the Region selected in the console is the Region where your NGINX workload is running.
Enter a name for the dashboard, and then choose Create Dashboard. To easily differentiate this dashboard from similar dashboards in other Regions, we recommend that you include the Region name in the dashboard name, such as NGINXDashboard-us-east-1.
Preview the dashboard and choose Save to create the dashboard.

To create the dashboard through CloudFormation

Open the CloudFormation Quick create stack wizard by using this link: https://console.aws.amazon.com/cloudformation/home?#/stacks/quickcreate?templateURL=https://aws-observability-solutions-prod-us-east-1.s3.us-east-1.amazonaws.com/NGINX_EC2/CloudWatch/CFN/v1.0.0/dashboard-template-1.0.0.json.
Verify that the Region selected in the console is the Region where your NGINX workload is running.
For Stack name, enter a name to identify this stack, such as NGINXDashboardStack.
In the Parameters section, specify the name of the dashboard under the DashboardName parameter. To easily differentiate this dashboard from similar dashboards in other Regions, we recommend that you include the Region name in the dashboard name, such as NGINXDashboard-us-east-1.
Acknowledge access capabilities for transforms under Capabilities and transforms. Note that CloudFormation doesn't add any IAM resources.
Review the settings, and then choose Create stack.
After the stack status is CREATE_COMPLETE, choose the Resources tab under the created stack, and then choose the link under Physical ID to go to the dashboard. You can also access the dashboard in the CloudWatch console by choosing Dashboards in the left navigation pane of the console and finding the dashboard name under Custom Dashboards.

If you want to edit the template file first to customize it, choose the Upload a template file option in the Create stack wizard to upload the edited template. For more information, see Creating a stack on the CloudFormation console. You can use the following link to download the template: https://aws-observability-solutions-prod-us-east-1.s3.us-east-1.amazonaws.com/NGINX_EC2/CloudWatch/CFN/v1.0.0/dashboard-template-1.0.0.json.

Get started with the NGINX dashboard

Here are some tasks that you can try out with your new NGINX dashboard. These tasks let you validate that the dashboard is functioning correctly and give you hands-on experience using it to monitor your NGINX workload. As you try them, you will become familiar with navigating the dashboard and interpreting the visualized metrics.

Review connection metrics

In the Connections section, you can find several key metrics that provide insight into how your NGINX server handles client connections. Monitoring these connection metrics can help you identify potential bottlenecks, connection issues, or unexpected connection patterns:

Accepted client connections
Active client connections
Handled client connections
Connections reading requests
Idle client connections
Connections writing responses

Analyze HTTP request volume

The HTTP requests metric in the HTTP requests section shows the total number of HTTP requests handled by the NGINX server. Tracking this metric over time can help you understand the overall traffic load on your NGINX infrastructure and plan resource allocation and scaling accordingly.
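Because nginx_http_requests_total is a cumulative counter, the raw values only ever grow; traffic load is easier to read as a per-second rate. CloudWatch metric math offers a RATE function for this, and the underlying calculation can be sketched as follows (an illustrative helper, including a simple guess for counter resets after an NGINX restart):

```python
def request_rate(samples: list) -> list:
    """Convert cumulative counter samples into per-second rates.

    samples: list of (timestamp_seconds, value) pairs in time order.
    On a counter reset (value decreases), assume counting restarted
    from zero since the previous sample.
    """
    rates = []
    for (t0, v0), (t1, v1) in zip(samples, samples[1:]):
        delta = v1 - v0
        if delta < 0:      # counter reset, e.g. NGINX restarted
            delta = v1
        rates.append(delta / (t1 - t0))
    return rates
```

For example, samples of 100, 160, and 400 requests taken a minute apart correspond to rates of 1 and 4 requests per second.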